Alpha Compositing, OpenGL Blending and Premultiplied Alpha

I've been implementing alpha blending in Papaya for the past few days, and figured that a blog post about alpha compositing with OpenGL might be useful for other developers in the future.

What is alpha compositing?

Alpha compositing is the process of combining an image with a background to create the appearance of partial or full transparency.

Essentially, it is the process of drawing two potentially transparent images on top of each other to create a resultant image. It is the equivalent of the image produced when a layer is blended with the "Normal" blend mode on top of another layer in Photoshop.

Conventions and nomenclature

To simplify the discussion here, we'll only talk about compositing two images at a time. More images can be composited in exactly the same way, one after the other.

Let us call the base image the destination image, since it is already present in the buffer we want to composite on.

Let us call the image to be "overlayed" the source image.

Any given pixel consists of the channels Red, Green, Blue and Alpha, denoted by \((R, G, B, A)\). The subscript of the channel name denotes the image it belongs to.

Any given pixel in the source image consists of channels \((R_s, G_s, B_s, A_s)\).

Any given pixel in the destination image consists of channels \((R_d, G_d, B_d, A_d)\).

Any given pixel in the final image consists of channels \((R_f, G_f, B_f, A_f)\).

For simplicity, we will assume that all channels lie in the range \([0, 1]\).

How OpenGL blending works

If blending is not explicitly enabled, OpenGL simply overwrites the destination with the source image. Blending is enabled and controlled through three main function calls:

  1. glEnable(GL_BLEND): This activates blending.
  2. glBlendEquation(mode): This function is used to set the blend mode. The blend mode dictates what is done with the scaled source and destination values.

    e.g. The most common blend mode, GL_FUNC_ADD, evaluates channels by addition. So \(R_f = R_s k_s + R_d k_d\). Green, Blue and Alpha channels are computed similarly.

    GL_FUNC_SUBTRACT, on the other hand, evaluates by subtraction. So \(R_f = R_s k_s - R_d k_d\). You can read in detail about this function in the docs.

    If you're wondering what the \(k_s\) and \(k_d\) variables are, that leads me to the third function.
  3. glBlendFunc(\(k_s\), \(k_d\)): This function is used to set the values of the scaling factors \(k_s\) and \(k_d\) for source and destination respectively. For a full list of the values the function accepts, read the docs.
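The interplay of these three calls can be sketched in plain code. The following Python snippet is a hypothetical emulation of one channel of the blend stage (the function and parameter names are mine, not part of OpenGL):

```python
def blend_channel(src, dst, k_s, k_d, mode="FUNC_ADD"):
    """Emulate one channel of the OpenGL blend stage:
    scale source and destination, then combine per the blend mode."""
    if mode == "FUNC_ADD":
        return src * k_s + dst * k_d
    if mode == "FUNC_SUBTRACT":
        return src * k_s - dst * k_d
    raise ValueError(f"unsupported mode: {mode}")

# GL_FUNC_ADD with both factors at 0.5 averages the two values:
print(blend_channel(1.0, 0.0, 0.5, 0.5))  # 0.5
```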

The common way of alpha blending

Commonly, you'll find blending set up like this:

    glBlendEquation (GL_FUNC_ADD);
    glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

The above glBlendFunc sets the scaling factors to the following: \[\begin{align} k_s &= A_s \\ k_d &= (1 - A_s) \end{align}\]

Combined with GL_FUNC_ADD, the full formula for the final alpha \(A_f\) becomes: \[\begin{align} A_f &= A_s k_s + A_d k_d \\ \therefore A_f &= A_s A_s + A_d (1 - A_s) \end{align}\]

Similarly, final values for red, green and blue \((R_f, G_f, B_f)\) become: \[(R_f, G_f, B_f) = (R_s, G_s, B_s) A_s + (R_d, G_d, B_d) (1 - A_s)\]

At first glance, this formula looks passable, but is in fact incorrect. The Wikipedia page for alpha compositing has the correct formulas. These formulas are: \[\begin{align} A_f &= A_s + A_d (1 - A_s) \\ (R_f, G_f, B_f) &= \frac{(R_s, G_s, B_s) A_s + (R_d, G_d, B_d) \mathbf{A_d} (1 - A_s)}{A_f} \end{align}\]

Note the complete absence of \(\mathbf{A_d}\) in the incorrect formula's color term. One critical case in which the incorrect approach falls apart is when the destination image is translucent, i.e. when \(A_d < 1\).
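A quick numeric check makes the failure concrete. The sketch below (plain Python, with straight-alpha colors as \((R, G, B, A)\) tuples; the function names are mine) compares the two formulas for a half-transparent red source over a half-transparent blue destination:

```python
def blend_naive(src, dst):
    """glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) applied to straight-alpha colors."""
    rs, gs, bs, a_s = src
    rd, gd, bd, a_d = dst
    a_f = a_s * a_s + a_d * (1 - a_s)  # note: A_s * A_s, not A_s
    rgb = tuple(cs * a_s + cd * (1 - a_s) for cs, cd in ((rs, rd), (gs, gd), (bs, bd)))
    return (*rgb, a_f)

def blend_correct(src, dst):
    """The 'over' operator per the correct compositing formulas."""
    rs, gs, bs, a_s = src
    rd, gd, bd, a_d = dst
    a_f = a_s + a_d * (1 - a_s)
    rgb = tuple((cs * a_s + cd * a_d * (1 - a_s)) / a_f
                for cs, cd in ((rs, rd), (gs, gd), (bs, bd)))
    return (*rgb, a_f)

src = (1.0, 0.0, 0.0, 0.5)  # half-transparent red
dst = (0.0, 0.0, 1.0, 0.5)  # half-transparent blue
print(blend_naive(src, dst))    # alpha comes out 0.5 instead of the correct 0.75
print(blend_correct(src, dst))
```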

I suspect that 3D engines do this because the destination is usually some kind of game frame, which usually does not have transparency. In image processing, however, this is not the case.

The correct formula for \((R_f, G_f, B_f)\) seemingly breaks our OpenGL API. It needs two multiplications instead of one: \(A_d\) and \((1 - A_s)\), and one final division by \(A_f\). How do we achieve this?

Alpha premultiplication

If a color is given by \((R, G, B, A)\), that color when premultiplied, becomes \((R \cdot A, G \cdot A, B \cdot A, A)\). In simple terms, when a color is premultiplied, its color channels are multiplied by its alpha.

Looking back at our correct formula, \[\begin{align} (R_f, G_f, B_f) &= \frac{(R_s, G_s, B_s) A_s + (R_d, G_d, B_d) A_d (1 - A_s)}{A_f} \\ \therefore \color{blue}{(R_f, G_f, B_f) A_f} &= \color{blue}{(R_s, G_s, B_s) A_s} + \color{blue}{(R_d, G_d, B_d) A_d} (1 - A_s) \end{align}\]

Voila! The three parts highlighted in blue are essentially the premultiplied versions of the final, source and destination colors.

Note that the above formula is now in the OpenGL API form, with glBlendEquation set to GL_FUNC_ADD and glBlendFunc set to: \[\begin{align} k_s &= 1 \\ k_d &= (1 - A_s) \end{align}\]

This glBlendFunc corresponds to glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).

Therefore, if the source and destination images are premultiplied, the above setup yields the resultant image, which is in premultiplied form too. We must reverse the premultiplication process if we want to use the image outside our internal graphics pipeline.
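Assuming the formulas above, the whole round trip can be checked numerically. A minimal Python sketch (function names are mine): premultiply both inputs, blend with \(k_s = 1\) and \(k_d = (1 - A_s)\), then undo the premultiplication:

```python
def premultiply(c):
    """(R, G, B, A) -> (R*A, G*A, B*A, A)."""
    r, g, b, a = c
    return (r * a, g * a, b * a, a)

def blend_premul(src_p, dst_p):
    """GL_FUNC_ADD with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), all four channels."""
    one_minus_as = 1 - src_p[3]
    return tuple(s + d * one_minus_as for s, d in zip(src_p, dst_p))

def demultiply(c):
    """Undo premultiplication; assumes non-zero alpha."""
    r, g, b, a = c
    return (r / a, g / a, b / a, a)

src = (1.0, 0.0, 0.0, 0.5)  # half-transparent red
dst = (0.0, 0.0, 1.0, 0.5)  # half-transparent blue
out = demultiply(blend_premul(premultiply(src), premultiply(dst)))
# out matches the correct compositing formula: alpha 0.75, red 2/3, blue 1/3
```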


To sum everything up, our pipeline now operates like this:

  1. Render destination image onto frame buffer using premultiplication shader.
  2. Render source image onto frame buffer, again, using premultiplication shader, now with blending enabled, glBlendEquation(GL_FUNC_ADD), and glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
  3. Reverse the premultiplication process (I like to call this demultiplication), using the demultiplication shader.

The demultiplied image is the correctly alpha blended image.

Note that the simplest form of demultiplication divides color by alpha, which produces invalid color values wherever alpha is zero. This usually does not matter, since the color of a fully transparent pixel is never seen, but it becomes relevant when the texture is mipmapped: mipmap generation averages neighboring texels, so invalid colors from zero-alpha pixels can bleed into visible ones.
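A guard in the demultiplication step sidesteps the division-by-zero issue. A sketch (the epsilon threshold and the choice of transparent black are mine, not canonical):

```python
def demultiply_safe(r, g, b, a, eps=1e-6):
    """Demultiply, but map (near-)zero-alpha pixels to a valid,
    fully transparent black instead of dividing by zero."""
    if a < eps:
        return (0.0, 0.0, 0.0, a)
    return (r / a, g / a, b / a, a)

print(demultiply_safe(0.0, 0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0, 0.0), no NaNs
```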


If you have any corrections or suggestions, please get in touch via email or Twitter (links in footer).