Efficient Rendering of Linear Brush Strokes: my graphics research paper explained

*14-Feb-2018*
I wrote a graphics research paper that was published in the Journal of Computer
Graphics Techniques. This is the story behind the paper, a simple explanation
of it, and the things I learned while writing it.
You can find the paper [here](http://www.jcgt.org/published/0007/01/01/) ([PDF](http://www.jcgt.org/published/0007/01/01/paper.pdf)).
---
# The story
I've worked on [Papaya](http://papaya.io) for over two and a half years---longer
than I've been at my full-time job at NVIDIA. Throughout that time, Papaya has
been a fun but challenging project. I've found several instances of seemingly
simple tasks that turned out to be quite intricate once I started to implement them.
One such intricacy came up while developing the brush tool. I quickly realized
that, like most other tasks in Papaya, brush rendering could be moved to the
GPU.
First, I looked at existing image editors.
During my initial tests, I found that when you drag the brush around with
the mouse, existing image editors compute the [Bresenham-esque
line](https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm) joining the
last mouse position and the current one. They then stamp the circular
brush along each pixel of that line.
![ ](assets/PapayaBrushDimples.png width="40%")
I figured this was unnecessary, and wrote a pixel shader to compute the distance
of each pixel from the aforementioned mouse delta line segment. If the distance
was less than or equal to the brush radius, I filled the pixel in. If not, I
kept the pixel transparent. This gave me a hard stroke without anti-aliasing.
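That first approach can be sketched in a few lines. This is a Python re-creation for clarity, not the actual GLSL shader; the function name and signature are my own:

```python
import math

def hard_stroke_alpha(px, py, ax, ay, bx, by, radius):
    """Alpha for pixel (px, py), given a stroke segment from A to B.

    A sketch of the initial shader idea: the distance from the pixel
    to the segment decides a hard 0/1 coverage, with no anti-aliasing.
    """
    # Project the pixel onto the segment, clamping to the endpoints.
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        t = 0.0
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy      # closest point on the segment
    dist = math.hypot(px - cx, py - cy)
    return 1.0 if dist <= radius else 0.0  # hard edge
```

The clamp on `t` is what gives the stroke its rounded end caps: pixels beyond an endpoint measure their distance to the endpoint itself.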
This is sufficient to implement the pencil tool, but the brush tool needs more
parameters such as hardness, anti-aliasing, opacity and flow. I wrote a [blog
post](http://apoorvaj.io/building-a-fast-modern-image-editor.html) declaring my
triumph.
I figured it would be trivial to add these parameters to my existing shader. I
was wrong.
My initial line-distance-based approach completely breaks down when you want to
support brushes with anti-aliasing, hardness or flow.
As the mouse moves, strokes are chained together: the end of one stroke
overlaps with the beginning of the next. This lets the user draw strokes of
any curvature, even though the editor only draws one line segment of a stroke
per frame.
If we use the distance from the line to compute the alpha value of the stroke,
the ends of strokes don't match up. Here are two soft horizontal strokes joined
up end-to-end:
This is why existing editors stamp the stroke periodically along the line, and
this is why my initial approach will not work.
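The mismatch is easy to see with numbers. In this small Python sketch, an arbitrary linear falloff stands in for real brush softness; the exact values are illustrative:

```python
import math

def dist_to_segment(px, py, ax, ay, bx, by):
    # Distance from a point to a line segment (endpoints clamped).
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def soft_alpha(dist, radius):
    # Stand-in linear falloff from the stroke axis.
    return max(0.0, 1.0 - dist / radius)

radius = 4.0
# A pixel 2 px above the joint at (10, 0), where two segments meet:
d1 = dist_to_segment(10, 2, 0, 0, 10, 0)   # distance to the first segment
d2 = dist_to_segment(10, 2, 10, 0, 20, 0)  # distance to the second segment
a1 = soft_alpha(d1, radius)                # 0.5 from the first stroke
a2 = soft_alpha(d2, radius)                # 0.5 from the second stroke
joined = min(1.0, a1 + a2)                 # additive blend at the joint: 1.0
single = soft_alpha(2.0, radius)           # one continuous stroke: 0.5
```

The joint pixel ends up twice as bright as the surrounding stroke, which shows up as a visible seam wherever two segments meet.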
I could do the same thing as existing image editors, and draw multiple
overlapping stamp meshes, blending them additively along the line. While this
approach is simple, it incurs the classic problem of GPU overdraw. This is
because each pixel is affected by several overlapping meshes, and so drawing the
stroke repeatedly involves writing to texture memory on the GPU, which is a
relatively slow operation.
I actually had a vague hunch about how to overcome this problem two years ago.
Then life happened: I adjusted to my new job at NVIDIA, worked on Papaya only
in my spare time, and this niche problem took a back seat. Along the way, I
got better at graphics programming, and also a little better at math. I started
looking at this idea again late last year, and it eventually turned into this
paper.
# A simple explanation of the paper
A brush stroke is controlled by four parameters:
1. The **diameter** of each circular brush stamp, in pixels.
2. The **hardness** controls how pixel alpha falls off with distance from the center for each stamp.
3. The **opacity** controls the maximum alpha of the brush stroke. No matter how much you drag the brush around, the value will not exceed the opacity, until you release the mouse and start a new stroke.
4. The **flow** controls how much each stamp adds to the stroke. If you have a low flow, the stroke will get more intense as you repeatedly drag the mouse over the same area. The maximum intensity is clamped to the value of opacity.
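To make these four parameters concrete, here is a minimal Python sketch of the traditional discrete stamping approach. The falloff curve and helper structure are my assumptions for illustration, not Papaya's actual code:

```python
import math

def stamp_stroke(width, height, x0, y0, x1, y1,
                 radius, hardness, opacity, flow):
    """Stamp a soft disc at every pixel along the segment (x0,y0)-(x1,y1)."""
    canvas = [[0.0] * width for _ in range(height)]
    steps = max(1, int(math.hypot(x1 - x0, y1 - y0)))
    for i in range(steps + 1):          # one stamp per pixel along the line
        t = i / steps
        cx, cy = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        for y in range(height):
            for x in range(width):
                d = math.hypot(x - cx, y - cy)
                if d <= radius:
                    # Alpha is 1 inside hardness*radius, then falls off
                    # linearly to 0 at the full radius.
                    edge0 = hardness * radius
                    fall = 1.0 if d <= edge0 else (radius - d) / (radius - edge0)
                    # Each stamp adds `flow`, clamped to `opacity`.
                    canvas[y][x] = min(opacity, canvas[y][x] + flow * fall)
    return canvas
```

Note how each pixel is written to once per overlapping stamp; that repeated read-modify-write is exactly the overdraw cost mentioned above.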
Here's a brush stroke with a low flow. Note how the left and the right pixels
are less intense because they are overlapped by fewer stamps. This is also true
to a lesser extent for the pixels on the upper and lower edges of the stroke.
![ ](assets/gimp_brush.png )
Observe that the intensity at each pixel of the stroke depends on how many
stamps overlapped it. The pixels at the far left and right of the stroke are
only overlapped by a single stamp, and hence are the faintest. The pixels along
the central stroke axis are overlapped by a number of stamps equal to the
**diameter**.
The key insight of the paper is to model a stroke as if the stamp was
continuously slid across the central stroke axis, instead of the traditional way
of treating stamps as discrete additive blends.
In our new model, the stamp slides horizontally along the middle of the stroke.
Working in the stroke's UV space, the stamp center is always at a point
$(x, 0.5)$, where $x$ varies as the stamp slides.
For any given pixel $(X,Y)$ in our stroke, we compute the function $f(x, X, Y)$,
which gives us the instantaneous alpha value contributed to that pixel when the
center of the stamp is at $(x, 0.5)$.
As the stamp slides from left to right, any given pixel is only inside the
circular stamp for a finite amount of time. Pixels in the middle of the stroke
stay inside the longest, and hence have the most intensity.
We can find the centers of the first and the last circular stamp that touched
any given pixel. Let's call the first (left-most) center $(X_1, 0.5)$ and let's
call the last (right-most) center $(X_2, 0.5)$. This means that our pixel
$(X, Y)$ was inside the stamp as the stamp moved horizontally from $X_1$ to
$X_2$.
Now, we can simply integrate over these two centers, and get the total alpha
(i.e. intensity) that a pixel experienced while it was inside the sliding stamp:
$$\alpha(X,Y) = \int_{X_1}^{X_2} f(x, X, Y)\,dx$$
The symbols here can get a bit confusing, so here's a recap:
* $(X, Y)$ are the coordinates of a pixel. They correspond to the UV coordinates
in our pixel shader.
* $(x, 0.5)$ is the center of the sliding stamp. We integrate over $x$.
* $X_1$ and $X_2$ are constant for any given pixel. They can be computed. They
define the interval that we care about, and are the limits of our integral.
* $f(x, X, Y)$ is the function that gives us the instantaneous
intensity experienced by point $(X, Y)$ when the stamp is at $(x, 0.5)$.
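Putting the pieces together, here is a toy Python version that computes $X_1$ and $X_2$ for a pixel and integrates $f$ numerically. The falloff inside `f` and the flow-rate constant are illustrative assumptions; the paper derives the actual formulation:

```python
import math

def stroke_alpha(X, Y, x_start, x_end, radius, hardness, rate):
    """Total alpha at pixel (X, Y) from a stamp sliding along y = 0.5."""
    # Half-width of the x-interval in which the stamp covers this pixel.
    half = math.sqrt(max(0.0, radius * radius - (Y - 0.5) ** 2))
    X1 = max(x_start, X - half)   # first stamp center touching the pixel
    X2 = min(x_end, X + half)     # last stamp center touching the pixel
    if X2 <= X1:
        return 0.0                # the sliding stamp never covers this pixel

    def f(x):
        # Instantaneous contribution with the stamp center at (x, 0.5):
        # flat inside hardness*radius, linear falloff out to the radius.
        d = math.hypot(X - x, Y - 0.5)
        edge0 = hardness * radius
        if d <= edge0:
            return rate
        return rate * (radius - d) / (radius - edge0)

    # Midpoint-rule numerical integration over [X1, X2].
    n = 64
    dx = (X2 - X1) / n
    return min(1.0, sum(f(X1 + (i + 0.5) * dx) for i in range(n)) * dx)
```

A pixel in the middle of the stroke integrates over the full $[X - \text{half}, X + \text{half}]$ interval, while a pixel near an end has its interval clipped by the stroke boundary, so it naturally comes out fainter, matching the stamped picture above.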
By doing this, we can compute the intensity at a pixel in a single evaluation,
without having to repeatedly write to texture memory. We're trading extra
computation for reduced memory traffic, which is usually a good trade on the
GPU.
With this technique, multiple strokes line up perfectly when blended additively,
as seen below. There are two strokes here, but you cannot tell them apart once
they blend to the same color. This means we can get smooth, arbitrarily shaped
strokes using chained line segments, which was not possible with my initial
distance-based shader.
The paper describes how to compute these things when brush diameter, hardness
and flow vary along the stroke, as is required when using pressure-sensitive
input devices.
**The "Our Approach" section in the paper combines all of these concepts into
a succinct list of formulae that directly map to a pixel shader. If you find
the math scary, be sure to read the appendices; they are a lot friendlier.**
**To reiterate, you can find the paper
[here](http://www.jcgt.org/published/0007/01/01/) ([PDF](http://www.jcgt.org/published/0007/01/01/paper.pdf)).**
The paper has pretty comparison images. Be sure to check them out! Here are a
few teaser images comparing the discrete method (top) vs this approach (bottom):
![](assets/constant.png) ![](assets/variable_diameter.png)
# The things I learned while writing the paper
* Developing mathematical intuition is more important than rote-learning
formulae. I *highly* recommend the [3Blue1Brown YouTube channel](https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw) for this.
* It is often trickier to formulate a problem in terms of mathematics than it is
to solve the subsequent math problem.
* Not all applied math problems are going to be solvable like they were in
school. Research primarily consists of accepting this, and exploring enough
avenues without any guarantee of a final solution.
* Mathematical ability is learnable. I feel a lot more confident with these
kinds of problems now that I've spent significant effort solving this one.
* Wikipedia is a bad source for learning maths. It can be a decent source if you
want to refresh existing knowledge. Good sources are hard to come by.
* Pen and paper are the best tools to think about these kinds of problems.
Stepping away from a computer helped me think better. Computers are good for
conducting experiments to verify your on-paper hypotheses.
* If you're stuck, ask for help once you've done your due diligence. I got
  obsessed with trying to find a function or approximation that was analytically
  integrable, and was unable to do so. I emailed [Nathan Reed](http://reedbeta.com/)
  for advice, and he simply told me to take a step back and integrate
  numerically instead, which is exactly what I ended up doing.
* Publishing as a paper does require more effort, because your technique has to
be polished and rigorous. But this may be worth it---especially if, like me,
you've never published before---if only for the sheer street cred. ;)
* Academic papers are expected to be terse, but I believe there is value in
creating supplementary material that is easier to read for beginners who want
to understand your work. I have attempted to make the paper easy to read (it's
not rocket science in the first place), and I've written this blog post to
give further context and explanation. Hope this helped! :)
