Nov 10, 2021

7 min read

Rendering text

There was a time when rendering text with a computer was at the cutting edge of technology. I’m not even speaking about text on screen, but text on paper.
What we take for granted today, with high-res screens, crisp fonts and invisible contours, was pure science fiction a few decades ago.
And it’s even more fun to think that text rendering is still a research domain today. With more pixels, more mobility and more CPU/GPU power, text rendering keeps evolving.
Let me tell you a bit of history and explain a few algorithms that are used today for rendering glyphs.

Before the screen

In the early computer days, there was no screen. For us today, it’s hard to imagine a computer without visual feedback, but it was pretty common.
The first computers used small blinking lights to show the content of registers. You had to manually convert binary to numbers. Keyboards didn’t exist either: you moved switches up and down to enter numbers.

The French microcomputer Micral.

With memory becoming cheaper and more efficient, things improved quickly.

The terminal

The terminals we use today, like Bash or PowerShell, have their roots in the 60s and 70s. Enter values with a keyboard, press Enter, get the result, scroll the page. Except it was done with something that looked like
a typewriter. I can’t imagine the noise those dot matrix printers made in those giant rooms. Anyway, text was printed using small pins: for each character, a matrix of pins would be up or down, putting ink on the paper when up.

Type things and it prints the result. That’s just a terminal. The computer takes up a whole room.

Characters could then be stored in memory as small matrices of bits (say, 8×8), with ink on the paper for 1 and no ink for 0. A character in memory would only take 8 bytes, so 1 KB for the whole ASCII chart. This would be the standard for 20 or 30 years.
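To make this concrete, here is a Python sketch of such an 8×8 bitmap glyph. The bit pattern for the letter “A” is a made-up example, not taken from any real character ROM:

```python
# A glyph stored as 8 bytes: each byte is one row, each bit one pixel.
# This "A" bitmap is a hypothetical example, not from a real character ROM.
GLYPH_A = [
    0b00111000,
    0b01101100,
    0b11000110,
    0b11000110,
    0b11111110,
    0b11000110,
    0b11000110,
    0b00000000,
]

def render_glyph(rows):
    """Turn 8 row-bytes into text: '#' where a bit is set (ink), '.' elsewhere."""
    lines = []
    for row in rows:
        lines.append("".join("#" if row & (0x80 >> bit) else "." for bit in range(8)))
    return "\n".join(lines)

print(render_glyph(GLYPH_A))
```

A dot matrix printer (or an early text-mode display) does essentially this lookup for every character cell.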

Then came the light

Same memory layout, same ASCII chart. Except instead of putting ink on paper, a small dot on the screen is lit. The first monochrome monitors appeared. Text would scroll from the bottom up like before, but with no more paper jams and less noise. A big improvement.

HP terminal

At home, early computers from the 80s could display 40×25 characters on screen at an 8×8 resolution each, making for a glorious 320×200 pixel display. It stayed like that for some years during the 80s, from the Commodore C64 to MS-DOS on your brand new Intel 386 PC.

Infamous C64 boot up screen

More characters appeared to fill the remaining space in the chart (ASCII is 128 characters; more diverse glyphs appeared for localized text, bringing the total up to 255 characters). Computers were essentially made in the West: ASCII stands for American Standard Code for Information Interchange.


In 1991, Apple released the TrueType font format (.TTF) alongside Mac System 7. The font file contains vector information (curves and lines) describing each glyph. It also introduced the possibility of having more than 255 characters, for more diverse and complex character sets (like Kanji).
There were other vector font formats before, but none that lasted through the years like TTF. The Alto from Xerox in 1973 had everything of a “modern” computer of 1990. It was expensive and rare, and like many technologies, cost was the main hurdle in the way of customer acceptance. Maybe people were not ready for such an amazing piece of hardware and software.

This thing is from 1973, 10 years before the Apple Macintosh.

Needless to say, rendering tens of kilobytes of curves on an 8 MHz computer was difficult. Years passed, and today we can render thousands of characters on 144 Hz screens.


Video games have always favored fast techniques at the expense of quality. Keeping 30 or 60 frames per second is always more important to maintain great interactivity. From the NES to the PlayStation, text was more or less like on dot matrix printers: 1 bit per pixel and limited character sets.
No anti-aliasing, no high res. Readable text, that’s all.

NES game sprite dump. See the characters in the top right.

With higher resolutions and efficient GPUs, things improved with the Xbox and PlayStation generation. Glyphs were pre-rendered into textures, with a footprint depending on character size, and with precomputed anti-aliasing. Text was a succession of textured quads, each quad mapped to the portion of the texture holding its glyph.
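A sketch of that quad generation, assuming a hypothetical atlas where every ASCII glyph sits in a fixed 16×16 cell of a 256×256 texture (real bitmap-font tools store per-glyph rectangles and advances):

```python
# Hypothetical fixed-cell atlas: 16 glyphs per row, 16x16 pixels each,
# in a 256x256 texture. Real tools pack variable-size rectangles.
ATLAS_SIZE = 256
CELL = 16

def glyph_uv(char):
    """UV rectangle (u0, v0, u1, v1) of a character's cell in the atlas."""
    index = ord(char)
    col, row = index % 16, index // 16
    u0, v0 = col * CELL / ATLAS_SIZE, row * CELL / ATLAS_SIZE
    return (u0, v0, u0 + CELL / ATLAS_SIZE, v0 + CELL / ATLAS_SIZE)

def layout_text(text, x=0, y=0):
    """One textured quad per character: screen rectangle plus atlas UVs."""
    quads = []
    for ch in text:
        quads.append({"pos": (x, y, x + CELL, y + CELL), "uv": glyph_uv(ch)})
        x += CELL  # fixed advance; real fonts use per-glyph metrics
    return quads

quads = layout_text("Hi")
```

The GPU then just draws the quads; all the expensive rasterization and anti-aliasing happened offline when the atlas was baked.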

There are many bitmap font generators, like this one for Mac.

In 2007, Chris Green from Valve released a paper titled “Improved Alpha-Tested Magnification for Vector Textures and Special Effects”.
Instead of storing the glyph as-is in the texture, a signed distance field (each pixel holds the smallest distance to a glyph curve) is computed offline, and a slightly more complex shader is applied when rendering. Thanks to the GPU hardware, this allows much smoother curve approximation at higher resolutions. It also opens the door to effects like colored borders and anti-aliasing, and to a smaller memory footprint.

Glyph packed into a distance field texture. The darker the pixel, the further from the glyph border.

It had a huge impact on the game industry. Trading a bit of GPU power for quality would be the trend for the coming years.

GPU rendering

With GPUs becoming even more powerful, could we improve the quality and speed even further?
I’ll detail two rendering techniques, both based on TTF curves. I’m sure more techniques are available. I don’t know how text is rendered on Windows or macOS systems; I guess it’s GPU-based. Any info on other solutions is welcome.

Triangulated mesh

At the same time Chris Green released his distance field paper, Charles Loop and Jim Blinn released “Rendering Vector Art on the GPU”. Text and shapes are not pixel- or texel-based: the shape is rendered with triangles, and a shader determines whether the curve is visible or not.

Curve outlines (a) are triangulated, with solid triangles in green and curve triangles filled (b); end result (c)

Inner shapes are rendered as a triangulated mesh, computed offline. Triangles containing curves are also rendered as triangles, but a pixel shader computes the curve. I won’t dive into the details of quadratic curves, but it’s basically a bilinear interpolation: by moving the control points in texture space and doing bilinear texture sampling, it’s possible to compute the curve.
IMHO, computing the triangulated mesh is the most difficult part. Moreover, because the GPU shades 2×2 pixel blocks and discards pixels that are not part of a triangle, edges are computed more than once in the shader. With small glyphs on screen, that overhead can be significant. I think this technique shines when rendering SVG or big on-screen text. The red curve is computed thanks to GPU bilinear interpolation.
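A minimal sketch of the Loop–Blinn inside test: each vertex of a curve triangle carries the canonical coordinates (0,0), (1/2,0) and (1,1), the rasterizer interpolates them, and the pixel is kept when u² − v ≤ 0. Written in Python rather than shader code for clarity:

```python
# Canonical Loop-Blinn texture coordinates for the triangle of one
# quadratic Bezier: start vertex (0,0), control vertex (1/2,0), end (1,1).
LB_UV = [(0.0, 0.0), (0.5, 0.0), (1.0, 1.0)]

def interpolate_uv(bary):
    """Barycentric interpolation, as the rasterizer does for the shader."""
    u = sum(w * uv[0] for w, uv in zip(bary, LB_UV))
    v = sum(w * uv[1] for w, uv in zip(bary, LB_UV))
    return u, v

def inside_curve(bary):
    """Loop-Blinn test: keep the pixel when u^2 - v <= 0 (the filled side)."""
    u, v = interpolate_uv(bary)
    return u * u - v <= 0.0

# The point at t = 1/2 on the Bezier has barycentric weights (1/4, 1/2, 1/4)
# and lands exactly on the curve (u^2 - v == 0); the control vertex itself
# sits on the unfilled side.
```

The same two-line test handles every quadratic curve in the glyph, which is why the per-triangle shader stays so cheap.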

A nice paper on bilinear quadratic curve computation:

Curve intersection

Ten years later, in 2017, Eric Lengyel released “GPU-Centered Font Rendering Directly from Glyph Outlines”. This gave us the Slug library. The technique is different, but the results are amazing. It’s also more complex and GPU-intensive.

Each glyph is rendered using a few triangles. The generic solution is to use 2 triangles per glyph (to make a quad). A more efficient version balances between one and a few triangles to get a hull that is closer to the glyph shape. Too many triangles can be counterproductive (see the 2×2 block computation above).
Then, for each pixel, the algorithm determines whether it’s inside or outside the shape formed by the curves. To do so, it counts the number of curves crossed in a particular direction.
The paper details algorithms to speed up that computation. I’ll show you a naïve version where, for each pixel, every curve is tested to decide inside or outside. The important points here are the limited triangle mesh and the per-pixel computation. Because it works per pixel on vector info, it looks good at any size. The glyph curves are stored in a texture, and 2 triangles forming a quad are used here.
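Here is a naïve Python sketch of that inside/outside test, counting how many quadratic curves a horizontal ray crosses (without any of the paper’s acceleration structures):

```python
import math

# Naive per-pixel inside test for a glyph outline made of quadratic Bezier
# curves, in the spirit of Lengyel's method but without his band-based
# speedups. Cast a horizontal ray to the right and count crossings:
# an odd total means the point is inside the glyph.

def ray_crossings(px, py, curve):
    """Crossings of the ray {x > px, y = py} with one quadratic curve.

    curve = ((x0, y0), (x1, y1), (x2, y2)): start, control, end point.
    """
    (x0, y0), (x1, y1), (x2, y2) = curve
    # y(t) = a*t^2 + b*t + c for t in [0, 1)
    a = y0 - 2 * y1 + y2
    b = 2 * (y1 - y0)
    c = y0 - py
    if abs(a) < 1e-12:                 # y is linear in t
        roots = [-c / b] if abs(b) > 1e-12 else []
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return 0
        s = math.sqrt(disc)
        roots = [(-b - s) / (2 * a), (-b + s) / (2 * a)]
    count = 0
    for t in roots:
        if 0.0 <= t < 1.0:
            x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * x1 + t * t * x2
            if x > px:                 # crossing is to the right of the pixel
                count += 1
    return count

def inside_glyph(px, py, curves):
    return sum(ray_crossings(px, py, c) for c in curves) % 2 == 1

# A square outline approximated by four "straight" quadratics
# (each control point at the midpoint of its edge).
square = [
    ((0, 0), (1, 0), (2, 0)),
    ((2, 0), (2, 1), (2, 2)),
    ((2, 2), (1, 2), (0, 2)),
    ((0, 2), (0, 1), (0, 0)),
]
```

In the real shader this loop runs over curves fetched from a texture, and Lengyel’s banding limits it to the few curves that can actually cross the ray.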

It has been a long way from the first displays to today’s pixel-shader techniques. I think the quest for newer font rendering systems will never end. Hardware continues to improve year after year. Maybe in the coming months, someone will come up with a faster, better, nicer algorithm running in a compute shader. We are standing on the shoulders of giants.

Images courtesy of

Cedric Guillemet (@skaven_) / Twitter