
What font types should you use in your embedded GUI project?


Thomas Fletcher, Co-Founder and VP of R&D, and Rodney Dowdall, Product Architect, talk about the font options available to product development teams when designing an embedded UI project, in our Embedded GUI Expert Talks with Crank Software.

By watching a replay of Thomas and Rodney's live YouTube video or by following along with the transcript below, you’ll learn the differences between bitmap fonts and engine-rendered fonts, and how text rendering affects the memory use and quality of your embedded user interface.

Or, jump straight ahead to the question and answer portion of the live video.


Optimizing graphics memory in embedded systems

In one of our last live embedded talks, I talked about embedded images and how selecting the right image format can have a large impact on building out your embedded GUI - both from a performance perspective and in terms of what you can actually fit into the RAM and Flash resources you have. One thing we can look back on is that there are a lot of similarities between fonts and images.

Rendering a collection of glyphs AKA mini images

If you think about a string that's being rendered to a computer screen, that string is just a collection of glyphs. Glyph is the technical term for an individual character shape, but you can also just think of them as mini images.

However, the glyphs aren’t encoded together the way you’d see in a PNG image. In simple terms, a font file contains just the data needed to render each glyph as a mini image at runtime.

How many embedded font file formats are there?

With images, there are multiple formats, including PNG, JPEG, and Bitmap. With fonts, there's a whole bunch of different formats too. The two main types are TrueType fonts and OpenType fonts.

Both font file formats contain just the data for the fonts - if the files contained pre-rendered images for every glyph too, the font file would be massive.


What is in a font file?

The data. When you render a font, you use a font engine. That font engine takes the data from the font file, constructs the glyph image at run time, and then pushes it to the computer or embedded user interface screen.

When a glyph is rendered, the image gets cached in memory so that the next time you need that mini image, the engine can grab it and push it straight to the screen.

Using a dynamic cache for GUI fonts and images

As mentioned, there is a cache involved, which goes back to the similarity between images and fonts. In the same way we'd keep a local cache after decoding a PNG or a JPEG, for fonts we cache the glyph images. The first time a glyph is used, it goes into memory, and a lookup is performed the next time it's needed. This lets the engine reuse the glyph image without computing that glyph data from scratch.
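The caching idea above can be sketched in a few lines. This is an illustrative Python sketch, not Storyboard's implementation; `render_glyph` is a hypothetical stand-in for a real font-engine call.

```python
def render_glyph(char, point_size):
    """Stand-in for a font engine call (e.g. FreeType); here we fake
    an alpha map as a bytes object sized roughly to the glyph."""
    return bytes(point_size * point_size)  # all-zero placeholder alpha map

class GlyphCache:
    def __init__(self):
        self._cache = {}      # (char, point_size) -> cached alpha map
        self.renders = 0      # how many times we hit the "engine"

    def get(self, char, point_size):
        key = (char, point_size)
        if key not in self._cache:          # miss: compute from glyph data
            self._cache[key] = render_glyph(char, point_size)
            self.renders += 1
        return self._cache[key]             # hit: reuse the cached image

cache = GlyphCache()
for ch in "HELLO":
    cache.get(ch, 48)
# "HELLO" has 5 characters but only 4 distinct glyphs (L repeats),
# so the "engine" is only invoked 4 times.
```

The payoff grows with repetition: real UI text reuses the same few dozen glyphs constantly, so after a short warm-up almost every lookup is a cache hit.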

Caching the data uses RAM. If you don't have a lot of RAM, you can pre-render the glyph images, store them in what’s called a Bitmap image, and then draw those directly off of Flash or storage. In this case, a Bitmap image is just the alpha data. Because you want to colorize your fonts, you can store the alpha values needed to render the glyph image or mini image on the graphical user interface screen.

Don’t have a ton of RAM? Instead of caching glyph data:
  • Pre-render the glyph images;
  • Store them in a Bitmap image; and
  • Draw them directly off of Flash or another type of storage.
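Storing only alpha values works because color can be applied at draw time. A minimal sketch of that colorization step, assuming an 8-bit alpha map and hypothetical names:

```python
# A pre-rendered glyph is stored as alpha (coverage) values only; color
# is applied at draw time by blending the font color over the
# background, weighted by each alpha value.

def colorize(alpha_map, font_rgb, background_rgb):
    """Blend font_rgb over background_rgb using 8-bit alpha coverage."""
    pixels = []
    for a in alpha_map:                      # a in 0..255
        pixel = tuple(
            (f * a + b * (255 - a)) // 255   # classic alpha blend per channel
            for f, b in zip(font_rgb, background_rgb)
        )
        pixels.append(pixel)
    return pixels

# One stored alpha map can produce red, green, or blue text:
alpha = [0, 128, 255]                        # off, half-covered, fully covered
red_text = colorize(alpha, (255, 0, 0), (0, 0, 0))
# alpha 0 -> background color, alpha 255 -> pure font color
```

This is why a single stored copy of each glyph is enough, no matter how many text colors the UI uses.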


Pre-rendering fonts trades RAM for storage

Once you've pre-rendered the glyphs, the result can sit in Flash just like an image and doesn't require any RAM. In other words, the image requires no RAM whatsoever at that point - the engine draws directly from the storage device.

Pre-rendering saves development teams RAM and the computational cost of doing the glyph rendering in the first place. But the drawback is size.

The pre-rendering process means rendering ahead of time all of the glyph images you're planning on drawing when you run your embedded UI. So as an example, say you have the Roboto font (an open source font provided by Google) as a 70 point font, a 48 point font, and an 18 point font. All of these sizes have to be pre-rendered ahead of time, so the size gets really, really big really, really fast. With three different point sizes, you're looking at approximately 5.6 megabytes of storage to store that font pre-rendered in Flash.
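A rough back-of-envelope calculation shows why the number grows so fast. This sketch assumes one byte per pixel (8-bit alpha), approximates each glyph cell as point × point pixels (real cells vary per glyph), and borrows the ~878 glyph count quoted later in this article - so treat the result as a ballpark figure, not an exact match for the 5.6 MB above.

```python
# Ballpark estimate of pre-rendered font storage at 8-bit alpha depth.

GLYPHS = 878                      # approximate glyph count in Roboto
POINT_SIZES = [70, 48, 18]        # the three sizes from the example

total_bytes = sum(GLYPHS * p * p for p in POINT_SIZES)
print(f"{total_bytes / (1024 * 1024):.1f} MB")   # lands in the 5-7 MB range
```

Note the quadratic term: doubling a point size quadruples that size's storage cost, which is why large display fonts dominate the total.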

How should you preserve storage space when pre-rendering fonts for embedded devices?

1. Store fewer bits for the alpha map

The first way your embedded GUI development team can save storage space during pre-rendering is to store fewer bits for the alpha map. If you go 4-bit, 2-bit, or 1-bit, that will decimate the size of what's being pushed to storage - at the cost of some of the crispness of your fonts. The images Rod displays in the video compare the different bit depths of glyphs. On the left-hand screen in the upper right-hand corner, there’s TTF (TrueType font): your font engine rendering from the beginning, putting glyphs into memory and caching them as it goes.

The TTF, or TrueType font, scenario is where you're using a font engine to actually render the font and caching the glyphs in memory.

The other options - the 8-bit, 4-bit, 2-bit, and 1-bit - are Bitmap scenarios. That's where you’re pre-rendering the glyphs and storing them in Flash. From there, you render the glyph images directly from the storage device. Looking at the image in the live video, it's a little difficult to tell the difference in quality. But if you zoom in and look at the 1-bit version, you can see you’re starting to lose some of the crispness around any of the rounded characters like your C, your O, your Q. They look a little jaggy because in the 1-bit scenario, you only have an “on or off switch”: am I drawing that pixel or am I not? That makes doing circles really, really tricky. Whereas with 8-bit, you have 256 values of alpha, and with those you can get really nice blending along the edges.
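The bit-depth reduction itself is just requantization. A small sketch, assuming 8-bit input alpha values (the function and data here are illustrative, not from any particular tool):

```python
# Requantize 8-bit alpha values down to fewer bits. Fewer bits means
# fewer distinguishable coverage levels - which is exactly where curved
# edges start to look jaggy.

def quantize(alpha_map, bits):
    """Reduce each 8-bit alpha value to `bits` of precision, then scale
    back to the 0..255 range so results are easy to compare."""
    levels = (1 << bits) - 1                 # 15, 3, or 1 usable steps
    return [round(a * levels / 255) * 255 // levels for a in alpha_map]

edge = [0, 64, 128, 192, 255]                # a soft anti-aliased edge
print(quantize(edge, 2))                     # coarse steps remain
print(quantize(edge, 1))                     # pure on/off: the jaggy case
```

At 1 bit the smooth ramp collapses to on/off, which is the "am I drawing that pixel or not" situation described above.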

The difference between the 8-bit and the 1-bit in the Bitmap scenario is clear.

What’s the difference between the TTF and the 8-bit scenario? 

There is no difference between the TTF and 8-bit scenarios. They are exactly equivalent, because the TTF font engine generates an 8-bit alpha map. When the font engine reads data from the font file and creates the glyph image, it's actually creating an 8-bit alpha map of that specific glyph.

When it comes to smaller fonts or text, there are certain scenarios that are more prevalent in terms of where you see image quality loss or degradation. The quality loss depends on your embedded user interface, what size of font you're using, and what type of font you're using. It varies based on what you want to have happen in your UI.

There are 3 factors to consider when it comes to image quality:

  • What type of embedded GUI your GUI development team is creating;
  • What size of font is being used; and
  • What font type is being used.  


For example, there is going to be a tradeoff in quality if your team is trying to save on space. If you go from 8-bit to 4-bit, you may not see too much of a difference, but you certainly will as you go down to 2-bit and 1-bit. For the big space savings, there’s a quality trade-off.


2. Minimize character counts to reduce storage space

The second way to mitigate storage space is minimizing character counts. Fonts come with a lot of characters, or glyphs, in them for starters. If you were making a thermostat as your embedded UI project, you wouldn’t need all of those characters - only 0 to 9 for your really large font. That means you could strip every other glyph out of that font; that way the font is a lot smaller and you only have to render those ten digit characters.

The main reason for using Bitmaps and pre-rendering all of the characters, as option one, to reduce space is because we may not know what kind of text we’re dealing with. If you have a dynamic UI where you're going to be changing a lot of text or changing the point size, you must pre-render those ahead of time because the engine is not going to know what it will need while it's running.

If your team does know the text that you’re designing with in advance, then you can use this font decimating technique to just generate the content for the glyphs and the characters that your embedded project requires.
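Conceptually, this decimation is just filtering the glyph table down to the characters you need. Real subsetting is done on actual font files with tools like FontForge (below) or fontTools' pyftsubset; the glyph table here is a hypothetical stand-in to show the idea and the scale of the savings.

```python
# Conceptual sketch of font "decimation": keep only the glyphs the UI
# actually needs. The glyph table below is fake data for illustration.

def subset_font(glyph_table, needed_chars):
    """Return a new glyph table containing only the needed characters."""
    return {ch: data for ch, data in glyph_table.items() if ch in needed_chars}

# A fake table: every printable ASCII character mapped to dummy glyph data.
full_font = {chr(c): b"\x00" * 100 for c in range(32, 127)}

# A thermostat UI might only ever display digits and a decimal point:
thermo_font = subset_font(full_font, set("0123456789."))

print(len(full_font), "->", len(thermo_font))   # 95 -> 11
```

Since pre-rendered storage scales linearly with glyph count, cutting 95 glyphs down to 11 cuts the bitmap font's footprint by roughly the same ratio.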

Using an open source tool like FontForge allows you to go through and modify a font, create your own font, and take out what you don't need in your font file.

When it comes to fonts in our GUI development tool, Storyboard, we don't mix and match formats the way we do with images. Keeping to one format gives us a much simpler, more optimized path for rendering the font glyphs.

Optimization for fonts in embedded GUI development projects

When it comes to optimizing fonts for embedded user interfaces, Bitmap fonts are much faster than rendering through an engine because the computation was done ahead of time - there's no rendering stage at runtime. The load time and lookup are a lot quicker.

A quick rule of thumb for deciding between a font engine path and pre-generated Bitmaps is to ask yourself: do you have more Flash or more RAM?

  • If you have more RAM: Go with the font engine scenario.
  • If you have more Flash: Go with your pre-computed Bitmap scenario.


But the other consideration to think about is how your embedded GUI performs. Is it going to be using a lot of dynamic fonts? A lot of dynamic sizes? If that's the case, then the font engine approach is probably your better option. If you have static text, and you know up front how your GUI will function, then the Bitmap scenario may be the way you want to go.
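The rule of thumb above can be written out as a tiny helper. The inputs and return values are hypothetical - in practice this is a design-time judgment call, not a runtime computation - but it captures the decision order: dynamic text forces the engine, otherwise the Flash/RAM balance decides.

```python
def choose_font_strategy(spare_ram_kb, spare_flash_kb, dynamic_text):
    """Pick between a runtime font engine and pre-rendered bitmaps."""
    if dynamic_text:                 # unpredictable strings or point sizes
        return "font engine"         # can't pre-render what you can't predict
    if spare_flash_kb >= spare_ram_kb:
        return "pre-rendered bitmap" # trade Flash for zero RAM and fast loads
    return "font engine"             # trade RAM/CPU for smaller storage

# A typical small MCU: little RAM to spare, plenty of Flash, static UI text.
print(choose_font_strategy(64, 4096, dynamic_text=False))
```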

A glance into Storyboard and Metrics View

In our Storyboard embedded GUI development platform, the designer's Metrics view presents the font metrics as well as general metrics for your embedded GUI. On the left-hand side, seen in the video, we have the TrueType scenario (TTF), where the FreeType font engine is used to render fonts, and you can see a memory cost associated with that as well as a storage cost. The size of the file is 156 kilobytes. This has to be stored, and loading it into memory takes at least 138 kilobytes. I say "at least" because in Storyboard we're not 100% sure how the FreeType font engine is going to cache our glyphs. It does that management on its own, so the memory will fluctuate as you go through the development of your user interface.


To cap the resources being used, there is a Storyboard resource limit that lets you say “only consume X amount of RAM for my cache”. When you specify that, the cache won't grow unbounded.

The resource limit is the trade-off between spending additional CPU to constantly re-render those glyphs and caching them so that you have them ready to go.
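A size-capped cache like this is commonly built as an LRU (least-recently-used) structure. This is an illustrative Python sketch of the idea, not Storyboard's actual implementation: once the RAM budget is exceeded, the oldest glyph is evicted and must be re-rendered if needed again.

```python
from collections import OrderedDict

class BoundedGlyphCache:
    """Glyph cache with a fixed RAM budget and LRU eviction."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self._cache = OrderedDict()          # key -> alpha map, LRU order

    def put(self, key, alpha_map):
        self._cache[key] = alpha_map
        self._cache.move_to_end(key)         # newest entries at the end
        self.used += len(alpha_map)
        while self.used > self.budget:       # evict oldest until under budget
            _, evicted = self._cache.popitem(last=False)
            self.used -= len(evicted)

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)     # mark as recently used
            return self._cache[key]
        return None                          # caller must re-render the glyph

cache = BoundedGlyphCache(budget_bytes=2048)
for ch in "ABC":
    cache.put((ch, 32), bytes(1024))         # 1 KB per fake glyph
# The 2 KB budget holds two glyphs: "A" was evicted to make room for "C".
```

A `get` miss is where the CPU cost comes back: the engine has to re-render that glyph from the font data, which is exactly the trade-off the resource limit controls.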

The second Metrics view you can see in the video is the 8-bit scenario, where you’re pre-rendering the fonts as an 8-bit alpha map. In doing so, we’re using 5.63 megabytes of storage space but no RAM. And that's for three point sizes: 48, 32, and 16. With these three sizes, there are about 878 glyphs in the Roboto font, at approximately 5 megabytes of storage use.

Long story short: Each glyph in the font is stored at the three different point sizes and the bigger the point size, the bigger the storage cost.

In the last scenario, I took the Roboto font and stripped out everything except the ASCII characters. That took it down to about 93 glyphs, and you can see that the storage size for the same point sizes drops significantly - to 537 kilobytes.

Comparing storage “costs”

Comparing the Bitmap and TTF costs, we’re not far off the original TTF cost with what I just did above. And I've got a fully deterministic system where I know exactly what characters are going to be rendered and what that cost is - and I'm not consuming any RAM. This is at the 8-bit quality level, the same quality level we would have been using with TTF rendering.

So, if you’re thinking about your own embedded development project, and the quality of the 4-bit scenario is reasonable to you - the point sizes and text type don't degrade the visual presentation - you could chop that number in half again, putting your embedded user interface well below the original footprint size.

What about dynamic text? 

If your embedded GUI project uses dynamic text - internationalized languages, a variety of font types, or general dynamic behavior throughout your user interface - you can't pre-render it. In that case, your best approach is a font engine. With a font engine, you will want to use the cache attributes to limit the amount of RAM so that you can still fit into small embedded devices.

Using rendering technologies to your advantage

In terms of rendering technologies, the reporting in Storyboard provides quick feedback on image costs and choices. The same goes for font costs and choices - you can see exactly how to balance resource usage: how much RAM you're using versus how much Flash you're using.

Live Q&A on Embedded Fonts

Are there certain rendering technologies that can take advantage of the techniques discussed, incorporating them directly into the engine?

Answer: I'm thinking in terms of different types of rendering technologies - OpenGL versus software rendering, perhaps. In the OpenGL case, we actually cache the glyph in a texture, so we're able to look it up there as opposed to having to rely on the FreeType glyph cache as in the software case. That reduces the amount of memory in the software case. In OpenGL, because we have the texture already, we just reuse it so that we don't have to continually create and destroy it. So OpenGL is going to be a little more favorable towards the font engine scenario, because we'll have the glyph in a texture, and that's the data format OpenGL expects its images to be in. In the software case, it's either-or. You get the benefits we talked about using the FreeType font engine: if you've got a dynamic font scenario, that's the best way to go. And if you know your font of choice and want a more static font scenario, that works just as well with software rendering.

What about 16-bit versus 32-bit displays? I know when it comes to overall memory consumption, there are different image formats for the final display. Does that play any role here?

Answer: It doesn’t play much of a role here, because the image itself is actually just an alpha map. So whether it's 8-bit, 4-bit, or 2-bit, that conversion is done on the fly anyway as the engine draws the glyph. That's also because we need to colorize. If I want red, green, and blue font text, I'm not going to keep three different copies of those glyphs; I only keep one copy. From there, we calculate the color value as we go through the individual pixels. So 16-bit, 24-bit, or 32-bit won’t matter here.

Does Storyboard support custom fonts or only certain font types?

Answer: When Storyboard is pre-generating fonts, Storyboard will open the font file, read the glyph data out of it, render it into an image, and then push that out as a Bitmap font. So it's not really custom at that point. The input doesn't have to be custom; the output is just a Bitmap image of the alpha map data for that font, so the output is specific to our tooling and our engine. The input can be literally anything that Storyboard supports: TrueType, OpenType, etc.

What about the font tools - the same ones I'd use for eliminating glyphs? Deleting is easy; in Photoshop I can delete really well, but that doesn't mean I can create an image. Is it the same sort of thing here? If I wanted to create my own custom font, can these tools let me do it by drawing the font out or importing other assets?

Answer: Yes. You can pull things in by putting them into the font file. The typical scenario is emojis. For example, you can make an emoji font file and then push that out as either a true type font (TTF) that the font engine will understand or as pre-rendered Bitmap. The same goes for icons and other vector content too.

If you needed to have something with dynamic coloring, like an icon that can be colored green or red because there's an alert or an “okay” situation, you could totally go that route and make that a font file and then color it in the way you’d like.

Wrap Up

Ultimately, embedded fonts are flexible, and similar to images in that way. Fonts sit at a unique intersection: they're used not only for text, but also as image design elements inside embedded GUIs.

For more Embedded GUI Expert Talks like this one, check out our YouTube playlist for on-demand videos and upcoming live dates.



Originally published: Jul 15, 2020 11:07:45 AM, updated 07/27/20

Thomas Fletcher
Written by Thomas Fletcher

Thomas is a co-founder and VP of Research and Development. With 20+ years in embedded software development and a frequent presenter at leading industry events, Thomas is a technical thought leader on Embedded System Architecture and Design, Real-time Performance Analysis, Power Management, and High Availability. Thomas holds a Master of Computer Engineering from Carleton University and a Bachelor of Electrical Engineering from the University of Victoria.

