
The lowdown on hybrid rendering and why you should include it in your next wearable or IoT product development project

By Brian Edmond on Aug 7, 2019 12:07:14 PM

If you haven’t heard the term hybrid rendering before, that’s because it’s pretty new. Broadly speaking, hybrid rendering generates graphics by using multiple distinct paths through silicon. One example is creating CGI movie frames using both a GPU and a CPU. However, we at Crank use the term for something that is much more useful for embedded developers and designers: rendering a user interface with multiple graphics accelerators.

If this feat seems intriguing, or even confusing, read on.

In the embedded domain, hybrid rendering means creating a user interface with both a 3D GPU and a secondary graphics processor within the same application. The secondary graphics processor must be able to accelerate 2D graphics (this is typically a composition core but other options are possible). The application dynamically switches between accelerators at run-time, depending on what graphics need to be displayed.

An example of silicon with the right magic is the NXP i.MX 7ULP, which has both an OpenGL ES GPU and a 2D GPU accelerator/compositor. The 2D GPU is very powerful and can accelerate alpha blending, scaling, rotation/mirroring, overlays, bit blits, lines, rectangles, color space conversion, and more, with much less power consumption than its 3D equivalent.
