With devices like the iPhone and iPod Touch out there, the bar has been raised for embedded graphics. Everyone expects smooth animations and screen transitions. The problem is that most embedded devices lack the horsepower and graphics capabilities of the iPhone. Many have no graphics acceleration at all, so programmers burn precious CPU on a transition only to discover the frame rate is still too low.
On many devices the solution to this problem is to use hardware layers. Many display controllers, such as those in the Freescale i.MX line, provide multiple hardware layers that can be turned on and off and moved around the screen, much like windows in a traditional GUI. Think of each layer as a distinct frame buffer with a position and size. The display controller composites these layers together on the screen, possibly with alpha blending (transparency), and displays the final result to the user. Since the work is done by the display controller, very little CPU is consumed.
For example, if you want one screen of your interface to fade into another, you can render a screen to each layer and then, on a timer, gradually adjust the alpha (blending) value of the top layer to blend the two layers. This uses almost no CPU and delivers 30 fps (or whatever rate you are after).
Another great thing about layers is that many support the concept of a source and destination viewport. This means a layer can be larger than the actual display (LCD) and you can pan around within it. If you want one screen to ease (slide) into another, create a layer twice the display size and render the two screens side by side. Then, over time, move the source viewport so the user sees a different region. This too uses very little CPU and gives a great frame rate, and the same trick works for panning large maps and photos.
So if your hardware has two layers, get using them. I have used them on many projects to achieve great effects at minimal CPU cost.