Voice recognition technology continues to grow in both capability and popularity. The original technology was rather primitive – you may remember struggling with it while attempting to navigate a business's automated attendant to perform even the simplest tasks. Thanks to enhancements in speech recognition technologies and cloud-driven digital assistants from big players like Amazon, Apple, Google, and Microsoft, the state of the art has come a long way. The current word error rate of a voice recognition engine is around five percent, which at this point is very nearly what humans achieve (four percent). We have made huge strides: by comparison, the lowest error rate 20 years ago for unconstrained speech was 43 percent. Highly stilted grammars are also a thing of the past; instead, we communicate with our devices using natural, flowing speech.
Understanding and accounting for the different memory requirements of your embedded graphics application is critical. Your choices at the system level (heap, stack, static code and data) and at the hardware level can not only impact graphics performance, but also compromise the user experience, and thus the market success, of your embedded UI.
In this first of two videos on memory optimization, streamed live on the Crank YouTube channel on 6 May 2020, Thomas Fletcher, Crank Software's Co-Founder and VP of R&D, talked about the different memory optimization options available to those building on MCUs and MPUs, and how to organize memory use for the highest performance. You can watch the replay by clicking the video, or read the transcript pasted below.