Why Touchless Gestures Should Matter to Embedded GUI Development Teams

Are your development and design teams ready to meet the shift in consumer demand toward touchless GUIs? While there will always be a market for touch-based controls, there is a growing desire, and need, to forgo physical contact with devices in favor of touchless interactions, such as gestures performed in front of (or above) the screen.

“Over 70% of people think they will be likely to interact using touchless gesture control in the future.” - Ultraleap survey of UK and US consumers

For any embedded GUI team, the time is now to investigate whether touchless gestures belong on the roadmap. Here, we’ll explain why supporting touchless interactions makes business sense and highlight areas for designers and developers to consider in their projects.

The business case for touchless gestures

Touch-free interactions have been around for several years in embedded systems — the 2016 BMW 7 Series was the first production car with gesture recognition — as manufacturers found safety benefits in reducing the task load of fiddling with knobs and switches. Voice-based technologies have grown considerably, mainly due to improvements in their underlying AI and machine learning algorithms, but voice isn’t suitable for every environment.

In 2020, the rapid progress of gesture recognition technology and the falling cost of sensors have enabled new solutions, while the global pandemic has driven consumer behavior toward contactless systems. With distance and hygiene top of mind, new interaction habits are forming and new competitive differentiators are emerging as the number of touchless use cases grows:

  • High-traffic areas that aren’t easily cleanable or controllable
  • High-risk environments such as medical, industrial, and food services
  • Systems that must support touchless accessibility requirements for people living with disabilities
  • Solutions designed for speed and convenience, such as contactless payments
  • As mentioned above, vehicles and similar environments where minimizing task load is critical

“By 2023, 50% of all major business applications will include at least one type of no-touch experience, such as voice, augmented reality, or virtual reality.” - Gartner

According to Research and Markets, the global market for gesture recognition and touchless sensing reached $10.9 billion in 2018 and is expected to reach $65.9 billion by 2027. This growth presents a significant opportunity for embedded device teams to evaluate their strategies and rethink their roadmaps over the next three to five years.

Does it make sense to include touchless gestures in your embedded system’s GUI? Consider the $1.8 billion decline in global revenue for the biometric fingerprint market. As Dimitrios Pavlakis, Digital Security Industry Analyst, states:

“Hygiene concerns due to contact-based fingerprint technologies pummeled biometrics revenues forcing a sudden drop in fingerprint shipments worldwide.”

We explained the different types of touchless technologies in a previous blog; now we’ll dig deeper into what designers and developers should think about.

Design considerations for touchless gestures

There are many things to think about when creating or shifting to a touchless interface. UX and UI designers should understand how controls and layouts must adapt to support in-air gestures and how feedback is communicated for voice commands.

Here are some examples of design decisions to make:

  • The WIMP model doesn’t necessarily apply - the “windows, icons, menus, pointer” interaction style commonly used by embedded GUI designers may not work for your touchless system, as hand and finger movements are inherently noisier than a mouse or touchscreen.
  • Control size and shape - given that in-air gestures offer less resolution than touching a control on screen, you may have to make interaction zones larger to support reliable input.
  • Reducing physical load - as touchless gestures involve actual movement of a user’s hands and arms (often in the air above a device), interactions that are overly complex or prolonged may tire people out.
  • Sensor performance and user variability - unlike a mouse or touchscreen, where inputs are rarely missed, in-air gestures may not be fully recognized by the hardware connected to your application, whether due to the performance envelope of the sensor itself or differences in how users perform gestures (a filtering sketch follows this list).
  • Accessibility - some users may not be able to perform certain gestures, so supporting additional interaction types may be necessary.
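
To make the noisy-input point concrete, here’s a minimal sketch (in C) of debouncing in-air gestures before they reach the GUI. The gesture_event_t type, its confidence field, and both thresholds are hypothetical stand-ins rather than any real sensor SDK; the idea is simply to forward a gesture only after the sensor reports it with high confidence for several consecutive frames.

```c
/* Debouncing noisy in-air gesture input before it reaches the GUI.
 * gesture_event_t, the confidence field, and the thresholds are all
 * hypothetical; real sensor SDKs report gestures differently. */
#include <stdio.h>

typedef enum { GESTURE_NONE, GESTURE_SWIPE_LEFT, GESTURE_SWIPE_RIGHT } gesture_id_t;

typedef struct {
    gesture_id_t id;
    float confidence;       /* 0.0 .. 1.0, as reported by the sensor */
} gesture_event_t;

#define CONFIDENCE_MIN 0.80f    /* drop low-confidence frames */
#define STABLE_FRAMES  3        /* frames a gesture must persist */

/* Returns the gesture to forward to the GUI, or GESTURE_NONE.
 * Fires exactly once per stable run of identical gestures. */
static gesture_id_t filter_gesture(const gesture_event_t *ev)
{
    static gesture_id_t candidate = GESTURE_NONE;
    static int stable_count = 0;

    if (ev->id == GESTURE_NONE || ev->confidence < CONFIDENCE_MIN) {
        candidate = GESTURE_NONE;   /* noisy or empty frame: reset */
        stable_count = 0;
    } else if (ev->id != candidate) {
        candidate = ev->id;         /* new candidate gesture */
        stable_count = 1;
    } else if (++stable_count == STABLE_FRAMES) {
        return candidate;           /* stable long enough: forward it */
    }
    return GESTURE_NONE;
}

int main(void)
{
    /* A right swipe with one noisy frame in the middle. */
    gesture_event_t frames[] = {
        { GESTURE_SWIPE_RIGHT, 0.91f },
        { GESTURE_SWIPE_RIGHT, 0.55f },  /* dropped: low confidence */
        { GESTURE_SWIPE_RIGHT, 0.88f },
        { GESTURE_SWIPE_RIGHT, 0.90f },
        { GESTURE_SWIPE_RIGHT, 0.93f },
    };
    for (unsigned i = 0; i < sizeof frames / sizeof frames[0]; i++)
        if (filter_gesture(&frames[i]) != GESTURE_NONE)
            printf("gesture accepted at frame %u\n", i);
    return 0;
}
```

Firing once per stable run also keeps a single held gesture from triggering the same action repeatedly.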

We also described the importance of creating a gesture-based dialog between human and machine in this blog:

“Using a small number of intuitive gestures ensures that devices – especially public ones – can be operated by casual users who haven’t been trained on the system’s recognizable actions.”

Developing touchless GUIs in embedded systems

Whether you’re building an application to support touchless gestures from scratch or updating existing software, there are implications across the development lifecycle: new or changed requirements for functionality and performance, plus new test cases to develop to ensure the application works correctly.

In addition to the embedded GUI best practices we described in this blog series, here are considerations specific to supporting touchless gestures:

  • Your application may need to include additional drivers to support touchless sensing hardware.
  • Event handlers may need to be updated to support new messages, formats, and data.
  • Will the latency between the touchless sensing hardware and the GUI be acceptable to users? (A simple latency check is sketched after this list.)
  • Depending on the sensing hardware, you may need to factor in the user’s hands or fingers occluding each other in different gestures.
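
As a rough illustration of the last two points, here’s a hedged sketch of bridging a sensor driver into a GUI event loop while checking end-to-end latency. sensor_frame_t, gui_queue_event(), and the 100 ms staleness threshold are hypothetical stand-ins for illustration, not Storyboard’s or any sensor vendor’s actual API.

```c
/* Bridging a touchless sensor driver into a GUI event loop with a
 * staleness check. sensor_frame_t, gui_queue_event(), and the 100 ms
 * threshold are hypothetical stand-ins, not a real product API. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

typedef struct {
    int      gesture_id;    /* e.g. swipe, pinch, hover */
    uint64_t sensed_at_us;  /* timestamp taken in the sensor driver */
} sensor_frame_t;

static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}

/* Hypothetical GUI hook: post a named event with an integer payload. */
static void gui_queue_event(const char *name, int payload)
{
    printf("GUI event: %s (gesture %d)\n", name, payload);
}

static void dispatch_frame(const sensor_frame_t *frame)
{
    uint64_t latency_us = now_us() - frame->sensed_at_us;

    /* Drop frames older than ~100 ms rather than animate a gesture
     * the user has already finished; tune this per device. */
    if (latency_us > 100000u) {
        fprintf(stderr, "dropped stale gesture (%llu us old)\n",
                (unsigned long long)latency_us);
        return;
    }
    gui_queue_event("touchless.gesture", frame->gesture_id);
}

int main(void)
{
    sensor_frame_t frame = { .gesture_id = 1, .sensed_at_us = now_us() };
    dispatch_frame(&frame);
    return 0;
}
```

Dropping stale frames is one pragmatic answer to the latency question above: a gesture animated long after the user’s hand has moved on feels worse than no response at all.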


How Storyboard makes touchless GUI development easier

The look and usability of embedded GUIs are mainly in the hands of designers and UX experts, and this is where Crank Storyboard helps align their efforts with those of developers.

By streamlining the iteration process between designers, developers, and the tools they use, you can update the GUI’s appearance and behavior quickly. Storyboard lets you test the application directly on the hardware, so you can validate gesture performance immediately, and iterate fast between desktop and target — essential for driving product releases that meet the growing demand for touchless interactions.

Storyboard's iterative approach helps teams implement touchless gestures

Market forces will inevitably find the right balance between touch and touchless interaction types, but one fact is certain: even if human contact is becoming more expensive, developing our touchless GUIs doesn’t have to be.

To dive deeper into the touchless market and available technologies, check out this webinar panel with NXP, Ultraleap, and Crank:
