It’s clear that COVID-19 has people rethinking long-established cultural habits, such as shaking hands and standing close to others. It is also reshaping our technology habits, as people become less comfortable touching the public surfaces of self-serve supermarket displays, information kiosks, and ATMs. Ultraleap, a leading hand-tracking and haptics company, recently published a whitepaper, ‘The End of Public Touchscreens’, with research confirming this: people are rapidly changing their attitudes toward public touchscreens because of the virus.
For those of us building embedded systems, this raises an important question: how do you best design a touchless public interface for a hygiene-sensitive society? What is the future of touchscreens in a world where a touch can mean anything from connection to contagion? While devices reserved for private use – those in the home or on one’s person – can be used without worrying about contamination, from here on out the embedded graphical user interface of any device used by the public may need to be reconsidered.
Creating a dialog with gesture-based GUIs
One way to create a touchless GUI is to use gesture recognition technology. A gesture-based user interface (UI) creates a dialog between human and machine: the machine serves up screens full of information, and the human provides input through natural hand motions in front of (or above) the screen.
While gesture technology has been widely available since the Microsoft Kinect was released in 2010, the widest public acceptance beyond gaming has been in automotive applications from BMW and others. The benefits of a gesture-controlled HMI (human-machine interface) are many: there are no hygiene issues from touching surfaces, there is no mechanical wear of buttons or switches, and it can be operated in areas with loud background noise.
If hand-tracking systems for HMIs have one big drawback, it is on the human side. The acceptable “language” of recognizable gestures must be sufficiently simple and easily understood by users. Using a small number of intuitive gestures ensures that devices – especially public ones – can be operated by casual users who haven’t been trained on the system’s recognizable actions. A bonus of constraining the recognizable gestures is that it can improve the recognition accuracy of the vision system.
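To make that concrete, here’s a minimal sketch in Python of a constrained gesture vocabulary. The gesture labels, confidence threshold, and screen-action names are all hypothetical, and the recognizer itself (camera plus vision model) is assumed to exist elsewhere:

```python
from typing import Optional

# A deliberately small gesture vocabulary: fewer classes are easier for
# casual users to guess and easier for the vision system to tell apart.
GESTURE_ACTIONS = {
    "swipe_left":  "previous_screen",
    "swipe_right": "next_screen",
    "open_palm":   "go_home",
    "thumbs_up":   "confirm",
}

CONFIDENCE_THRESHOLD = 0.85  # below this, do nothing rather than misfire

def handle_gesture(label: str, confidence: float) -> Optional[str]:
    """Map a recognized gesture to a UI action, or None to ignore it."""
    if confidence < CONFIDENCE_THRESHOLD:
        return None  # in public settings, no action beats the wrong action
    return GESTURE_ACTIONS.get(label)  # unknown labels also map to None

# Example: the (assumed) recognizer reports a right swipe at 92% confidence.
assert handle_gesture("swipe_right", 0.92) == "next_screen"
```

Note that ignoring low-confidence input is itself a design choice: on a public kiosk, a missed gesture the user can simply repeat is far less frustrating than a wrong action they have to undo.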
The full capability of gesture technology is still being explored, but key design considerations include:
- Understand your audience: age, technology familiarity, and gender.
- Use gestures that are easily understood and as culturally neutral as possible.
- Select gestures that can be robustly recognized.
- Provide on-system training for unfamiliar users.
- Provide accessibility alternatives for users who are unwilling or unable to gesticulate in public.
- Address the privacy concerns generated by always-on video and machine-learning data collection.
Voice recognition: the heavy hitters
Voice recognition interfaces are a big component of touchless HMI technologies because they’ve finally achieved recognition rates acceptable to the average user, along with the ability to understand natural language. Contrary to what one might expect, they don’t need a constant internet connection either. Our partner Snips (now Sonos) has successfully moved voice recognition technology with cloud-level performance into unconnected embedded systems, and Panasonic has demonstrated Amazon Alexa in a hybrid cloud/embedded configuration.
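As one illustration of fully on-device recognition, here’s a minimal sketch assuming the open-source Vosk library, one of its downloadable acoustic models, and a 16 kHz microphone stream via PyAudio – the kiosk vocabulary in the grammar list is our own invention:

```python
import json
import pyaudio
from vosk import Model, KaldiRecognizer

# Constraining the recognizer to a small kiosk vocabulary boosts accuracy,
# and everything below runs on-device with no network connection.
model = Model("vosk-model-small-en-us")  # path to a locally stored model
grammar = json.dumps(["order", "pay", "cancel", "help", "yes", "no"])
recognizer = KaldiRecognizer(model, 16000, grammar)

audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1, rate=16000,
                    input=True, frames_per_buffer=4000)

while True:
    data = stream.read(4000, exception_on_overflow=False)
    if recognizer.AcceptWaveform(data):
        text = json.loads(recognizer.Result()).get("text", "")
        if text:
            print("heard:", text)  # hand off to the HMI's command handler
```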
The drawback of voice systems today is that – somewhat ironically – they must be spoken to. There are many situations where voice-controlled systems do not make sense: where speaking aloud is socially discouraged (in quiet areas like libraries or museums), where privacy is needed (when performing financial transactions or providing personal information), or where the surrounding environment is loud (factory floors, airports, or subways). That’s why we recommend a multi-modal HMI strategy that allows voice alongside other input mechanisms.
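One way to structure a multi-modal HMI is to normalize every input channel into a single event stream, so the screens never care how a command arrived. A minimal sketch of that idea – all names below are hypothetical:

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class InputEvent:
    """One normalized user command, whatever modality produced it."""
    action: str            # e.g. "confirm", "next_screen", "identify_user"
    modality: str          # "voice", "gesture", "nfc", ...
    payload: dict = field(default_factory=dict)  # extras like a card UID

events: Queue = Queue()

# Each recognizer runs independently and simply enqueues normalized events.
def on_voice_command(action: str) -> None:
    events.put(InputEvent(action, "voice"))

def on_gesture(action: str) -> None:
    events.put(InputEvent(action, "gesture"))

# The UI consumes a single stream and stays modality-agnostic.
def ui_loop() -> None:
    while True:
        event = events.get()
        print(f"dispatching {event.action} (via {event.modality})")
```

The payoff of this shape is that adding or removing a modality later – say, disabling voice on units installed in libraries – never touches the screen logic.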
Expecting everyone to use smartphone apps isn't the answer
If someone uses their personal phone to control a public system, they can avoid touching any shared surfaces. This is only part of the solution, however, as a mobile phone solution almost always requires installing a new app and setting up an account. For repeated experiences – fast-food purchases, parking meter payments, public transportation rides, and the like – installing a companion mobile app may be an acceptable way to avoid unwanted germ exposure for a percentage of users. However, that friction in the user experience guarantees that not every potential user will opt for a phone-based option – Ultraleap’s research indicates that more people would still prefer a human clerk at the counter than a mobile app.
Proximity sensors – close enough to touching
A close cousin to a gesture-controlled HMI for a touchless user experience is one based on proximity sensors. Most people already carry a proximity sensor – even if they’re unaware of it – since most current smartphones use one to detect whether the phone is in a pocket or held against the face. However, simple proximity detection on a fixed device doesn’t tell you much beyond “the user is there”.
For a much more capable solution, high-resolution proximity sensing uses an array of ultrasonic, infrared, or capacitive sensors around a display to let a user indicate a specific screen location. This allows companies to replace a touchscreen with a touchless UI, and may let them retrofit existing touchscreen designs with only minimal software changes.
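As a rough sketch of how such an array could stand in for a touchscreen, here’s a weighted-centroid estimate of finger position from a row of infrared sensors along the bottom edge of a display. The sensor layout and readings are invented for illustration, and a real design would add filtering and calibration:

```python
from typing import List, Optional

# Hypothetical setup: 8 infrared reflectance sensors spaced evenly along the
# bottom bezel; a stronger reading means the finger hovers nearer that sensor.
SENSOR_COUNT = 8
SCREEN_WIDTH_PX = 1280

def estimate_x(readings: List[float], noise_floor: float = 0.05) -> Optional[float]:
    """Estimate the horizontal hover position in pixels, or None if no finger."""
    weights = [max(r - noise_floor, 0.0) for r in readings]
    total = sum(weights)
    if total == 0:
        return None  # nothing hovering near the screen
    # Weighted centroid of sensor indices, scaled to screen coordinates.
    centroid = sum(i * w for i, w in enumerate(weights)) / total
    return centroid / (SENSOR_COUNT - 1) * SCREEN_WIDTH_PX

# Example: reflectance peaks over sensors 2-3, so the finger is left of center.
print(estimate_x([0.0, 0.1, 0.8, 0.7, 0.1, 0.0, 0.0, 0.0]))  # ~451 px
```

Because the output is an ordinary screen coordinate, it can feed the same event path that touch input used – which is what makes the “minimal software changes” claim plausible.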
A projection touchscreen from Bosch (which we covered at a past Smart Kitchen Summit), while not proximity-based, could let users interact with a system using the surface of their choice. Such a system would work best in more controlled environments like workplaces, but it provides an additional means of avoiding touching public surfaces.
Expect NFC/RFID to have an increasing role
While NFC (near-field communication) and RFID (radio-frequency identification) aren’t technologies that aim to replace a touchscreen GUI, they’re touchless technologies that will play an increasing role in future HMI design. COVID has proven the worth of NFC for touchless interactions: tap-and-pay systems have become widely embraced during the pandemic, even in the US, where they had previously met with resistance.
What does this mean for the HMI design of machines in publicly used spaces like fast-food kiosks, emergency room check-in screens, pay-as-you-go gym machines, or automated laundromats? The convenience of hovering your card over a reader – whether a credit, ID, health, or membership card – may become a much more accepted and common means of identifying yourself, your needs, and your profile.
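To illustrate the card-hover pattern, here’s a minimal sketch assuming the open-source nfcpy library and a compatible USB reader – the profile table keyed by card UID is hypothetical:

```python
import nfc

# Hypothetical profile store keyed by the card's UID as a hex string.
PROFILES = {"04a224b1c35e80": {"name": "Gym Member", "plan": "monthly"}}

def on_connect(tag) -> bool:
    uid = tag.identifier.hex()
    profile = PROFILES.get(uid)
    if profile:
        print(f"welcome back, {profile['name']}")
    else:
        print("unknown card:", uid)
    return True  # keep the tag "connected" until it is removed

with nfc.ContactlessFrontend("usb") as clf:
    while True:
        # Blocks until a card is hovered over the reader, then calls on_connect.
        clf.connect(rdwr={"on-connect": on_connect})
```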
Eye tracking: not common but effective
Another possibility for replacing the ubiquitous touchscreen is eye tracking. While not as common as some of the other solutions mentioned here, it offers many of the same advantages as gesture recognition – language independence, imperviousness to noise, and consistent availability. Because of its limited exposure to the public, user familiarity is a real issue, and systems must be designed to train the uninitiated user.
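One common way to flatten that learning curve is dwell selection: the user simply holds their gaze on a button until it “clicks”. A minimal sketch of the dwell logic – the eye tracker feeding in gaze targets and the timing value are assumptions:

```python
import time
from typing import Optional

DWELL_SECONDS = 1.0  # long enough to avoid accidental "Midas touch" clicks

class DwellSelector:
    """Fires a selection when gaze rests on one target long enough."""

    def __init__(self) -> None:
        self._target: Optional[str] = None
        self._since: float = 0.0

    def update(self, target: Optional[str]) -> Optional[str]:
        """Feed the UI element under the gaze point; returns it on a 'click'."""
        now = time.monotonic()
        if target != self._target:
            self._target = target      # gaze moved: restart the dwell timer
            self._since = now
            return None
        if target is not None and now - self._since >= DWELL_SECONDS:
            self._since = now          # reset so one dwell fires one click
            return target
        return None

# Example: the (assumed) tracker reports the element under the user's gaze
# every frame; after one second on "checkout", update() returns "checkout".
selector = DwellSelector()
```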
So – what’s the future for public touchscreens?
Can we keep our tried-and-true touchscreen and fix its hygiene image in other ways? Some people seem to think so. For example, by integrating a sanitizer dispenser into its kiosks, Corum Digital aims to make people using its products feel safer. This relies on people trusting that the system will be clean – something that will probably only be borne out by in-field deployments.
What about using your foot? Kiosk Innovations has a foot controller that is guaranteed to be awkward for most users (unless you’re a guitar player or adept at Dance Dance Revolution). But one thing we’re sure of is that, under the right circumstances, the public can adapt to new technologies much faster than previously expected.