By Jesús Gorriti
Today's devices and interfaces are collecting human data in innovative ways.
We've already moved away from point-and-click interactions to touchscreens; now we’re headed towards using our bodies as both the controller and the interface. Why type a password or stand in a checkout queue when you can use your fingerprint, retina, or a connected device instead? The interface as we know it will recede into the background as we use parts of our unique genetic makeup to get things done more efficiently and intuitively.
This will make the design of services essential as device makers address a deceptively simple question: what is the most efficient way of communicating information to and from a human? If you view bandwidth as a pathway into and out of the human body, the fastest input format is usually the eyes, while the fastest output is the voice – hence the paradigm adopted by Google Glass.
But “eyes in, voice out” is not the only game in town. The best use of “human bandwidth” for input or output varies with context. To convey a simple message to a runner, for example, the vibration from an Adidas miCoach Smart Run is much better than a glance; to absorb a wider range of data, you’ll need to look at it. And of course, the device uses the skin as an input channel via its sensors.
Natural User Interfaces Proliferate
We’re starting to use our bodies more and devices less to facilitate new interactions. The screen is losing prominence in our daily lives as Natural User Interfaces (NUI) – aka our skin, eyes, and brains – take over.
This growing trend is evident in gesture-based technology. Primesense – the company behind the Microsoft Kinect sensor, whose technology has powered over 20 million devices – was acquired by Apple in November 2013, a move speculated to pave the way for a gesture-based Apple TV.
Beyond gestural innovation, bio-based technology is moving center stage to streamline our daily transactions, as seen with the iPhone 5S Touch ID fingerprint sensor.
While we're (thankfully) not yet living in a Gattaca or Minority Report world, we are doing things only previously possible in science fiction: PayPal is using facial recognition linked to credit cards to allow wallet-less transactions; MC10's BioStamp is a flexible microprocessor that can verify a person's identity; and the Reebok CheckLight is a mesh skullcap that can determine if an athlete has suffered a concussion. OMsignal's T-shirt can extract a whole range of health data from the skin without any interface effort.
The usual gatekeeping methods for protecting personal information, proving our identities, and accessing services are being broken down. Some of these new technologies will even give us the illusion of having superpowers: NeuroSky's flagship product, the MindWave headset, offers “mind control” capability – you can log into your computer using just your thoughts.
New Interaction Paradigms Will Emerge
As more things become connected and sensors become smaller, more interaction and development standards for gestural and wearable technology will be created. Design will move beyond screens, focusing on letting us complete tasks with even less friction. Many of the actions we currently need to initiate will also fade away as technology predicts our habits, routines, and behaviors. That frees brands from the routine of catering to basic tasks, letting them focus instead on higher-value interactions with customers.
We are reaching a point at which human data is being measured in unexpected ways: dermatologists can use smartphone photos to make a diagnosis, and Qualcomm is exploring apps that can predict if you are developing Parkinson's disease from the sound of your voice.
And yet, for all this accelerated change, screens are more pervasive and expected than ever. The latest generation is growing up using touchscreen devices that are always on and enable multitasking. The screen will not disappear, but the things you do on a device will change while other activities shift into the background.
Beware of Fragmentation
Users will always want to interact with things in the simplest and most natural way possible. But businesses need to be cautious, choosing carefully which systems to invent themselves and which to build with other companies. Adoption will stall if users have to learn multiple new sets of gestures devised by different service suppliers.
Many of the technologies and products out there right now are closed ecosystems – for example, you can't open up your Fitbit to use the data anywhere else. Yet, open systems and the ability for people to build around emerging standards will be critical for the sustainable growth of these technologies.
Companies need to leverage data from multiple sources to better understand their users and their behavior. Organizations with customer interactions need to start planning around NUIs and asking themselves whether they can use biometric feedback to redesign products and services. All the while, considerations of human bandwidth must inform design decisions.
Lastly, we’ll need to investigate differences between how digital natives and older users react to interfaces; kids will be NUI and brain-computer interface experts. The digital native segment also happens to be driving billions of dollars in shopping decision-making. “Kids will be able to understand quantum physics and the movements of subatomic particles because now they can play with them and feel how they move,” according to David Holz and Michael Buckwald, the leaders behind Leap Motion.
[Image credits: Adidas, Google, MC10, and Neurosky]
Jesús Gorriti is VP of Design at Fjord, Accenture's design and innovation studio. Based in New York, he works for clients such as Citi, Harvard Medical School, ESPN, AIG, and Kohl's, managing a team of 30+ designers.