The Imminent (r)Evolution of the User Interface: our Brain as the Ultimate eXperience…

I lost my grandfather more than 20 years ago, but I still vividly remember how scared he was of technology. In recent years, a fierce focus on the ultimate “User eXperience” (or “UX”) has already made a big difference.

The evolution of the user interface!

You probably know the quote: “A user interface is like a joke. If you have to explain it, it’s no good.” We have come a long way, and even a toddler can manipulate a smartphone these days. If you believe that this is the end game, you’re wrong…

The Command Line Interface

In the last 30 years, the interface of technology has made tremendous progress. I can still remember the Command Line Interface (or “CLI”) on the Sinclair ZX Spectrum, the Commodore 64 and later the 286, 386 and 486 personal computers.

The CLI was the primary means of interaction with most computer systems from the mid-1960s onwards, and continued to be used throughout the 1970s and 1980s. The interface forced users to issue commands in the form of successive lines of text (or “command lines”) to launch programs.
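To give a feel for this paradigm, here is a minimal, hypothetical sketch in Python of a command-line loop: the user types successive lines of text, and each line is parsed and dispatched to launch a program. It is purely illustrative, not a reconstruction of any historical shell.

```python
import subprocess

def command_loop():
    """A toy CLI: read successive command lines and launch programs."""
    while True:
        line = input("> ").strip()          # one "command line" per interaction
        if not line:
            continue
        if line in ("exit", "quit"):
            break
        program, *arguments = line.split()  # the first token names the program
        try:
            subprocess.run([program, *arguments])
        except FileNotFoundError:
            print(f"{program}: command not found")

if __name__ == "__main__":
    command_loop()
```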

The Graphical User Interface

The Graphical User Interface (or “GUI”) introduced visual indicators and graphical icons to interact with computers. The GUI addressed the steep learning curve of the CLI, which required commands to be memorized and typed on the computer keyboard.

I was first introduced to the GUI through Microsoft Windows 3.x back in the early 1990s. As a user, it was now possible to manipulate graphical elements to launch and operate programs.
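To contrast with the CLI sketch above, here is a minimal GUI sketch using Python’s standard tkinter toolkit: a clickable button replaces the memorized, typed command. The launched program (“notepad.exe”) is just an illustrative example.

```python
import subprocess
import tkinter as tk

root = tk.Tk()
root.title("Launcher")

# The button is a graphical element the user manipulates directly;
# its click event replaces a typed command line.
tk.Button(
    root,
    text="Launch Notepad",
    command=lambda: subprocess.Popen(["notepad.exe"]),  # illustrative target program
).pack(padx=40, pady=20)

root.mainloop()
```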

Note: We should skip the debate about who invented the GUI, and simply state that neither Apple nor Microsoft did. The first Graphical User Interface was developed by researchers at Xerox’s Palo Alto Research Center (PARC) in 1973.

The Natural User Interface

The Natural User Interface (or “NUI”) is the latest evolution of the interface. The ability “to touch” is only the first of four natural interfaces. The evolution will be as follows: “to touch”, “to speak”, “to see” and finally “to think”.

To Touch…


As users, we now find it very natural to manipulate our electronic devices using single- or multi-touch gestures. The “touch” interface has been around for quite some time, but it only went mainstream when we started to use the touchscreens on our smartphones.

You are probably reading this blog on a touch device. If you are not, there is still a high likelihood that you have at least one touch device within arm’s length. As you are using this interface on a daily basis, I will not digress.

To Speak…

The ability “to speak” offers a first glimpse of the upcoming innovation cycle: the transition from a Mobile-First to an AI-First world. In recent years, smart speakers – like the Amazon Echo with Alexa, the Google Home, and the Apple HomePod with Siri – have been among the fastest-selling consumer devices.

In order to demonstrate the power of speech, let me propose a very simple experiment:

Part 1. First, time how long it takes you to look up and start the song “Learning to Fly” (by “Pink Floyd”) on your smartphone. You can use your preferred streaming app, like Spotify, Apple Music, Amazon Music, or any other service.


Part 2. In the second step, time how long it takes you to start the same song on a smart speaker: “Alexa, play me Learning to Fly by Pink Floyd”.

The first part of the experiment will take you over 20 seconds and at least 10 taps (depending on how quickly the autocomplete feature proposes the song). The second part will take a mere 5 effortless seconds to start the same song.

For issuing directive instructions, the “speech” interface has a huge advantage over the “touch” interface. I personally believe that in five years, any technology will be useless if you can’t have a conversation with it.
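To make the idea of a “directive instruction” concrete, here is a toy Python sketch that maps one spoken sentence straight to a structured intent, with no menus or typing. Real smart speakers use far more sophisticated natural-language models; the regular expression here is purely illustrative.

```python
import re

def parse_play_intent(utterance: str):
    """Extract a structured intent from a 'play <song> by <artist>' command."""
    match = re.match(r"play (?:me )?(?P<song>.+) by (?P<artist>.+)", utterance, re.I)
    if match:
        return {"intent": "play", "song": match["song"], "artist": match["artist"]}
    return None

print(parse_play_intent("play me Learning to Fly by Pink Floyd"))
# {'intent': 'play', 'song': 'Learning to Fly', 'artist': 'Pink Floyd'}
```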

Note: The “speech” interface is not perfect for every situation. For an instruction where you would like to browse through the available “Pink Floyd” songs, the “speech” interface is not well suited at all. This explains why the second generation of smart speakers is being complemented with a screen and a touch interface.

To See…

The next frontier will tap into our natural ability “to see”, using eye tracking. This technology will enable eye-gesture cues to navigate menus and make selections.

This will enable a dialog between our mind and the outside world, using our eyes as the primary interface. This is an advanced interface that has the potential to augment our human intelligence.

In current implementations, eye tracking enables users wearing head-mounted Virtual or Augmented Reality glasses to use their eyes as a mouse, making selections with their eye movements alone.
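A common way to turn eye movements into a “click” is dwell-based selection: if the gaze rests on one spot long enough, it counts as a selection. The following Python sketch assumes a hypothetical get_gaze() function that returns the current (x, y) gaze point; real eye-tracking SDKs expose this differently.

```python
import time

DWELL_SECONDS = 0.8   # how long the gaze must rest to count as a "click"
TOLERANCE_PX = 30     # how far the gaze may wander while still dwelling

def wait_for_dwell_selection(get_gaze):
    """Return the point that was selected once the gaze dwells long enough."""
    anchor = get_gaze()
    started = time.monotonic()
    while True:
        x, y = get_gaze()
        if abs(x - anchor[0]) > TOLERANCE_PX or abs(y - anchor[1]) > TOLERANCE_PX:
            anchor, started = (x, y), time.monotonic()  # gaze moved: restart the dwell timer
        elif time.monotonic() - started >= DWELL_SECONDS:
            return anchor                               # dwell complete: treat as a click
```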

Note: The technology has other, more technical use cases like “foveated rendering”, which allows high-density displays to render at full resolution only where your gaze actually rests on the display, while selectively reducing the resolution elsewhere.
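The core idea of foveated rendering can be sketched in a few lines of Python: full resolution near the gaze point, reduced resolution farther away. The distance thresholds below are illustrative, not taken from any real engine.

```python
import math

def resolution_scale(tile_center, gaze_point):
    """Return the fraction of full resolution to use for one screen tile."""
    distance = math.dist(tile_center, gaze_point)  # pixels from the gaze point
    if distance < 200:
        return 1.0    # foveal region: full resolution
    if distance < 600:
        return 0.5    # near periphery: half resolution
    return 0.25       # far periphery: quarter resolution

# A tile far from where the user is looking renders at a quarter of full resolution.
print(resolution_scale(tile_center=(1800, 100), gaze_point=(960, 540)))  # 0.25
```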

In this advanced field of the Natural User Interface, the leading startup “Eyefluence” was acquired by Google in 2016. At Google, the team continues to advance its “eye-brain connection” to expand human potential and empathy on an even larger scale.

The following YouTube video (“The Next Frontier of AR/VR/MR and Unlocking Human Potential”) starts at the section where Jim Marggraff (the CEO of “Eyefluence”) demonstrates the technology.

Note: Eye-tracking technology was first introduced at the beginning of this century, and enabled patients with “Locked-In Syndrome” (or “LIS”) – a condition in which the patient is aware but cannot move or communicate verbally due to near-complete paralysis of the voluntary muscles – to communicate by blinking.

To Think…

The ultimate leap is a direct interconnection with our brain, and the capacity “to think” in order to control an interface. This idea reminds me of the “Learning Program” scenes in the science-fiction movie “The Matrix”.

The entrepreneur, engineer, investor and inventor Elon Musk turned this concept into science fact by founding the company “Neuralink”. This neurotechnology company is focused on the development of an implantable Brain–Computer Interface (or “BCI”), also referred to by the company as a “Neural Lace”.

The company aims to make devices to treat serious brain diseases in the short term, but the ultimate goal is to merge man and machine.

Musk argues that humans will be unable to keep pace with advances in artificial intelligence. The intention is to fuse human intelligence with artificial intelligence, and to bring humanity up to a higher level of cognitive reasoning.

It is not entirely clear how far along this technology is at this point in time, but eventually it should enable us to manipulate an interface using only our brain. It might even enable us to upload information directly into our brain, or download our thoughts into the cloud.

Conclusion

The user interface has moved from commands (CLI), to graphics (GUI), and on to our natural abilities (NUI). We have already mastered the natural interface “to touch”, and are starting to master “to speak”. The next frontier of the Natural User Interface will be “to see”, and ultimately – as man merges more and more with technology – “to think”.

If my grandfather were still with us, he might be a little more comfortable with the technology that we have at our disposal now. On the other hand, it would probably freak him out if I explained to him that the (r)evolution of the user interface is only going to get crazier, and that science fiction will become science fact.


