Live from IBM World of Watson: UX meets Cognitive Computing
As the director of User Experience at Rocket Software, I spend most of my time thinking about how people interact with technology. And as cognitive computing becomes more and more prevalent, I find myself trying to figure out how cognitive computing – essentially human thought performed by a computer – will interface with actual humans.
What does the convergence of UX/UI and cognitive computing look like? My belief is that we should live in a world where UX professionals spend their time actually training systems, because the interfaces between humans and machines are so seamless. All you need to do is look at IBM’s Watson – which can do everything from predicting the weather to winning on Jeopardy – to see what this “invisible connection” looks like.
The central question that UX experts try to answer is, “How do we simplify systems to provide real value for real people?” That also happens to be the question at the heart of cognitive computing, which is why there is such a natural convergence between the two.
So what should a good UI include? For starters, voice recognition platforms (like Alexa and Siri) need to be factored in, because the speech/machine interface is such a core element of cognitive computing. Eye tracking is also going to play an important role in the evolution of cognitive computing – if this seems far-fetched to you, remember what people thought when they first saw a mouse in the 1980s. Mix in natural language processing, and we’re looking at a future where there is no single UI, because everyone has a unique interface based on his or her voice, facial gestures, and even body movements.
Most software is overly complex, and a lot of people think of UX as the tool to simplify computing. That’s not quite right, though. As an example, most systems intentionally make it difficult to delete files so that users don’t lose valuable data by mistake. So it’s not really about simplicity: it’s about figuring out the best ways for technology to support how people live and work in the real world.
Of course, we’re not there yet. If you’ve ever tried to use Apple’s Siri or Amazon’s Echo, you know that sometimes the human/machine link works really well and sometimes it doesn’t work at all. And companies are still trying to figure out how to leverage the Internet of Things in a meaningful way. The reality is that convergence is still complicated to achieve. I want to see even more simplification, so that there are fewer and fewer barriers between technology and the people who use it.