I’ve been lucky enough to sit in on a number of interesting conversations and presentations this week at World of Watson dealing with the underpinnings of cognitive technology, most notably artificial intelligence (AI for short). Interestingly, something that came up in each of those conversations was the usual Hollywood portrayal of AI as the harbinger of a dystopian future, with Skynet and the Matrix being the most common references. We’ve been thinking about AI since the beginning of the information age, but only recently has it started to become a reality. So why now?
AI developed in the opposite order from human intelligence. Consider the way a baby learns: first comes recognition and perception, as the child takes in the world through the five senses; later comes language development, then reasoning. AI systems started with reasoning but were unable to manage recognition, mostly due to hardware limitations. Deep Blue could win at chess, but it could not identify a chess piece. A few things have now come together to start driving real AI advancements.
Hardware advances over the past six years have made it possible for software to begin to perceive and recognize things. New AI systems like Watson are able to combine perception and reasoning, pairing sensor data with AI to create cognitive systems that learn and adapt. These hardware advances have been coupled with a new generation of programmers who build AI and cognitive systems using approaches grounded in neuroscience research. Deep neural networks are loosely modeled on the way human neurons work, and work on visual perception systems leverages something similar to the mechanism the brain uses to convert signals from the eye into images the brain can perceive.
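To make the neuron analogy concrete, here is a toy sketch (illustrative only, not how Watson itself works): a single artificial "neuron" computes a weighted sum of its inputs and passes it through an activation function, loosely mirroring a biological neuron firing once stimulation crosses a threshold. A deep network is simply many layers of such units feeding into one another, with the weights learned from data.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial 'neuron': a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Example with made-up weights; in a real network these are learned.
activation = neuron([0.5, 0.8], weights=[0.9, -0.4], bias=0.1)
print(round(activation, 3))  # prints 0.557
```

Real systems stack millions of these units and tune the weights automatically, but the basic unit of computation is this simple.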
So getting back to that Skynet thing…scientists and researchers are thinking hard about the ethical implications of AI development, and decisions being made now will have far-reaching implications. Predictions are that general human-level AI is at least 40 years away, and on average it currently takes 100 hours to teach an AI system what a reasonably intelligent person can learn in one. However, super-intelligence in narrow fields is already here, and is already being leveraged in fields such as medicine, where the combination of human and artificial intelligence is producing results that far outstrip what either is capable of separately. In fact, many researchers are starting to think of AI as augmented intelligence, not artificial intelligence.
And here’s the thing: as the creators, we get to invent the way that AI develops. Why would we develop technologies that become belligerent? Why not work out something approaching Asimov’s Three Laws of Robotics? The future will be about systems that improve the human experience, not ones that replace or overthrow it.
So am I afraid of the AI boogeyman? No. On the contrary, I’m eagerly anticipating the changes that will come over the next 5-10 years as cognitive systems become more mainstream. AI has the potential to make our lives safer, healthier, and more productive. Does that sound so scary?
Matthew West, October 27, 2016