About a year back, Mark Cuban remarked that people without AI skills “are going to be a dinosaur within three years.” Mark is not alone in believing that an AI takeover is imminent. I have often heard similar opinions from people around me. Some are even frantically signing up for MOOCs on Machine Learning lest they become unemployable.
Is Mark’s statement the inevitable truth, or just meaningless hyperbole?
Looking at the state of AI today, it’s unquestionably the latter. A few reasons why:
a) Deep Learning is Actually Pretty Limited
The last few years have been an inflection point in AI’s capabilities. AlphaGo defeated the world’s sharpest Go players, speech recognition achieved near-human accuracy, and self-driving cars seem only inches away from Level 5 autonomy. But even with these remarkable achievements, we are far from mirroring human intelligence.
A computer that sees a picture of a pile of doughnuts piled up on a table and captions it, automatically, as “a pile of doughnuts piled on a table” seems to understand the world; but when that same program sees a picture of a girl brushing her teeth and says “The boy is holding a baseball bat,” you realize how thin that understanding really is, if ever it was there at all.
Feeding billions of images to a neural network can achieve magical results, but that magic is more proficient pattern recognition than actual intelligence. Training a computer to recognize a panda means feeding the network more than a thousand images of the animal, whereas a human rarely needs more than a few.
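To make that data hunger concrete, here is a minimal sketch of what such training typically looks like, written with PyTorch and torchvision. The folder layout, classes, and hyperparameters are assumptions for illustration, not any particular system:

```python
# A minimal sketch (assumed setup, not a specific production system):
# training a small image classifier from scratch with PyTorch/torchvision.
# Expects a hypothetical folder layout like data/train/panda/ and data/train/not_panda/.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each class typically needs on the order of a thousand labelled images
# before accuracy becomes reliable: the data hunger described above.
train_data = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)          # train from scratch, no pretraining
model.fc = nn.Linear(model.fc.in_features, 2)  # panda vs. not-panda

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

A toddler, by contrast, generalizes from a handful of examples; that gap is the whole point.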
Even with the successful applications of deep learning, we are realizing that there are diminishing returns from ingesting more data.
This is a fact of which Peter Norvig, just like everybody else in commercial AI, seems to be aware, if not dimly afraid. “We could draw this curve: as we gain more data, how much better does our system get?” he says. “And the answer is, it’s still improving—but we are getting to the point where we get less benefit than we did in the past.”
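The curve Norvig describes is easy to reproduce even on toy data. Below is a small sketch using scikit-learn’s learning_curve; the dataset and model are illustrative stand-ins, but the flattening shape of the validation curve is the typical result:

```python
# A minimal sketch: validation accuracy as a function of training-set size.
# The digits dataset and logistic regression are stand-ins for illustration.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 8),
    cv=5,
)

# Gains flatten out as the training set grows.
plt.plot(sizes, val_scores.mean(axis=1), marker="o")
plt.xlabel("Training examples")
plt.ylabel("Validation accuracy")
plt.title("Typical learning curve: less benefit from each new batch of data")
plt.show()
```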
Deep learning can give off glimmers of real intelligence, but we are nowhere near the AI assistant shown in Her.
b) Machine Learning Isn’t a Magic Bullet
Consider a consultant whose job, as part of an M&A process, is to prepare a due diligence report after painstaking analysis and conversations with the company’s various stakeholders. Could there come a time when an AI system can identify all the stakeholders, interview them, collect all the relevant numbers, and compile a neat, coherent report? Maybe, but as of today, ML is of little help here.
Indeed, in the broad scope of problems, the applications of ML are quite narrow: making predictions from massive amounts of data. It can help you automate the captioning of images, but there is little it can do when the problems are human-centric rather than data-centric.
Even for problems that revolve around data, ML isn’t always a useful solution. At a company I worked at, we envisioned a product that would create, test, and execute ad variations entirely on its own; no human intervention would be needed beyond the initial setup. But we soon realized that this wasn’t what users wanted. They want to stay in control and are reluctant to hand everything over to a behind-the-scenes execution.
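One way to keep users in that driver’s seat is a human-in-the-loop gate: the system proposes, a person approves. The sketch below is purely hypothetical (the names, flow, and three-variation proposal step are invented for illustration), not the actual product we built:

```python
# A hypothetical human-in-the-loop gate: ML proposes ad variations,
# but nothing launches until a person has explicitly approved it.
from dataclasses import dataclass

@dataclass
class AdVariation:
    headline: str
    body: str
    approved: bool = False

def propose_variations(base_headline: str, base_body: str) -> list[AdVariation]:
    # Stand-in for the ML-generated variations.
    return [AdVariation(f"{base_headline} ({i})", base_body) for i in range(3)]

def review(variations: list[AdVariation]) -> list[AdVariation]:
    # The control step users insisted on: nothing runs unreviewed.
    for v in variations:
        answer = input(f"Launch '{v.headline}'? [y/N] ")
        v.approved = answer.strip().lower() == "y"
    return [v for v in variations if v.approved]

if __name__ == "__main__":
    launched = review(propose_variations("Summer Sale", "Up to 50% off."))
    print(f"Launching {len(launched)} approved variation(s).")
```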
c) There are Tons of Problems Before ML Becomes Imperative
Overstating the role of AI blatantly ignores how ancient much of the world’s technology still is. The software in my hospital is a crude VB6 application running on Windows XP. Besides the horrible UX, there is no way to add an entry for a procedure other than looking it up in a printed booklet and typing in its numeric code.
If it were a public hospital, I wouldn’t have been surprised to see DOS software doing the job. Most of the world’s software isn’t tracking fitness or showing cat pictures. It’s keeping track of inventory, creating invoices, taking orders, and managing everyday business processes. This software is inefficient, bloated, and painful to use. Some organizations are even further behind, with paper and forms still the sole way to get anything done.
While we laud the latest achievements of ML, we are quick to forget that most of the world’s software hasn’t caught up with tooling that arrived decades ago. And some organizations have yet to take their first step toward digitalization.
It’s difficult to believe that an AI transformation could happen without a transitional phase in which sensible design and code replace the cruft.
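To put the scale of that gap in perspective: the hospital’s booklet problem needs nothing resembling AI. A hypothetical sketch of the missing feature (the codes and descriptions are placeholders, not real ones) is just a searchable lookup table:

```python
# A hypothetical sketch of the "sensible design and code" fix for the booklet:
# a searchable table of procedure codes. Codes and descriptions are placeholders.
PROCEDURE_CODES = {
    "1001": "Chest X-ray, single view",
    "1002": "Electrocardiogram, complete",
    "1003": "Blood panel, comprehensive",
}

def find_procedures(query: str) -> list[tuple[str, str]]:
    """Return (code, description) pairs whose description contains the query."""
    q = query.lower()
    return [(code, desc) for code, desc in PROCEDURE_CODES.items() if q in desc.lower()]

print(find_procedures("x-ray"))  # [('1001', 'Chest X-ray, single view')]
```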
ML has shown immense potential, but its capabilities are nowhere close to making everyone redundant. Rather than an abrupt takeover, the progress of AI will be piecemeal. It’ll help us fulfil our responsibilities before it begins supplanting them. For instance, customer service agents can benefit hugely from AI software that takes care of the most common questions, but it’s hard to imagine them being replaced entirely in one go.
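As an illustration of that “handle the common questions, escalate the rest” pattern, here is a minimal sketch; the FAQ entries, string-similarity matching, and threshold are assumptions for the example, not a recommended design:

```python
# A minimal, hypothetical sketch of an assistant that answers common questions
# automatically and hands anything unfamiliar to a human agent.
from difflib import SequenceMatcher

FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your support hours": "Support is available 9am to 6pm, Monday to Friday.",
    "how do i cancel my subscription": "Go to Settings > Billing and choose 'Cancel plan'.",
}

def answer(question: str, threshold: float = 0.6) -> str:
    """Answer common questions automatically; escalate anything unfamiliar."""
    best_reply, best_score = None, 0.0
    for known_question, reply in FAQ.items():
        score = SequenceMatcher(None, question.lower(), known_question).ratio()
        if score > best_score:
            best_reply, best_score = reply, score
    if best_score >= threshold:
        return best_reply
    return "Routing you to a human agent..."

print(answer("How do I reset my password?"))       # answered automatically
print(answer("My invoice from March looks wrong")) # escalated to a person
```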
A single technology that addresses problems of every scale is a fanciful idea. It also misses the fact that humans have always been exceptional at a) underestimating how many problems there are, and b) inventing new problems once the existing ones are solved.
AI won’t replace you anytime soon, because it’s a tool with limited capabilities that has barely begun to solve a small set of problems.