On February 16, 2011, a computer named “Watson” defeated two champions in a three-day televised contest of the game show “Jeopardy!”. Whether or not this was a monumental achievement depends on who you ask. On one side were the people who were by turns impressed and concerned by the accomplishment. Impressed by the sheer technical challenge of winning a game that requires parsing natural language and finding answers to complicated questions. Concerned about the implications of people losing their jobs, or, as others noted (probably only partially joking), that this was the first step toward artificially intelligent Terminator robots being sent back from the future to destroy us all.
On the other side were people who felt this was a straightforward exercise in data lookup, something computers have done for years, and which Google has perhaps brought much more firmly into the public consciousness. One commenter on the New York Times offered the following criticism of treating this as a milestone in artificial intelligence:
“...There is … a risk in considering this a test of ‘intelligence’. When ‘Big Blue’ beat Kasparov in chess 20 odd years ago people correctly realized that chess is ultimately reducible to a mathematical equation, albeit a very long one. While Watson certainly seems a giant leap in a computer's ability to process more complicated language, it is still committing an analysis of terms, and I question whether it can truly comprehend communication.”
This is an interesting point, and it hinges on the question of what intelligence is, and whether “real” artificial intelligence actually matters.
In 1950, the English mathematician Alan Turing proposed an experiment now known as the Turing Test. In this experiment, a neutral observer would sit at a computer terminal and have a typed conversation with an unknown party. The question was whether a computer could consistently fool this observer into thinking that he or she was conversing with a human.
Many people think this was designed as a test for artificial intelligence. In reality, Turing deliberately sidestepped that question. His question was not whether a computer could be intelligent, but whether a computer could imitate intelligence. That raises an interesting point: if something passes every test you can imagine for intelligence, does it matter whether or not it is actually intelligent? Perhaps this is relevant for a philosopher consumed with the volume of trees falling unseen in forests, but from a practical perspective, the answer would seem to be “no”. If a Terminator comes back from the future to kill you, it’s not much consolation that it is only a very precise mimic of an intelligent assassin. Some people might counter that this is actually a crucial issue: if it only mimics intelligence, then you can exploit that weakness, figure out the limits of its capabilities, and defeat it. But this ignores the essence of the experiment. If you can figure out the limits of its imitation, then it’s not a successful imitation: it has failed the Turing Test. If it’s a true mimic, then you can’t tell the difference, and you’ve got a seriously dangerous AI robot on your hands, whose only limitation seems to be a thick Austrian accent.
So let’s go back to our commenter’s point, and grant that Watson really just succeeded at solving a big equation, one modeled to play Jeopardy better than humans can. It didn’t truly represent artificial intelligence. Our next question is: what is the limit of problems that can be modeled as mathematical equations? People have been wrestling with this question for a long time, much longer than the computer age. In fact, we might reasonably trace it back to Pythagoras around 500 BC. He came up with a fairly startling proposal: everything is number. Or, in more modern terms, everything can be expressed as numbers. Exactly what drew him to that conclusion is unclear. Perhaps it was the discovery that you can raise the pitch of a note by exactly an octave by halving the length of a vibrating string. While this might seem straightforward to us today, the discovery that something as emotional as music had a mathematical foundation must have been profound, even unsettling. Certainly his observation has proved true in ways that would have seemed unfathomable then. Today, anybody passingly familiar with digital music or images knows that any symphony or artwork can be expressed mathematically, whether as a formula or as a series of ones and zeros. And if we can express anything static digitally, why not anything dynamic? Why not a process? Why not an incrementally improving process? Why not innovation itself?
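To make that concrete, here is a minimal sketch in Python of both halves of the idea. The 440 Hz reference pitch and the CD-style sampling rate are modern conventions chosen for illustration, not anything Pythagoras had; the point is only that a musical tone reduces, quite literally, to a list of integers:

```python
import math

# Pythagoras's observation in modern terms: a string's pitch is inversely
# proportional to its length, so halving the string doubles the frequency,
# which is exactly one octave up. (440 Hz, the modern concert A, is just a
# convenient reference; Pythagoras worked only with ratios.)
full_string_hz = 440.0                 # full-length string
half_string_hz = full_string_hz * 2    # half-length string: one octave higher

# "Everything is number", taken literally: sample a tenth of a second of that
# tone the way a CD does, rounding each sample to a plain 16-bit integer.
sample_rate = 44100                    # CD-quality samples per second
samples = [
    round(32767 * math.sin(2 * math.pi * half_string_hz * n / sample_rate))
    for n in range(sample_rate // 10)
]
print(samples[:8])  # the opening of a musical note, as nothing but integers
```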
To be sure, there are real limits to what we can get computers to do today. We can’t program a computer to be an original musical genius, though it’s worth noting that we can program it to compose derivative works. There is software that can create original compositions that remind one vaguely of Mozart: not one of his best works, perhaps, but certainly of that period. But before we get too caught up in this point, it’s worth noting that we don’t know how to raise a human being to be Mozart either. It just happens. We’d be hard-pressed even to define what a genius is, or how to consistently recognize one. After losing his chess match to Deep Blue, Kasparov accused IBM of cheating because of the “deep intelligence and creativity” he saw in the machine’s moves. Assuming IBM did not cheat (and I’m willing to give them the benefit of the doubt), was Deep Blue a “genius”? If we can program a genius at chess, and at Jeopardy, what’s next?
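Real composition systems are far more elaborate, but the core trick behind derivative composition can be sketched in a few lines. Everything below is invented for illustration: a toy melody and a first-order Markov chain that learns which note tends to follow which, then takes a random walk through the result:

```python
import random

# A toy sketch of "derivative" composition: learn which note tends to follow
# which in a source melody, then take a random walk through that table. The
# training melody is invented here; real systems digest an entire corpus.
melody = ["C", "E", "G", "E", "C", "G", "A", "G", "E", "C", "D", "E"]

transitions = {}
for current, following in zip(melody, melody[1:]):
    transitions.setdefault(current, []).append(following)

random.seed(7)          # fixed seed so the "composition" is reproducible
note = melody[0]
piece = [note]
for _ in range(15):
    note = random.choice(transitions[note])  # a statistically plausible successor
    piece.append(note)

print(" ".join(piece))  # vaguely "in the style of" the source, and nothing more
```

The output will sound vaguely like its source and nothing more, which is roughly the distance between derivative competence and genius.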
And perhaps more importantly, what does this mean for us? As noted before, there are two big concerns arising from this trend: jobs, and accent-impaired killer robots. The robot question is perhaps the bigger one, and it is one for which an answer was proposed many decades ago by Isaac Asimov. In his science fiction worlds, he eliminated the danger of robots to humans by building three laws deep into their operating systems:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
On the surface, this looks like it would solve the problem. In practice, there are some real issues. The first is: how do we even begin defining this? You first need to program the AI to recognize humans, and then to understand all the ways that humans might come to harm. And how far do you carry this? Does a robot stop you from jumping out of a window to save yourself from a burning building? Does it prevent you from smoking, because you’re damaging your lungs? How about preventing you from driving to work, because of the long-term damage you’re causing to the atmosphere?
Leaving aside those questions, I think it would be a great next experiment for IBM. Build a robot capable of doing, say, basic factory work, but which is absolutely incapable of harming a human. A self-driving forklift which can spot a person in its path and come to a halt. Points awarded for how intelligently it can work to proactively prevent that human (or perhaps a crash dummy, to start) from coming to harm. Figure out how to build these guidelines into any program you design. That would generate huge positive publicity for IBM and be a great step forward.
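The ordering of the laws, at least, is the easy part to write down. Here is a minimal sketch of the First Law as a veto in a hypothetical forklift’s control loop; the function names and sensor dictionary are invented for illustration, and the perception step is simply assumed as a predicate:

```python
# A toy sketch of the First Law as a veto in a hypothetical forklift's control
# loop. The names and the sensor dictionary are invented for illustration; the
# genuinely hard part is the perception hiding behind human_in_path().

def human_in_path(sensors):
    """Assumed perception step: has a person (or crash dummy) been detected?"""
    return sensors.get("human_detected", False)

def next_action(requested, sensors):
    # First Law veto: no action may endanger a detected human. This outranks
    # the Second Law (obey the order) and the Third Law (self-preservation).
    if requested == "drive_forward" and human_in_path(sensors):
        return "halt"
    # Second Law: otherwise, obey the human order as given.
    return requested

# The forklift stops the instant a person steps into its path.
print(next_action("drive_forward", {"human_detected": True}))   # -> halt
print(next_action("drive_forward", {"human_detected": False}))  # -> drive_forward
```

Note where the difficulty lives: the veto itself is two lines, while human_in_path is the entire research problem raised by the questions above.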
The problem is, it isn’t likely to catch on, because not everybody buys into the concept. The military has no interest in unmanned drones which can’t harm humans. They’re putting billions of dollars into designing robots which can kill people at a distance, with less and less human involvement required. In the short term, this is a great idea, because it saves the lives of our soldiers. In the slightly longer term, it should be noted that we’re zipping quickly along the path to a Terminator future. Maybe we’ll get to a future where we only program our military robots to destroy our enemy’s military robots. Whoever has an automated army still standing at the end of the conflict wins. But call me a skeptic.
The other issue is jobs. There are two conflicting points worth noting here. The first is that machines have a long history of eliminating jobs. This is always painful and sometimes devastating for the people involved. And yet, the next generation never has any interest in going back. New jobs arise, and few people miss the old ones. I personally have no interest in giving up shipping my packages via UPS in favor of having them lugged about by the Pony Express, and I don’t care how many pony riders that impacts. There are more jobs now for truck drivers and logistics coordinators than there ever were in the Pony Express.
However, that brings up our disturbing second point. As they reluctantly admit on Wall Street, past performance is not a guarantee of future results. Just because a trend has continued in the past doesn’t mean it must, or will, continue. Real estate prices in the United States always went up over time, until they didn’t, and we had a massive crash that almost turned into a global depression. The question we must ask is: what are the underlying drivers of a trend, and are those drivers still applicable?
In the past, machines automated more and more physical activity, so people increasingly turned to mental activity. We’re now a nation of cube workers, because our jobs don’t require us to move around. We’ve got machines to do all that. But as computers get increasingly good at solving problems that used to require mental activity, there may be nowhere left to go. Perhaps if we get these laws of robotics down, we’ll be OK. After all, we may eliminate the need to work completely; the cost of living will drop to zero, and we’ll all live the lives of Roman emperors. That’s the optimistic scenario. Granted, many people will turn into the blobs depicted in the movie WALL-E without the structure of a job to keep them busy, but that will be self-inflicted destruction, and not one I’m going to worry about personally. I’m sure I can keep myself productively busy in a permanent retirement, and I can’t drive myself crazy worrying about the people who can’t.
And though it’s not a societal-level answer, I think that philosophy is the best way to handle the danger to jobs in general. It’s possible for a creative and nimble person to stay employed even in a shrinking industry. The number of jobs in the music industry declines every year, but there’s still opportunity for people willing to be creative and clever, do a lot of self-marketing, and watch constantly for non-traditional venues. If you’re looking for a paying job in an orchestra, good luck. If you can leverage yourself providing music for video blogs and play live gigs that are fun and engaging, you just might carve a career out of that.
So ask yourself what you really do for a living. How are you providing value? Don’t ask whether your job could disappear; nobody wants to think that it could, and you’ll give yourself a falsely reassuring answer. Instead, ask what conditions would be required to make it disappear. Then figure out what you’ll do when (not if) that happens. And start doing it long before it does.
And it doesn’t hurt to keep some huge tankers of liquid nitrogen around, just in case you have to deal with those killer robots.