A doe had
the misfortune to lose one of her eyes, and could not see anybody approaching her on that side. So to avoid danger she always used to feed on
a high cliff near the sea, with her sound eye looking towards the land. By this
means she could see whenever the hunters approached her on land, and often
escaped by this means. But the hunters found out that she was blind in one eye,
and hiring a boat, rowed under the cliff where she used to graze and shot her
from the sea.
-Aesop's Fables
Anybody paying attention to advances in artificial intelligence has noticed that significant milestones are being crossed more and more frequently. Deep Blue defeated world chess champion Garry Kasparov in 1997. Watson took the crown in the much more flexible game of Jeopardy in 2011. And while Dr. Fill disappointed many observers by only placing near the bottom of the top quartile at the American Crossword Puzzle Tournament, few expect it to take much longer before computers reign supreme in this area as well. Jeopardy champion Ken Jennings summarized the ambivalence of many when he used his terminal to display the line "I for one welcome our new computer overlords."
We've all
seen movies like The Terminator and The
Matrix. We're all wondering which
massive collection of computers is going to go Skynet on us and achieve the
critical mass needed to wake up and start making its own decisions. Will it be Watson? Dr. Fill?
Or perhaps Google's seemingly infinite collection of server farms,
running in unmarked, undisclosed locations spread across the world? We pontificate endlessly about what
safeguards we need to keep these colossal systems in check. Can we build in kill switches? Keep them firewalled off from the control software running power plants and weapons systems? Build in Asimov's Three Laws of Robotics in the hope that the newly awakened system will serve our needs instead of its own?
I have to
wonder if, like the one-eyed doe, we're all looking in the wrong direction.
For all
the intricacy of a Watson or Dr. Fill, these are highly monolithic programs,
designed to do one thing and do it well.
I haven't heard any reports that Deep Blue or any other chess program has gotten bored and asked to try its hand at tennis, or Parcheesi. Adapting Dr. Fill to do some task other than completing crossword puzzles would be a monumental undertaking. Probably easier to throw everything out and start from scratch.
Moreover, these programs have no survival instinct. They have
no ability or inclination to replicate, or to try to thwart the intentions of
anyone who would prevent these activities. They cannot rewrite their own code to
avoid detection and adapt to a new environment.
Modern malware has all of these attributes.
The
sobering fact is that it's becoming increasingly difficult to come up with a good
definition of life that does not include malware. Wikipedia lists the following criteria for considering something "alive":
- Undergoes metabolism. While this traditionally refers to chemical reactions that sustain an organism, there's no intrinsic reason why it couldn't refer to the processes of a functioning program.
- Maintains homeostasis. Similar to metabolism.
- Possesses a capacity to grow. True, though currently limited. (But see below)
- Responds to stimuli. Absolutely. Many worms and viruses will watch what is happening in the operating system and take actions accordingly.
- Reproduces. Yes, and then some.
- Through natural selection, adapts to its environment in successive generations. Limited again, but not for long.
The two
points above that are weakest today are the ability of malware to grow and
adapt to its environment. In malware
terms, this most closely translates to polymorphism, where a virus will modify
its own code. In today's world, these are
generally very minor modifications, designed to make the virus more difficult
to detect by an anti-malware program looking for a specific code signature. A given unit of malware doesn't have the
ability to spontaneously change itself in order to discover and take advantage of a new
zero-day exploit.
Not yet.
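To make the "code signature" idea concrete, here's a toy sketch in Python. Everything in it is invented for illustration: the "signature" is an arbitrary placeholder byte string rather than anything drawn from real malware, and the XOR re-encoding stands in for the kind of trivial transformation polymorphic code performs.

```python
# Toy illustration of signature-based scanning. The "signature" below is an
# arbitrary placeholder byte string, not a real malware pattern.
SIGNATURE = b"PLACEHOLDER_PATTERN_V1"

def scanner_flags(file_bytes: bytes) -> bool:
    """Classic signature scanning: flag the file if the known byte pattern appears in it."""
    return SIGNATURE in file_bytes

# An unmodified copy contains the literal pattern and gets caught.
original = b"...program bytes..." + SIGNATURE + b"...more program bytes..."
print(scanner_flags(original))   # True

# A "polymorphic" variant stores the same pattern XOR-encoded with a one-byte
# key (decoding it only at run time), so the bytes on disk no longer match.
key = 0x5A
encoded = bytes(b ^ key for b in SIGNATURE)
variant = b"...program bytes..." + encoded + b"...decoder stub..."
print(scanner_flags(variant))    # False: same underlying logic, different bytes
```

The point is that signature matching keys on exact bytes, so even the smallest rewrite of those bytes is enough to dodge it. That's exactly the kind of "very minor modification" described above.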
There's no reason why that kind of self-modification isn't possible. The technique involved is called a genetic algorithm. It mimics evolution by introducing random variations into code and keeping the variations that improve it. It has minimal usefulness in many programming applications due to the high level of computational power required and the difficulty of measuring improvement from one generation to the next. When the computing power is provided by infected computers on the internet, and effectiveness is measured by the ability to survive and propagate, both of these limitations go away for malware.
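For readers who haven't run into them, here is a minimal, deliberately benign sketch of a genetic algorithm in Python, showing the basic loop of random variation, a fitness measure, and selection of the fittest for the next generation. The bit-string target, population size, and mutation rate are arbitrary values picked purely for illustration.

```python
import random

TARGET = [1] * 20        # toy goal: evolve a string of twenty 1-bits
POP_SIZE = 30
MUTATION_RATE = 0.05
GENERATIONS = 200

def fitness(genome):
    """Count how many bits match the target (higher is better)."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit with a small probability (the random-variation step)."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    """Splice two parent genomes together at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Start from a random population and evolve it.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[: POP_SIZE // 2]   # selection: keep the fitter half
    children = [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(POP_SIZE - len(survivors))
    ]
    population = survivors + children

best = max(population, key=fitness)
print("best genome:", best, "fitness:", fitness(best))
```

Nothing in that loop cares what "fitness" means. Swap bit-matching for "did this copy avoid detection and spread to new hosts," let a network of infected machines supply the compute, and you have the scenario sketched above.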
We are then left with the question of how fast a self-replicating, self-modifying worm in the wild could improve using genetic algorithms. I see no reason why it could not improve very quickly indeed. The field of medicine has recently seen the rise of "super-bugs," bacteria that have acquired resistance to many or most antibiotics over time. A bacterium attempting to infect humans has faced a very difficult environment since the introduction of antibiotics. What we're only beginning to appreciate is how a difficult environment leads to much more rapid evolution. With an internet full of anti-malware programs and researchers dedicated to stamping it out, malware must be very good to survive for long. Many or most strains will be identified and wiped out. Those that survive will be scary indeed.
I don't know when we're going to get the first malware in the wild that can truly modify its own capabilities, rather than just its signature. Maybe it's already out there. How complex is it getting? At what point is it going to exceed its creators' wildest expectations? At what point will it begin exhibiting behaviors that appear to demonstrate creativity and innovative problem solving? At what point does it become self-aware?
Whenever
that happens, I don't know if we'll know what to do. We're going to need help.
Maybe we
can ask Watson.