Monday, December 19, 2011

The End of PCs?


You are most likely reading this blog article on a machine that would have been considered inconceivably powerful for most of the scope of human history.  You can easily communicate with people around the world in the blink of an eye.  You can effortlessly solve mathematical problems that would have confounded Euclid and Archimedes.  You can access an information repository greater than the Library of Alexandria.  If you transported this machine back a thousand years or so (and let's pretend it would somehow have had the necessary battery life and access to its internet knowledge bases), wars would have been fought to possess it.

You hold in your hands the power of the Gods.

"With great power comes great responsibility." (Spider-man comics)

Spider-man learned this lesson to his cost when his inaction led to the death of his beloved uncle.  And Spider-man's powers were insignificant next to the powers of a modern personal computer.  This power is literally in your grasp.  Are you ready to accept the mantle of responsibility?

You probably think I'm stretching a point here.  If you're like most people, you just want to use your computer to read the news, gossip with friends on Facebook, and maybe watch some videos on YouTube.  You're just minding your own business.  You have no intention whatsoever of, say, joining a ring of criminals in Eastern Europe and participating in a scheme to extort money from e-commerce sites.  Except that, unless you've been extraordinarily careful with your super-powers, you probably already have.

"All that is necessary for the triumph of evil is that good men do nothing." (Edmund Burke)

The most common form of cyber-crime involves collecting a group of PCs to form a botnet.  This involves infecting each of these PCs with malware, which quietly turns that PC into a slave of the botnet owner, rather than the PC owner.  If the malware is at all clever at its task (which most of them are), it leaves the PC owner oblivious to the fact that anything has changed.  You still think that you're just minding your own business.  You don't realize that you've started engaging in criminal activity.

If it's any consolation, you're not the only inadvertent criminal out there.  You're in the company of millions of others.  Tens of millions.  Possibly hundreds of millions.  When you're talking about numbers of this magnitude, it's hard to blame a lack of personal ethics or a failure of responsibility on the part of any particular individual.  What we have is a systemic problem.  Systemic problems require systemic solutions.

"We have met the enemy and he is us." (Pogo cartoon strip)

It is possible to keep a PC free of malware.  You need to keep up to date with your patches.  Not just your operating system patches, either.  Also your browser patches.  And your Adobe patches.  And Java.  And any of the tens or hundreds of other programs you have installed on your computer.  And you need to make sure that you have a detailed understanding of any peer-to-peer software you run, in order to ensure that it's configured correctly.  And know how to configure your NAT router or firewall correctly.  And understand how to create good passwords.  And understand how to spot false links in emails.

The list goes on and on.  It can be done.  But it's a full-time job just to keep up with it.

There is an alternative.  It is to realize that not everybody has what it takes to be Spider-man, and not even to try.  This means something that makes most people cringe.  It means the end of PCs.

As revolutionary as this sounds, it's not actually a new idea.  We've already started using iPads, which are not PCs.  Not in the traditional sense.  They are extraordinarily limited.  There's only one way to get additional software on them.  They don't have an exposed file system.  You can't connect them to all the cool USB devices that make your PC so flexible.  They are limited.  They are restricted.  They are, in a word, much safer than PCs.

"PCs are going to be like trucks.  They're still going to be around.  They're still going to have value.  But they're going to be used by one out of x people." (Steve Jobs)

Steve Jobs had an interesting vision: most people don't need PCs.  Most people don't need the level of power and flexibility that a full-blown computer provides.  You don't need a PC to browse the internet, check your email, and watch YouTube.  And Apple isn't simply filling this need with iPads.  They're moving in that direction with Macs as well.  In March of 2012, Apple will be implementing sandboxing for all applications sold through the Mac App Store.  This means that every application must request the specific permissions it will require before it is sold by Apple.  And Apple will have to approve it.  This will just be the online store.  At first.  But if Apple has its way, I suspect that it won't be long before the online store becomes the only way to purchase applications for a Mac.

This is going to slow down innovation.  People won't be able to write and release new and interesting applications nearly as fast as they could in the past.  If this had been the model back in the 80s, personal computers might never have gotten off the ground.  But we're no longer in the 80s.  Maybe it's ok for us to finally slow down a tad.

Of course, this doesn't impact Microsoft in the least.  Not yet.  But it seems that even Microsoft is realizing that unlimited power and flexibility in the operating system is not always such a good thing.  In 2001, feeling a surge of Unix envy, Microsoft shipped full support for a feature called "raw sockets" in Windows XP.  Raw sockets are cool.  They're powerful.  You can do all sorts of interesting things with them.  Maybe a little too interesting.  Some hackers leveraged them to perform some sophisticated attacks, some against Microsoft itself.  Raw socket support was quietly restricted a few service packs later.
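For the curious, here is a minimal sketch of what that kind of low-level access looks like in practice.  This is Python on a Unix-like system rather than the old Winsock API, and it needs root privileges; the point is just the level of access involved, not the XP-specific behavior:

```python
import socket

# Open a raw socket and watch ICMP traffic arrive with its full IP header
# attached.  Legitimate tools like ping and traceroute are built this way;
# the same level of access also lets a program hand-craft packets, forged
# source address and all, which is what made raw sockets on every desktop
# such an attractive toy for attackers.
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)

packet, addr = sock.recvfrom(65535)        # one raw datagram, IP header included
header_len = (packet[0] & 0x0F) * 4        # IP header length, in bytes
src = socket.inet_ntoa(packet[12:16])      # source address, straight from the header
icmp_type = packet[header_len]             # first byte after the IP header
print(f"ICMP type {icmp_type} from {src}")
```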

We may not yet be at the end of the PC era.  But maybe we should be.  Because most people simply don't need them.  Most people are unable or unwilling to spend the time and energy to use them safely.  And that's OK.  Not everybody needs to be Spider-man.

Monday, December 12, 2011

Bruce's Law of Technology


Once upon a time, Information Technology consisted primarily of Mainframes, sold by IBM and one or two small time competitors.  Oh sure, there were a few Unix and VAX minicomputers around if you looked hard enough, but these were being used mostly by scientists and engineers, and nobody paid them much mind.  Mainframes were running the corporations.  Mainframes were where the action was.

Then Apple (quickly followed by several others, including IBM) released computers that would fit on a single desktop.  They were small.  They didn't have much power.  They lacked the basic functionality needed to do any kind of real computing, such as job scheduling.  The IT professionals gave them a quick look, realized they were toys, and then promptly ignored them, to get back to the real business of doing IT.

Once upon a time, music was played on physical media.  Changes in technology involved migrating from one medium to the next.  LPs gave way to tape cassettes (with a short detour through 8-track tapes), and then to Compact Discs.  Record labels looked forward to these changes, because it meant they could resell all their old titles on the new medium.

Then, somewhere in the nineties, people started talking about music files called MP3s, which could play directly on computers, or even on dedicated devices.  These files took up a large percentage of the hard drive space that was available at the time.  Their sound quality was lousy.  The record labels took a look at MP3s, realized they couldn't compete with CD quality music, and then promptly ignored them, to get back to the real business of selling music.

Once upon a time, books consisted of sheets of paper bound together between two covers. People liked their paper books a lot.  So did authors and publishers.  They were cheap and portable.  The format had survived relatively unchanged for roughly a thousand years.  They seemed likely to be the dominant format for the next thousand years.

Then it became possible to store books on digital media.  It was lousy.  Nobody liked reading a CD-ROM on their computer desktop.  Portable book readers had poor screens, terrible battery life, and were all incompatible with each other.  Publishers took a quick look at these eBooks, realized they'd never catch on, and then promptly ignored them, to get back to the real business of publishing paper books.

Clearly, we have a pattern here.  A pattern that was consistently missed by the people who most desperately needed to spot it.  They failed to understand what was happening, because they ignored Bruce's Law of Technology (which is perhaps understandable, as it is being published here for the first time).  Bruce's Law of Technology is as follows:

"New technology sucks.  Until, suddenly and unexpectedly, it doesn't."

It seems obvious.  But based on the anecdotes above (and I could add many more), people consistently ignore its implications.  They know that technology improves.  But they don't want to think about the possibility that it might overturn the world they know and understand and love.  They see its limitations, and refuse to see its possibilities, until it's far too late to do anything about it.

Somewhere out there, there's technology that has the potential to turn your world upside down.  When you discover that technology, don't discount it because it sucks.  Plan ahead for what the new world will look like once it stops sucking.  Think about how you will need to reinvent yourself, no matter how unpleasant that prospect may be.  Remember that not reinventing yourself will be much worse.  Remember Bruce's Law of Technology.

Monday, December 5, 2011

The Limits of Reason


In the late eighteenth and early nineteenth centuries, a device known as the Mechanical Turk toured Europe and the United States, astounding its audiences by playing chess with an impressive degree of skill.  Many leading thinkers and engineers of the day were utterly convinced that it was a true chess playing machine, although there were persistent suspicions that perhaps the device (which included a large cabinet, apparently filled with gears and cogs) hid a real human chess player, possibly a dwarf, in its interior.

One of the most convincing exposés on the subject was written by Edgar Allan Poe in 1836.  His article was printed in the Southern Literary Messenger, and elicited responses from numerous newspapers and magazines including the New Yorker and the Baltimore Gazette.  It was well written, and completely correct in its conclusions – the Mechanical Turk was eventually revealed as a fraud.  However, it's interesting to note that some of the most critical arguments employed by Poe were utterly incorrect.

His primary point was that as a mechanical device, it must necessarily be 100% fixed and determinate.  From this foundation, he makes several observations about the nature of the machine: “The moves of the Turk are not made at regular intervals of time, but accommodate themselves to the moves of the antagonist – although this point (of regularity), so important in all kinds of mechanical contrivance, might have been readily brought about by limiting the time allowed for the moves of the antagonist.”

He goes on to make a very interesting argument about the nature of the machine's purported algorithm: “The Automaton does not invariably win the game.  Were the machine a pure machine, this would not be the case – it would always win.  The principle being discovered by which a machine can be made to play a game of chess, an extension of the same principle would enable it to win a game; a further extension would enable it to win all games – that is, to beat any possible game of an antagonist.  A little consideration will convince any one that the difficulty of making a machine beat all games is not in the least degree greater, as regards the principle of the operations necessary, than that of making it beat a single game.”

I’m not aware that any of Poe’s contemporaries had any disagreements with these points.  However, from our modern perspective, it’s easy to see that he was utterly incorrect.  We have chess playing machines today.  And what we know about the principles of making them play chess did not inevitably lead to an understanding of how to make them play perfect chess – it took decades of improvement (mostly on the hardware side) to evolve from the first primitive chess program to Deep Blue, which finally defeated world chess champion Garry Kasparov.  Even now that we've left human players far behind, nobody would claim that any computer in the foreseeable future could play a “perfect” game of chess (a feat which would require calculating every conceivable move on the chess board – a trivial exercise with a game like tic-tac-toe, but infeasible on a 64-square chess board).  Nor has our success at developing computer chess programs led to similar success teaching computers to master the much more fluid game of Go.

What would it have taken to convince Poe that he was wrong, especially when his ultimate conclusion was correct?  We could try to introduce him to the concept of heuristics, or to variable response times driven by triggers, or even to the idea of introducing indeterminacy into a system by seeding a complex algorithm with an external (and completely random) value.  Maybe this would have worked.  But none of this would be nearly as effective as simply having him grow up in a world where computers play chess all the time, and nobody thinks very much about it.
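To make those missing concepts concrete, here is a toy sketch of how a chess engine actually behaves (the position data and scoring weights are invented purely for illustration): it scores candidate moves with a rough heuristic rather than exhaustively solving the game, and a dash of external randomness breaks ties.  The machine is perfectly mechanical, yet it plays well rather than perfectly, and it doesn't repeat itself move for move:

```python
import random

def heuristic(position):
    # Hypothetical scoring: material count plus a small mobility bonus.
    return (9 * position["queens"] + 5 * position["rooks"]
            + position["pawns"] + 0.1 * position["mobility"])

def choose_move(moves):
    # moves: list of (move_name, resulting_position) pairs, illustrative only.
    scored = [(heuristic(pos), name) for name, pos in moves]
    best = max(score for score, _ in scored)
    candidates = [name for score, name in scored if score == best]
    return random.choice(candidates)   # external randomness breaks determinism

moves = [
    ("Nf3", {"queens": 1, "rooks": 2, "pawns": 7, "mobility": 30}),
    ("e4",  {"queens": 1, "rooks": 2, "pawns": 7, "mobility": 30}),
    ("a3",  {"queens": 1, "rooks": 2, "pawns": 7, "mobility": 22}),
]
print(choose_move(moves))   # Nf3 or e4, never a3: good play, not perfect play
```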

We like to think of ourselves as living in the age of reason and critical thinking.  We like to point at our technological progress and say “Here is where logic and reason have taken us.  They will solve any problem.”  We forget how much of what we know is built not on reason, but on trial and error, iterative improvement, and pure luck.

Monday, November 28, 2011

Do We Really Need Technology Patents? Part II


OK, I really didn't mean to return so soon to the subject of patents.  It's not really on my list of the most pressing issues facing society.  But sometimes something grabs your attention, and just won't let go until you take an opportunity to vent.

When I previously questioned the basis of patents, I focused on their value to society.  Today, I'd like to explore a different but related question: are they fair?

Of course, there's no simple answer to that.  Fair is a cultural concept, and different people have widely different definitions.  Most people would have trouble providing a useful and consistent definition of fair, and instead fall into the "I'll know it when I see it" camp.  If enough people have a similar reaction, then that's a reasonable approximation of fair, and I'll go with that for now.

On 11/27/2011, the New York Times profiled an enterprising high school student named Katherine Bomkamp.  Inspired by visits with amputees at the Walter Reed Army Medical Center, she set out to build a prosthetic limb that would treat phantom pain.

Her idea, to treat the stump with heat (on the principle that the same treatment works on strained muscles), is brilliant in its simplicity.  This is not a new problem, and her solution does not involve new technology.  It's simply a new idea that somehow eluded generations of doctors and inventors.  Assuming the tests confirm its effectiveness, and the patent search confirms no pre-existing technologies, then I will heartily agree that her case is exactly the sort of situation that patents were designed for.  I would consider it criminal if a major medical device maker simply took her idea without compensating her.

A patent in this situation fits my definition of fair.

But now, imagine you're a talented electrical engineer named Elisha Gray.  You've spent an enormous amount of time and energy creating a revolutionary new device that is capable of sending the sound of somebody's voice over electrical wires.  The implications are staggering.

You triumphantly take your idea to the patent office, only to discover that the same idea was filed a matter of hours before yours.  Not being awarded a patent is frustrating.  But infinitely worse is that you are now legally prohibited from using your own invention without paying royalties to somebody else.  I'll leave the accusations of theft and conspiracy out of this for the moment.  People may differ, but I classify this as not fair.

If multiple people are inventing the exact same idea at the exact same time, then it's not clear that it's fair or beneficial to restrict one person's rights in favor of another's.  It's simply an idea whose time has come, and it would be hard to argue that a third or fourth person wouldn't stumble onto the same idea in a fairly short period of time.  The question is, how frequently does this happen?  Aside from the invention of the telephone (and calculus), do we really have an avalanche of competing ideas all coming to light at the exact same moment in time?

I say yes, but don't take my word for it.  Instead, let's look at a recently passed, and heavily lobbied, law called the America Invents Act (signed into law on September 16, 2011).

One of the key provisions of this act is that it switches the U.S. patent system from a "first to invent" to a "first to file" system.  Many people have argued about which of these methodologies is more fair, but first to file certainly has the advantage of being simpler to manage.  Figuring out exactly when the process of invention occurred is an exercise in mind-numbing frustration.  I'm personally not in the habit of keeping a full diary of every thought that crosses my brain as I shower each morning.  Trying to settle a dispute between multiple inventors, all of whom have spotty record keeping but huge financial incentives to win, is going to be arbitrary and capricious.  At least a filing at the patent office comes with a reasonably accurate timestamp.  Fair or not, at least it's closer to objective.

This was a very hot topic, and a lot of lobbying money was spent to get it passed.  The question is: would anybody have cared if most patents went uncontested?  Of course not.  The only reason why this becomes a hot button issue is that these types of collisions happen all the time.  We are living in a fast moving age, surrounded by ideas whose time has come.  Granting multi-year exclusivity to one person or organization because they were a day or two faster to file the patent application than somebody else with the exact same idea is not in the best interests of society.

And more than that, it's just not fair.

Monday, November 21, 2011

Hackable Everything - Part II


Last week, I titled my blog post "Hackable Everything" and stated that everything is potentially hackable.  Some people asked me if perhaps I was being a bit melodramatic.  Just because there have been some recent news stories about security flaws, does that really mean that everything is vulnerable?

An interesting question.

First of all, let me point out that encryption algorithms today are actually very good.  There was a time when government agencies such as the NSA could apply massive computing power to break widely used encryption (typically 56 bit DES).  During the 1990s, as the Internet grew in popularity and interest in encryption became more widespread, the government tried to figure out how to keep their ability to decipher electronic communications.  The Clinton administration famously (or infamously) tried to mandate the use of the Clipper chip, which would provide encryption, but also provide backdoor access to government agencies.

They failed.  Today, we routinely use encryption algorithms and keys which are beyond the capability of any known computer or collection of computers to break.

How sure are we of this?  Couldn't the NSA have some massive computer buried in a government bunker that blows away our estimates?

To grossly oversimplify things, let's note that for a well-designed, properly implemented encryption algorithm, the difficulty in breaking it is a function of the key size.  DES, once the most commonly used algorithm, used a 56 bit key.  Over time, computers grew in power to be able to defeat this using a brute force attack - that is, trying every possible combination until they found the key by pure luck.

How do you make a 56 bit key twice as hard to crack?  Double the key length to 112?  Nope.  You just have to add one bit to make it a 57 bit key.

As key sizes grow, the numbers grow so fast as to make your head spin.

An 8 bit key has 256 possibilities.  A child could crack this in minutes using pen and paper.
A 16 bit key has 65536 possibilities.  A pretty big number, but you can probably visualize it if you try.
A 32 bit key has 4.3 Billion possibilities.  This is roughly the number of seconds in 136 years.
A 64 bit key has 18.4 Quintillion possibilities.  This is roughly 468 million times greater than Warren Buffett's fortune.
A 128 bit key has 340 Undecillion possibilities.  This is roughly 340 trillion times greater than the estimated number of stars in the Universe.

128 bits is pretty much the minimum key length used in symmetric encryption these days.  In 2008, 56 bit DES was demonstrated to be crackable within a day.  Assuming we could get this down to a second, cracking a 128 bit key would still take 149 trillion years.  I'm comfortable that the NSA doesn't have a computer 149 trillion times more powerful than the state of the art, which is what it would take to crack this in a year.  Bump the key size up to 256 or 512 bits just for fun, and you can't even come up with metaphors to express the difficulty.  You can knock these numbers down significantly by assuming that Moore's law will keep delivering more and more powerful hardware over the next several decades, but unless you're trying to keep data secure for a century, you're good.
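If you want to check that arithmetic yourself, here is a quick back-of-the-envelope sketch.  It assumes a purely hypothetical machine that exhausts the entire 56 bit DES keyspace in one second, and ignores the detail that on average you only need to search half the keyspace:

```python
# Brute-force time for larger keys, measured against an imaginary machine
# that tries 2**56 keys per second (i.e. all of DES in one second).
SECONDS_PER_YEAR = 365.25 * 24 * 3600
keys_per_second = 2 ** 56

for bits in (64, 128, 256):
    seconds = 2 ** bits / keys_per_second
    print(f"{bits} bit key: {seconds:.3g} seconds (~{seconds / SECONDS_PER_YEAR:.3g} years)")

# 64 bit key: 256 seconds (a few minutes)
# 128 bit key: 4.72e+21 seconds (~1.5e+14 years, the 149 trillion figure above)
# 256 bit key: 1.61e+60 seconds (~5.09e+52 years, no metaphor available)
```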

So why then do I say that anything can be hacked?

First of all, note the requirement that algorithms be well designed and properly implemented.  The problem is, you never know whether this is the case, except in hindsight.  WEP was once considered to be unbreakable wireless security.  Then it was noticed that the perfectly good cipher it uses (RC4) was applied in a sloppy fashion, making it easy to step right around it.  Today, a script kiddie with minimal technical knowledge can download free programs to break WEP using a standard laptop.
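To see what "sloppy" means here, consider just one of WEP's problems.  Each packet is encrypted by prepending a 24 bit initialization vector (sent in the clear) to the shared key and running RC4, so IVs inevitably repeat, and a repeated RC4 keystream hands an eavesdropper the XOR of two plaintexts.  Here is a toy sketch of that failure mode using textbook RC4; it is not the full statistical attack that actually broke WEP:

```python
def rc4_keystream(key, n):
    # Standard RC4: key scheduling, then emit n keystream bytes.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

key = b"\x01\x02\x03secretkey"     # WEP-style: 3-byte IV glued onto the shared key
p1, p2 = b"attack at dawn", b"budget report!"
ks = rc4_keystream(key, len(p1))
c1 = bytes(a ^ b for a, b in zip(p1, ks))
c2 = bytes(a ^ b for a, b in zip(p2, ks))   # same IV reused, so same keystream

# The keystream cancels out: XOR of the ciphertexts equals XOR of the plaintexts.
print(bytes(a ^ b for a, b in zip(c1, c2)) == bytes(a ^ b for a, b in zip(p1, p2)))
```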

OK, that's a challenge, but with care and lots of testing, you can implement a pretty solid encryption algorithm with a high degree of confidence.  We have a number of algorithms and products that have been closely scrutinized by thousands of people.  They're probably pretty good.

The second challenge, however, is more difficult to solve.  All security is built upon trusting something.  (Ask yourself how secure an encrypted transaction with Bernie Madoff would have been.)  Anything you have to trust is a potentially vulnerable point in your security infrastructure.

For example, most security on the Internet depends on "Certificates", which enable a person to unambiguously assert their identity, encrypt their data, and make sure any messages they send can be tied accurately back to themselves.  Certificates are the foundation upon which most everything else is based.  Having your certificate be compromised is like opening the back door to the castle - it simply doesn't matter how thick your walls are, or how deep your moat is, if people can enter freely.  Recently, a number of Certificate Authorities have been hacked, including DigiNotar and KPN.  Once the Certificate Authority is breached, some Certificates (perhaps all) issued by that Authority are no longer secure.

Here's the scary part: check your browser, and see how many Certificate Authorities it considers to be "trusted".  The answer is close to 600.

600 companies, any of which might have a weak password, or a poorly implemented algorithm, or a single open port on a server, or a pissed-off employee who didn't get the raise they really thought they deserved.  Every one of which your browser is trusting 100% to keep you secure.  Do you have the detailed technical and organizational knowledge to know if this trust is justified?  Have you even heard of Izenpe S.A. (which I just found in my Certificate list in Firefox)?  DigiNotar didn't tell anybody about its breach for many months.  Would you know if others have already been breached?
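If you'd rather not dig through browser menus, here is a rough sketch of the same exercise from Python.  It inspects the operating system's default trust store rather than the browser's own list (browsers ship their own, so the count will differ), but the scale of the blind trust is the point:

```python
import ssl

# Load the system's default CA certificates and see how many roots we are
# implicitly trusting, and who a few of them are.
ctx = ssl.create_default_context()   # pulls in the OS trust store by default
roots = ctx.get_ca_certs()
print(f"{len(roots)} trusted root certificates")

for cert in roots[:5]:
    # 'subject' is a tuple of relative distinguished names; flatten it to a dict.
    subject = dict(rdn[0] for rdn in cert["subject"])
    print(" ", subject.get("organizationName", subject.get("commonName")))
```

However you count them, it is a long list of organizations you have never vetted and mostly never heard of.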

Are you feeling safe now?

The third point is even more difficult to come to terms with: data leaks.  No matter how secure the transmission is, it does you no good if somebody can read your data before it's encrypted at the end points.  Who cares if you use 1024 bit encryption if there's a keystroke logger installed on your machine which captures everything you do before it can be encrypted?
Or maybe they don't even need a keystroke logger.  Try sitting in a room where somebody else is typing.  Close your eyes, and listen to the sound of their keystrokes.  Do you notice how they don't all sound quite the same?  (If you're not convinced, ask them to touch type for a few minutes, then hit the same key over and over again with one finger.)  Depending on the location in the keyboard, each key strike has a slightly different pitch and timbre.  Researchers at Georgia Tech recently demonstrated how the accelerometer in an iPhone 4 could determine what was being typed on a nearby keyboard with 80% accuracy.  This was considered a much more interesting demonstration than simply using the microphone, because access to the accelerometer is much less tightly controlled; you usually get notified when the microphone is turned on, but nothing tells you when an app is reading the accelerometer.  How is encryption going to save you from that?

Now granted, this iPhone exploit is not easily replicable - they needed the phone to be perfectly positioned, on the right type of table, and all sorts of other carefully controlled factors.  But technology always gets better, and more pervasive.  How long before an iPhone can do the same type of detection from 10 feet away?  How long before somebody figures out how to do it using a laser microphone against your window from 300 feet away?  How long before the current proliferation of cameras and microphones in consumer, industrial and municipal devices means that you're always within range of some camera, somewhere?

When any one of them can potentially be hacked, how will you ever know that anything you say or type won't be monitored?

This all sounds like the stuff of spy movies.  You're probably thinking, "Sure, this could happen in theory.  But who's going to take the time and effort to go after me?"  That's probably true.  Until technology makes it so simple to do that your neighbor's kids can buy the necessary gear for less than $10.  Counterfeiting money was once the exclusive domain of organized crime.  Then we had a new generation of printers and copiers which could churn out perfect copies of dollar bills.  You'll notice our currency has gone through some significant redesigns in the last twenty years, adding many new security measures.  This wasn't to stop organized crime.  This was to stop the average consumer for whom temptation had become just a little too hard to resist.

This is the point where I'm supposed to editorialize, and point out that only with immediate action right now can we avoid calamity.  But I don't have any answers on this one.  If you do, I'd be interested to hear about it.  But first, find a venture capitalist and start a company to implement it, because security concerns are going to be one of the hottest topics of the twenty-first century.

Monday, November 14, 2011

Hackable Everything - Part I


The internet was created on a dream.

What if computers could talk to each other?

It's easy to lose sight of what a revolutionary dream that once was.  There was a time when most computers were not sold with modems or network connections of any sort.  You transferred files by putting them on floppy disks.  If you were especially tech savvy, you hooked two PCs together through their parallel ports and were able to transfer files directly from one to the other.  It seemed like magic at the time.

Then Al Gore invented the internet, and suddenly computers all over the world could talk to each other.  This happened so suddenly that nobody knew what to do with it.  You think I'm kidding, but I'm not.  The first corporate websites in the 90s looked like they should be hanging on the walls of a third grade art class.  Take a look at some of these if you don't believe me.

Then we upgraded everybody to broadband, and figured out what to use the Internet for: just about anything you could do on a computer.  You could browse.  You could shop.  You could communicate.

Anything you could do on a computer.

What if we could connect to the internet without a computer?

Between shrinking chip sizes and mobile protocols such as wi-fi and bluetooth, this dream was barely formulated before it came to life.  Email on your cell phone?  Check.  Emergency service and navigation in your car?  No problem.  Bluetooth connectivity for your insulin pump?  Why not?

Maybe we should have tried a little harder to answer that last question.

Because the inventors were not the only ones dreaming.

What if any device with a network connection could be hacked?

Finding unintentional uses for computers is a pastime as old as computers themselves.  One of the first demonstrations ever of a personal computer was done on a machine lacking a monitor and printer.  With no formal method of output, the programmer timed the cycles of the CPU just right so that the radio interference it generated played some simple songs through the static of a nearby radio.  Computers weren't designed to leak radio signals.  It was simply possible, and a really clever person figured out how to exploit it.

The world is chock full of really clever people.  Not all of them have good intentions.

The problem is, we still don't really understand our connected, online devices, any more than we really understood the internet back in the 90s.  We still expect them to act like old fashioned devices, just better.  Hacking an insulin pump?  Whoever heard of such a ridiculous notion?  When security researcher Jerome Radcliffe demonstrated that he could issue unauthorized commands to his insulin pump over bluetooth, the manufacturer, Medtronic, just laughed.  They issued a dismissive statement saying: "...there has never been a single reported incident of wireless tampering outside of controlled laboratory experiments in more than 30 years of use."  Because we haven't seen this before, it couldn't happen now.  Go away, and trust us.

Then McAfee reproduced the hack.  And improved it, so it could work from 300 feet away.  And demonstrated how easy it would be to command the pump to deliver a lethal dose of insulin.  Medtronic isn't laughing anymore.

On November 14th, the New York Times published an article discussing Google's top secret research labs, where researchers are figuring out, among other things, how to put just about anything on the internet.  Garden planters.  Coffee pots.  Refrigerators.

What happens if Google succeeds?  Could a clever hacker figure out how to shut your freezer off for a few days while you were away from home, then turn it back on, causing you to unknowingly eat spoiled and possibly lethal food?  How about turning on your furnace full blast in the middle of an August heat wave?  And God help us if they ever figure out how to hack one of Google's driverless cars.

We live in a brave new world.  Everything is going online.  Everything is potentially hackable.  Unimaginable opportunity.  Unimaginable risk.

Anybody who claims to know how this will play out is selling something.

Monday, November 7, 2011

Technology Dreaming


Dreaming about technology can be extremely seductive, because it can make the impossible suddenly very possible.  I'm not even talking about the big dreams like healing the sick, or feeding the hungry.  I'm talking about the very mundane details of how to run a business, which are anything but mundane if it happens to be your business.

Imagine if we could deliver a package anywhere in the world overnight.
Imagine if customers could withdraw money from the bank without needing a teller.
Imagine if we could order inventory just in time and massively shrink our warehouse.

The trouble with dreaming about technology is we forget that it is not always possible to achieve the impossible.  History is littered with failed ideas that seemed just within our reach.

Imagine if we could achieve a paperless office.
Imagine if we could predict the stock market.
Imagine if we could automatically deliver baggage in the Denver International Airport.

Netflix is a great example of a company that was swept up by the promises and perils of technology dreaming.  It started with a brilliant idea.

Imagine if we could rent videos without video stores.

This is a killer concept.  I've never reviewed the financial statements of Blockbuster (remember them?), but I'm pretty confident that if I had, I would have found the vast majority of their capital was tied up in real estate.  Other than a couple big warehouses and data centers, Netflix has no real estate.

Netflix combined this killer idea with superb execution, and became a juggernaut in an insanely short period of time.  David Pogue raved about their service even after he'd given up his membership.  They attracted global attention when they offered a million-dollar bounty for an algorithm to improve their video recommendations, saving themselves many times that in R&D costs.  It was almost a perfect business.  They had almost no need for capital, except for their inventory of DVD disks.

Those pesky, irritating disks.

Imagine if we could get rid of the disks.

It's such a fine line between stupid and clever.

But let's get things straight.  Starting a movie streaming business to give your customers better and faster options is clever.  Positioning yourself for a future when disks may go out of fashion is clever.

Jettisoning a popular business because you like the idea of streamlining your operations, without regard to how you're enraging your customers, is stupid.

Many customers registered their displeasure, and cancelled their subscriptions.  Many investors registered their displeasure, and sold their stock, wiping out roughly $8 billion in market capitalization. (It has since regained a bit of ground, but nothing close to what it lost.) The fact is that technology isn't yet ready for totally disk-free viewing for everybody.  Movie studios aren't yet willing to license all their content for streaming.  Broadband connectivity can be flaky.  And even when it's reliable, many people are still on DSL, which provides image quality about equivalent to VHS - not as good as DVD, and a far cry from Blu-ray.  And sometimes you're not completely done when the movie is over, and want to see some of the special features, which are not yet available on streaming.

It's interesting to ask why Netflix is still in the doghouse.  They've cancelled their unpopular plan to spin off the DVD business.  And their prices are still comparable to what they were before the streaming business ever existed, when Netflix was still wildly popular.  The problem is that customers also dream.  They dream about a company that treats them right, and puts their interests over short term profits.  For a while, Netflix seemed to be that company.  Then the dream was shattered.  It will take a long time to rebuild that dream.

So go ahead and dream about technology.  Dream up the next killer idea that will transform your business.  Transcend the impossible.

Just try not to cross that fine line that separates stupid and clever.

Monday, October 31, 2011

IP Everything

A few weeks ago, I noticed (barely) that the New York public radio station, WNYC, now has its own iPhone app.  Listeners now have the option of hearing a live stream of the station online, and podcasts of recent shows.  This is such a “me-too” story that it barely qualifies as news.  I’m still not sure why I took the time to read it.  Everything is already podcasted these days, isn’t it?

Still though, it made me stop and think a bit.

What exactly is radio?  The World English Dictionary (as available at dictionary.com) defines it as “an electronic device designed to receive, demodulate, and amplify radio signals from sound broadcasting stations”.  If you’re listening to it over IP instead of over radio signals, is it still really radio?  Or is it a podcast?

A pedantic and pointless question.  At least if we leave it at that.

Suppose you transcribe it and put it online.  Is that a web page (or blog)?  Or is it still radio?

Suppose you have a speech-to-text program, and you automatically put your radio program online.  Now suppose you have a text reader for your website.  Are you reading a radio program, or listening to your website?

Suppose you have an electronic book.  It’s a legal rather than a technical limitation whether your Kindle can read that book to you aloud.  What is the difference between your book and a radio podcast?  And what if your book includes multimedia such as sound and video, as is slowly starting to happen?  Is it still a book?  Or has it become a movie, or TV show, with lots of text?  Or a radio show with visual augmentation?

There used to be very clear lines drawn between movies, TV shows, books, newspapers, and radio.  There was maybe a small amount of blurring of the lines, such as when somebody published a newspaper in book format to make a point, but it was usually pretty clear what you were talking about.  But as everything goes online, all our previous classifications of media are going to merge into an indistinguishable mass.  We might keep some conventions about timing and primary format due to historical considerations, but we’ll have a really hard time explaining to our kids what the point of those conventions was.  And so much of our language is going to have to evolve as media becomes fluid and interchangeable.  There’s a certain subset of the population that gets upset when people say they’ve “read” an audiobook.  Am I allowed to call Felicia Day a TV star?  Or must I refer to her as a star of online media?  Perhaps at some point we won’t even know what the original source of media was, because we’ll so seamlessly move back and forth from audio to visual in multiple formats.

We live in interesting times.

Monday, October 24, 2011

Feature Creep, or Sour Grapes?


Last week, Andy Rubin, Senior Vice President of Google’s Mobile division, aimed some criticism at Apple’s new Siri software, stating “I don’t believe your phone should be an assistant.  Your phone is a tool for communicating.  You shouldn’t be communicating with the phone; you should be communicating with somebody on the other side of the phone.”  Is this a legitimate criticism, or just an attempt to fling mud at Android’s chief competitor?

Mr. Rubin does have a point that is often valid.  Products often get worse as you add things to them, instead of better.  If you’ve spent any time on the Long Island Sound, you may have seen a hybrid vessel that is half sailboat, half motorboat.  I am not alone in the opinion that they combine the worst attributes of each, and the advantages of neither.  Fred Brooks, in his landmark book The Mythical Man-Month, shares the observation that great architecture comes not from adding feature upon feature, but sticking to a spare elegance.  Bruce Lee expressed similar sentiments when he developed the core principles of Jeet Kune Do.

Mr. Rubin adds emotional depth to his argument when he points out the proper recipient of communication is not the device itself, but people through the device.  This conjures up images of armies of socially isolated users, using their phones as a pathetic stand-in for real human interaction, presumably because they have no friends and can’t get a date.  Most of us know some people for whom this image has some resonance.  The idea that it might spread like wildfire to all iPhone customers is chilling.

Does his point make sense here?

In order to analyze this more objectively, we need to take a closer look at what Siri actually does, and figure out if those are truly features that belong on our smartphones, or if they are examples of inappropriate feature creep.  Siri is versatile, so it’s difficult to pin down the features with any level of precision.  However, we can get a reasonable view by taking a look at the examples that Apple provided in the “Introducing Siri” video that was released with the iPhone 4S (http://www.apple.com/iphone/features/siri.html).  In this video, we see people:
-Checking messages
-Sending texts/emails
-Setting a reminder
-Playing music
-Checking traffic
-Checking weather
-Looking up basic facts
-Setting a timer

In almost every case, this is functionality which already existed in the iPhone, via the operating system or the browser.  It also exists in Android.  The counter-examples might be the reminder feature, which is a recent addition to iOS 5, and the timer.  However, there has been a thriving market for personal organization software for iOS for quite some time – it’s the main reason I bought my iPad, so the reminder feature is really nothing new.  I’ll go out on a limb and say that neither is the timer.  So I feel justified in saying that Siri doesn’t really add any new functionality to the iPhone.  What it adds is a new interface.  The video also points out that this new interface makes the iPhone much more usable to the visually impaired.

So what Mr. Rubin is really criticizing is the addition of a new interface to existing functionality.  It’s hard to make a serious case that this detracts from the elegance or usability of the phone.  There are certainly times when it won’t be of use, especially in areas with high background noise, or where speaking aloud is socially unacceptable.  But there are also numerous times when a user’s hands are otherwise occupied, and this becomes a way to use the iPhone when it would not have otherwise been usable.

So sorry Andy, I’m not buying it.  Adding a new, optional interface to my device doesn’t mean I’m spending all my time talking to my phone, it means I’m accomplishing the same tasks I was before in a more convenient way.  Now if only Apple would release Siri for my iPad…

Monday, October 17, 2011

And then what?

On October 10th, the hacker group Anonymous failed to take down the NYSE website.  Whether they ever made a serious go at it is unknown.  While there are some advantages to being a fully decentralized organization, its limits include the inability of anybody, including the group itself, to ever know its full agenda or action plan.  Insofar as it ever existed, the planned attack was apparently an attempt to show solidarity with the "Occupy Wall Street" protesters.

I will confess that I remain confused over the ultimate aims of the Occupy Wall Street protesters, and especially about how anybody (including themselves) will know if they've "won".  But I'm especially confused about the purpose and presumed benefits of executing a denial of service attack on the NYSE website.  What, exactly, is the point?  If it succeeded, NYSE might be forced to buy a couple extra servers to beef up their capacity.  Maybe they'll dream up some additional security measures, although there's really not too much you can do against this type of attack.  Some system administrators would be mildly inconvenienced (although they'd also gain some additional job security), and life would go on.

I'm not opposed to protests and revolutions.  But I'm a big picture guy, and a systems thinker to boot.  I'm wildly unpopular at cocktail parties, because I refuse to concede that the ills of the world can be accurately summarized in a few sweeping generalities.  If you knock down an existing system or institution, that immediately raises the question "And then what?"   If you believe that our country is being compromised by a network of good ole boys in a system of "crony capitalism", then I can appreciate that.  But if you think that you can change that system by taking down the website of the NYSE, then that's pathetic.  What you need is to have a deep understanding of the economic, social and psychological factors that have created that system.  Certainly there is much that might be done by addressing legal issues such as corporate governance, director accountability, accounting standards, capitalization ratios, and financial transparency.  You might choose to tinker with the minimum wage, or campaign finance law.  But a denial of service attack?  Come on.

The American revolution was won on the battlefield.  But it's a critical mistake to believe that victory in war created a new nation.  The nation was born in the painstaking hours of drudgery spent in Philadelphia and elsewhere as delegates from across the colonies argued complex points of philosophy, law and history to create the compromise that became the legal framework of the United States of America.

So tell me your manifesto, what you think the problem is, and prove to me that you understand the complex system that has created it and perpetuates it.  I may or may not join you.  But I'm not going to believe you're worth joining unless you can, at a minimum, answer the question "And then what?"

Monday, October 10, 2011

Everybody wants to be Apple

Like many others, I was deeply saddened to read of the death of Steve Jobs this last week, even though I don't consider myself a hard-core Apple fan.  I love my iPad and my iPod, but I get frustrated by the simple things they are unable to do.  For example, I can't get my iPod to list all the titles of my Audiobooks, or my iPad to put all my martial arts documents (books and videos) in a single folder.  And I haven't used an Apple computer since I finally, and reluctantly, gave up on my Apple IIe sometime around 1990.  But I have enormous respect for what Apple, and Steve Jobs, has done and become in the last 14 years or so.  And it's interesting to see how Apple’s business model is being emulated by some very different players in the technology industry.  Two that I'd like to call specific attention to are Oracle and Amazon.

It’s worth asking what makes Apple so special.  There's no single factor - they do an awful lot of things very well, and a misstep in any of them would dilute many of the others.  Most people say that Apple's secret is to make insanely great products.  This is true, but hardly sufficient.  The Flip was an insanely great product, for many of the same reasons that Apple products are great.  It ignored conventional wisdom about how to make a video camera (pack in loads of features).  Instead it focused on delivering the core features that people wanted, and wrapping them in a small, elegant, easy-to-use package.  It was tremendously successful, but not successful enough - for reasons known only to themselves, Cisco pulled the plug, and stopped making them.  I'm not sure what I'm going to do when I need to replace mine.

Apple's real secret sauce is that it doesn't limit itself to making a single product in isolation.  It asks what experience it is trying to deliver.  Then it builds a product to create that experience.  But products don't deliver experiences by themselves - what else do you need?  Apple figures that out, and makes sure that those pieces are in place as well, either by building them, or controlling the delivery.  iPods aren't nearly as useful without iTunes software to manage the library, or the iTunes store to sell media.  The Mac and iPad both work well because Apple doesn't look at hardware or software in isolation, but makes sure it owns both sides and integrates them seamlessly.

This isn't happening in a vacuum.  In one of the biggest non-Apple product releases in recent years, Amazon.com recently announced what may be the first serious competitor to the iPad, the Kindle Fire.  The naysayers point out that this new Kindle is doomed because it is both physically smaller and less capable than the iPad.  This is true, but it misses the point that these are the exact same reasons people used to claim the iPad would fail when it first launched.  Netbooks were available that were also small, and had much greater power and flexibility.  These critics failed to realize that many purchasers and would-be purchasers of netbooks had little interest in running all the Windows software a netbook will run.  They wanted something light, easy to turn on, that would allow them to browse the web, read email, and perhaps use some applications.  The iPad fit the bill, and sales have been explosive.  Time will tell whether there is a similar niche at the lower end of the tablet market that Amazon will be able to dominate with an inexpensive competitor.

The Kindle Fire makes great use of the Amazon store, allowing it to be a seamless portable platform for consuming books, music and movies.  But Amazon's great conceptual leap lies in the leveraging of their cloud technology to seamlessly accelerate the performance of their new web browser, Silk.  Amazon has been one of the top players in the new Cloud marketplace, selling unused computing power from their massive server farms for low, hourly rates.  Now, harnessing that same horsepower will allow Kindle users to view the complicated web sites that proliferate these days much faster than would otherwise be possible with the hardware available at the Kindle’s price point.  Amazon cleverly made this an optional optimization, so people don't have to feel tied to Amazon's infrastructure if they want to browse the web directly, albeit at a slower speed.  I personally have no interest in the Kindle (the screen is too small for my needs), but I applaud Amazon for realizing that the web surfing experience doesn't need to be constrained to a single device, and figuring out creative new ways to solve the problem.  And although privacy advocates will howl, it certainly doesn't hurt Amazon's marketing analysts to gain a portal into the full web browsing experience of Kindle Fire users.  My guess is that most people will gladly trade privacy for convenience, as we've already seen in many areas of online life.

The other interesting innovator we've seen recently is Oracle.  When the number one database company purchased Sun Microsystems in 2009, the common speculation was that Oracle was after Java, MySQL, and Sun’s patent library, and would basically milk the hardware business dry.  But lately, Larry Ellison has been talking all about the hardware, crowing about how a tight integration between hardware and software will provide unprecedented levels of performance.  Sound familiar?

The funny thing is, what we now call the Apple strategy wasn't invented by Apple.  Not even close.  That award probably goes to IBM.  They wrote their own operating systems and software to run on their mainframes because they were the only game in town.  Only gradually did it occur to people that you might get greater flexibility by adopting a “best-of-breed” model.  At the time, this multi-vendor approach was considered a huge leap forward, mostly because IBM and the IT departments that bought from them had grown sluggish and inward focused.  Now we seem to be coming full circle.  IBM has moved on to selling software and services (and doing basic research, as they are happy to periodically demonstrate against the Chess and Jeopardy champions of the world) while Apple leads the charge back to the world of fully integrated solutions that just work.

Farewell, Mr. Jobs.  May your greatest legacy be not the products you invented, but the path you blazed which will lead Apple and many other companies into the future.

Monday, October 3, 2011

Anarchists, Lunatics, and … Politicians?


It is not an original observation that the Internet is a chaotic place.  With the ability for people to express outlandish opinions in relative anonymity, and find other like-minded people who might share these opinions, we’ve seen all sorts of new types of social structures emerge, ranging from dating sites specifically geared toward Ayn Rand fans, to flash mobs that come together to dance in silent unison in the London Underground.

The question is, why haven’t we seen this taken to the next logical step?  Now that everybody can find their own group, no matter how unusual it may be, why don’t we see this spilling over into real world power structures?  Sure, we’ve seen some politicians start to harness the power of the web in basic ways, but where’s the real fringe candidate or party coming to power based on a collection of eccentrics bound together by an online interest?

Answer: Germany

It finally happened – not only did a group of hackers form an organization called the Chaos Computer Club (not so unusual), they then leveraged that to form a political party, which they called the “Pirate Party” just so everybody would know exactly how serious they were (a bit more unusual).  Then they went and won 8.5% of the vote in the Berlin state elections (OK, now THAT’s unusual).  Their platform is all about intellectual property: reforming patent law, strengthening individual privacy rights, and increasing the transparency of state administration.

If this happened in the US, that would give them some headlines, and maybe a few politicians would say a few empty words about being more inclusive and working to address their legitimate concerns.  And then promptly ignore them.  In Germany, their voting results were enough to give them 15 seats in the legislature – maybe not enough to take over the government, but enough to make everybody sit up and take notice.

So what happens now?  Is this a one time fringe event to be recorded in the history of oddball politics?  Or does this represent a first step in a fundamental reshaping of the political map world-wide, as new voices and new power structures start to emerge?

And does this have you excited, or terrified beyond belief?  Or maybe just a little bit of both?

Whenever I see things appear to go off the rails, I reflect that some of the earliest writing we have is from Ancient Egypt, circa 2000 BC, complaining that civilization is going completely to the dogs, and everything was so much better a few decades ago.

I also remember that no matter how often they are wrong, someday the doomsayers will be right, and civilization will really come to an end.

Monday, September 26, 2011

Do we really need technology patents?


In April of this year, Apple, not satisfied with having proven they can run circles around their competition through technological innovation and execution, decided they needed to add lawsuits to the mix.  They sued Samsung for patent infringement, claiming that a number of Samsung's phone and tablet offerings resembled the iPhone and the iPad too closely.  Samsung, realizing the gloves had come off, dug into their own patent libraries and found reasonable grounds to sue Apple for infringement of some of their wireless technology patents.  Victories have been won and lost, lawyers have made money, and the saga spins out some new headlines every few weeks for those interested in this sort of thing.

Is anybody winning this fight?

It's worthwhile to ask why we have patent law in the first place.  Patents have a long history, stretching back to Italy in the 1400s (or earlier, depending on your definition).  In the United States, the foundation for patents was laid in the Constitution, and implemented in law in 1790.  Those were heady times for technology innovation.  The Constitutional Convention in Philadelphia adjourned early one day to watch the trials of John Fitch's steamboats on the Delaware River.  It seemed a reasonable step to allow the innovative geniuses who were creating these novel and unheard-of technologies to have some exclusive profit for a period of time.  Perhaps it made sense then.

Does it still today?

In today's world of technology advancement, it's pretty hard to come up with a widely agreed upon definition of what is new and innovative enough to merit a patent.  One common rule of thumb is that an innovation must be non-obvious to somebody trained in the field.  But the problem is, which person are you measuring against?  The quality of people in software development varies widely.  If I invent a clever new algorithm, it's likely that hundreds of thousands of IT professionals won't have thought of it, and by that standard it is non-obvious.  But the field is also filled with thousands and thousands of geniuses in Silicon Valley and elsewhere who not only find it obvious, but probably already thought of it in the shower and never considered it worth following up on.  So patents are not so much a protection of innovation as an intellectual land-grab.  Whoever files the (somewhat expensive) paperwork gets the rights.

Patent proliferation is so bad that it's not only possible, it's virtually inevitable that any large and complex codebase will inadvertently infringe upon dozens of pre-existing patents.  This isn't a theoretical concern.  The problem is bad enough that Google was willing to spend $12.5 billion on Motorola, not because they had a burning desire to run a cell phone company, but purely because they wanted a war-chest of patents to protect their Android software.  Not all of those patents will apply, but just having them serves as leverage against any other technology company that may choose to sue, just as Samsung has started to do with Apple.  Patents are not innate.  We the people have chosen to allow them so as to stimulate innovation.  Is this really where we want Google, and every other technology company, to be spending their money?

What can we point to that suggests that technology patents actually stimulate innovation?  Do you really believe that if we abolished patent law today, most or all of the consumer device makers would stop making cell phones and tablets and DVD players and TVs?  Do you really think we're better off as a society if Apple gets twenty years to make tablets with no competition?

I would answer "no" to the above.  And if Apple starts focusing more on patent litigation than on making "insanely great" products, I think we might mark this point in future history books as the moment when they blew one of the great turnaround stories of modern times.

Sunday, September 18, 2011

When will they ever learn?

If there's one lesson in terms of scandal management that everybody seems to agree on, it's that the initial crime or screw-up isn't nearly as fatal as the cover-up which follows.

Why is this so difficult for people to learn? And in this day and age of transparency, why isn't it more patently obvious that the truth is going to get out, sooner or later?

Our latest contender for the crown of idiotic crisis management is DigiNotar. A Certificate Authority located in the Netherlands, DigiNotar is one of the trusted firms that are supposed to guarantee the integrity of information on the internet. One would think that this awesome responsibility would weigh heavily on those who carry it, and would cause them to think through their "what if" scenarios very carefully.

Or then again, maybe not. As you already know if you follow this type of tech news, DigiNotar was hacked, and hacked badly. I don't really blame them for this. Internet technology is a massively complicated affair, and people are notoriously susceptible to social engineering. So I think any firm is vulnerable to being hacked (though I do scratch my head and wonder what they were thinking when they set their production admin password to "pr0d@dm1n"). But once this happened, one would hope for just a trace of transparency and accountability. Warn the world of what has happened. Recall the tainted certificates. Put an immediate halt on issuing new certificates until you've figured out the full extent of the problem and how to fix it. And no, I don't mean just changing a stupid password to one marginally less stupid - we need a complete technology and process overhaul.

But DigiNotar failed at each of these tasks, and has thus been removed from the trust of all the major browsers. Barring having their corporate headquarters get struck by an asteroid made of platinum, they're out of business. Some of their competitors who were also hacked took full responsibility and disclosed everything, and will likely emerge stronger and more trusted than ever.

Some day, people will learn. But it's apparently not this day.

Monday, September 12, 2011

The Courage of your Convictions


It's not too often that we talk about corporations displaying courage.  The stories that make it to the newspapers usually concern bureaucratic incompetence or uncaring actions, both of which I could argue largely arise from group-think and the failure of any individual to stand out and take chances outside the system.  But I'd like to call attention today to Netflix, which made a startlingly intelligent and risky decision.

During a recent outage of Amazon Cloud services, Netflix was one of the very few Amazon customers that felt minimal impact from the incident.  This was because of an internal tool at Netflix called (don't laugh) "Chaos Monkey".  It seems that Netflix decided early on that it was important to have redundant systems, and they wanted to make sure any single failure wouldn't take down their environment.

So far so good.  Most companies make the same decision, and it seems a safe and easy decision for a CIO to put a bit more money into his budget for redundancy.  If it doesn't work, hey, don't look at me, I tried!  But Netflix went enormously further.  They built "Chaos Monkey", designed to take down their own servers, anytime, anywhere.  This isn't in a controlled sandbox.  It isn't even their development environment.  This is production.  Developers who don't design 100% redundant systems find out about it REAL fast.
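
To make the idea concrete, here's a minimal sketch of the "break things on purpose" pattern in Python.  Everything in it (the list_instances and terminate hooks, the interval, the kill probability) is a hypothetical stand-in for your own infrastructure; this illustrates the concept, not Netflix's actual tool.

    import random
    import time

    def chaos_monkey(list_instances, terminate, kill_probability=0.1, interval_s=60):
        # Toy "break it on purpose" loop.  list_instances() returns the IDs of
        # currently running servers; terminate(id) kills one of them.  Both are
        # hypothetical hooks you would wire up to your own environment.
        while True:
            running = list_instances()
            if running and random.random() < kill_probability:
                victim = random.choice(running)
                print("Chaos Monkey is terminating", victim)
                terminate(victim)   # if this hurts, the redundancy isn't real
            time.sleep(interval_s)  # roll the dice again on the next pass

The code is trivial; the courage is in the policy of letting it run continuously, in production, with no exceptions.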

I've worked in many corporations, some of them quite good, and have yet to work in an environment which would have the stomach for such a radical proposal.  And yet what could be more effective at achieving their goals?

What are your convictions?  And what would you be willing to risk to live them?

Sunday, February 20, 2011

Thinking about thinking

On February 16, 2011, a computer named “Watson” defeated two champions in a three-day televised contest of the game-show “Jeopardy”.  Whether or not this was a monumental achievement depends on who you ask.  On the one side, there were people who were by turns impressed and concerned by this accomplishment.  Impressed by the sheer technical challenge of solving a game which requires the ability to parse natural language and find answers to complicated questions.  Concerned because of the implications of people losing their jobs, or as others noted, probably only partially joking, that this was the first step towards artificially intelligent terminator robots being sent back from the future to destroy us all.

The other side came from people who felt that this was a straightforward exercise in data lookup, which has been demonstrated by computers for years, and perhaps brought much more firmly into the public consciousness by Google.  One commenter on the New York Times made the following criticism of this as a milestone in artificial intelligence:

“...There is … a risk in considering this a test of ‘intelligence’. When ‘Big Blue’ beat Kasparov in chess 20 odd years ago people correctly realized that chess is ultimately reducible to a mathematical equation, albeit a very long one. While Watson certainly seems a giant leap in a computer's ability to process more complicated language, it is still committing an analysis of terms, and I question whether it can truly comprehend communication.”

This is an interesting point, and it hinges on the question of what intelligence is, and whether “real” artificial intelligence really matters.

In 1950, the English mathematician Alan Turing proposed an experiment called the Turing Test.  In this experiment, a neutral observer would sit at a computer terminal and have a typed conversation with an unknown party.  The question is whether a computer could consistently fool this observer into thinking that he or she was conversing with a human.

Many people think that this was designed as a test for artificial intelligence.  In reality, Turing deliberately sidestepped this question.  His question was not whether a computer could be intelligent.  His question was whether a computer could imitate intelligence.  Which is an interesting point – if something passes every test you can imagine for intelligence, does it matter whether or not it is actually intelligent?  Perhaps this is relevant for a philosopher consumed with the volume of trees falling unseen in forests, but from a practical perspective, the answer would seem to be “no”.  If a terminator comes back from the future to kill you, it’s not much consolation that it is only a very precise mimic of an intelligent assassin.  Some people might point out that this is actually a crucial issue: If it only mimics intelligence, then you can exploit that weakness and figure out the limits of its capabilities to defeat it.  But this ignores the essence of the experiment.  If you can figure out the limits of its imitation, then it’s not a successful imitation: the Turing test has failed.  If it’s a true mimic, then you can’t tell the difference, and you’ve got a seriously dangerous AI robot on your hands, whose only limitation seems to be a thick Austrian accent.

So let’s go back to our commenter’s point, and admit that Watson really just succeeded at solving a big equation that was modeled to play the game of Jeopardy better than the humans could.  It didn’t truly represent artificial intelligence.  Our next question is: what is the limit of problems that can be modeled as mathematical equations?  People have been wrestling with this problem for a long time – much longer than the beginning of the computer age.  In fact, we might reasonably take this back to Pythagoras in 500 BC.  He came up with a fairly startling proposal: everything is number.  Or perhaps in more modern terms, everything can be expressed as numbers.  Exactly what drew him to that conclusion is unclear.  Perhaps it was the discovery that you can increase the pitch of a note by exactly an octave by reducing the length of a vibrating string by half.  While this might seem straightforward to us today, the discovery that something as emotional as music had a mathematical foundation must have been profound, even unsettling.  Certainly his observation has proved true in ways that would have seemed unfathomable then.  Today, anybody passingly familiar with digital music or images knows that any symphony or artwork can be expressed mathematically, whether as a formula or as a series of ones and zeros.  And if we can express anything static digitally, why not anything dynamic?  Why not a process?  Why not an incrementally improving process?  Why not innovation itself?
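
As a toy illustration of “everything is number”, the little Python sketch below (purely illustrative; nothing here comes from IBM or anyone else) reduces a musical note to a plain list of integers, and shows the halved-string, doubled-frequency relationship Pythagoras noticed.

    import math

    SAMPLE_RATE = 44100  # CD-quality samples per second

    def tone(frequency_hz, duration_s=1.0, amplitude=0.5):
        # A note becomes nothing but numbers: the sine of its frequency,
        # sampled 44,100 times per second and scaled to 16-bit integers.
        n = int(SAMPLE_RATE * duration_s)
        return [int(amplitude * 32767 * math.sin(2 * math.pi * frequency_hz * t / SAMPLE_RATE))
                for t in range(n)]

    a4 = tone(440)      # concert A, as 44,100 integers
    a5 = tone(440 * 2)  # halve the string, double the frequency: the A one octave up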

To be sure, there are real limits to what we can get computers to do today.  We can’t program a computer to be an original musical genius, though it’s worth noting that we can program it to compose derivative works.  There is software that can create original compositions that remind one vaguely of Mozart – not one of his best works, perhaps, but certainly of that period.  But before we get too caught up on this point, it’s worth remembering that we don’t know how to raise a human being to be Mozart either.  It just happens.  We’d be hard-pressed to even define what a genius is, or how to consistently recognize it.  After losing his chess match to Deep Blue, Kasparov accused IBM of cheating because of the “deep intelligence and creativity” he saw in Deep Blue’s moves.  Assuming IBM did not cheat (and I’m willing to give them the benefit of the doubt), was Deep Blue a “genius”?  If we can program a genius at chess, and Jeopardy, what’s next?

And perhaps more importantly, what does this mean to us?  As noted before, there are two big concerns arising from this trend: jobs, and accent-impaired killer robots.  The robot question is perhaps the bigger one, and it is one for which an answer was proposed many decades ago by Isaac Asimov.  In his science fiction worlds, he eliminated the danger of robots to humans by building three rules deeply into their operating systems:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

On the surface, this looks like it would solve the problem.  In practice, there are some real issues.  The first is, how do we even begin defining this?  You first need to program the AI to recognize humans, and then to understand all the ways that humans might come to harm.  How far do you carry this?  Does a robot stop you from jumping out of a window to save yourself from a burning building?  Does it prevent you from smoking, because you’re damaging your lungs?  How about preventing you from driving to work, because of the long-term damage you’re causing to the atmosphere?

Leaving aside those questions, I think it would be a great next experiment for IBM.  Build a robot capable of doing, say, basic factory work, but which is absolutely incapable of harming a human.  A self-driving forklift which can spot if a person gets in its path and come to a halt.  Points given for how intelligently it can work to proactively prevent that human (or perhaps a crash dummy to start) from coming to harm.  Figure out how you can build these guidelines into any program you design.  That would generate huge positive publicity for IBM and be a great step forward.
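
For what it’s worth, here is the smallest possible sketch of what such a safety interlock might look like, in Python.  The sensor input, threshold, and speed values are all invented for illustration; the point is only that the “do no harm” check runs before any productive work does.

    STOP_DISTANCE_M = 2.0  # halt whenever a person is detected within two metres
    MAX_SPEED = 1.0

    def drive_step(person_distance_m, current_speed):
        # One pass of a hypothetical first-law-style interlock.  person_distance_m
        # would come from a person-detection sensor (None means nobody in view);
        # both the names and the numbers are made up for illustration.
        if person_distance_m is not None and person_distance_m < STOP_DISTANCE_M:
            return 0.0                              # human too close: full stop
        return min(current_speed + 0.1, MAX_SPEED)  # otherwise ease back up to speed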

The problem is, it isn’t likely to catch on, because not everybody buys into the concept.  The military has no interest in unmanned drones which can’t harm humans.  They’re putting billions of dollars into designing robots which can kill people at a distance and with less and less human operation required.  In the short term, this is a great idea, because it saves the lives of our soldiers.  In the slightly longer term, it should be noted that we’re zipping quickly along the path to a Terminator future.  Maybe we’ll get to a future where we only program our military robots to destroy our enemy’s military robots.  Whoever has an automated army still standing by the end of the conflict wins.  But call me a skeptic.

The other issue is jobs.  There are two conflicting points worth noting here.  The first is that machines have a long history of eliminating jobs.  This is always painful and sometimes devastating for the people involved.  And yet, the next generation never has any interest in going back.  New jobs arise, and few people miss the old ones.  I personally have no interest in giving up shipping my packages via UPS in favor of having them lugged about by the Pony Express, and I don’t care how many pony riders that impacts.  There are more jobs now as truck drivers and logistics coordinators than there ever were in the Pony Express.

However, that brings up our disturbing second point.  As they reluctantly admit on Wall Street, past performance is not a guarantee of future results.  Just because a trend has continued in the past doesn’t mean it must, or will, continue.  Real estate prices in the United States always went up over time – until they didn’t, and we had a massive crash that almost turned into a global depression.  The question we must ask is what are the underlying drivers of a trend, and are those drivers still applicable?

In the past, machines have automated more and more physical activity, so people have increasingly turned to mental activity.  We’re now a nation of cube workers, because our jobs don’t require us to move around.  We’ve got machines to do all that.  But as computers get increasingly good at solving problems that used to require mental activity, there may be nowhere else left to go.  Perhaps if we get these laws of robotics down, we’ll be OK.  After all, we may eliminate the need to work completely, the cost of living will drop to zero, and we’ll all live the lives of Roman emperors.  That’s the optimistic scenario.  Granted, many people will turn into the blobs depicted in the movie WALL-E without the structure of a job to keep them busy, but that will be self-inflicted destruction, and not one I’m going to worry about personally.  I’m sure I can keep myself productively busy in a permanent retirement, and I can’t drive myself crazy worrying about the people who can’t.

And though it’s not a societal-level answer, I think that philosophy is the best way to handle the danger to jobs in general.  It’s possible for a creative and nimble person to stay employed even in a shrinking industry.  The number of jobs in the music industry is declining every year, but there’s still opportunity for people willing to be creative and clever, do a lot of self-marketing, and watch constantly for non-traditional venues.  If you’re looking to get a paying job in an orchestra, good luck.  If you can leverage yourself providing music for video blogs and do live gigs that are fun and engaging, you just might carve a career out of that.

So ask yourself what you really do for a living.  How are you providing value?  Don’t ask whether your job could disappear.  Nobody wants to think that it could, and you’ll give yourself a falsely reassuring answer.  Instead, ask yourself what conditions would be required to make it disappear.  Then figure out what you’ll do when (not if) that happens.  And start doing it long before it does.

And it doesn’t hurt to keep some huge tankers of liquid nitrogen around, just in case you have to deal with those killer robots.