Monday, January 23, 2012

Invert, always invert


Charlie Munger, the less famous billionaire partner at Berkshire Hathaway, is renowned for his critiques of human psychology.  He constantly warns of the common mistakes people make, and suggests simple ways to improve your thinking.

One of his basic maxims is: "Invert, always invert".  The idea is that by always looking at something in the same way, you get stuck in basic assumptions.  By inverting your perspective, you can often find radically new approaches to problem solving.

Let's take a look at how this might be applied to the world of computer security.

Many years ago, while reading the otherwise excellent DAK Catalog, I found a description of one of the most poorly designed security products I can imagine.  It was a password protection system designed for a dial-up phone system (yes, at the risk of dating myself, this was pre-Internet).  Concerned about the possibility that an intruder would attempt a brute force attack, the designers of this product came up with what they thought was a very clever idea.  The first incorrect character that was entered would immediately disconnect the call.  No longer would it be possible for an intruder to try hundreds or thousands of passwords at a go.  One strike and you're out: dial back and try again.

Let's think about what this means.

Suppose this product used a ten character password.  Further assume it can use upper and lower case letters, and numbers.  That gives you 62 options per character, and 62 to the tenth power possible passwords, or roughly 839 quadrillion.  On average, a brute force attack will guess correctly after 419 quadrillion tries.  Assuming you could try ten passwords a second (a pretty aggressive assumption for a slow dial-up system), hacking this system with a brute force attack would take, on average, about 1.3 billion years.
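
To make the arithmetic concrete, here's a quick back-of-the-envelope sketch in Python, using the same assumptions as above:

    # Brute force against a 10-character password drawn from [A-Za-z0-9],
    # where the system only reveals success or failure for the whole password.
    ALPHABET_SIZE = 62
    LENGTH = 10
    GUESSES_PER_SECOND = 10          # aggressive for a dial-up line

    total = ALPHABET_SIZE ** LENGTH  # ~8.39e17 possible passwords
    average_guesses = total / 2      # expected guesses before success
    seconds = average_guesses / GUESSES_PER_SECOND
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{total:.3g} possible passwords, about {years:.2g} years on average")
    # -> 8.39e+17 possible passwords, about 1.3e+09 years on average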

However, this system accidentally leaked critical information to the intruder.  As soon as you guessed the first character correctly, IT TOLD YOU THAT YOU WERE CORRECT.  That is, it failed to disconnect you immediately.  This means that, on average, you can guess each character in 31 attempts, and the entire ten character password in about 310 attempts.  Assuming you could try one combination per minute (much slower than before, since you have to redial each time), hacking this system would take a bit over 5 hours.
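
The attack against the leaky version is easy to simulate.  The sketch below stands in for the real product with a hypothetical oracle that "hangs up" at the first wrong character; everything here is invented for illustration:

    import string

    ALPHABET = string.ascii_letters + string.digits  # the 62 possible characters

    def first_mismatch(secret, guess):
        """Mimic a system that disconnects at the first wrong character by
        reporting how far the guess got before failing."""
        for i, (s, g) in enumerate(zip(secret, guess)):
            if s != g:
                return i
        return len(secret)

    def crack(secret):
        """Recover the password one character at a time, counting dial-ins."""
        known, calls = "", 0
        while len(known) < len(secret):
            for c in ALPHABET:
                calls += 1
                # pad with '*' (not in ALPHABET); only the prefix matters
                probe = (known + c).ljust(len(secret), "*")
                if first_mismatch(secret, probe) > len(known):
                    known += c
                    break
        return known, calls

    print(crack("s3cretPass"))  # ~31 calls per character on average, 620 worst case

At one combination per minute, roughly 310 calls works out to a bit over five hours, exactly the figure above.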

Ouch.

This is why it's so difficult to get secure systems right.  If you're not really careful, it's very easy to leak all sorts of unintentional information, sometimes through the very mechanisms that you think are making your system more secure.

This is a great example of a security flaw, because it provides an interesting clue about how we might invert our entire approach to security.

Let's go back to the original idea of a ten character password.  Let's assume that we don't leak any information to the would-be intruder to let them know how much progress they're making on cracking that password.  We still suffer from the same basic weakness: we tell the intruder when they have successfully guessed the password.

This sounds ridiculously obvious.  How can a legitimate user of the service function if they don't get access to their data when they type the correct password?  As soon as the data comes up, you know the password is correct.

Unless... you invert the assumption.

What if every user id and password combination provided access to the system?  Note I didn't say "legitimate access", just access.  An incorrect id and password combination would take the intruder straight into a bogus screen, a concept known as a honeypot.  If this was a banking system, then it would provide access to an account with an imaginary name, holding imaginary sums of cash, and an imaginary history of transactions.
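
Here's a minimal sketch of the idea, assuming a banking-style system (the names, balances, and derivation scheme are all invented for illustration).  The trick is to derive the fake account deterministically from the credentials themselves, so an intruder who retries the same wrong combination sees the same bogus data:

    import hashlib
    import random

    REAL_ACCOUNTS = {("alice", "hunter2"): "Alice's real account data"}

    FIRST_NAMES = ["Maria", "John", "Wei", "Priya", "Omar"]
    LAST_NAMES = ["Garcia", "Smith", "Chen", "Patel", "Hassan"]

    def login(user_id, password):
        if (user_id, password) in REAL_ACCOUNTS:
            return REAL_ACCOUNTS[(user_id, password)]
        # Wrong credentials: seed a PRNG from the credentials so the same bad
        # guess always lands in the same imaginary account.
        seed = hashlib.sha256(f"{user_id}:{password}".encode()).digest()
        rng = random.Random(seed)
        name = f"{rng.choice(FIRST_NAMES)} {rng.choice(LAST_NAMES)}"
        balance = rng.randint(10_000, 25_000_000) / 100  # $100.00 to $250,000.00
        return f"Welcome, {name}.  Balance: ${balance:,.2f}"

    print(login("alice", "hunter2"))  # the real data
    print(login("alice", "wrong"))    # a consistent, plausible fake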

Could an intruder tell that all this was fake?  Probably.  Eventually.  They could research that name, and see if it belonged to a real person.  Look up that name at the account's address.  There are all sorts of things they could do.

It would take minutes at best.  It might take hours.  Or days.  What a waste of time.  How is an intruder ever going to mount a brute force attack if they have to spend minutes or hours or days on each guess, using human intelligence rather than a machine algorithm to detect whether they've found a legitimate combination?  And all the while, alarm bells should be going off in the bank, warning administrators and police that somebody is attempting to snoop where they shouldn't be.  Getting one id and password wrong is understandable.  Ten in a row is clearly criminal.  And finding a legitimate account will take trillions of tries, or more.
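
The alarm side is just as easy to sketch; the threshold below is arbitrary, and a real system would do something smarter than print:

    from collections import Counter

    honeypot_hits = Counter()
    ALERT_THRESHOLD = 10  # one wrong guess is understandable; ten in a row is not

    def record_honeypot_hit(source_address):
        """Count how often a source lands in fake accounts, and raise the
        alarm once the pattern looks like a brute force attempt."""
        honeypot_hits[source_address] += 1
        if honeypot_hits[source_address] >= ALERT_THRESHOLD:
            print(f"ALERT: {source_address} has hit "
                  f"{honeypot_hits[source_address]} fake accounts")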

Clearly, this idea is more applicable in some contexts than in others.  It takes time and effort to create effective algorithms that generate realistic looking data (and especially to make sure that real data doesn't leak through).  For a bank system, which has highly structured, very valuable, and easily generated data, this makes sense.  For an online newspaper, not so much.  So it's not a silver bullet.  Still, it's an interesting concept that can be combined with existing security technologies to drastically raise the difficulty for intruders, simply by inverting a basic concept.

Always remember to invert!

Monday, January 16, 2012

Contemplating the Personal Data Warehouse


A few weeks ago, my friend Sivakumar suggested that we could improve human life by creating a Personal Data Warehouse that tracked the development and abilities of people.  Imagine if you could compare the age at which your children developed certain language or other cognitive skills relative to their parents or grandparents.  We might be able to detect and correct developmental problems early.

He then made an interesting observation about a trait in his own family: the mental math skills of each generation are slightly less developed than those of the previous one.  He speculated that perhaps this was the result of an increasing reliance upon technology.

Is this true?

If it's true, is it a problem?

The idea that people will degenerate due to increasing reliance on technology (or something that functions like it, such as slavery) is an old one, and has been put forth as one of the possible causes of the fall of the Roman Empire.  More recently, it was brilliantly illustrated in Pixar's film WALL-E, where the human race is depicted as having depended on its robotic creations for so long that it has descended into a race of barely animate blobs, largely unaware of their surroundings, and motivated by nothing but an increasingly desperate search for entertainment.

It's also a very old idea that civilization is on the verge of collapsing.  Some of the earliest examples of writing we have, stemming from ancient Egypt thousands of years before the birth of Christ, complain about how much better things were in the good old days, and how everything since then has been going to the dogs.

How is it that everything has been sliding out of control for so many thousands of years, and yet there are still people around to complain about it?

I would argue that the crux of the issue is the constancy of change.  Sometimes things change rapidly, sometimes things change slowly, but it never comes to a complete stop.  Political revolutions challenge the old power structures, waged by youth who wear outlandish clothing, ignore the accepted methods of political engagement, and boldly protest and rebel in brand new ways.  And lest you think I'm talking about the 1960s, or perhaps the Occupy Wall Street movement, I actually had in mind Julius Caesar's rise to power in the late Roman Republic.  Maybe some things never change.

Change inevitably seems destructive to the previous generation.  The old values are thrown out the window.  It seems like the world is coming to an end.  And it is - their world.  What they (understandably) fail to appreciate is that when one world comes to an end, another one rises up to take its place.  And so the cycle continues.

As much as we may be annoyed by the ghastly and garish fashions that the next generation inevitably adopts, we can presumably agree that fashion is in the eye of the beholder, and that one generation's tastes cannot be objectively demonstrated to be inferior to another's.  (At least if we ignore the 1970s.)  But some things can be measured.  Doesn't a decline in mathematical ability across the generations indicate a genuine loss?  This can be objectively measured.  So can knowledge of critical facts about the world.  Or the ability to react effectively to a crisis.  Aren't there objective yardsticks by which we can measure gain and loss?

The Fall of the Roman Empire once again provides an interesting case study.  The population of the city of Rome had, fairly objectively, lost their physical and psychological ability to wage effective war.  They were pushovers for the rising powers of the Goths and the Huns.  Surely this is an objective example of civilization going to the dogs?

Maybe not.  Sometimes lost in the discussion is the fact that the Roman Empire never exactly fell, not in the sense of having the entire continent of Europe sitting happily as a thriving metropolis one day, to be replaced by a smoldering crater the next.  The city of Rome was sacked, to be sure.  Some of the large scale trade routes and complex industries disappeared, true.  But for many of the inhabitants of the Empire, the fall was barely noticeable.  One day they were citizens of Rome.  The next day, somebody came along and told them that for the last five years, they'd been citizens of the Ostrogothic Kingdom.  Taxes continued to be paid, official corruption continued, and everybody complained about how much better things used to be, when everybody wore togas and spoke proper Latin, instead of this degenerate Italian which seems to be spreading everywhere.

And why was it that nobody seemed interested in learning to speak proper Latin anymore?  It's not that people had become stupid, or otherwise incapable of learning the language of their forebears.  It simply wasn't useful anymore.  What was the point of learning a language you couldn't do trade in, woo a girl with, or gripe to your friends in?  It might be useful if you wanted to read a bunch of really old books, but that didn't describe most of the population.

So, returning to our previous question: is the loss of mathematical ability a sign of decline?  Or is it simply a reflection that those skills no longer make any sense?  What's the point, when your computer, your phone, your tablet, and maybe even your watch can perform any calculation you can type, faster than you can type it?

Critics of this perspective will point out that it represents a loss of independence.  If all our computers, phones, tablets, and digital watches go simultaneously on the fritz, then we'll all be sorry we never learned to do math properly.

Or not.  More likely, we'll wish we had spent more time learning how to fight off mutant zombies using homemade spears and improvised explosives, because the zombie apocalypse (or something like it) is the most likely scenario that would result in a complete breakdown of technology.

This brings us back to our original question: is there value in creating a Personal Data Warehouse, which can be used to measure our skills and relative progress across our lives and across the generations?  In theory, I'm a keen supporter of any way that technology can be used to improve our individual lives.  I like the concept.

But a Data Warehouse is intrinsically a structured repository of data.  It allows you to organize vast amounts of data to spot complex patterns.  It's a great way to see sales trends, or weather patterns, or traffic flow, among reams of data that would otherwise be unintelligible.  The challenging part of designing a Data Warehouse is understanding what types of questions you may want to ask, which influences the structure of your data.
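
For concreteness, here's a toy sketch of what the core table of such a warehouse might look like; every column is a guess at what one might want to track, not a real design:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE skill_measurements (
            person_id   INTEGER,  -- who was measured
            skill       TEXT,     -- e.g. 'mental arithmetic'
            age_years   REAL,     -- age at the time of measurement
            score       REAL,     -- on some agreed scale
            measured_on TEXT      -- ISO date of the measurement
        )
    """)
    db.execute("INSERT INTO skill_measurements VALUES (?, ?, ?, ?, ?)",
               (1, "mental arithmetic", 8.5, 72.0, "2012-01-16"))
    # The hard part isn't storage; it's choosing skills and scales that will
    # still mean something to the next generation.
    for row in db.execute("SELECT * FROM skill_measurements"):
        print(row)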

So what are your values?  What things are you going to measure?  And what makes you think those values will continue to be valued by the next generation?  My forebears might have created their own version of a data warehouse to measure skill in archery and driving horse-drawn chariots, skills at which I would fail miserably.  But I'm pretty good at navigating a Subaru through snow covered roads, a skill I find highly valuable.  My ability to re-materialize following an inter-dimensional trans-warp is nonexistent, a fact that bothers me not at all, but which might make me a laughingstock if I'm still around in two or three generations when that's the only way to travel.  I might snarl at these kids who don't know the first thing about a stick shift, but I suspect they won't care.

So if we're going to try to measure ourselves through time, it's going to be tricky to do so in a way that holds up over time.  Mind you, I'm not saying it can't be done, but I'll have to be convinced that whatever measures we choose are truly useful over time, and not simply a reflection of our current tastes.  The closest thing we have right now to this type of human record is the memoir.  It's a useful document, highly flexible in its ability to track lots of variable data, but it lacks something in terms of analytical and comparative power.  At least it lasts.

Now, should I publish mine on paper or ebook...?

Monday, January 9, 2012

When will the future finally get here?


It's interesting how insanely wrong predictions of the future tend to be.  Even people who dream about the future all the time have terrible track records.  It's not that they predict too aggressively, or not aggressively enough.  They just focus on the wrong things.

One of my favorite examples of this is Robert Heinlein, widely considered one of the godfathers of science fiction, along with Isaac Asimov and Arthur C. Clarke.  Heinlein dreamed about the future all the time, but consistently got two things wrong:
· He vastly overestimated progress in physical science and engineering, especially regarding space travel.
· He vastly underestimated progress in computers.

The implications of this can be comical.  In Starman Jones, he envisions a world in which we have technology that can beat the speed of light by manipulating the fabric of space, but computers that are incapable of performing simple mathematical calculations, so astrogators spend their flights looking up figures in printed tables and keying them into the navigation systems by hand.

Even when Heinlein speculates specifically about computers, he gets it all wrong.  In The Moon Is a Harsh Mistress, he envisions a supercomputer so powerful that it wakes up and becomes artificially intelligent.  It has so much computational ability that it can calculate the odds of the Lunar colonists launching a successful revolution.  And yet this massive supercomputer, capable of self-programming and orders of magnitude more powerful than anything we have today, is just about maxed out by a simple video rendering of a human torso.

Part of the challenge in predicting the future is that technology doesn't improve at regular intervals.  In a given area, it might barely move for years or centuries, then explode forward in the blink of an eye.  Five years ago, electronic books were a joke.  Companies had been noodling with the concept for decades, but nothing they came up with was any good, and it seemed like it might be decades more before anything caught on.

Then the Kindle appeared, and the iPad, and the Nook, and numerous smaller competitors.  Now the question isn't how fast ebooks will grow, it's how long and in what form paper books will survive.  (I think they will survive for the long term, but at a fraction of their previous prominence).

So we've got ebooks.  Are we in the future yet?

Overall, I'd say no.  2015 is fast approaching, and the world still seems a far cry from the vision presented in the movie Back to the Future Part II.  It got a few parts right, like the prominence of large screen TVs and the tendency for kids (and adults!) to excessively multitask, but we seem as far away as ever from the flying cars, weather control, and hoverboards.  Curiously enough, the movie made no mention of areas where we have made enormous progress, such as computers that can beat chess and Jeopardy champions, and consumer devices like iPhones and iPads.  Did Heinlein consult on this movie?

On the other hand, we've definitely arrived at the future in some respects.  I realized we had crossed a line into the future when I saw my first web URL on a movie poster back in 1995 (it was for Mortal Kombat).  If you grew up with the Internet, you have no idea how startling it was when this obscure bit of networking technology finally broke into the mainstream, as people had been speculating it might for years.

Once it did, all bets were off.  All of us who were on the cutting edge back then (using advanced services like CompuServe and Prodigy) could have predicted email and Wikipedia.  Nobody could have predicted Twitter as a force that could organize revolutions, blogs that would take on mainstream media, or YouTube turning funny cat videos into global sensations.

So for purposes of figuring out when the future has arrived, I'm going to arbitrarily divide it into three basic stages:
1.    Computers and networks go mainstream
2.    Household robots take over household chores
3.    Flying cars

As noted, we're already well ensconced in Phase 1, and have a generation of kids and young adults who can't imagine the world any other way.  (And I must confess I still scratch my head when I try to remember how people used to find things out before Google, or even its predecessor AltaVista.)

But if you think Phase 1 was a game changer, just wait until Phase 2.  This is going to revolutionize the very concept of what it means to be human, and will rock our society and economy to its core.  After all, if we have robots that can make the bed, do the dishes and take out the cat,
Do we still need housekeepers?
Do we still need cashiers?
Do we still need auto mechanics?
Do we still need airline pilots?

You might get a bit queasy thinking about that last one.  After all, do you really want to trust your life to a machine that might suffer a blue screen of death at any moment?  And you'd be justified in your concerns, as long as you ignore Bruce's Law of Technology.  Because shortly after we graduate from today's autopilot to something that can approximate a takeoff and landing in good conditions, the technology is going to improve so quickly that insurance premiums for using human pilots instead of automated ones will go through the roof.  Expect automated pilots to catch on as fast as ebooks.

How far away are we from this transition to a robotic world?  It could be a long way off.  Decades.  In fact, from where we stand, it looks as far off as ebooks looked in 1986.  And 1996.  And 2006.

At Google's 2011 I/O conference, Google and Willow Garage announced rosjava, a Java implementation of the open source Robot Operating System (ROS).  As you might expect from a Google initiative, it is designed to connect easily to the Cloud.  This means that robots can leverage Google's massive server farms for complex tasks such as object recognition, rather than lugging around the huge bank of CPUs that would be required to do this in real time, along with their associated batteries.
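
The cloud-offload idea is simple to sketch.  The endpoint and response shape below are entirely hypothetical; a real ROS node would use ROS messaging rather than raw HTTP:

    import requests

    CLOUD_ENDPOINT = "https://example.com/recognize"  # placeholder URL

    def recognize(image_bytes):
        """Ship a camera frame to a remote service instead of running a
        heavyweight vision model on the robot's own CPU and battery."""
        resp = requests.post(CLOUD_ENDPOINT, data=image_bytes, timeout=5.0)
        resp.raise_for_status()
        return resp.json()["labels"]  # assumed response shape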

It also potentially means that researchers can share ideas, technology, and databases far more easily.  Why reinvent the complex algorithms needed to recognize a household object, when somebody else has already invested hundreds or thousands of hours in them?  Leverage their code, and spend your time thinking about what cool things your robot can do with that object once it has recognized it.

Does this mean the robot revolution is just around the corner?  That's the problem with predictions - we just can't tell.  Google's latest initiative certainly looks promising.  But I could point to other initiatives back in the eighties and nineties that looked just as promising.  The technology needed wasn't ready back then.  Maybe it's not ready now, either.  Or maybe it is.  When it finally is, expect robots to turn our entire world upside down.  They will revolutionize life, work, employment, and leisure, though whether for good or ill is impossible to say.

All I can say is that humans better still be allowed to drive by the time I can finally buy my first flying car.  I'm sticking firm on that one.

Sunday, January 1, 2012

Intentions versus Capabilities


In the last few weeks, the most prominent technology story has concerned one of the most divisive legislative proposals in recent memory: SOPA, the Stop Online Piracy Act.  The proposed bill makes for strange bedfellows, bringing together organizations that have probably never agreed on anything in the past.  For example, the AFL-CIO and the Chamber of Commerce are supporting the legislation, while the Tea Party and The Huffington Post find themselves united in opposition.

Part of the reason for the opposition to the bill is the sweeping powers it provides.  It is possible, for example, to interpret the legislation as allowing a judge to shut down Google if any of Google's search results end up aiding pirates (which is virtually inevitable, given the sweeping nature of Google's services).  Proponents of the bill dismiss this concern, saying that's not the intention of the legislation.  This dismissal soothes the critics not at all.

It's worth noting that critics of the bill include a large number of programmers and other people with technical backgrounds.  Programmers have deep experience with a problem that was perhaps best illustrated in the story of The Sorcerer's Apprentice (of which Disney's 1940 Fantasia segment is one of the most accessible and popular versions).  In this story, the young apprentice, eager to use his new-found skills to lighten his load, enchants a broom to carry the water.  Everything goes fine until the apprentice realizes he doesn't know how to make it stop.  His attempts to do so make things infinitely worse, and he nearly drowns before the Sorcerer returns to the scene to save the day.

The moral of the story is that having access to power (or technology) does not convey understanding of or control over it, which can lead to unexpected and very unpleasant results.  Any programmer who has done even basic debugging has come to realize that it doesn't matter in the slightest what they intended their code to do.  What matters is what it actually does.  Discovering and correcting the difference between the two can be fiendishly difficult.
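
Every programmer can supply an example of this gap between intent and behavior.  Here's a classic one, in Python rather than C, but the lesson is identical:

    def append_item(item, items=[]):
        # Intended: start with a fresh list on each call.
        # Actual: the default list is created once and shared across calls.
        items.append(item)
        return items

    print(append_item(1))  # [1]     -- as intended
    print(append_item(2))  # [1, 2]  -- surprise: state leaked between calls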

This is why programmers are not soothed by reassurances of what the legislation is intended to do.  It doesn't matter that somebody says "give me these sweeping powers, and I promise not to abuse them".  The world doesn't work that way.  Once the genie has been let out of the bottle, it can't be stuffed back in.  Power that can be abused, will be.  Even if you trust the current person in charge, what about the next one?

I don't support piracy.  But even less do I support sweeping reform that is poorly understood, with concerns swept under the rug because the architects say "That's not what we meant."

Spend some time learning to debug C, then we can talk.