Wednesday, December 19, 2012

We need to re-examine gun control



I'm not a gun person.  I've never owned one, and my father never took me hunting when I was a kid.  I once went to a firing range, took some training, and spent half an hour punching holes in a paper target, but that's about it.  I've never been particularly anti-gun either.  Had you asked me in a survey if I supported increased gun legislation, I would have said yes in an ambivalent sort of way, but it was never my issue.

That changed on December 14, 2012, with the massacre at Sandy Hook Elementary School in Newtown.  My thoughts on the subject are captured below in question and answer format.  If you are a gun proponent and believe I've missed some significant arguments, I would sincerely like to hear them in the comments.

Q: Aren't you politicizing a tragedy?
A: No.  Politicizing the tragedy would be to suggest it wouldn't have happened under a Republican White House, or a Democratic Congress, and so on.  Advocating gun control following a gun-related tragedy is no more political than advocating a review of fire safety standards after a fire.

Q: Guns don't kill people.  People kill people.
A: True.  But guns make people much more efficient at it.  Sandy Hook involved guns.  So did Columbine, and the shooting in Aurora.

Q: What about 9/11?
A: True.  Even with no guns, there will be the potential for large scale catastrophic incidents.  But they'll be much rarer.  9/11 took years of sophisticated planning - it's not something a lone lunatic is likely to be able to achieve.  Also, you'll note that after 9/11, we took significant action to prevent a recurrence.  While you can argue for or against the effectiveness of any particular bit of airport security, you must acknowledge that there has not been a recurrence of that type of large scale airline hijacking since 9/11.  There have, however, been multiple recurrences of large scale shootings in public places since Columbine.

Q: What about Oklahoma City?
A: Once again, you'll note that after the Oklahoma City bombing, we introduced new safety restrictions around government and other buildings to make it more difficult for somebody to drive up and detonate a bomb.  And once again, you'll note we haven't had a major recurrence since then.  We try to make things better after every plane crash, after every major fire, and after every financial disaster.  It's only gun violence that we allow to recur again and again.  Something is wrong with this picture.

Q: But if you take away the guns, only criminals will have guns.
A: Criminals and police, yes.  Some criminals will still have guns, but far fewer than otherwise.  That means fewer crimes, fewer household accidents, fewer rampages.

Q: Gun ownership prevents crime via deterrence.
A: While there is anecdotal evidence of specific crimes being prevented, there is no evidence that gun ownership lowers the overall crime rate.  Gun ownership did not prevent Newtown, or Columbine, or Aurora, or Virginia Tech.  Also, if gun ownership prevents crime, we should expect to see significantly higher levels of crime in countries such as the United Kingdom and Australia (which I picked for their cultural similarity to us, and their different approach to gun laws).  This is not the case.  The United Kingdom has an "intentional homicide rate" of 1.2 per 100,000.  Australia's rate is 1.0.  The rate for the United States is 4.2.  That's a higher rate than France, Spain, Portugal, Germany, or Italy.  It's even higher than Libya, Algeria, Somalia, Iraq or Iran.  A citizen of Switzerland coming to the United States faces roughly the same increase in danger that a citizen of the United States would face traveling to Sudan, Kyrgyzstan, or Tanzania.  In case you're not familiar with these countries, it would be safer to go to Rwanda.

Q: The constitution protects gun ownership.
A: Three points on that:
1.    Congress has already implemented some gun legislation.  It’s relatively toothless, but the legal precedent has been established.
2.    The constitution doesn't spell out the right for individual gun ownership.  It states: "A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed."  Our current policies on gun ownership do not require a prospective owner to be a member of a well regulated militia.  One could easily restrict guns to members of the police, army and national guard without straying from the wording of the constitution, as long as any able-bodied individual was permitted to join these institutions (especially the guard).  I admit that I'm on the opposite side of current judicial thinking on this one.  In his opinion in District of Columbia v. Heller, Justice Antonin Scalia wrote "Nowhere else in the Constitution does a 'right' attributed to 'the people' refer to anything other than an individual right. What is more, in all six other provisions of the Constitution that mention 'the people,' the term unambiguously refers to all members of the political community, not an unspecified subset."  In short, he argued that because every other part of the constitution deals with individual liberties, this section should be interpreted as dealing with individual liberty as well.  Put another way, he argued that this section of the constitution should be reinterpreted based on how the rest of it is written.  I think that logic requires a serious re-examination in court.
3.    I'm willing to back a change in the constitution if need be.  The constitution supported slavery.  Then one day, we said "that's not who we are", and we changed the constitution.  It's a great document.  It's not infallible.

Q: Changing the constitution just isn't feasible.
A: Neither was eradicating smallpox, or traveling to the moon.  Until we did both of these things.  There are some problems that are so vexing that we just have no idea how to start.  This is not one of those problems.  Difficult?  Yes.  But that's hardly a reason not to do it.

Q: Restricting guns is a restriction of individual liberty.
A: True, to a point.  Not restricting guns is also a restriction of liberty.  Freedom is an often misused concept.  It is rarely acknowledged that every "freedom to" is an infringement on a "freedom from".  This was well summarized by Oliver Wendell Holmes, who said: "The right to swing my fist ends where the other man's nose begins."  Whenever we discuss protecting or infringing liberty, we need to look at both sides of this equation.  In this case, I believe that your freedom to own a gun is outweighed by a first grader's right not to be massacred in the classroom.

Q: If the shooter in Newtown didn't have guns, he just would have found another way to commit this atrocity.
A:  I'm sure he would have tried.  But I don't believe he would have gotten nearly as far as he did before one of the adults at the school managed to take him down.

Q: We have a long tradition of gun ownership in this country.
A: The South had a long tradition of slavery.  But it was wrong, so we ended it.  Not all traditions can or should be maintained.

Q: If we abolish gun ownership today, what's to stop somebody from ending other constitutional liberties like free speech tomorrow?
A: Me, you, and every other citizen who values free speech.  The day that we are massively outnumbered by citizens who believe that the costs of free speech are too heavy to bear, then free speech will end.  Whether or not that day ever comes has nothing to do with whether we address gun ownership today.

Q: I'm a responsible gun owner.  Why must I pay the price for some lunatic's actions?
A: Because, unfortunately, lunatics look like everybody else.  If there's a reliable way to prevent lunatics from ever getting their hands on guns, I'm all ears.  Sadly, the NRA, which should have been leading the effort to ensure that only responsible, committed people get their hands on guns, has instead been opposing every effort to take steps in this direction.

Q: If the citizenry is unarmed, what's to stop the US government from taking over and turning into a police state?
A: The same thing that prevents this from happening in the United Kingdom, Australia, Japan, and most other first world countries that have much more restricted gun ownership.  The fact that politicians and the army consist of your friends and neighbors.

Q: If you made it illegal to buy guns tomorrow, there would still be millions of guns in people's homes.
A: Most gun owners are responsible, honest citizens who would turn in their guns if it became illegal to own them.  That right there would have stopped the Newtown shooter, who apparently got his guns from the collection of his mother, a gun enthusiast.  There would still be people who would insist on keeping grandpa's vintage rifle in the attic.  But the more effort it takes for lunatics to find these guns, and the more people they have to ask to find where these guns are, the more likely they'll be stopped before the tragedy begins.  It would definitely take time.  Once again, that's no reason not to begin.

Monday, October 1, 2012

The Most Influential Person in History

"There is no need to run outside
For better seeing

Nor to peer from a window.  Rather abide
At the center of your being;
For the more you leave it, the less you learn.
Search your heart and see
If he is wise who takes each turn:
The way to do is to be."
-Lao Tzu

I was recently catching up on back episodes of the truly excellent British History Podcast (which I strongly recommend if you haven't yet discovered it), and found myself listening to an argument between a number of history podcasters over which person was the most influential in all of history.

An interesting question, and one virtually impossible to answer without a time machine, because you never know if somebody changed everything, or was simply marching in front of the parade.  For example, you could argue that Constantine ushered in the age of Christianity, which had enormous influence over Europe and most of the rest of the world.  On the other hand, Christianity was already spreading rapidly.  If not Constantine, would some other Emperor a few decades later have done the exact same thing?  Similar arguments can be made for many historical events and technological innovations.

However, there is one figure who I think is a very strong candidate for the most influential person of all time, by virtue of the fact that if he hadn't been exactly where he was, there's at least a reasonable chance that the entire human species would have come to a very abrupt end.

Stanislav Petrov was a Soviet officer monitoring a nuclear early warning system.  On September 26, 1983, the alarm went off, indicating five missiles had been launched from the United States.  Imagine yourself in his shoes.  You have moments to decide, and only one chance to get it right.  The fate of your entire nation, and the rest of the world, lies in the balance.  Your worst enemy has done what everybody feared, and launched a pre-emptive strike.  Under these circumstances, could you even think straight?


Lieutenant Colonel Petrov clearly could.  This attack made no sense to him.  He knew that the United States knew that an attack would precipitate a counter-attack.  The only hope for victory under this scenario was to wipe out your foe in one sudden onslaught.  Five missiles wouldn't do that.

So Lieutenant Colonel Petrov made an instant decision.  With no time to run a diagnostic on the system, no theory to offer as to what might have gone wrong, and knowing what was at stake, he informed his commanders that it was a false alarm.  Based largely on this information, Soviet command chose not to launch a retaliatory strike.  Petrov was correct.  Later analysis showed that a quirk of the weather had played havoc with the system, indicating missiles where none existed.

Another man in that position might have easily made a different decision.  It's not a sure thing that humanity would have been wiped off the map had Petrov called in sick that day, but it's a very real possibility.

So next time you weigh the influence of Aristotle versus Caesar versus Pasteur versus Napoleon, remember that had it not been for one man, you might not have the luxury of thinking about it at all.  Remember Stanislav Petrov.




Sunday, September 30, 2012

Who's driving this car?



In September 2012, the state of California passed a bill (SB1298) allowing self-driving cars onto California's roads.  This is a great step forward for a technology that many have been researching, though perhaps none as enthusiastically as Google, which has logged over 300,000 miles so far in its fleet of self-driving automobiles.


While still in its infancy, this technology shows every sign of maturing very quickly.  Nearly everybody is eagerly anticipating the day when they can spend time doing something other than grimly staring at the road while sitting in bumper to bumper traffic.

Nearly.

One group that is not so thrilled is Consumer Watchdog (CW).  In an open letter to California State Assembly Speaker John Perez, Consumer Watchdog urged that driver-less technology be banned unless strict controls prevented the collection of information for marketing or other non-driving purposes.

Really guys?

I have to say, I hate receiving advertisements, mostly because advertisers are universally incompetent.  In the seventeen years I've been using the internet, I've seen exactly two advertisements that were of interest to me.  Not a great track record.  Monkeys on typewriters could probably do better.

But still, CW is really missing the bus on this one.  If they had their way, it would be illegal to offer a reduced-fare or free bus or taxi service that used driver-less technology and subsidized its service with advertisements.  Such a service, offered to people too poor to own a car, might be the difference between having a job and not.  And preventing this is CW's best idea for how to improve the world?

Even ignoring this point, you have to look at the bigger picture.  We now have face recognition technology that can identify people from photographs.  We have license plate scanners that can read and identify up to 1,800 plates per minute.  And CW thinks that driver-less technology is the Pandora's box in this equation?

So let's keep some perspective.  Much as I dislike it, the concept of privacy is vanishing fast.  Let's not stand in the way of some of the most promising technology we've seen in many years in a quixotic attempt to slow down this process.  Because I can think of all sorts of better uses of my time than staring at the road in bumper to bumper traffic.

After all, when else am I going to find time to keep up with funny cat videos?

Dinosaurs on a spaceship!


Doctor Who fans were recently treated to the second episode of Season 7, called "Dinosaurs on a Spaceship".  Awesomely cool?  Absolutely.  Really stupid?  Perhaps.  But who cares?

This is probably the most awesome mash-up of genres since this classic incursion in the late Mesozoic:


Calvin's logic was simple.  Carnivorous dinosaurs are cool.  F-14s are cool.  Put them together, and you've got something 2x cool, or perhaps even cool^2.  Half the population reading the strip agreed with Calvin on this.

The other half agreed with Hobbes, that mixing these elements was just stupid.  (There's another half that never read Calvin & Hobbes, but I choose to pretend these people simply don't exist.)

Once upon a time, the culture was defined by people who liked their genres one at a time, thank you very much.  Detective stories, sure.  War movies, good.  Mythology?  Absolutely.  But mix them all together?  No way, that's just ridiculous.

But now things are different.  There's a new sheriff in town.  And a whole new set of rules.

Call it the triumph of the geeks.  Growing up, geeks hung out in the background while the popular kids played football.  They were mocked for reading super-hero comic books, and scorned for spending huge amounts of time with these weird counting machines instead of socializing like real people.  It was sometimes grudgingly admitted that there might be a place for them in society, but only on the fringes.

Very few people considered that some day, the geeks might take over.


This happens over and over throughout history.  Julius Caesar and his pals were the radical outcasts of the Roman Republic, shocking people with their new styles of dress, and redefining the entire social and political order.  The thinkers of the Renaissance threw out a thousand years of medieval philosophy and culture as they embraced new worlds of art and science.  Each generation redefines the world as it sees fit.

And now the comic book geeks have come of age.  We no longer have the Dirty Dozen, we've got the Avengers.  Science fiction and fantasy are no longer relegated to cheap pulp and dismissed as second-class literature; they regularly sell in expensive leather-bound editions and top the New York Times bestseller lists.  The old myths are not forgotten, they're just being recast to a new set of tastes.  Have you seen the trailer for "Hansel and Gretel: Witch Hunters"?

So deal with it.  Gen X is in charge.  Life moves on, and the culture moves with it.  While some things are lost, and some of those losses are a tragedy, it's hard to get too worked up about it when we've got...




Monday, April 23, 2012

Know Your Strengths


Last week, I attended the Amazon Web Summit in New York City.  My conclusion: Amazon is taking over the world.

OK, yes, I get that this was an elaborate sales presentation, not a technical conference.  While I was being wowed by their talks, I was cognizant that I had entered Amazon's reality distortion field, and that not everything was to be taken at face value.  But even taking these factors into account, what Amazon is working on is impressive.

In case you don't follow these things, Amazon is one of the biggest players in Cloud Computing.  Suppose you need to run a large, complicated computing problem, such as a one-time conversion of a large number of video files.  It's computation intensive, so running it on your company's computers might take weeks or months.  You could bite the bullet and wait weeks or months.  You could purchase additional servers.  This would be an expensive option, to say nothing of the logistics of ordering and installing them, plus the question of what you do with these servers once the task is done.  Alternately, you could lease some online servers from Amazon.  Use as many as you like.  Pay by the hour.  Once you're done, take back your processed files and shut down your account.  All done, no mess, no hassle.

Sounds like an interesting niche business.  But it's not a niche business, not anymore.  It's huge.  Netflix relies on Amazon Web Services to stream movies.  Instagram relies on Amazon to host its infrastructure, which may help explain how a billion-dollar company was able to get by with only twelve employees.  Schrodinger recently worked with Cycle Computing to build an Amazon cloud supercomputer, leveraging 50,000 cores to do an exhaustive analysis of 21 million compounds to discover good drug candidates.  This virtual supercomputer was estimated to be among the top 50 supercomputers in the world, and cost just under $5,000 an hour, with no setup or capital costs.  One recent analysis of web traffic estimated that 1% of all web traffic goes to a server hosted in Amazon's cloud.  This business is massive.
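To put the figures above in perspective, here's a quick back-of-the-envelope calculation using only the numbers quoted in this post (50,000 cores at just under $5,000 an hour); the server price in the comparison is a purely hypothetical figure for illustration:

```python
# Costing the Schrodinger/Cycle Computing cluster described above.
cores = 50_000
cost_per_hour = 5_000.0  # dollars per hour, as quoted above

cost_per_core_hour = cost_per_hour / cores
print(f"Cost per core-hour: ${cost_per_core_hour:.2f}")  # $0.10

# Compare with buying hardware outright.  Suppose (hypothetically)
# a 16-core server costs $4,000: how many hours of continuous use
# before owning beats renting?
server_cost = 4_000.0   # hypothetical
server_cores = 16
hours_to_break_even = server_cost / (server_cores * cost_per_core_hour)
print(f"Break-even after {hours_to_break_even:.0f} hours of continuous use")
```

Roughly a dime per core-hour, with no capital outlay, which is exactly why a one-time job like a video conversion or a drug-candidate screen fits the rental model so well.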

How did Amazon get here?  To hear Amazon tell it, they're not a retail company.  They're a technology company that happens to do retail, among other things.  And they've certainly paid their dues in the process.  If you followed the tech industry back in the late nineties, then you probably remember the anecdotes that surfaced at the end of every year like clockwork, as Amazon's systems were overloaded by a Christmas rush bigger than anything they'd ever experienced before - again.  Everybody, including the CEO, pitched in to help get packages out of the warehouses on time.  Computer systems were overloaded.  Inevitably, many packages only shipped weeks after Christmas had come and gone.  I'm sure it was a painful process to go through.

But having gone through their trial by fire, Amazon seems to have learned their lessons well.  They've developed one of the top logistical systems in the world, and more importantly, they've mastered the art of running the tens or hundreds of thousands of servers required to keep it all running.  You can't manage that type of complexity one server at a time.  You need to organize it, glue it together with seamless software that allows you to transparently shift processing activities from one part of the world to another to deal with the inevitable breakdowns and snafus.  And once you've built this impressive infrastructure, it's a very small step to sell some of this processing power to the outside world.

In short, Amazon has done exactly what business gurus like Tom Peters and Peter Drucker have been talking about for decades - building on its strengths as much as possible.  A superficial analysis of Amazon would have said that they were retail wizards, and should stick to that world.  If they want to expand, maybe they should try opening a line of physical stores.  That misses their real strength, which is managing the complexity behind their operation.  Thus they've found a huge opportunity in a business that seems highly unrelated to their origins as an online bookstore.

Here's the real question: why aren't more businesses doing this?

The answer is, they try.  For example, in 1994, Quaker Oats acquired Snapple, thinking that their experience running Gatorade would make this an easy win.  They discovered, to their great dismay, that the logistical and marketing issues made the skills needed to manage these different brands very different from each other.  Quaker finally sold the business in 1997, having lost a billion or more learning this lesson.

Quaker made a common mistake.  They assumed that two processes that produce similar products or outcomes must be similar.  What they needed to do instead was to ask what their real core competency was, and figure out how they could leverage it, even if it turned out to be in a radically different business.

Here's a thought.  Walmart is one of the best companies in the world at getting products where they need to be.  Stories abound of how it can shift inventory from one region of the country to another in the blink of an eye, whether it's old video games being purchased in Florida by grandparents for their visiting grandkids, or emergency supplies in New Orleans in the wake of Hurricane Katrina.

So why hasn't Walmart started a shipping business?  You could simply drop off your package at your local Walmart store, and the recipient could pick it up at another Walmart store anywhere in the country the next day.  Because Walmart is one of the pioneers in the use of RFID tags, there would be excellent package tracking through the entire process.  And because it already has the fleets and computer systems developed, deployed, and scaled to massive volumes, it could ship packages for a fraction of what it costs FedEx or UPS to bring them to your door.  It wouldn't replace traditional shipping companies, but there would probably be significant interest, for almost no risk or additional cost.

What applies to a business applies to a person.  What's your unique strength?  Don't assume it has to be something related to your current line of work.  An administrative assistant with a relentless eye for detail might make a great event planner, or accountant.  An interior decorator with exquisite taste and an eye for color might do well consulting on movie design, or taking photographs.

The deeper your understanding of your unique strengths, the more prosperous you'll be in any type of economy.

Monday, April 16, 2012

Who are you?


On March 30th, 2012 (or sometime close to then, nobody knows the exact date), a server breach occurred within the Utah Department of Health.  It was reported that personal information for 25,000 people was exposed, including names, addresses, birth dates, and social security numbers.  This figure was later increased to over 250,000 people.  If you follow this sort of news at all, you’ll know that this is just one in a long string of similar breaches.  It happens to all sorts of organizations, public and private, government and corporate.

These data breaches are a serious problem.  This information is all that is needed to open financial accounts, such as credit cards.  When this happens, a person’s credit record can be destroyed.  Fixing the problem can take years or decades.  Some people never recover.  There is very little recourse for a person claiming that an account was opened without their knowledge or consent.  The Utah Department of Health is offering a year’s worth of free credit monitoring to all affected individuals to limit the damage, which is better than nothing, but not much.

Organizations that treat this type of data carelessly need to be held accountable.  We need to highlight these cases in the media, and discover who was negligent, and why.  Lessons need to be learned, and civil and criminal charges need to be issued where appropriate.

And yet…

At the end of the day, this type of breach is not the real problem.  It’s just a symptom.  The real issue is that we allow people to be held accountable for accounts opened using nothing more than their name, address, birth date and social security number.  While all of this information may be tricky to assemble, none of it is actually secret.  All of it is in the public domain, somewhere.  Before all records were computerized, the cost of pulling together a comprehensive data profile for somebody was considerable, and often required the services of a private investigator, who would physically visit town halls and other archives.  It’s only a matter of time before all this data is available freely on the Internet, with automated data agents able to pull it together with no human involvement.

What happens then?

There are some areas of security, such as choosing better passwords, that are relatively easy to address.  But before you can validate somebody’s access, you first need to identify that person.  How do you do so, in a way that only one person in the entire world can successfully pass the test?

It used to be that most transactions and account applications were done face to face.  In a small town, the people involved had probably known each other all their lives.  Therefore, identity was a combination of hair color, facial and body structure, accent and speech patterns, and thousands of other minor factors.  While it was possible to engage in fraud by mimicking another person, it wasn’t common, and carried a high degree of risk for the perpetrator.  It's too easy to be caught when you have to show up in person.

As the economy grew and national (and then global) banks and stores took over, the system of personal recognition broke down.  We evolved a new system, which is based on security through obscurity.  A bank would ask all sorts of questions that were difficult for another person to know.  It would then spend time and effort to validate that this data was correct.  It was certainly easier to commit fraud than it had been when everybody knew everybody else, but was still enough of a challenge that losses were manageable.

We are now approaching the tipping point, when information will flow so freely that we will need to develop a completely new approach to identifying people.  How can we achieve this?

There are generally three ways to identify somebody:
  • Something you know
  • Something you have
  • Something you are
Our current system is based on something you know.  People know their own Social Security numbers, addresses, mother’s maiden names, and so forth.  If we’re going to keep this solution, then we need to ensure that people know something that is not, and never will be, public knowledge.  In short, everybody will need a “secret” Social Security Number, or other identifying text string, which would be issued when they are born.  Their parents would store it in a secret place, and have them commit it to memory as soon as they were old enough.
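As a minimal sketch of what validating a "something you know" secret could look like, here is how a verifier might check the secret without ever storing it in plaintext, using only Python's standard library.  The identifier string, salt size, and iteration count are all illustrative assumptions, not a real scheme:

```python
import hashlib
import hmac
import os

def enroll(secret: str) -> tuple[bytes, bytes]:
    """Store only a salted, slow hash of the secret, never the secret itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return salt, digest

def verify(secret: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

# A hypothetical "secret Social Security Number" issued at birth:
salt, stored = enroll("491-07-2210-KQX")
print(verify("491-07-2210-KQX", salt, stored))  # True
print(verify("123-45-6789-ABC", salt, stored))  # False
```

Note that this only protects the stored copy, which is the article's point: the secret itself still has to be transmitted, remembered, and typed, and sooner or later it leaks.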

I doubt this will work.  In order to effectively validate this secret, somebody else will need to look it up.  Even if you use a fully encrypted process, sooner or later, the secret is going to get out, and we’ll be right back to Social Security numbers.

Something you have is generally some type of possession which can be uniquely identified, such as an RSA token.  Unfortunately, tokens are easily lost – people will constantly be having to ask for new ones.  And if the token is the principal way of identifying somebody, how do you figure out that the right person is asking for the replacement token?

Something you are represents physical characteristics, such as fingerprints, retina patterns, or bone structure.  Since we’ve ruled out something you have and something you know, we’re probably going to have to go with this one.  And yet, this is a very tricky and problematic option.  The first solution that everybody immediately thinks of is fingerprints.  They are unique to a person, and they have helped solve countless crimes committed by criminals who failed to take the basic precaution of wearing gloves.  It’s easy to find their fingerprints covering everything they touched.

That’s the problem with fingerprints.

We leave them everywhere.  Finding somebody’s fingerprints is a trivial exercise – just pick up any object they’ve handled.  And once you have their prints, they’re pretty easy to replicate.  All you need is a pair of extremely thin latex gloves, encoded with another person’s prints, and voila!  You are now that person.

Retina patterns are a little better, but still problematic.  With today’s technology, you don’t leave your retina patterns lying around everywhere.  But what about with tomorrow’s technology?  How long before a high quality camera can take a picture of somebody from 5 or 10 feet away, and see enough detail to capture the necessary features of their eyes?  Heck, all you need to do is to put your own camera onto something that looks like an ATM, and ask people to submit to a retina scan to make a deposit.  (This is already done for skimming a person's ATM card.)  What you do with that information is a trickier challenge, but it doesn’t seem impossible for somebody to invent a set of contact lenses in the near future that can match another person’s retinal patterns.  And once that happens, here’s the real problem with retinal patterns and fingerprints – you can’t change them.  You’re stuck for life, so as soon as anybody figures out what yours are and invents the technology to duplicate them, you’ve lost your identity permanently.

DNA scanning is possible, I suppose.  But frankly, I simply don’t want to spend much of my time envisioning a world where you’re asked to spit into the handy receptacle to prove your identity every time you wish to make a purchase.  So we’ll leave it at that.

That leaves facial and body recognition.  This seems to have some promise.  Unlike humans, computers are not fooled by wigs or glasses.  Facial recognition software works by looking for structural features such as the contours of the eye sockets, the shape of the cheek bones, and the length of the jaw line.  Weaknesses in facial recognition systems tend to result from poor lighting or bad angles, which can be reduced or eliminated when the subject wishes to be identified to complete a transaction.  Add measurements of bone structure to the mix just for safety, and you’ve probably got a fairly robust system.
But remember, the purpose is to identify somebody, not just validate them.  Which means that every time you want to really demonstrate your identity, you need to be measured – we don’t want people simply showing a photograph to fool facial recognition software.  Which means, first of all, that your measurements have to be recorded in a secure, encrypted database somewhere – probably initially when you are born, and then updated every year until you reach maturity.  Then, to demonstrate you are the same person who belongs to these measurements, you need to subject yourself to detailed scanning that can confirm that you are a real live person whose facial and body features match those on record.  Only then can anybody safely assume that you are who you say you are.
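The matching step itself can be sketched as a simple distance comparison between the stored record and a fresh scan.  This is a toy illustration only – the feature names, numbers, and threshold below are all invented, and real systems use far richer models than a four-number vector:

```python
import math

# Hypothetical enrolled measurements (normalized units) for one person:
# eye-socket contour width, cheekbone span, jawline length, nose bridge height.
ENROLLED = {"alice": [0.42, 0.61, 0.55, 0.73]}

# Invented tolerance: how close a live scan must be to the enrolled record.
MATCH_THRESHOLD = 0.05

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(name, live_scan):
    """Return True if the live scan is close enough to the enrolled record."""
    record = ENROLLED.get(name)
    if record is None:
        return False
    return euclidean(record, live_scan) < MATCH_THRESHOLD

print(verify("alice", [0.42, 0.60, 0.55, 0.73]))  # tiny measurement noise: accepted
print(verify("alice", [0.30, 0.70, 0.40, 0.90]))  # a different face: rejected
```

Note that the hard part isn’t this comparison – it’s everything around it: keeping the enrolled database secure, and proving the scan came from a live person rather than a photograph.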

Of course, this doesn’t prevent identity theft; it just brings it down to a manageable level.  You still have the problem of an impostor taking your turn when you go to be measured.  Or a hacker breaking into the database to update the records recording who you are.  And of course, you’re always in danger from your evil identical twin.

In the meantime, call up the Utah Department of Health and give them an earful about losing their data.  We may never be able to go back to the days when our personal information was relatively obscure and private.  But I miss them already.

Monday, April 9, 2012

Looking for Danger in All the Wrong Places


A doe had the misfortune to lose one of her eyes, and could not see anybody approaching her on that side. So to avoid danger she always used to feed on a high cliff near the sea, with her sound eye looking towards the land. By this means she could see whenever the hunters approached her on land, and often escaped by this means. But the hunters found out that she was blind in one eye, and hiring a boat, rowed under the cliff where she used to graze and shot her from the sea.
-Aesop's Fables

Anybody paying attention to advances in artificial intelligence has noticed that significant milestones are being crossed more and more frequently.  Deep Blue defeated world chess champion Garry Kasparov in 1997.  Watson took the crown in the much more flexible game of Jeopardy in 2011.  And while Dr. Fill disappointed many observers by only placing near the bottom of the top quartile at the American Crossword Puzzle Tournament, few expect it to take much longer before computers reign supreme in this area as well.  Jeopardy champion Ken Jennings summarized the ambivalence of many when he used his terminal to display the line "I, for one, welcome our new robot overlords."

We've all seen movies like The Terminator and The Matrix.  We're all wondering which massive collection of computers is going to go Skynet on us and achieve the critical mass needed to wake up and start making its own decisions.  Will it be Watson?  Dr. Fill?  Or perhaps Google's seemingly infinite collection of server farms, running in unmarked, undisclosed locations spread across the world?  We pontificate endlessly about what safeguards we need to keep these colossal systems in check.  Can we build in kill switches?  Keep them firewalled off from the control software running power plants and weapons systems?  Build in Asimov's Three Laws of Robotics in the hope that the newly awakened system will serve our needs instead of its own?

I have to wonder if, like the one-eyed doe, we're all looking in the wrong direction.

For all the intricacy of a Watson or Dr. Fill, these are highly monolithic programs, designed to do one thing and do it well.  I haven't heard any reports that Deep Blue or any other chess program has been getting bored and asking to try its hand at tennis, or Parcheesi.  Adapting Dr. Fill to do some task other than complete crossword puzzles would be a monumental undertaking.  Probably easier to throw everything out and start from scratch.

Moreover, these programs have no survival instinct.  They have no ability or inclination to replicate, or to try to thwart the intentions of anyone who would prevent these activities.  They cannot rewrite their own code to avoid detection and adapt to a new environment.

Modern malware has all of these attributes.

The sobering fact is that it's becoming increasingly difficult to come up with a good definition of life that does not include malware.  Wikipedia lists the following criteria to consider something "alive":
  • Undergoes metabolism.  While this traditionally refers to chemical reactions that sustain an organism, there's no intrinsic reason why it couldn't refer to the processes of a functioning program.
  • Maintains homeostasis.  Similar to metabolism.
  • Possesses a capacity to grow.  True, though currently limited.  (But see below)
  • Responds to stimuli.  Absolutely.  Many worms and viruses will watch what is happening in the operating system and take actions accordingly.
  • Reproduces.  Yes, and then some.
  • Through natural selection, adapts to its environment in successive generations.  Limited again, but not for long.
The two points above that are weakest today are the ability of malware to grow and adapt to its environment.  In malware terms, this most closely translates to polymorphism, where a virus will modify its own code.  In today's world, these are generally very minor modifications, designed to make the virus more difficult to detect by an anti-malware program looking for a specific code signature.  A given unit of malware doesn't have the ability to spontaneously change itself in order to discover and take advantage of a new zero day exploit.
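To see why even trivial self-modification defeats signature matching, consider this toy sketch.  The "malware" here is just an inert byte string, and the "scanner" is an invented hash lookup – nothing below executes anything harmful:

```python
import hashlib
import random

# A stand-in for a program's byte content -- inert text, nothing executable.
payload = b"do_something_unpleasant()"

# A signature-based scanner: it only recognizes exact hashes of known samples.
known_signatures = {hashlib.sha256(payload).hexdigest()}

def scanner_detects(sample: bytes) -> bool:
    """Check the sample's hash against the database of known signatures."""
    return hashlib.sha256(sample).hexdigest() in known_signatures

def polymorphic_copy(sample: bytes) -> bytes:
    """Append random junk in a comment: behavior unchanged, bytes changed."""
    junk = bytes(random.choice(b"abcdefgh") for _ in range(8))
    return sample + b"  # " + junk

print(scanner_detects(payload))                    # True: exact match
print(scanner_detects(polymorphic_copy(payload)))  # False: a few junk bytes and the hash no longer matches
```

This is why real anti-malware tools have had to move beyond exact signatures toward heuristic and behavioral detection – and why polymorphism, however primitive, was the first step on this road.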

Not yet.

There's no reason why it's not possible.  The technique involved is called a genetic algorithm.  It mimics evolution by introducing random variations into code and keeping the variants that perform better.  It has minimal usefulness in many programming applications due to the high level of computational power required, and the difficulty of measuring improvements from one generation to the next.  When the computing power is provided by infected computers on the internet, and effectiveness is measured by the ability to survive and propagate, both these limitations go away for malware.
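The core loop of a genetic algorithm is remarkably short.  Here's a minimal, benign sketch that evolves a bit string toward all-ones; the population size, mutation rate, and fitness function are all invented for illustration:

```python
import random

random.seed(0)  # make the run repeatable
GENOME_LEN = 20
POP_SIZE = 30

def fitness(genome):
    """Score a candidate: here, simply the number of 1 bits."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit with small probability -- the 'random variation' step."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

# Start with a random population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    # Selection: keep the fitter half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Reproduction with variation: mutated copies refill the population.
    population = survivors + [mutate(g) for g in survivors]
    if fitness(population[0]) == GENOME_LEN:
        break

print(fitness(max(population, key=fitness)))  # best score climbs toward GENOME_LEN
```

Swap "count of 1 bits" for "number of machines infected before detection," and the same loop describes the nightmare scenario: no human needs to measure improvement, because the environment does the selecting.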

We are then left with the question of how fast a self-replicating, self-modifying worm in the wild could improve using genetic algorithms.  I see no reason why it could not improve very quickly indeed.  The field of medicine has recently seen the introduction of "super-bugs", bacteria which have acquired immunity to many or most antibiotics over time.  Bacteria attempting to infect humans have faced a very difficult environment since the introduction of antibiotics.  What we're only beginning to appreciate is how a difficult environment leads to much more rapid evolution.  With an internet full of anti-malware programs and researchers dedicated to stamping it out, malware must be very good to survive for long.  Many or most strains will be identified and wiped out.  Those that survive will be scary indeed.

I don't know when we're going to get the first malware in the wild that can truly modify its own capabilities, rather than just its signature.  Maybe it's already out there.  How complex is it getting?  At what point is it going to exceed its creators' wildest expectations?  At what point will it begin exhibiting behaviors that appear to demonstrate creativity and innovative problem solving?  At what point does it become self-aware?

Whenever that happens, I don't know if we'll know what to do.  We're going to need help.

Maybe we can ask Watson.

Monday, April 2, 2012

Intentions versus Capabilities - Part II


I’d like to open this post with an apology, because I’m deviating a little bit from my usual subject matter.  This is a blog about technology, and about how patterns of technology change and interact with daily life.  I don’t normally comment on general news stories that don’t have a direct connection with technology.  I’m making an exception this time because it touches on some themes I’ve covered before, which I’d like to reiterate from a different angle.

The story in question is the tragic death of Trayvon Martin, and the failure of Florida police to charge his assailant due to Florida’s “Stand Your Ground” law.  This is still a story in progress, and anything could happen, but the evidence is increasingly casting doubt on George Zimmerman’s story.  Independent analysis suggests it is exceedingly unlikely that the cries for help were Zimmerman’s.  It is further difficult to demonstrate that Zimmerman sustained the types of injuries his story claimed.  Add to this the undisputed fact that George Zimmerman left his car to confront Martin despite the instructions from 911 not to do so, and it’s difficult to paint this as anything other than a one-sided assault.  This is a crime.

Or is it?

Under normal circumstances, Zimmerman’s departure from his car to confront Martin would paint him fairly clearly as the aggressor.  Under the stand your ground laws, he had no duty to retreat from what he believed to be a dangerous situation, and, having deliberately entered into this situation seeking a confrontation, was justified in the use of deadly force to protect himself from what he reasonably believed was a threat.
Critics might question whether his fear of Martin, who was unarmed and a hundred pounds lighter than Zimmerman, was “reasonable”.  Unfortunately, there is no precise definition of what constitutes “reasonable”.  Certainly his fears would be shared by many other residents in his neighborhood.  Does that make them reasonable?

Regardless of how the law will be interpreted, the fact that the Florida police used it as an excuse to not initiate a criminal investigation of Zimmerman indicates that something is seriously askew.  At the root of the issue is an asymmetry of information created by the situation.  In any dispute between two individuals, each is likely to have a different narrative explaining the context, motivations, and possibly even the facts leading to the event.  In a murder, one of those narratives disappears.  The only narrative remaining belongs to somebody who will likely have every reason and every inclination to skew the facts in his own favor.  This is why imposing a duty to retreat is such a useful concept.  While it does not eliminate this type of situation – anybody can still claim they had no opportunity to retreat – it does reduce the number of situations in which an assailant can claim a legal justification for their own aggression.

I don’t believe the Florida legislature had malicious intent when they enacted this law.  I don’t expect they ever thought about the possibility of an individual committing an act of murder and using their law to claim it was self-defense.  And that’s the crux of the problem.  They didn’t think.

As I noted when I discussed SOPA, there is a world of difference between the intentions of a law, and its real life effects.  It behooves legislatures to think long and hard about the possible secondary effects of the laws they pass.  In this case, it seems they didn’t think hard enough.  The results have been lethal.

Well done, Florida.  Well done.

Sunday, March 25, 2012

Waiting for the End of the World


On March 19, 2012, Amazon announced they were purchasing Kiva Systems for $775 million.   By all accounts, they're not planning on selling these robots on their website.  Rather, this is another step forward in the development of the automated warehouse, in which robots will manage inventory and fulfill orders more cheaply than humans could.  Many people have noted that the loss of jobs is one more step in a long cycle of economic decline for the United States.  It's the end of the world.

And yet, as I touched upon in a previous discussion, the end of the world has come and gone many times, and somehow there are still people around to complain about it all.  (OK, technically I talked about how civilization was going to the dogs, but same thing.)  People will probably lose jobs when Amazon automates their warehouses.  On the other hand, we've been losing jobs for centuries, since the dawn of the Industrial Revolution.  If something wasn't filling the void, we'd have an unemployment rate in excess of 99%.  Life goes on.  When the citizens of the Roman Empire became more interested in enjoying the luxuries of civilization than in defending it, the Roman Empire fell, and the world came to an end.  Of course, what that really meant was that people were more interested in investing their time and energy building the new kingdoms that became feudal Europe than they were in defending a political structure that had become obsolete centuries ago.  Somehow, life went on.  More recently, when rock & roll swept through the nation, it was a corrupting influence on our youth that would destabilize society.  Today we consider many early rock & roll tunes to be light and easy listening, and play them as background music in elevators.  Life goes on.

Of course, sometimes the world really does come to an end.  Our records are scarce, but I'd guess there was probably somebody in the city of Carthage around 250 BC saying that everybody should focus a bit less on making money and a bit more on beefing up their own military.  They should have listened.  For the Carthaginians, the world truly came to an end, as the city was burned to the ground, and all the inhabitants were killed or enslaved.

The problem is, it's really hard to tell in advance if the particular end of the world we're talking about is going to be in the "life goes on" category, or the "no, it's really the end of the world" category.  So without attempting to forecast exactly what's going to happen, let's look at a number of other ways in which the world is going to end, and think about what this really means.

Talkin' 'Bout My (Facebook) Generation
George Orwell got it all wrong.  Big brother didn't start spying on people in their living rooms.  Instead, people simply started broadcasting every aspect of their lives on tools like Facebook, Twitter, and Foursquare.

It's easy to see examples of highly questionable decisions being made about what people post.  Everything from pictures and videos of drunken behavior at parties, to cursing of bosses and co-workers and customers, to defamatory insults of whatever is nearby.

The law hasn't quite settled on how to treat these situations - do they represent free speech to be protected, or does an employer have a right, or even an obligation, to control its image by firing employees who engage in unseemly behavior?

It seems to run against human nature for people to stop acting out and sharing everything.  Which raises an interesting question: if everybody (or at least lots of people) engages in "unseemly" behavior, is it still unseemly?  What happens when the generation that grew up with Facebook is in its fifties and sixties, and runs all the corporations (and not just the hot tech startups)?  Do we give up and adopt new societal standards for behavior because "everybody does it"?  Or does society bifurcate into a new set of haves and have nots, in which a few people are lucky (or boring) enough never to do anything really stupid in the public eye, and are thus our only candidates to run major corporations and hold public office?

If a Tree Falls Before Wikipedia Covers It, Was It Ever Standing?
Encyclopedia Britannica recently announced that it was finally discontinuing the print version of its encyclopedia.  On the one hand, this was a momentous event, as Britannica had been in continuous publication since 1768.  On the other hand, it's amazing they kept making the print version as long as they did.  The company has been struggling for years, threatened first by CD-ROM encyclopedias like Encarta, and now by Wikipedia.  While there are a certain number of adherents to Britannica's use of experts rather than Wikipedia's crowd-sourced model, it is unclear how long the for-profit company can survive against a free competitor.

This is simply one example of print materials giving way to their electronic counterparts.  The Kindle and the iPad have ushered in a new era of ebooks, with the result that Borders has declared bankruptcy, and Barnes and Noble loses money hand over fist while it struggles to reinvent itself.  Perhaps the independent bookstores will survive indefinitely as a niche market, but paper publishing in general is likely to be a shadow of what it once was.

Which raises the question: when the vast majority of all information is only available in online formats, how will we footnote our references?  If George Lucas decides that Greedo shot first, could we ever lose every reference that the story wasn't always that way?  (Yes, I know that he made the original versions available on DVD, but you'll note he didn't extend that to Blu-ray.  Wanna bet whether they'll migrate onto whatever format comes next?)

Perhaps references will become impossible, and all knowledge will devolve into "truthiness".  On the other hand, challenging data problems have a way of attracting programmers looking for the next great start-up idea.  Perhaps this very problem will inspire new types of technology that will lead to unprecedented levels of accuracy and transparency.  Imagine a scientific journal that automatically tracked its footnotes, and warned the reader if key references were out of date and could no longer be trusted.  It would dynamically call the entire premise of the article into question, and you would have real-time feedback as to whether the article you were reading was still relevant.

To Boldly Go Where No One Has Gone Before (Back Home)
Whenever criticisms of Gen Y come up, high on the list is that they are the "Boomerang Generation".  Whether due to an unprecedented combination of school debt and economic malaise, or simply a lack of gumption caused by overindulgent helicopter parents, this generation leaves home, marries, and starts real careers later than any other generation in recent memory.  Have we lost the critical pioneer spirit that makes up the American psyche?

On the other hand, maybe looking only at generations in recent memory is a short-sighted endeavor.  The whole concept of personal independence is relatively recent.  The standard family unit used to be extended, with three generations living under a roof, not just two.  Kids didn't generally strike out on their own, they joined the family business.  Perhaps the last couple generations have been a deviation from a historical norm, and Gen Y is simply taking us back to our roots.  Only time will tell.

On so many levels, the world is ending.  Sometimes the end of the world really is the end of the world.  But much more often, it just seems that way to the previous generation who doesn't like or understand the new environment, and can't imagine how it will hold together if it's not like it used to be.

So do what you can.  If you see a problem, try and fix it.  But remember to keep an open mind and a sense of wonder, because whether you're going to love it or hate it, you'll hate to miss what's coming next.