Sunday, March 25, 2012

Waiting for the End of the World


On March 19, 2012, Amazon announced they were purchasing Kiva Systems for $775 million.   By all accounts, they're not planning on selling these robots on their website.  Rather, this is another step forward in the development of the automated warehouse, in which robots will manage inventory and fulfill orders more cheaply than humans could.  Many people have noted that the loss of jobs is one more step in a long cycle of economic decline for the United States.  It's the end of the world.

And yet, as I touched upon in a previous discussion, the end of the world has come and gone many times, and somehow there are still people around to complain about it all.  (OK, technically I talked about how civilization was going to the dogs, but same thing.)  People will probably lose jobs when Amazon automates its warehouses.  On the other hand, we've been losing jobs for centuries, since the dawn of the Industrial Revolution.  If something weren't filling the void, we'd have an unemployment rate in excess of 99%.  Life goes on.  When the citizens of the Roman Empire became more interested in enjoying the luxuries of civilization than in defending it, the Roman Empire fell, and the world came to an end.  Of course, what that really meant was that people were more interested in investing their time and energy in building the new kingdoms that became feudal Europe than in defending a political structure that had become obsolete centuries earlier.  Somehow, life went on.  More recently, when rock & roll swept through the nation, it was a corrupting influence on our youth that would destabilize society.  Today we consider many early rock & roll tunes to be light and easy listening, and play them as background music in elevators.  Life goes on.

Of course, sometimes the world really does come to an end.  Our records are scarce, but I'd guess there was somebody in the city of Carthage around 250 BC saying that everybody should focus a bit less on making money and a bit more on beefing up the military.  They should have listened.  For the Carthaginians, the world truly came to an end: the city was burned to the ground, and all the inhabitants were killed or enslaved.

The problem is, it's really hard to tell in advance whether the particular end of the world we're talking about is going to fall in the "life goes on" category or the "no, it's really the end of the world" category.  So without attempting to forecast exactly what's going to happen, let's look at a number of other ways in which the world is going to end, and think about what this really means.

Talkin' 'Bout My (Facebook) Generation
George Orwell got it all wrong.  Big Brother didn't need to spy on people in their living rooms.  Instead, people simply started broadcasting every aspect of their lives on tools like Facebook, Twitter, and Foursquare.

It's easy to find examples of highly questionable decisions about what people post: everything from pictures and videos of drunken behavior at parties, to cursing out bosses, co-workers, and customers, to defamatory insults aimed at whatever happens to be nearby.

The law hasn't quite settled on how to treat these situations - do they represent free speech to be protected, or does an employer have a right, or even an obligation, to control its image by firing employees who engage in unseemly behavior?

It seems to run against human nature for people to stop acting out and sharing everything.  Which raises an interesting question: if everybody (or at least lots of people) engages in "unseemly" behavior, is it still unseemly?  What happens when the generation that grew up with Facebook is in their fifties and sixties and runs all the corporations (and not just the hot tech startups)?  Do we give up and adopt new societal standards for behavior because "everybody does it"?  Or does society bifurcate into a new set of haves and have-nots, in which a few people are lucky (or boring) enough never to do anything really stupid in the public eye, and are thus our only candidates to run major corporations and hold public office?

If a Tree Falls Before Wikipedia Covers It, Was It Ever Standing?
Encyclopedia Britannica recently announced that it was finally discontinuing the print version of its encyclopedia.  On the one hand, this was a momentous event, as Britannica had been in continuous publication since 1768.  On the other hand, it's amazing they kept making the print version as long as they did.  The company has been struggling for years, threatened first by CD-ROM encyclopedias like Encarta, and now by Wikipedia.  While a certain number of adherents prefer Britannica's expert-written model to Wikipedia's crowdsourced one, it is unclear how long the for-profit company can survive against a free competitor.

This is simply one example of print materials giving way to their electronic counterparts.  The Kindle and the iPad have ushered in a new era of ebooks, with the result that Borders has declared bankruptcy and Barnes and Noble loses money hand over fist while it struggles to reinvent itself.  Perhaps the independent bookstores will survive indefinitely as a niche market, but paper publishing in general is likely to be a shadow of what it once was.

Which raises the question: when the vast majority of all information is only available in online formats, how will we footnote our references?  If George Lucas decides that Greedo shot first, could we eventually lose every record that the story wasn't always that way?  (Yes, I know that he made the original versions available on DVD, but you'll note he didn't extend that to Blu-ray.  Wanna bet whether they'll migrate onto whatever format comes next?)

Perhaps references will become impossible, and all knowledge will devolve into "truthiness".  On the other hand, challenging data problems have a way of attracting programmers looking for the next great start-up idea.  Perhaps this very problem will inspire new types of technology that will lead to unprecedented levels of accuracy and transparency.  Imagine a scientific journal that automatically tracked its footnotes and warned the reader if key references were out of date or could no longer be trusted.  It would dynamically call the entire premise of the article into question, and you would have real-time feedback as to whether the article you were reading was still relevant.

To Boldly Go Where No One Has Gone Before (Back Home)
Whenever criticisms of Gen Y come up, high on the list is that they are the "Boomerang Generation".  Whether due to an unprecedented combination of school debt and economic malaise, or simply a lack of gumption caused by overindulgent helicopter parents, this generation leaves home, marries, and starts real careers later than any other generation in recent memory.  Have we lost the critical pioneer spirit that defines the American psyche?

On the other hand, maybe looking only at generations in recent memory is a short-sighted endeavor.  The whole concept of personal independence is relatively recent.  The standard family unit used to be extended, with three generations living under one roof, not just two.  Kids didn't generally strike out on their own; they joined the family business.  Perhaps the last couple of generations have been a deviation from the historical norm, and Gen Y is simply taking us back to our roots.  Only time will tell.

On so many levels, the world is ending.  Sometimes the end of the world really is the end of the world.  But much more often, it just seems that way to the previous generation who doesn't like or understand the new environment, and can't imagine how it will hold together if it's not like it used to be.

So do what you can.  If you see a problem, try and fix it.  But remember to keep an open mind and a sense of wonder, because whether you're going to love it or hate it, you'll hate to miss what's coming next.

Sunday, March 18, 2012

Know Your Limits

On June 22, 1941, Germany launched Operation Barbarossa, sending 3.9 million troops into the Soviet Union.  This was the largest invasion in the history of warfare, and was likely the most significant reason why the Axis powers lost World War II.

Untangling all the reasons why Germany lost on its Eastern Front could keep a horde of rampaging historians busy for a lifetime.  However, very high among the candidates would have to be that the Germans failed to pay proper attention to their most critical constraints.  They had plenty of weapons, ammunition, motorized vehicles, and horses, which Hitler believed to be all that was necessary to win.  But, overconfident of his ability to achieve a decisive victory quickly, he made no provisions for the supplies that would be needed if the campaign stretched into November or later.

There are some constraints in warfare which are flexible.  Treaties can be bent or broken, and then rationalized later.  Troops who are tired or dispirited can be bribed or threatened to march just a little farther.  Recruitment standards can be lowered to expand the pool of available soldiers.

Winter in Moscow is not one of those constraints.  It makes no allowances for your failure to plan.  When temperatures hit -40 degrees Fahrenheit, guns jammed, fuel turned solid, and troops stuffed newspapers into their light summer uniforms in a futile attempt to avoid freezing to death.

Military planners have studied these battles carefully.  They have attempted to extrapolate the underlying principles of cause and effect, and apply these principles to as many other situations as possible.  But have the lessons really sunk in for everybody else?

On February 14, 2012, the FCC rescinded permission it had previously granted to Lightsquared Corporation to create a new broadband wireless network.  The effect of this decision was devastating.  Shortly thereafter, the CEO of Lightsquared resigned, half the employees were laid off, and Sprint cancelled a 15 year spectrum-hosting agreement.  It is unlikely the company will recover.

Should anybody care?

Let’s think about constraints from an engineering context.  Some constraints are more important than others.  When designing an airplane, the total weight is a critical constraint.  Engineers will spend thousands of hours and billions of dollars to find a new type of material that will make the airplane a tiny percentage lighter.  When designing a washing machine, weight isn't nearly as important.  It's not completely irrelevant, as you wouldn't want to build a two ton machine that couldn't easily be delivered, or which would fall through an average floor.  But in general, washing machine designers don't spend much time worrying about it.  A 10% difference, say between a 180 lb and a 200 lb machine, is largely irrelevant.

Some constraints tend to become less significant over time.  Computing power is a great example.  If you have an idea for a great video game that would be too complex to run on today's generation of computers, don't throw the idea away.  Given the rate of improvement driven by Moore's law, it just might run on the next generation.  In fact, given the lead time needed to develop most games these days, you might want to start developing it right now, under the assumption that the hardware will catch up to you by the time you're done.

The size and cost of consumer electronics are similar (and closely related) stories.  Devices are small and constantly getting smaller.  If you have a ten-year plan to build a robot that needs a fully functioning Linux server the size of your thumbnail, don't worry that you can't find a server that size on the market today.  The Raspberry Pi has already demonstrated a Linux machine the size of a credit card.  Ten years should be more than sufficient time to shrink it down to thumbnail size.

Other constraints aren't so easily resolved.  One of these is the amount of available electromagnetic spectrum.  Spectrum is needed any time you want to communicate without a connected medium (such as a wire).  Its uses include television, AM and FM radio, Wi-Fi networks, cell phones, radar, microwave transmitters, and garage door openers.  There is only a fixed amount of it, and there will never be any more.  Demand for spectrum is intense, and growing rapidly.  Companies pay tens of billions of dollars to license relatively small portions of the available spectrum.

It is more precious than gold.

With this in mind, a number of years ago Lightsquared made a proposal to use advanced technology to deliver a new 4G LTE wireless network over a lightly used slice of spectrum it had licensed.  They would provide this network capacity to existing providers wholesale.  This would have increased capacity, exerted downward pressure on currently rising prices, and been a boon for consumers and corporations alike.  Everybody wins, right?

Well, not everyone.

It seems that GPS manufacturers were a bit concerned by this idea, since Lightsquared's plans would potentially interfere with some GPS devices.  In February, their concerns made enough of an impact to cause the FCC to put a halt to the venture.

If we left the story right there, then I'd probably side with the GPS manufacturers.  I like my GPS.  I don't want it to break.  If somebody has other ideas, well tough, because my GPS was here first, right?

Well, it's not quite that simple.  You see, you need to remember what I said about the electromagnetic spectrum.  Because it's so precious, it's all carefully defined and licensed out.  And as it turns out, Lightsquared was not infringing on the GPS wavelengths at all.  Instead, GPS devices are infringing on spectrum that was licensed to Lightsquared.

Although we talk about spectrum in very precise, digital terms, it's actually very analog.  If you broadcast on a specific frequency, unless you're very careful, the signal doesn't look like a sharp spike at that exact frequency.  Instead, it looks like a broad bell curve centered around that frequency.  Try looking at a simple frequency analyzer (such as Soniqviewer for iOS) while you play a single note on a musical instrument.  You'll be amazed at how many sound frequencies are playing all at once.
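If you'd rather see that spreading in numbers than in an app, here's a minimal sketch in Python (NumPy assumed available; the sample rate and tone frequency are arbitrary choices for illustration).  It takes a short burst of a single "pure" tone, runs it through a Fourier transform, and shows that the energy lands in many neighboring frequency bins, not just one.

```python
# Minimal sketch: a short burst of a "pure" tone does not show up as a single
# spike in the frequency domain -- its energy smears across nearby frequencies,
# much like a real radio transmission does.
import numpy as np

fs = 8000                        # sample rate, Hz (arbitrary for this demo)
t = np.arange(0, 0.05, 1 / fs)   # a 50 ms burst
tone = np.sin(2 * np.pi * 445 * t)

mags = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / fs)

# Print every frequency bin carrying noticeable energy; it is far more than
# just the one bin nearest 445 Hz.
for f, m in zip(freqs, mags):
    if m > 0.02 * mags.max():
        print(f"{f:6.0f} Hz  {m:8.1f}")
```

Radio transmitters behave the same way, which is why spectrum licenses come with strict limits on how much energy can spill outside the assigned band.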

The problem in this case isn't the transmissions from the GPS satellites; it's the receivers.  GPS signals tend to be weak.  GPS devices are inexpensive.  To make the devices as cheap as possible, and to get the best possible reception, GPS devices use receivers that are very imprecisely tuned, in an attempt to pick up as much of the signal as possible.  That means GPS devices end up listening to a lot of spectrum they have no business listening to.  And, no surprise, they tend to get confused when they hear some other signal on that neighboring spectrum.

This is a fixable problem.  Lightsquared has demonstrated how a simple filter could be added to a GPS receiver to clean up reception.  This would increase the cost of a GPS device, but cost isn't a significant constraint for consumer devices.  GPS chips are already such a commodity that they are being added to every model of smartphone on the market.  Garmin, the leader in standalone GPS devices, is desperately trying to find a new niche because its product is becoming such a commodity.
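The real fix is an analog radio-frequency filter in the receiver's front end, but the band-limiting idea is easy to illustrate digitally.  Here's a hedged sketch in Python (NumPy and SciPy assumed available; the frequencies stand in for "our band" and "the neighbor's band" and have nothing to do with actual GPS or LTE frequencies): a band-pass filter keeps the weak signal we care about and strips out the strong signal next door.

```python
# Conceptual sketch only: band-limit a received signal so that a strong
# neighbor no longer swamps the weak signal we actually want.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 8000
t = np.arange(0, 0.5, 1 / fs)

wanted = 0.1 * np.sin(2 * np.pi * 440 * t)     # weak signal in "our" band
neighbor = 1.0 * np.sin(2 * np.pi * 2000 * t)  # strong signal in a nearby band
received = wanted + neighbor                   # a sloppy receiver hears both

# Pass only the band we are licensed to listen to
b, a = butter(4, [300, 600], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, received)

print("RMS before filtering:", np.sqrt(np.mean(received ** 2)))
print("RMS after  filtering:", np.sqrt(np.mean(filtered ** 2)))  # close to the wanted signal alone
```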

So let's think about this from a constraint perspective:
  • GPS devices are cheap, and getting cheaper
  • Spectrum is expensive, and getting more expensive
  • Yet the constraint the FCC chose to protect is the current cost of GPS devices, not the efficient use of an ever-scarcer spectrum
Does this make sense to anybody?

So far, we've managed to handle spectrum concerns.  Smartphones, which are among the biggest consumers of data, are still relatively new.  We've been enjoying a pleasant summer of bandwidth availability.

But demand is growing fast.  Available spectrum is fixed.

Winter is coming.

Sunday, March 11, 2012

You Can Lead a Horticulture…


Another year, and once again Apple has managed to dominate the attention of the global media by releasing a new version of the iPad.  Curiously enough, even news outlets that didn’t think it was big news carried it as their top story.  For example, the New York Times described the improvements in its headline as “modest”.  And yet it chose to carry that story on the front page of its website.

Well, I’m sure it made sense to them.

Whenever Apple hits the news like this, the world seems to split into two extremely vocal camps.  Camp number one loves Apple and follows every detail.  Camp number two loathes Apple and disparages Apple followers as “sheep” and “lemmings” for snapping up every Apple product.  Curiously enough, they also tend to follow every detail.  (There’s a third camp who doesn’t pay any attention to any of this, but since they tend to be a quiet bunch, we can safely ignore them for this exercise.)  Without stepping too deeply into this debate, I would like to focus on one particular criticism that is sometimes leveled at Apple, which is that its products should not be purchased because they are much more expensive than competitors’ products with the same specifications.

This criticism represents extremely sloppy thinking.  I might even go so far as to call it stupid.

Before the deluge of hate mail begins, let me make clear that I’m not criticizing people who dislike Apple products.  I’m not even criticizing people who choose not to buy Apple products because of their specifications.  However, I am criticizing people who think that the technical specifications are necessarily the only relevant criteria for purchasing a product.

To illustrate what I mean by this, let’s take a look at a branch of artificial intelligence (AI) known as expert systems, and more specifically at automated rule induction.  I find it useful to consider artificial intelligence for subjects like this because AI was designed to replicate, at least superficially, how humans think.  By understanding the limitations of AI, we can sometimes gain insight into the limitations of our own thought processes.

In expert systems, knowledge is represented by a series of rules.  While the concept of an expert system as a form of artificial intelligence is only about forty years old, the idea of capturing knowledge in a simple rule is ancient.  For example, consider the following sayings:

“Red sky at night, sailor’s delight; red sky in morning, sailors take warning.”

“Red touches yellow, kill a fellow; red touches black, friend of Jack.”

Both of these represent knowledge within a particular knowledge domain (meteorology and herpetology, respectively) captured in a simple, rhyming rule.  With these types of rules, it becomes much easier to communicate, store, and pass this knowledge on.  An expert system uses rules similarly to develop a knowledge base about a knowledge domain.

The creators of the first expert systems collected knowledge manually.  They would interview experts (hence the name) in a particular knowledge domain, and codify the knowledge gathered from these experts in the form of rules.  As the field developed, the concept of automated rule induction was introduced.  In automated rule induction, a computer uses various algorithms to analyze a set of data, and derives various predictive rules from that data set.  For example, perhaps you would prepare a set of data which includes a simplified description of the weather (such as “clear” or “stormy”), the time, and the color of the sky twelve hours previously.  The computer analyzes the data, and discovers a strong correlation between a red sky at night and clear weather the following day.  Similarly, a red sky in the morning is strongly correlated with an upcoming storm.  This example is trivial, but when you increase the number of potential variables to hundreds, with thousands or millions of data points, computers can become very useful at spotting patterns and relationships that a human would miss.
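To make this concrete, here is a toy sketch of rule induction in Python.  Everything in it (the attribute names, the six-row data set, the scoring) is invented for illustration and is far simpler than a real induction algorithm, but it shows the basic move: enumerate candidate rules and score them against the data.

```python
# Toy sketch of automated rule induction: for every (attribute, value) pair,
# propose the rule "IF attribute == value THEN most common outcome" and score
# its accuracy against the data.  All data and names are made up.
from collections import Counter

data = [
    {"sky_12h_ago": "red_at_night",   "outcome": "clear"},
    {"sky_12h_ago": "red_at_night",   "outcome": "clear"},
    {"sky_12h_ago": "red_in_morning", "outcome": "stormy"},
    {"sky_12h_ago": "red_in_morning", "outcome": "stormy"},
    {"sky_12h_ago": "grey",           "outcome": "stormy"},
    {"sky_12h_ago": "grey",           "outcome": "clear"},
]

def induce_rules(rows, target="outcome"):
    rules = []
    attributes = [k for k in rows[0] if k != target]
    for attr in attributes:
        for value in {r[attr] for r in rows}:
            matches = [r for r in rows if r[attr] == value]
            prediction, hits = Counter(r[target] for r in matches).most_common(1)[0]
            rules.append({
                "if": (attr, value),
                "then": prediction,
                "accuracy": hits / len(matches),
                "support": len(matches),   # how many rows the rule covers
            })
    return rules

for rule in induce_rules(data):
    print(rule)
```

Run on this tiny data set, the sketch rediscovers the red-sky rule on its own; the real payoff comes when the same mechanical search is turned loose on hundreds of variables and millions of rows.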

There are two challenges in running an effective automated rule induction program.  The first is to avoid rules that are too narrow.  For example, the computer might produce the following rule: “Whenever the date is August 29th, 2005, there will be a hurricane.”  This rule would be 100% accurate based on the available data.  However, it’s also not particularly useful, because that date will never come around again.

The second problem is a rule that is too wide.  For example, the computer might produce a rule that says “It will snow whenever it is Christmas”.  The computer might choose this rule because it finds that it was true more than 50% of the time in the data set it analyzed.  But it’s so vague that it has extremely limited predictive value.  It is, to use the colloquial term, a “stupid” rule.  If you want a good weather forecasting program, you need a computer with much more intelligent rules, which look at a number of relevant variables and model how they interact.
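Continuing the toy sketch above (the thresholds are arbitrary knobs, not established standards), a second pass can throw out both kinds of bad rules: ones with too little support to generalize, like the August 29th rule, and ones whose accuracy barely beats guessing the overall base rate, like the Christmas rule.

```python
# Continuing the toy sketch: prune rules that are too narrow (covering almost
# no cases) or too wide (barely better than the base rate).  Thresholds are
# arbitrary choices for illustration.  Reuses `data` and `induce_rules` from
# the sketch above.
from collections import Counter

def prune_rules(rules, rows, target="outcome", min_support=2, min_lift=1.2):
    base = Counter(r[target] for r in rows)
    kept = []
    for rule in rules:
        base_rate = base[rule["then"]] / len(rows)
        too_narrow = rule["support"] < min_support          # the "August 29th, 2005" problem
        too_wide = rule["accuracy"] < min_lift * base_rate  # the "snow every Christmas" problem
        if not (too_narrow or too_wide):
            kept.append(rule)
    return kept

for rule in prune_rules(induce_rules(data), data):
    print(rule)
```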

It’s this second problem that trips up so many people in real life.  Somewhere along the line, they get an overly simplified rule in their head, and for whatever reason, they stick with it.  Maybe it’s too tiring to keep running the automated rule induction program in their brain.  Despite the constant influx of new data to the contrary, they continue to operate with an over-simplified view of the world that ignores most of the key variables.

In fact, this is exactly what racism is.  Whatever else you can say about somebody who is racist, you can also, quite objectively, call them stupid.  This is because the rules in their head which judge people look only at a single variable, and ignore the more complex factors and relationships which are necessary to develop a more sophisticated judgment of a person.  Racists are not intrinsically stupid – they might be quite sophisticated when it comes to other knowledge domains.  They’ve simply picked up a bad set of rules for this domain, and kept them despite evidence to the contrary.

Curiously, this same phenomenon proliferates in the “sophisticated” world of technology as well.  David Pogue discussed this at length in a 2007 column, in which he challenged the assumption that more megapixels on a digital camera result in better image quality.  It’s easy to understand where this assumption came from.  In the very early days of digital photography, megapixels were so limited that any time they increased, you did see a noticeable improvement.  But somewhere along the line, the megapixel count stopped being the limiting factor, and other variables, such as the lens and sensor, became much more significant.  But megapixels are simple.  They're clearly labeled on the box.  If you buy based on megapixels, you don’t have to think very hard about your choice.  You can get by using simpler rules.  You can, in short, be stupid.

Which brings us back to Apple.  The differences between an Apple laptop and, for example, a Dell laptop are so numerous that it’s difficult to list them all out.  OS usability.  Form factors (such as size, weight and shape).  Available applications.  Available interface ports.  Susceptibility to malware.  Compatibility with other devices.  Access to friends or colleagues who can provide support if need be.  Any one of these might be important to a particular person.

Computers are, in a word, complicated.  Attempting to define them solely based on a small list of metrics such as gigahertz and gigabytes is simple.  It’s easy.  But it is also, unfortunately, just a bit stupid.

Monday, March 5, 2012

Do You See What I See?


Do you want to heap praise upon somebody in the technology industry?  Use a single word to describe them as tops in their field, somebody to be looked up to, respected, and admired?

Call them a "visionary".

Everybody wants to be a visionary.  To be a visionary means to be able to see what other people can't.  It means being able to predict the future.  It means being able to find winning solutions, and understanding what your customers want before they know it themselves.  What could possibly be more valuable than to have a clear view of the future before anybody else?

Lots of things, actually.  Just ask Cassandra.

According to Greek myth, Cassandra was blessed by the god Apollo with the ability to see the future.  However, nobody would believe her, so she was unable to act on her visions, and the blessing became a curse.

Steve Jobs was often acclaimed as a visionary.  Perhaps that was true in spots, but it's also true that much of his success was attributable to being a guy with absolutely zero vision who could nonetheless identify quality whenever he saw it.  This was well demonstrated in a famous exchange between him and James Vincent, who was developing the commercials for the forthcoming iPad.  Jobs was adamant that the first round of proposals "sucked," but was unable to provide any guidance for how to improve them.  When pressed on this point, Jobs responded, "You've got to show me some stuff, and I'll know it when I see it."

On the flip side, let's talk about somebody who is not often called a visionary: Bill Gates.  We need to be a bit cautious about denigrating Gates's achievements, because whatever you may think of Microsoft's products, it's indisputable that he achieved a level of professional success that most people will never approach.  But it's also true that despite hundreds of different products, the vast bulk of Microsoft's wealth and success came from just two product lines: Windows and Office.  In neither of these categories was Microsoft first to market.  (Anybody remember when the leading office applications were WordPerfect and Lotus 1-2-3?)  Instead, Microsoft came late to the game with inferior, "me too" products, and achieved market dominance through aggressive marketing, incremental improvement, and sheer stubbornness.

Despite this, Bill Gates has demonstrated a brilliant understanding of the future on numerous occasions.  Consider this excerpt from his book "The Road Ahead", where he described a future product, which he called the wallet PC:  "You'll be able to carry the wallet PC in your pocket or purse. It will display messages and schedules and also let you read or send electronic mail and faxes, monitor weather and stock reports, play both simple and sophisticated games, browse information if you're bored, or choose from among thousands of easy-to-call up photos of your kids."

Sound familiar?  Swap "wallet PC" for the iPhone or the Android phone of your choice, and the description is spot on.  Bear in mind that this book was published back in 1995, when the state of the art in portable computing was a device that weighed only 5 lbs and had no built-in internet connectivity.

With a lead like this, how could Microsoft possibly fail to dominate the mobile technology platform as thoroughly as it dominated the personal computer?

How indeed.

This isn't the only lead that Microsoft has squandered.  Back in 2008, two years before Apple released the iPad, Microsoft was working on a revolutionary new tablet device code-named Courier.  It had a built-in camera, and accepted input from both multi-touch and a stylus.  Gizmodo, in a review of a late-stage prototype, called it "astonishing".  It might have proved a fearsome competitor to the iPad, but we'll never know, because Microsoft killed the product before it was ever released.  This decision was made in part on the advice of then-retired Chairman Bill Gates, who didn't think that Courier followed the core Microsoft Windows strategy closely enough.

In hindsight, this was probably a bad choice.  Might it have been anticipated?  Many years before Courier was killed, a technology visionary made an astute observation about how an ailing IBM was stifling its own innovation.  This came during the period of collaboration between IBM and Microsoft, who were jointly developing the new OS/2 operating system.  At the time, IBM was extremely concerned about interoperability between all of its product lines, and suggested in all seriousness that fonts be dropped from OS/2 because they would not be supported by IBM's existing line of mainframe printers.  This visionary correctly noted that you can't survive in the technology industry for long unless you are willing to cannibalize and render obsolete your own product lines.

That person's name?  Bill Gates.

Cassandra must have been laughing her head off.

For all of Microsoft's considerable success, it seems like it might have been greater still had it only managed to take better advantage of the visionary skills of its founder.  It might not be struggling to stay relevant in a world where PCs are rapidly giving way to mobile phones and tablets.  It might still succeed.  But if so, it will be celebrated as a surprising turnaround story, not as the tale of a Goliath maintaining its dominance.

So if ever Apollo offers you the chance to be a visionary, think about Bill Gates.  Remember Cassandra.  And most importantly, remember Noah, who understood the key principle of being a visionary: Predicting rain doesn't count unless you're also building arks.