Monday, February 27, 2012

The Promise of Online Advertising

On February 1st, 2012, Facebook filed papers in preparation for an Initial Public Offering, which is expected to take place sometime later this year.  This may be one of the largest IPOs ever, and is expected to result in Facebook achieving a $100 billion market capitalization.  That would give Facebook CEO and founder Mark Zuckerberg a paper wealth of $28 billion.

Facebook is the latest incarnation of an intoxicating vision we've heard about for some time now: customized, behavioral marketing.  This will revolutionize the marketing experience for companies and consumers alike, helping to connect people with the exact products and services they need, for little cost and with no effort.

Who wouldn't like that?

To understand this model, it's helpful to remember where we've been.  Let's use television as an example.  It used to be that TV networks would produce a half hour or hour show, possibly about a group of four Vietnam vets who went around blowing up everything in sight in order to solve various problems, all without actually injuring anybody.  The TV networks would pay for the cost of producing and broadcasting the show by selling air time during the broadcast to companies who wished to advertise their products to the viewers of the show.  The nature of the show would help the companies to determine the types of viewers most likely to be watching, and thus to target their advertising effectively.  For the A-Team, one might reasonably expect a high population of males between the ages of 7 and 30.  Adult females?  Not so much.

Sadly, the advertisements we got were mostly for toothpaste and paper towels.  Maybe not the best fit for the target audience.  I'm not entirely sure why that was.  Maybe the lack of sophisticated computers back then limited their ability to figure out that 18 year old guys were not obsessed with decisions over which brand of paper towels to buy.  Maybe the ad execs were all out to lunch.

Today, instead of television, we've got cool, targeted advertising venues like Google, YouTube and Facebook.  Instead of relying on Nielsen to figure out roughly who is watching which show, we have direct access to detailed characteristics about a person.  Facebook not only knows I'm a male over 35, it knows about my taste in books, movies and music.  It can probably make some shrewd guesses about my politics, my interest (or lack thereof) in sports, and my relative socio-economic status.  What a wealth of data!  I can now look forward to sophisticated, targeted advertisements that are aimed directly at me.  It will be like they're reading my mind, figuring out exactly what I want moments before I decide I want it.  Advertisements will be transformed from an annoying distraction into something eagerly anticipated.

It will be cool.  It will be awesome.  It will transform everything.

Just as soon as those ad execs get back from lunch.

Because, for reasons surpassing my understanding, the ads I see are still as mundane and pointless as they've ever been.  I just checked my Facebook page, and found advertisements for:
Visa (I haven't applied for a new card in over 15 years)
Heineken (I don't drink)
An electronic cigarette (I don't smoke)
Restaurants located in Bucks County (I don't know where that is)
And many more.

I don't mean to single out Facebook here.  I recently received a targeted advertisement from an Internet provider (who shall remain nameless, but their initials are AT&T).  The email suggested that I might like to upgrade to a faster internet service.  I don't fault AT&T for emailing me - I'm an existing customer, so it doesn't really count as spam.  Maybe they checked my account logs and noticed that I do max out the bandwidth from time to time.  This was actually a good advertisement, because I would like to have faster internet speed.  This is advertising at its best - a focused, targeted advertisement, sent directly to somebody who is looking for that service.

There was just one teensy, tiny problem.

AT&T doesn't offer faster internet service in my zip code.

Don't tell me they don't know my zip code, because they use it to mail me a bill each month.  And don't expect me to believe that it takes a human to check on the service availability, because I was able to look it up using the link they provided.  For whatever reason, AT&T simply finds it convenient to advertise their services to people they know they can't deliver them to.

So call me an advertising skeptic.  Don't ask me to believe that sharing my personal data will benefit me.  It's not that I don't like the vision; I do.

It's just that before it can become reality, those ad execs are going to have to come back from a very, very long lunch.

Monday, February 20, 2012

China or Bust

I'd like to continue with the subject raised in my last post regarding working conditions and wages in China.  There has been a lot of scrutiny in the media on this subject lately, most of it focused on Apple, largely because Apple is so insanely profitable that it's easy to make the case that they could improve wages and working conditions if they really wanted to.

So the question is: should they?

On the one side, we have the Milton Friedman, pure-economics school of thought, which holds that the economy should be driven by pure supply and demand, and that any attempt to tinker with it artificially will be harmful (or catastrophic) to all involved parties.

On the other side, we have people who believe that everybody deserves a living wage, and good working conditions, no matter what.  This group often overlaps with people who are upset about the loss of US manufacturing jobs, and would like the US to become more competitive by bringing foreign workers up to US standards.

Who is right?

On the subject of wages, I have to vote with the school of economics here.  Judging wages in other countries against US standards makes no sense when you consider local costs of living, and economic alternatives.

Let me provide a personal example.  A few years ago, I worked with a group of programmers in India.  I found them to be dedicated, hard-working and competent.  I knew their names, talked with them frequently, and enjoyed their company.  They were paid a small fraction of a US competitive wage.  This money allowed them to live a comfortable life in India.  The savings to my company helped (in small part) to lower the price of its products and increase its margins (which directly benefited US stockholders, which includes a large number of US retirees).

I think about them any time I hear somebody complaining about US programmers losing work to foreign competition.  While I can sympathize with anybody who's lost their job, I can't see a net advantage to the planet in putting three or four Indian programmers out of work so a single US programmer could be hired in their place.  Looking back on it several years later, I still believe that working with them left all of us better off.

So if wages were all there were to it, I'd be a pure free market advocate.  As much as I would like to improve the economy and employment figures here in the US, I can't justify one country enjoying a high standard of living while people in another country, possessing the same work ethic and skills, starve.  My hope is that we can quickly bring much of the rest of the world up to first world standards, without the US sinking down to third world standards.

But in addition to wages, we also need to consider working conditions.  In its profile of Foxconn (which largely triggered the recent round of media attention), the New York Times highlighted workers being exposed to poisonous chemicals, explosions caused by unventilated aluminum dust, and other health hazards.  On this subject, I believe first world standards should prevail.

While I was making this argument on a forum in the Huffington Post, somebody named "S M V" responded as follows: "Exposure to toxic chemicals and other work place hazards ... needs to be compared to the other options available. In 3rd world countries a lot of people stay alive by scavenging through garbage dumps. If they can have a better life sorting through computer waste and the resulting exposure to chemicals would you ban it? If you required 1st world safety levels much of the recycling either would not happen or would be automated."

I think S M V has a point, but it's not one I can subscribe to.  It has the same horrible logic of questioning whether it's right for shipwreck survivors to resort to cannibalism and eat their best friends in order to avoid certain starvation.  On paper, considering only raw survival metrics, it's probably inarguable.  But we don't live on paper, and we're more than just metrics.  People have names, and are more than the sum of their statistics.  So while I can't come up with a coherent logical argument to defeat S M V's rationale, I can propose some scenarios to consider.

Here's scenario number one: I'm back with my group of Indian programmers, and one of them leaves the group because they're not making enough money.  I imagine I would wish that person well, and hope that they got something positive out of working with me.    Even if they left angry, I'd be more inclined to think the fault lay with that person, rather than myself, because I know they took the job willingly, fully aware of the compensation, and have lost no future opportunities in the process.

I can live with that.

Here's scenario number two: I'm back with my group of Indian programmers, and am told that one won't be working anymore, because he or she has gone blind due to working conditions in the facility.  I don't think I'd sleep well that night, or the night after that.  I knew that person's name.  I'd worked with them for months, or years.  I'd think about the cost pressure I'd personally exerted on their employer, and wonder if I had direct responsibility for the conditions that resulted in this permanent injury.  Even if I knew they had equal or greater likelihood of getting injured even worse doing some other local job, I don't think that would help me sleep better at night.

I don't think I could live with that.

Neither, I suspect, could S M V.

Monday, February 13, 2012

The Democratization of Technology

A few weeks ago, Henry Blodget wrote an article on Yahoo's Daily Ticker, casting a critical eye on Apple's outsourcing agreements with companies such as Foxconn.  It's worth reading, and essentially makes the case that we're enjoying inexpensive consumer electronics at the cost of terrible working conditions for thousands of people.

It was an interesting article, but what really caught my eye was one of the comments, posted by somebody calling themselves "T Bo R": "iphone a low price...??? where have u been????"

Ignoring the punctuation and spelling issues, this raises an interesting question.  Are devices like the iPhone "cheap", or are they "expensive"?  And given that these are subjective designations, how do we even decide?

One place to start is to figure out who can afford an iPhone.  The median income in the US is $49,909.  This works out to $959 a week, pre-tax.  A top end iPhone costs $399, plus data plan.  Spending half a week's salary on a phone is pretty steep.  Paying for food, shelter, clothing, taxes, and everything else doesn't leave much left for discretionary purchases.  It's not completely out of reach for a median income person, but it is a large expenditure.  So if you define "cheap" as "easily affordable by the vast majority of the population", then the answer is "no".
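The back-of-the-envelope arithmetic here is easy to check with a few lines of Python (using the figures quoted above, and assuming 52 pay weeks in a year):

```python
# Affordability check using the figures quoted above.
median_income = 49_909            # US median annual income, pre-tax ($)
weekly = median_income // 52      # pre-tax pay per week
iphone = 399                      # top-end iPhone, subsidized price ($)

print(weekly)                     # 959
print(round(iphone / weekly, 2))  # 0.42 -- roughly half a week's pre-tax pay
```

So "half a week's salary" is a slight exaggeration, but not by much - and that's before taxes and the data plan.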

However, that's probably too simplistic a definition.  If I find a house for sale on a ten acre lot for $50,000 in Connecticut, I will certainly define that as "cheap", even though it's many times the cost of an iPhone.  So to define cheap versus expensive, we need to establish a relative baseline.

What's a useful baseline?

Let's assume the iPhone is one of the top phones you can purchase in the world.  Maybe you're an Android fan, but since they fit into a similar price point, it's hard to argue convincingly that they're not fairly competitive with each other.  Unlike the world of, say, automobiles, there's not another level of phone experience that can be procured for ten or a hundred times the price of your basic iPhone.  You can certainly buy phones for those price points, but they're pretty much just iPhones decked out with gold or diamonds.  Other than displaying the owner's wealth, they don't offer a different experience for the money.

This hasn't always been the case.  Remember car phones?  They used to be the exclusive domain of the extremely wealthy.  The first car phone, called the ARP (or Autoradiopuhelin, "car radio phone"), was released in Finland in 1971.  It was considered a huge success, reaching an install base of 10,000 users within 6 years.  Compare that with the iPhone, which has sold 73 million units in the four years since its launch.

I have no idea how much the ARP cost per user, so it's difficult to compare the relative financial success of these two products.  But it's abundantly clear that far more people have access to the top tier of telephone technology today than was the case 40 years ago.
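Even without pricing data, the adoption figures quoted above (10,000 ARP users in 6 years versus 73 million iPhones in 4 years) make the point by themselves:

```python
# Average adoption rates, using the figures quoted above.
arp_rate = 10_000 / 6                  # ARP users added per year
iphone_rate = 73_000_000 / 4           # iPhones sold per year

print(round(arp_rate))                 # 1667 users per year
print(round(iphone_rate / arp_rate))   # 10950 -- times faster uptake
```

Roughly eleven thousand times the adoption rate, for a device that does vastly more.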

OK, so maybe phones are cheap.  Is this an isolated case?

How about supercomputers?

It's true that an iPhone or a basic laptop packs more punch than a Cray supercomputer from a few decades ago.  But technology always improves, so that's not a fair comparison.  The real question is, based on what we consider to be a supercomputer in today's terms, how accessible are supercomputer technologies to the average consumer?

The answer: Much more than they used to be.

It used to be that supercomputers were a breed apart.  When the Cray-2 became the world's fastest machine in 1985, it was unlike anything you might have at your home or in your office.  It used a proprietary operating system not shared by any other computer in the world.  It cooled itself using a brand new inert liquid (Fluorinert).  Operating and programming a Cray required a unique set of skills.  If you didn't work for the Department of Defense or one of a very few huge corporations, you probably never got near one.

By contrast, not too long ago, the Air Force Research Laboratory constructed a powerful supercomputer, not out of advanced components you've never heard of, but by connecting 1760 PlayStation 3 gaming consoles.  It may not be the absolute fastest in the world, but it's definitely in the top 50.

Imagine going back to 1972 and suggesting the government build a supercomputer by networking thousands of Pong consoles.

Of course, the average person doesn't have access to 1760 PlayStation consoles.  That doesn't mean that supercomputing capability is out of reach.  The SETI@home project showed that in 1999, when it demonstrated the ability to distribute computing projects among millions of personal computers using fairly simple software.  Today there are dozens, possibly hundreds, of similar volunteer computing projects underway.  Access to supercomputing power no longer requires vast amounts of cash.  It simply takes some competent programming skills and a persuasive message.
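The core idea behind these volunteer computing projects is simple enough to sketch in a few lines.  A coordinator splits a big job into small, independent work units, hands them out to volunteers' machines, and merges the returned results.  The function names below are illustrative, not any real project's API, and the "analysis" is a trivial stand-in:

```python
# A minimal sketch of the volunteer-computing idea behind projects like
# SETI@home.  All names here are hypothetical, for illustration only.

def split_into_work_units(data, unit_size):
    """Chop a large dataset into independent chunks volunteers can process."""
    return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]

def volunteer_compute(unit):
    """What each volunteer's machine does: analyze its chunk locally.
    Here the 'analysis' is just a sum of squares, as a stand-in."""
    return sum(x * x for x in unit)

def merge_results(partials):
    """The coordinator combines the returned partial results."""
    return sum(partials)

data = list(range(1000))
units = split_into_work_units(data, 100)          # 10 independent work units
partials = [volunteer_compute(u) for u in units]  # in reality: many machines
total = merge_results(partials)
print(total == sum(x * x for x in data))          # True: same answer, distributed
```

Real projects add a lot on top of this (scheduling, redundancy, validating results from untrusted machines), but the division of labor is exactly this: split, farm out, merge.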

If your ethics are lacking, you might even dispense with the persuasive bit.  Botnets are the latest form of distributed computing infrastructure, collecting computing resources (coercively) from thousands of PCs.  The Conficker worm controlled an estimated seven million machines at the height of its infection.  It's a single example of many.

The list goes on and on.  Near-professional quality movie cameras are now available for amateur movie-makers at an affordable price.  Vast databases of knowledge are accessible for free or for modest subscriptions.  High speed networking is now installed in over 25% of US households.

So sorry, T Bo R, but by any realistic measure, iPhones, and all sorts of other advanced technologies, are available dirt cheap.  Maybe this is good, maybe it's bad, but it's the way things are right now.  Instead of complaining that you'd like them even cheaper, I suggest you take advantage of what we have, and figure out how to use it to change the world for the better.

Monday, February 6, 2012

What Limits Humanity?

One of the oldest ideas in existence is of humans receiving super powers due to some type of artificial augmentation.  Achilles was said to have gained near-invulnerability after his mother submerged him in the river Styx.  The Chinese folk hero Fong Sai-yuk was made similarly invulnerable when his mother treated him by breaking every bone in his body and then bathing him in Chinese Rubbing Alcohol.  (No word on where child services was during either of these events.)  More recent cultural heroes include astronaut Steve Austin being rebuilt stronger, faster and better following a debilitating accident in The Six Million Dollar Man, and Peter Parker's inheritance of super powers as a result of being bitten by a radioactive (or genetically modified) spider.  From the beginning, these stories have been harmless fantasies, an escape for people wishing to exceed their own limits.

How much longer will these stories remain fantasies?  And what will happen when they become real?

The New York Times recently profiled Oscar Pistorius, a runner from South Africa, who is the center of an unusual controversy.  Given that he has no legs, should he be allowed to race in the Olympics?  Until recently, this would be a non-issue, as his handicap would have prevented him from qualifying.  Now, however, the question is being raised as to whether his artificial limbs give him an unfair advantage.

Both sides raise excellent points, and the case could still go either way.  The complexities of the biology and physics involved make it difficult or impossible to arrive at a truly objective and accurate answer.  The fact that other amputees, using the same type of artificial limbs, have not enjoyed similar athletic success certainly suggests that Mr. Pistorius is an accomplished athlete.  On that basis, perhaps he should be allowed to compete.  However, that's a short term answer, and every year the decision will get more difficult.  Regardless of the state of technology today, it is inevitable that at some point, possibly soon, a runner using artificial limbs will have an overwhelming advantage over his or her "non-handicapped" competition.  Recent progress in biomechatronics has been impressive, and this progress shows every indication of continuing or accelerating in the foreseeable future.

Perhaps you think I'm being overly optimistic in this assessment.  If so, you probably haven't seen the videos of Dean Kamen's "Luke" arm.  You probably haven't seen the demonstrations of walking robots, capable of navigating terrain ranging from hills to snow to ice.  For years, we couldn't even think about these types of technologies, because computers powerful enough to control them were too big and heavy.  But artificial limbs have finally made the same types of breakthroughs that computers made when mainframes and minicomputers finally gave way to the first primitive personal computers.  The current generation, shown in those video links, is the TRS-80 and the Apple IIe of prosthetic technology.  Think about what happened to personal computers after 10, 15, or 20 years.

Where is this going to go?

On the one hand, this development is a triumph of progress.  For most of human history, a physical handicap has been a death sentence.  In the best of circumstances, it was an overwhelming burden for the handicapped, forcing them to live on charity at the outer fringes of society.  Only recently have people missing limbs had a reasonable chance to participate in society on almost equal terms, with that "almost" always representing a disadvantage.  The idea that a handicap might turn into an advantage seems like long overdue compensation for millennia of suffering and marginalization.

Still though, are we ready for what comes next?

Let's suppose that to keep things "fair" (or at least simple), the Olympic Committee decides that athletes cannot compete with artificial enhancements, thus limiting the field to people with their full complement of limbs.  This is roughly where we've been up till now, albeit enforced by the limits of technology, not the rules.  But then what happens?

Even if you're not a sports fan, it's likely that at some point, you've watched at least a small portion of the Olympics.  And even if you are a sports fan, unless you have a physical disability, or are close friends with or related to somebody who is, odds are that you've never watched the Paralympics.  It doesn't have the same excitement, the same cachet, nor the same advertising or endorsement dollars.

When Paralympic athletes start outperforming their Olympic peers on a regular basis, expect that to change in a big hurry.  The definitions of what is possible will go out the window.  The Paralympics will be faster, more exciting, and more unexpected than the Olympics.  Its audience will grow.  The advertising and endorsement dollars will follow.  Fame, recognition, and glory will come to the participants.

How long before the first able-bodied athlete voluntarily amputates his or her own limbs in order to be able to compete?

Perhaps you think the answer is "never".  Even if the Paralympics gain more prestige than the Olympics, surely that won't drive people to willingly mutilate themselves.  Right?  Isn't that much too high a price to pay to be a champion?

For me, absolutely.  Probably also for the majority of the human race.  But there's a problem with this sample size.  The majority of the human race are not competitive athletes.  In a famous study done by Dr. Gabe Mirkin, 100 competitive runners were asked if they would be willing to take a magic pill that would guarantee them an Olympic gold medal, if the pill would also kill them within a year.  Roughly half of them said yes.  Presumably they planned to take it during an Olympic year.

To my knowledge, this survey has not been repeated, and I have no idea if it was conducted properly, with the necessary scientific sampling.  But even if improperly conducted, it seems a strong indicator that the number of people willing to trade their health for victory is far above zero.  If you doubt it, take another look at your sports pages for the latest steroid scandal in your favorite sport.

For years, people were terrified by the loss of privacy that would come with advancing computer and networking technology.  Now that the technology has arrived, we find that privacy has not been snatched away by Big Brother.  It's been voluntarily given up by people in the form of Facebook, Twitter and Foursquare.  Most of the information we share seems like a good idea at the time.  Occasionally, somebody is rudely reminded that perhaps there should be some self-imposed limits.

We've similarly been terrified of the possibility of being transformed into cyborgs against our will.  It seems much more likely that we'll do it to ourselves, because it seems like a good idea at the time.  Let us hope that we figure out how to set appropriate limits before we learn the consequences the hard way.