I wish I didn’t have to write this post. Like many people, I’m intrigued by the potential of Google Glass, and would like to see it succeed. But so far, I haven’t seen evidence that anybody working with Glass really understands what it is or how to use it.
In many ways, Google Glass is simply a continuation of a long trend in making computing power more accessible. We started with mainframes, then evolved to minis, desktops, laptops, and smartphones. Mainframes could not be moved without a small team equipped with a truck or two, and could only be accessed by a small number of highly trained individuals employed by an institution large enough to spend millions of dollars. Each successive phase of computing evolution reduced the barriers to accessing computing power, and in the process opened up new possibilities for what you could use that computing power for. Using a mainframe for word processing made no sense, but it’s a great use for a desktop computer. It would be technically feasible to log your location in Foursquare using a laptop, but it only took off once we had smartphones.
Viewed simplistically, Google Glass is just the next phase in making computing power smaller and more accessible. It’s only modestly smaller and more mobile than a smartphone, so on the surface it might seem to occupy a similar niche in terms of applications. In terms of how you actually use it, though, the shift is much more akin to the difference between a mainframe and a laptop.
Every previous phase of technology had an interface which the user interacted with to the exclusion of doing anything else. You could run programs on a mainframe, or you could go sightseeing in town, but you couldn’t do both at the same time. Smartphones come close to crossing this line, as you can shift your gaze quickly from phone to surroundings, but it’s still an either/or proposition, as many people learn to their dismay after unsuccessfully attempting to drive while texting.
Google Glass, on the other hand, offers a unique opportunity to integrate computing power with your everyday interactions with the world. That opens up an entirely new world of computing applications, and it’s why I was so disappointed when the New York Times recently reported on the latest Google Glass developments from Google’s I/O developers conference.
Apps under construction include:

· Twitter
· Facebook
· CNN news alerts
· Elle fashion features
In short, these simply repackage existing applications and data feeds onto the new device. All of them buttress the worst argument that Google Glass’s critics make: that people are going to be distracted (with potentially fatal results) from whatever they really need to be focused on. None of these apps enhance the user’s existing visual experience.
What should developers be working on instead? Here are a few initial ideas, with rough code sketches for each one following the list:
· The personal database. A lifesaver for those of us who have difficulty remembering people’s names, but also useful for anybody without a perfect memory. Whenever the camera focuses squarely on somebody’s face, it uses facial recognition to compare that person to the people in your database and reminds you of that person’s name. It would also quickly pop up any timely facts, such as an upcoming birthday or anniversary. A quick input from the user (perhaps a nod or a tap on the side of the glasses) would produce a quick summary of facts you had previously stored, either displayed on screen or recited through a Bluetooth earphone.
· The tour guide. Travel through any location and have great attractions, good restaurants, and other points of interest presented to you in real time. Focus and tap again, and get detailed reviews of that restaurant or historical background on the cathedral. Ideally, this application would expose an open API rather than a single vendor’s perspective, so you could subscribe to whatever perspective you liked. Imagine legions of bloggers, each with their own unique voice, annotating the landscape with a variety of viewpoints: anything from the Hipster’s Guide to San Francisco to the Italian Gourmet’s Guide to Topeka, Kansas.
· The training assistant. Need to change the oil in your car, or install a video card in your computer, but not quite sure how to do it? YouTube has thousands of great videos instructing people how to do all sorts of things, but they are limited by the context of the filmmaker’s environment. Imagine getting step-by-step instructions in your own environment, and when you got stuck, the glasses would highlight the component you’re looking for via pattern recognition.
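To make the personal database idea concrete, here is a minimal sketch of the recognition loop in Python. Everything in it is illustrative: Contact, identify(), and on_face_in_view() are made-up names, and a real app would get face embeddings from an actual recognition model and draw results through whatever display API Glass ends up exposing.

```python
# Hypothetical sketch of the personal-database flow. The Contact type and
# all function names are stand-ins, not any real Glass API.
from dataclasses import dataclass
from datetime import date
from math import sqrt

@dataclass
class Contact:
    name: str
    embedding: list            # precomputed face embedding (list of floats)
    birthday: date = None
    notes: str = ""

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

def identify(face, contacts, threshold=0.8):
    """Return the best-matching contact, or None if nobody is close enough."""
    if not contacts:
        return None
    best = max(contacts, key=lambda c: cosine_similarity(face, c.embedding))
    return best if cosine_similarity(face, best.embedding) >= threshold else None

def on_face_in_view(face, contacts, today=None):
    """Called when the camera settles on a face; returns overlay text or None."""
    today = today or date.today()
    person = identify(face, contacts)
    if person is None:
        return None
    lines = [person.name]
    if person.birthday and (person.birthday.month, person.birthday.day) == (today.month, today.day):
        lines.append("Birthday today!")
    if person.notes:               # the nod/tap gesture would reveal these
        lines.append(person.notes)
    return " | ".join(lines)
```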
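The tour guide hinges on that open API: a guide is really just a feed of geo-tagged annotations, and the wearer chooses which voices to subscribe to. A sketch under those assumptions (Guide, Annotation, and the 200-meter radius are all invented for illustration):

```python
# Illustrative data model for the open tour-guide API; none of this is a
# real Glass or Google interface.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Annotation:
    lat: float
    lon: float
    title: str                 # e.g. "Mission Dolores"
    blurb: str                 # the guide's one-line take

@dataclass
class Guide:
    name: str                  # e.g. "Hipster's Guide to San Francisco"
    annotations: list

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters via the haversine formula."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def nearby(subscriptions, lat, lon, radius_m=200):
    """Yield everything the wearer's subscribed guides say about this spot."""
    for guide in subscriptions:
        for note in guide.annotations:
            if distance_m(lat, lon, note.lat, note.lon) <= radius_m:
                yield guide.name, note
```

The point of keeping the guides as plain data feeds is that the same street corner reads completely differently depending on whose guides you follow.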
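And the training assistant is, at bottom, a step loop with a vision check. In this sketch the pattern recognition is faked with a placeholder; Step, component_in_view(), and highlight() are all hypothetical:

```python
# Skeleton of the training-assistant loop. The vision and display calls are
# placeholders; the real work lives in whatever they stand in for.
from dataclasses import dataclass

@dataclass
class Step:
    instruction: str           # e.g. "Loosen the drain plug with a 14mm wrench"
    component: str             # the part to find, e.g. "drain plug"

def component_in_view(component):
    """Placeholder: real code would run pattern recognition on the camera feed."""
    return False

def highlight(component):
    """Placeholder: real code would outline the part on the display."""
    print(f"[highlighting: {component}]")

def run_tutorial(steps, wait_for_tap):
    """Walk the wearer through each step, lighting up parts they can't find."""
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step.instruction}")
        if not component_in_view(step.component):
            highlight(step.component)
        wait_for_tap()         # e.g. a tap on the glasses advances to the next step
```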
Google Glass has the potential to be the indispensable tool for tomorrow. It’s a pity that the developers working on it seem to be thoroughly fixated on yesterday.