Recently in Smart Objects Category

Last Thursday I was honored to have been invited to keynote the Internet of Things Day 2012 in Stockholm, organized by the Swedish Internet of Things Centre.

Since so much discussion of the Internet of Things is based on infrastructure technologies deployed by large institutions, I decided to take a step back and, with this presentation, talk about consumer-centered technologies created by entrepreneurs (which, to my surprise, turns out to be the focus of the new center, also).

Abstract
The technologies underlying most current Internet of Things visions are not particularly revolutionary. That of course doesn't mean that the visions are not compelling, just that the challenges in creating these visions have little to do with building new technologies. The real challenge is to identify what people want and need, and how -- or if -- automatic identification, distributed processing, and pervasive networking can help address those needs and desires. We need to think about how we're going to create the Google of Things, the Facebook of Things, the Foursquare of Things, the PayPal of Things, the Farmville of Things. It's not about the infrastructure, it's about the applications, and the applications are about people.

PDF
Slides and transcript (1MB)

Slideshare

Scribd
The Internet of People: Integrating IoT technologies is not a technical problem (Swedish Internet of Things...

Video
Part 1

Part 2
Part 3
Part 4

Transcript
Good morning. It's an honor to have been invited to this gathering. I have long been a fan of the work done by SICS and the Mobile Life center. Today I'm going to present my perspective on how the main challenge of meeting the big visions of the Internet of Things will not be in creating new infrastructure technologies, but in developing user-centered services; the focus should not be on what digital things do, but on how they can help people.

First, let me tell you a bit about my background. I'm a user experience designer. I was one of the first professional Web designers in 1993, where I was lucky enough to be present for the birth of such things as the online shopping cart and the search engine. This is the navigation for a hot sauce shopping site I designed in 1994.

I'm proud of the fact that 16 years later they were still using the same visual identity. These were some of the oldest pixels on the Web.

Here's one of my UI designs for the advanced search for HotBot, an early search engine, from 1997. If you're wondering why Google's front page is so minimal, I think it was because we were doing this.

Since then I've consulted on the user experience design of dozens, maybe hundreds of web sites. Here's one for credit.com, who were fantastic clients a couple of years ago.

I sat out the first dotcom crash writing a book based on the work I had been doing. It's a cookbook of user research methods. It came out in 2003 and the second edition [CLICK] will come out this fall.

And in 2001 I co-founded a design and consulting company called Adaptive Path.

I left the Web behind in 2004, and in 2006 I founded a company called ThingM with Tod E. Kurt.

ThingM is a micro-OEM and an R&D lab. We design and manufacture a range of smart LEDs for architects, industrial designers and hackers. Our products appear on everything from flying robots to Lady Gaga's stage show. This is an RFID wine rack that we did about four years ago. The different light colors represent different facets of information that's pulled down from a cloud-based service, such as current market price. This is a capacitive sensing kitchen cabinet knob we did two years ago. It glows when you touch it to create a little bit of magic in your everyday environment, and it was an exploration in making a digital product that would still be useful 20 years after it was made.

In 2010 I wrote a book on the user experience design of ubiquitous computing devices, which I define as things that do information processing and networking, but are not experienced as general purpose computing or communication devices.

However, ThingM and books are primarily side projects. My primary day job is as an innovation and user experience design consultant focusing on the design of digital consumer products. Here are some I've worked on for Yamaha, Whirlpool and Qualcomm.

The last couple of years my clients have been large consumer electronics companies and my focus has been on creating experiences that span multiple devices. I can't give you any details.

A lot of my projects broadly fall under the description of the Internet of Things, but that's a really challenging name to work with.

Talking about The Internet of Things is hard because there are so many different definitions. This is Time Magazine's illustration of the Internet of Things for their "Best Inventions of 2008" edition. I love this illustration because it makes no sense no matter how you think about it, which is actually quite an accurate representation of how confusing the many definitions of the Internet of Things are right now.

Let me give you my definition, which is pretty broad. For me the Internet of Things is the combination of distributed information processing, pervasive wireless networking and automatic identification, deployed inexpensively and widely. The underlying technologies and the applications that are traditionally discussed don't matter much, because it is this combination of factors that deeply affects people and industries, and it does it by connecting people's immediate experiences to the power of digitally aggregated and analyzed information. In other words, the Internet of Things turns physical actions into knowledge in the cloud and knowledge in the cloud into physical action in a way that's never existed before.

So, for example, I count the FedEx Sensaware smart tag and the Yottamark tracking system to be roughly identical. The Sensaware tag has a bunch of sensors, a GPS and the equivalent of a phone in it. It's used to track high value items, such as human organs, that need to be shipped under precisely maintained conditions. The Yottamark system uses stickers, readers and a wired network service to track things such as produce and car parts.
Technologically there's almost no overlap, but they both give people the ability to treat physical objects like they have been treating data packets. They bring the power and ideas of the internet to physical things.

I also count this to be a member of the Internet of Things. It's a cheap phone. It has all of the core components of the Internet of Things and it creates many of the same social effects. People begin to use it in the same way.

You can see this in how people are using hacked phones to do cheap Internet of Things prototyping.

Here's a project by Tellart, a Rhode Island design firm, that uses cheap phones to inexpensively add wireless tracking and identification capabilities to chairs. They did this with an advertising agency working for a furniture client. They strapped a GPS-enabled phone to the bottoms of chairs and distributed those chairs around Manhattan, leaving them on street corners to look like trash. People of course picked up the chairs, and the agency tracked those chairs around the city and found the people who had taken them. They then did an advertising campaign with those people, asking them why they had taken that specific chair. This is exactly the same kind of thing that the FedEx Sensaware tag is doing, but deployed by a bunch of designers for an ad campaign using technology so old that it was on phones that were ready to be thrown away.

Here's a project that Eric Paulos did with Intel research. They attached mobile phone-based air quality sensors to garbage trucks to create a daily updated air quality map of San Francisco. The core piece of technology is the mobile phone, which at the time was the cheapest Internet of Things platform.

In other words, although it's discussed as an emerging technology, I believe that The Internet of Things is actually a combination of mature technologies, much more mature than people give it credit for. The reason it's climbing the Gartner Hype Cycle is because those mature technologies are now cheap, and the rise of smartphones has made people more aware what happens when you take a small bit of functionality, which is what an app is, and distribute it through the world. I believe that people are looking at apps and thinking to themselves "Why do I need that expensive phone, with all its capabilities, to do this one thing? Why can't I just take that app, pluck it off the screen, and put it into a dedicated piece of hardware that only does that one thing? These technologies are really cheap. I can do that."

However, if you look at what applications are currently given as examples of the Internet of Things, you'll see that they're mostly top-down large-scale centralized infrastructural applications. Here's San Francisco's parking system. It uses sensors in the street to see what spaces cars are parked in. It can tell you where there are empty parking spots and can dispatch meter maids to write tickets more efficiently.

But these projects are not the ones that I believe will have the greatest impact on the world, nor where the greatest innovation will lie. I believe that the greatest Internet of Things innovation, and the deepest impact, will come from small, risky projects undertaken by entrepreneurs working with existing infrastructures.

And I believe that this will happen as people bring online services into the physical world as specialized devices. Let me start by discussing a consumer electronics trend I've been working with for the last several years, which I believe points to a deep shift in how people think about products.

Over the last couple of years, there's been a collapse in device functionality. There is now little distinction between a phone, a tablet, a laptop and a smart TV, except for the size of the display. Anything can do anything, roughly speaking. This has been accompanied by a fall in profit on these devices.

Companies have recognized that this shift to increasingly generic devices has been accompanied by a shift in people's loyalty. People's associations are no longer with the device, but with the service that the device delivers. Loyalty is not to the maker of the device, but to the services that device gives access to.

Let me give you an example. Netflix is a US movie rental and streaming service. To the Netflix customer, any device used to watch a movie on Netflix is just a hole in space to the Netflix service. It's a short-term manifestation of a single service. The value, the brand loyalty, and the focus is on the service, not the frame around it. Netflix works hard to reinforce this by creating a continuous experience across devices. You can pause a film you're watching on one device and unpause it on another.

Netflix has worked very hard to make their service available on virtually every device that has a screen and a network connection. They use every device available to bring what is perceived as a single thing to every corner of a customer's life.

Another example is the Kindle. Here's a telling ad from Amazon for the Kindle. It's saying "Look, use whatever device you want. We don't care, as long as you stay loyal to our service. You can buy our specialized devices, but you don't have to."

Jeff Bezos is now even referring to it in these terms.

The upshot is that this perspective reverses a traditional way of thinking about technology. Rather than thinking "Let's build an infrastructure and then figure out how to use it. Now that we have it, what are the applications of the technology?" this service-centric way of thinking about technology starts with a service, starts with concrete ways of creating value for people, and then uses every available technology to deliver that service. Of course Amazon started with the device, but they quickly realized that it was not the device where the impact and profit were.

As value shifts to services, devices, software applications and websites used to access those services--what I call the avatars of that service--simultaneously become more specialized and more secondary. A camera becomes a good way to take photos for Flickr, while a TV becomes a nice full-resolution Flickr display, and a phone becomes a convenient way to take your Flickr pictures on the road.

From this perspective, specialized hardware avatars begin to make more sense as people increasingly see "through" each device to the service it represents. Now they can recognize situations where a specialized device can provide significant value in using a service, while understanding that the service is not limited to that device.

I believe that this combination of factors will lead to an Internet of Things that is primarily services in the cloud, but services that have specialized hardware devices as one of their many avatars. This is already happening.

Let me show you a handful of examples that serve as early models. I'd like to start with these two, the Withings bathroom scale and the Nest thermostat. You've probably heard of both of these, but let me revisit them as avatars of Internet of Things services.

The Withings scale is an internet connected scale. At first it was kind of a gimmick. "You can tweet your weight to your friends!" was one of the ways it was originally pitched. That's of course not particularly interesting, but that was not the purpose of the device. The device is the avatar to a health service that helps you track your weight. The scale is how the service differentiates itself from other weight-tracking services, but the value is in the service itself, which is fully experienced using other avatars, such as the ones depicted on the right.

Withings has now expanded the service to include a blood pressure cuff. Again, the value is not in the devices, but in the knowledge that they create by collecting simple pieces of information and then providing users with the full power of cloud-based services to make use of that piece of information. Withings can keep adding avatars, new sensors and new ways to display the information the sensors collect, without fundamentally changing the promise of the service.

The Nest thermostat is a wireless thermostat that takes it one step further by closing the loop and allowing the online service to make changes in the world. The service uses information collected from the thermostat, the internet, and people's behavior to learn what the optimal temperature conditions are for an environment given how people use that environment. The sensor is pretty simple, but the service it provides access to is sophisticated. You can imagine them branching out into a wide variety of avatars for collecting information about your house and then acting on it in interesting ways, such as automatically moving the money you save to a special bank account when you behave in a particularly energy-saving way. But they begin with this very simple one that's almost a physical manifestation of an iPhone app. It even looks a bit like an app.

There is a whole class of such devices that are essentially projections of a cloud service through a limited-functionality hardware product. Here are some that monitor personal health and fitness: the Fitbit pedometer, the Zeo sleep sensor and the Bodymedia sensor, which can sense heart rate, skin temperature and other vital signs. These are of course sensor-based devices, but what they're selling is not the capabilities of the sensor, but of the cloud-based service the sensor connects to.


Here are a couple of startups focused on the home security sector. Lockitron lets you control digital locks over the internet, so that you can, for example, use your phone to create a unique code for people who are renting your apartment that only opens it during certain times, or keep track of when a specific door has been opened. Cam.ly takes cheap internet security cameras and adds many of the features that a sophisticated surveillance system provides, such as the ability to review many days of video quickly, or to have it alert you when it notices movement in a specific area. They charge $20 a month for this instead of hundreds of dollars. They can do this because most of the functionality is in the cloud.

My favorite example is still Vitality's Glowcap, which I've been talking about for years. This is a wireless network-connected pill bottle that's an avatar to Vitality's service for increasing compliance to medicine prescriptions. When you close the cap, it sends a packet of information through a mobile phone-based base station to a central server and it starts counting down to when you next need to take your medicine. When it's time, it lights up the LED on the top of the bottle.

However, the real power is in the packet of data it sends. That packet opens a door to the full power of an Internet-based service. Now Vitality can create sophisticated experiences that transcend a single piece of software or a single device.

For example, another avatar of the Vitality service is an online progress report that can be used interactively or delivered by email. It's like Google Analytics for your medicine.

Health care practitioners get yet another avatar that gives them long-term and longitudinal analytics about compliance across medications and time.

To me, this kind of conversation between devices and net services is where the real power of The Internet of Things begins.


Vitality has developed a wide range of avatars for patients, patients' families, health care practitioners and pharmacies. Each avatar looks different and has different functionality, but they're perceived, and designed, as a single system.

The Vitality system is an Internet of Things service that doesn't use any esoteric or complex hardware or software. It takes a model of a service that's long been popular in websites, one that has multiple touchpoints and which uses digital representations of personal relationships to create significant social effects, except that in addition to emails and web sites and apps, it also uses a couple small pieces of hardware. It treats the hardware as a part of the service, as an extension of the service, but it begins with the service.

Creating services like this is becoming increasingly straightforward. The whole Web 2.0 change was at its heart about creating tools for rapidly building and iterating Web services. Ruby on Rails, server virtualization and web analytics technologies created an ecosystem where it's very easy to provision new services and to iterate based on data about how people use them. This infrastructure was then taken up by app developers, who used it to create hundreds of thousands of apps in just a couple of years.

Now we're seeing technologies that make it similarly easy to add specialized hardware devices to services.

I grabbed this image from Arrayent, a company that makes a little hardware blob that connects virtually anything, in this case a smoke detector, to their cloud service. It can make any device look like a Web site, and there are other devices like it on the market.
Source: Arrayent

Services such as Pachube, sen.se, Thingspeak and Axeda are now serving a similar role by acting as data brokerages that make arbitrary, different devices act consistently. Pachube, for example, allows any net-connected device to share an arbitrary data stream with any other device. The service will do the buffering, the protocol translation, the analytics, everything. It's a system that has its roots in Web protocols and mashups, now connected to hardware.
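To make the data-brokerage idea concrete, here is a minimal sketch in Python of what publishing a single reading to such a service might look like. The host name, URL scheme, and header name are hypothetical stand-ins, loosely modeled on the REST-style APIs these services exposed, not copied from any real one:

```python
import json

def build_datapoint_request(feed_id, stream_id, value, api_key):
    # Assemble the URL, headers, and JSON body of a PUT request that
    # would push one sensor reading to a Pachube-style data broker.
    # The host, path scheme, and header name are illustrative only.
    url = "https://broker.example.com/v2/feeds/{}/datastreams/{}".format(
        feed_id, stream_id)
    headers = {"X-ApiKey": api_key, "Content-Type": "application/json"}
    body = json.dumps({"current_value": str(value)})
    return url, headers, body

# Any net-connected device -- a scale, a thermostat, a pill bottle --
# could publish readings this way and let the broker handle buffering,
# protocol translation, and sharing with other subscribed devices.
url, headers, body = build_datapoint_request("1234", "temperature", 21.5, "MY_KEY")
```

The point of the sketch is how little the device itself has to do: it formats one small value and hands it to the network, and everything sophisticated happens in the cloud.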

Connecting devices to the cloud allows for rapid iteration on features, since most of the functionality of those devices lies in the cloud.

The key to success with the IoT is to move beyond thinking of it as an infrastructural technology, such as this diagram of an RFID system, and to stop letting the name give you the wrong expectations for what it is. The name is a distraction. It implies a parallel universe that is as pervasive as the Internet, but different because it's about things. That gives the impression that projects that don't try to be as ambitious as the Internet somehow don't count. That misses a key point. The Internet of Things is ALREADY as pervasive as the Internet, because it IS the Internet. What's different is that it's now incredibly cheap to connect anything to the internet.

Image from "THE INTERNET OF THINGS: From RFID to the Next-Generation Pervasive Networked Systems" (Lu Yan et al, 2008)

The real challenge is not thinking about how we're going to create the Internet of Things. We need to think about how we're going to create the Google of Things, the Facebook of Things, the Foursquare of Things, the PayPal of Things, the Farmville of Things. It's not about the infrastructure, it's about the applications, and the applications are about people.

Scandinavia, and especially Sweden, has led the world in humanizing digital technology for decades. The work SICS and Mobile Life have been doing pushes the boundaries of understanding people and adapting technologies to their needs and desires. The rise of the Internet of Things is a fantastic opportunity for Sweden and I'm very excited to see what you'll produce, because I'm sure it'll be amazing.

Thank you.

The fantastic folks at Interaction South America invited me to be the closing keynote of their 2011 conference. I took the opportunity to revisit themes of the relationship between products and services I had talked about before, but focusing on the effects that servicization of products has on the shape of products and by trying to define some specific interaction design challenges associated with designing service avatars.

PDF
The slides and transcript (1.8M PDF)

Slideshare
Click through to see the transcript in the notes.

Scribd
Products are Services, how ubiquitous computing changes design

Presentation Transcript

Good evening

Thank you for inviting me. Today I'm going to talk about how products and services are merging as a result of cheap processing and widespread networking, and how these technologies are changing everything from our relationships to everyday objects, down to the shapes of the objects themselves.
First, let me tell you a bit about my background. I'm a user experience designer. I was one of the first professional Web designers in 1993, where I was lucky enough to be present for the birth of such things as the online shopping cart and the search engine. This is the navigation for a hot sauce shopping site I designed in 1994.
I'm proud of the fact that 16 years later they were still using the same visual identity.

Here's one of my UI designs for the advanced search for HotBot, an early search engine, from 1997. If you're wondering why Google's front page was so stripped down, I think it was because we did this.

I also helped in the design of hundreds of other sites.

And in 2001 I co-founded a design and consulting company called Adaptive Path.
I sat out the first dotcom crash writing a book based on the work I had been doing. It's a cookbook of user research methods.
I left the Web behind in 2004, and in 2006 I founded a company called ThingM with Tod E. Kurt.
We're a micro-OEM. We design and manufacture a range of smart LEDs for architects, industrial designers and hackers. We've also done a range of prototypes using advanced technology. Here's an RFID wine rack we did in 2007. It shows faceted metadata about wine projected directly onto the bottles.
Because self-funded hardware startups are expensive, I've simultaneously been consulting on the design of digital consumer products. Here are some for Yamaha, Whirlpool and Qualcomm.

I even still do some strategic web design as a user experience director. Here's the homepage for credit.com, who were great clients a couple of years ago.

The last couple of years my clients have been large consumer electronics companies. I can't tell you who they are or give you any details about the projects.

This talk is based on my most recent book, which is on ubiquitous computing user experience design. The book is called "Smart Things" and it's published by Morgan Kaufmann.

Three days ago, BERG London, which is a design consultancy, released this product. It's called Little Printer, and that's all it is. It's a little printer. It doesn't connect to a specific device. Instead, it connects to the cloud to print things from Twitter, FourSquare, The Guardian newspaper, etc. It doesn't need to be plugged into a network connection and it doesn't have an interface that looks like anything we're familiar with. It's not designed to print out your Word document. Instead, it's designed to give you a feeling of what is happening in your digital world. They describe it as more like a family member than a tool. What does that mean? Is it a joke? It's not a joke. They're totally serious.

We're going to see many more objects like this, digital things that don't look or behave like the computers we're familiar with. Tonight, I want to talk about the underlying forces that are coming together to create them and I want to encourage you to start thinking about interaction design not as something that happens on boxes with screens, but as something that brings together the physical and the digital.

I want to start by talking about unboxing. Many of you have probably seen unboxing videos or followed along a sequence of photographs as someone unwraps a device for the first time. Here's an intentionally old unboxing sequence I found on Flickr. It's from 2007.
Let's step back and think a bit about why this is interesting.
Unboxing is the documentation of the intimate experience of savoring the first time a person got to physically use, to touch, to own their precious new device. You, the viewer, got the vicarious thrill of seeing someone else's intimate experience.

The act of unboxing is a kind of a devotional act to the physical form of a digital object. We have grown up in a world where the physicality of objects matters. We want there to be meaning in the form of an object, in how it looks and feels. We want to experience it with our hands, not just our eyes. We want to know what the skin feels like, how heavy it is. Is it warm, cold, hard, soft? These things matter.

Photo: Brian Yeung

Five years ago, when that first set of photos was taken, the form factor of devices was still very important. We were at the peak of form factor experimentation. The basic value of mobile phones had been established and handset makers began to compete on the physical experience of their devices. The way that the device was shaped, how you held it, how it looked mattered.

This is the Nokia 7280 and the Philips Xelibri 4, both of which come from this era.

However, something happened along the way. The unboxing became pretty boring.
Today, when we look at unboxing images for the latest products, they all look basically the same. They're black rectangles in various sizes. Sure, each Android handset manufacturer has their own Android skin to make their black rectangle look different, but ultimately the physical objects are all trending toward the same size and shape.

Why? What happened in the last five years to change objects from these different, complex, sensuous forms to flat black rectangles that all do the same thing?
What happened is that our objects have become less important than the services they represent. This shift in value, from physical objects to networked services is huge and profound. It means that many of the physical things we've taken for granted are rapidly changing, new things are being created and our relationship to our world is rapidly shifting.

The shift of device focus to services represents a shift in the way that we relate to our things akin to what happened during electrification.
If you've ever used a wind-up record player or a treadle sewing machine, you know the wonder of the experience of a machine that's doing something complex, but doing it completely without electricity or gasoline. Those two substances, electricity and gasoline, are like modern magic. You don't really experience how they work directly. You can only see the effects that they have, so our relationship to electrical and gasoline-powered devices has an inherent leap of faith that somehow, somewhere inside the windings of a motor or in the pistons of an engine this invisible magic happens and the device works.
When you see a complex device that works by purely mechanical means, one that requires no magic substance, there's a feeling of incredible wonder, since your dependence on assuming the magic of electricity and gas is revealed.

That feeling is exactly the feeling our children will have about objects that aren't connected to the network. Our children will say, "Wait, you mean your cars didn't automatically talk to the net?" "How did they tell you when to fill them up?"

The simplest place to start thinking about this change is by looking at how expectations for user experiences on networked devices have shifted in the recent past.
When information processing and networking were expensive, computers had to be general purpose devices that had to deal with almost every situation. All the value was local. It was in the machine in front of you. That one tool was designed to cover every possible situation.

The software that ran on these computers also had to cover every possibility. The tools had to be completely generic and cover every imaginable use case.
However, that's no longer the case. Today processing is cheap, and our generic tools have become fragmented. They have been broken into pieces, and rather than buying one generic tool, you now have a tool BOX for the same price as that one expensive device ten years ago.

That device is also not isolated. Widespread networking and the Web created a shift in people's expectations. Today, most people understand that the experience you see on one device is often a part of something that's far away, that's connected to the world through some kind of digital back channel. There's no longer a need to pack all possible functionality into a single piece of software, and there's no expectation that everything will be there.
Moreover, we are increasingly accepting that the experience we get when we pick up a device and start an app may not be like the experience we had last time. The content or the functionality of a device is no longer stable, it's fluid and it's often not under our control. The device is no longer the container of the experience, but a window into it.
In other words, widespread networking has shifted our expectation of value from the device to the information that it contains, from the local to the remote.

If we take those shifts to their logical conclusions, we see that as information moves to the network, an individual device is no longer the sole container of the information. The information, and the value it creates, primarily lives in online services.
Devices become what I call "service avatars." A service avatar is a representative of a service, and a conduit for a service. You can give the device away without giving away the service. You can change it without changing the service. You can turn it off without turning off the service. None of that was true when the value was local.
For example, let's look at digital photography. If we take Flickr as our service, we see that a camera becomes a good tool for taking photos for Flickr, a TV becomes a high resolution Flickr display, and a phone becomes a convenient way to take your Flickr pictures on the road.

We now increasingly see THROUGH devices and software to the cloud-based services they represent. We no longer think of these products as being places we visit online, but services that we can access in a number of different ways, unified by brand identity and continuity of experience. We used to think of the Internet as a place we visit; now we think of it like we think of the atmosphere, as something that is always around us. We don't have to visit it. In fact, we're surprised when we don't have it.

For example, you can now get Netflix on virtually any device that has a screen and a network connection. You can pause a Netflix movie on one device and then unpause it on another.
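The pause-on-one-device, resume-on-another pattern only works because the service, not the device, owns the state. A minimal sketch of that idea, with invented names (this is not Netflix's actual API):

```python
# Sketch of service-held playback state: the service, not any one
# device, stores the position, so any avatar can pick up where
# another left off. All names here are hypothetical.

class PlaybackService:
    """Central service that stores per-user, per-title positions."""

    def __init__(self):
        self._positions = {}  # (user, title) -> seconds watched

    def pause(self, user, title, position_s):
        # Whatever device the user pauses on reports the position.
        self._positions[(user, title)] = position_s

    def resume(self, user, title):
        # Any other device asks the service where to pick up.
        return self._positions.get((user, title), 0)

# Pause on the living-room TV, resume on a phone: the two devices
# never talk to each other; the state lives in the service.
service = PlaybackService()
service.pause("ana", "some-movie", position_s=1820)   # the TV
print(service.resume("ana", "some-movie"))            # the phone -> 1820
```

The devices share nothing directly, which is exactly what makes each of them "just a hole in space" to the service.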

Because to the Netflix customer, any device used to watch a movie on Netflix is just a hole in space to the Netflix service. It's a short-term manifestation of a single service. The value, the brand loyalty, and the focus is on the service, not the frame around it. The technology exists to enable the service, not as an end in itself.

Netflix appliances are created for a single reason: to make it easier to access Netflix. That's what Roku does. It turns any device that's not already Netflix enabled into a Netflix avatar. The Boxee box does that for the Boxee service.

Here's a telling ad from Amazon for the Kindle, which is one of the purest examples of a service avatar based user experience. This ad is saying "Look, use whatever avatar you want. We don't care, as long as you stay loyal to our service. You can buy our specialized device, but you don't have to."

Jeff Bezos is now even referring to Kindle Fire in exactly these terms.

Facebook and HTC have now partnered to make a Facebook-specific phone from the ground up. If Facebook is the primary service you use on the Net, why not have a specialized device for it?

My favorite example of a dedicated hardware avatar is still Vitality Glowcaps, which is a wireless network-connected pill bottle that's an avatar of Vitality's service for increasing compliance with medicine prescriptions. When you close the cap, it sends a packet of information through a mobile phone-based base station to a central server and it starts counting down to when you next need to take your medicine. When it's time, it lights up the LED on the top of the bottle. That glow is the simplest output of the bottle as an avatar of the Vitality service. The real power is in the packet of data it sends. That packet opens a door to sophisticated experiences that transcend a single piece of software or a single device.
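The division of labor described above, where the cap only emits a small event and the countdown logic lives on the server, can be sketched like this. The field names and the eight-hour schedule are assumptions for illustration, not Vitality's actual protocol:

```python
# Hypothetical sketch of the cap-closing event flow: the bottle cap's
# only job is to emit a small, timestamped packet; the reminder logic
# lives in the central service.

import json
from datetime import datetime, timedelta

DOSE_INTERVAL = timedelta(hours=8)  # assumed prescription schedule

def cap_closed_packet(bottle_id, closed_at):
    """What the cap sends through the base station: just an event."""
    return json.dumps({"bottle": bottle_id,
                       "event": "cap_closed",
                       "time": closed_at.isoformat()})

def next_dose(packet):
    """Server side: parse the event and start the countdown."""
    event = json.loads(packet)
    taken = datetime.fromisoformat(event["time"])
    return taken + DOSE_INTERVAL

packet = cap_closed_packet("bottle-42", datetime(2012, 4, 12, 8, 0))
print(next_dose(packet))   # -> 2012-04-12 16:00:00
```

Because the intelligence is server-side, the same packet can also feed the progress reports and practitioner analytics described next, without any change to the bottle itself.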

For example, another avatar of the Vitality service is an online progress report that can be used interactively or delivered by email. It's like Google Analytics for your medicine.

Health care practitioners get yet another avatar that gives them long-term and longitudinal analytics about compliance across medications and time.
To me, this kind of conversation between devices and net services is where the real power of The Internet of Things begins.

Vitality has developed a complete system around this service that includes a social component, and different avatars for patients, patients' families, health care practitioners and pharmacies. Each avatar looks different and has different functionality, but they're perceived, and designed, as a single system.

Our ability to digitally track individual objects, like pill bottle caps, and connect them to the internet is creating a profound change in our physical world. We can now take what we've learned in the last ten years about creating networked experiences and apply it to physical objects.

Today we have the technical ability to uniquely identify and track even the most disposable objects. This is a melon that's uniquely tracked using a sticker from a company called Yottamark. Their service tracks each individual melon back to the farm where it was grown, through every warehouse and truck. You can use this to check that it's fresh, that it was kept in appropriate conditions, and that the farm is genuinely the organic farm that's advertised.
Once you know what kind of melon it is, you can also automatically find out how to cook it, how to compost it, what recipes work well with it, what your friends think about it, etc. In other words, you can do the things with it that are familiar from digital content, but now with physical objects.
Source: Yottamark

I call this cluster of data on the internet about a specific thing that object's information shadow. Every object and every person casts an information shadow onto the internet, onto the cloud.
In a very real sense, once you can identify each individual melon, it becomes the avatar of a melon service that provides information to you as a consumer, allows the store to understand their logistics, and allows the farmer to understand patterns of production and consumption. In the same way that data about yourself changes your behavior, as Chloe talked about yesterday, data about the objects in the world changes the world.

Wrapping your brain around what this means can be difficult, so let me give you an example.

When you buy into a car sharing service such as City Carshare, Zip Car or Zazcar in São Paulo, you subscribe to a service. Each car is an avatar of its respective service, actively connected to the service at all times. You can only open the car and start the engine if the service allows it. The car logs whether it's been dropped off at the right location, and how far it's been driven. All of that is transparent to you, the subscriber.
It's a lot like having your own car. It's available 24 hours a day and you can just book one, get in it and go. However, your relationship to it is different than having your own car.
Instead of a car, what you have is a car possibility space that's enabled by realtime access to that car's information shadow.

This is the German Call-a-Bike program, run by the rail service. You need a bike, you find one of these bikes, which are usually at major street corners. You use your mobile phone to call the number on the bike. It gives you a code that you punch in to unlock the bike lock. You ride the bike around and when you've arrived, you lock it. The amount of time you rode it automatically gets billed to your phone, by the minute.
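The call-for-a-code flow described above can be sketched as a tiny service that issues one-time codes and a lock that only accepts codes the service handed out. This is an illustration of the pattern, not Deutsche Bahn's actual protocol:

```python
# Minimal sketch of the phone-call unlock flow: the service, not the
# bike, decides who may ride. Codes, storage, and names are invented.

import random

class BikeService:
    def __init__(self):
        self._active = {}  # bike_id -> (code, rider)

    def request_code(self, bike_id, rider):
        """Called when the rider phones the number printed on the bike."""
        code = f"{random.randrange(10000):04d}"  # 4-digit unlock code
        self._active[bike_id] = (code, rider)
        return code

    def try_unlock(self, bike_id, entered_code):
        """Called by the bike's lock when a code is punched in."""
        issued = self._active.get(bike_id)
        return issued is not None and issued[0] == entered_code

service = BikeService()
code = service.request_code("bike-7", "sam")
print(service.try_unlock("bike-7", code))   # True: the ride begins
```

The bike itself holds almost no logic; like the car share car, it is only usable when the remote service says so, which is what makes it an avatar rather than a product.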

Each bike is an avatar of the bicycle service. Instead of a bicycle, you are now interacting with a transportation service that exists in the form of bicycles. You are not getting a thing, but the effect that the thing produces.

Here's another example that points to some exciting possibilities. Bag, Borrow or Steal is a designer purse subscription site. It's a service for expensive handbags. You don't normally carry a super expensive handbag all the time. You want it for a weekend, or for a couple of days. Through this service you subscribe and get the latest purse delivered to you. You use it for a couple of days, or for however long you want, and mail it back. Next time, they'll send you another one.

Again, what you own is not an object, but a possibility space.

Here's another one called Rent the Runway that also does dresses and accessories.

How long until you get a subscription to Zara and instead of buying your clothes, you just pay a monthly fee to get whatever is seasonal for your type of work in your part of the world at your price point?
We already have Exactitudes and people seem quite comfortable with it. Why not turn it into a subscription business model for clothes?

For me, the process of creating a successful product is not limited to creating great visual experiences, or efficient, clear interfaces, but understanding how to make products fit into people's lives today and tomorrow.
When designing service avatars, a number of different design disciplines--service design, industrial design, visual design, even branding--come together and affect how we interact with avatars.
Since this is an interaction design conference, I wanted to identify some issues with service avatar interaction design to give you a feel for what the challenges, and interesting opportunities are.

The first challenge is figuring out what an avatar won't do. When anything can do anything, when any avatar can computationally perform the same action as every other, you get a kind of design vertigo. What should THIS product do? What makes it different from that one?
A watch is a 20 centimeter interface, a phone is a 50 centimeter user interface, a TV is a 3 meter UI. They're completely different, but app designers, people who are making these terminals into avatars, are tasked with designing a consistent experience across all scales.
It's a nightmare.
To me, this means that one of the biggest service avatar interaction design challenges is deciding what a given device is NOT going to do.

But saying no is really hard. As Chloe talked about yesterday, consumer electronics companies add the equivalent of a tablet PC to the front of a refrigerator because it's technically easy. The problem is that they don't think through how this computer will make the refrigerator a better REFRIGERATOR.
If we think in terms of networked devices, we encounter the question of how a service avatar of an online service can make this fridge better. When Chloe presented her idea, she was absolutely correct in focusing on having the fridge know what food is in it so that it can become the avatar of an online grocery store service. The key insight is to create a service that focuses on what the fridge does, not what a computer can do. The challenge is to make the fridge an avatar of the service, not another general purpose computer that has to be managed.
As we've seen, no consumer electronics company has managed to do this successfully. I've been thinking about this for a long time and joking about it as a repeated failure. However, as I was writing this in the hotel room today, I realized that there is a model for this service that might just work.

It's the Hotel Mini-bar. So, Chloe, if we can figure out how to convert this model to something that everyone will want to have in their house, we've got a huge business waiting for us. Let's talk.

More practically, the Nest thermostat is a smart home thermostat that's an avatar of their online service. Yes, as a computer it's probably computationally the equivalent of an iPod Nano, but they're not trying to make another random small computer stuck to your wall. Instead, it's a networked thermostat. It doesn't do ANYTHING except try to be the best way to keep comfortable and save energy, using its status as a service avatar to do that.
They could have made it an invisible box that you control through an Android app, or a tablet that hangs in your hallway, but why? It's much easier to think of it as a thermostat. It's focused on the context in which it's used.

They also have other avatars for the same service. Each one is focused on maximizing the value that's possible in the context in which it's used. What is good about a computer with a high resolution screen? Well, you can use the large screen to see a complex schedule on it. The designers used the affordances that are available in the way that makes sense given what people want to do in the context in which the avatars are going to be used. It sounds like straightforward user centered design, but it's surprisingly hard to work out what the right context is, and where the right places to say no are, given everything that's possible.

A second key interaction challenge is how to manage service avatars' ability to behave on their own. When you had an unconnected computer on your desk, or a simple feature phone, you were pretty sure you knew what it was doing most of the time. The more connected a device is, the more it does things without asking you, without you knowing. Designing interactions with devices that have their own behaviors is quickly becoming a significant interaction design challenge.
Let me give you a simple example. This is the Water Pebble. It's a shower timer that aims to reduce water usage. When you first use it, you push a button and take a shower. From then on it glows green while your shower time is fine, yellow when you're almost done, red when you should stop, and blinking red when you're really over. The interesting part is that, after a while, it starts slowly reducing the amount of time it gives you so that you progressively build a habit of using less water.
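A guess at what the Water Pebble's fixed schedule looks like, with the reduction rate and color thresholds invented for illustration (the product's actual algorithm is not published here):

```python
# Sketch of a fixed-schedule shower timer: the allowance shrinks by a
# set fraction every shower, and the LED color reflects how far into
# the allowance you are. Rates and thresholds are assumptions.

def next_allowance(current_s, reduction=0.05):
    """Each shower, the allowance shrinks by a fixed percentage."""
    return current_s * (1 - reduction)

def status(elapsed_s, allowance_s):
    """Map elapsed shower time to the device's LED colors."""
    if elapsed_s <= 0.8 * allowance_s:
        return "green"
    if elapsed_s <= allowance_s:
        return "yellow"
    if elapsed_s <= 1.2 * allowance_s:
        return "red"
    return "blinking red"

allowance = 420.0  # baseline first shower: 7 minutes
for shower in range(10):
    allowance = next_allowance(allowance)
print(round(allowance))   # roughly 251 seconds after ten showers
```

Notice that nothing in this loop ever consults the user: the schedule marches down whether or not the person in the shower is keeping up, which is the rigidity at the heart of the negotiation problem.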
My personal experience with it, however, is that its algorithm for behavior change doesn't match my ability to actually change. It reduced the amount of time it gave me to shower, and I was following along with it, until my change curve deviated from the device's. Instead of helping me change my behavior, it just sat there in the shower drain blinking red and mocking me for not being good enough. I couldn't reason with it, I couldn't get it to change its algorithm to match my capabilities, so I stopped using it.
The interaction design challenge is how to let a user negotiate with a device that's making decisions for them. This is a simple ubiquitous computing device, but what if it was a service avatar that controlled the actual amount of water I used? I would now need to negotiate with it.

You can see how iRobot solved this with their Roomba robotic vacuum. They initially gave you four different ways, four different buttons for selecting what kind of mission the Roomba was supposed to go on. Of course the robot can do much more than that, but they watched people use the robots and determined what kinds of activity were most requested, what kind of behavior you could expect from the algorithm.

Then they revised it based on further research, essentially down to one button. That's not minimalism for the sake of minimalism, it's saying no to functionality based on an understanding of context.

The next interaction design challenge is how to deal with interactions with data streams, rather than data files. Traditional computer devices produce files, and over the last 30 years we've developed a number of different mechanisms for dealing with them. Modern file browsers resemble search engines more than they do the original Mac Finder, and that kind of works. It's not great, but it's functional.
Service avatars, because they're autonomous networked devices, do not produce files. The basic unit of data in a service is the data stream. They produce continuous streams of information, rather than single units of information. Think of it as a change from a world of static Web pages to dynamically generated sites. It's a completely different design philosophy.
Here's Pachube, an online data brokerage for what I would call service avatars. Each one of the 80,000 devices is producing a continuous real time stream of data.
How do you manage one of these? How do you manage twenty?
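One common answer for a single stream is to stop thinking in files at all and keep only a rolling summary of the flow. A minimal sketch, with invented sensor readings:

```python
# With a data stream there is no "file" to open, only a window onto a
# flow. A typical pattern is a bounded rolling window: keep the last n
# readings, summarize, discard the rest.

from collections import deque

class StreamWindow:
    """Keep only the last n readings of a continuous sensor feed."""

    def __init__(self, size):
        self._window = deque(maxlen=size)  # old readings fall off

    def push(self, reading):
        self._window.append(reading)

    def average(self):
        return sum(self._window) / len(self._window)

# A Pachube-style temperature feed: readings arrive forever, but the
# consumer only ever sees a summary of the recent past.
feed = StreamWindow(size=3)
for reading in [21.0, 21.5, 22.0, 30.0]:
    feed.push(reading)
print(feed.average())   # mean of the last three readings: 24.5
```

That works for one feed; the open design question in the talk is what the equivalent of Mint.com looks like when you have twenty of them.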

I think that the financial industry is a great place to look for models for dealing with data streams. Money is one of the oldest services with lots of well known service avatars from credit cards to ATMs to online shopping. There are a lot of good services out there that have very good interactions with streams of money. Mint.com collects the output of a number of different financial data streams and gives you lots of ways to see trends and to control what happens where.
Let's think of streaming video subscriptions. When people are subscribed to twenty different streaming video services, how do you help them manage that? Perhaps the answer is that we should start interacting with all service data like we interact with money.

Finally, we hit the last major interaction problem, which is that these devices can technically work together well, but in practice they're all separate. How can you design these avatars so they use their power and work together to make your life easier? How can you bridge devices to create a single experience that crosses multiple devices?

We're now starting to make headway, most notably in what are called "second screen" user interfaces. The TRON Legacy Blu-ray, for example, has a companion app that listens to the soundtrack and synchronizes interactive content on a second device along with the movie. These are essentially two avatars for the same service, which is the delivery of TRON Legacy. This is the beginning of multi-device, multi-screen user experiences.
Again, we're at the start of figuring out how interactions can span multiple devices that are simultaneously working together. Very soon, as we have toolboxes of devices rather than individual all-purpose devices, we're going to have to hook them together, and that's a fantastic interaction design challenge.

The last thing I want to talk about is the most speculative. I want to talk about the shape of service avatars.
Shape is a key component of the user experience. I'm really interested in how the physical shapes of objects change when they use new technologies, and I think we're about to see a big shift in the shapes of the objects in our world.

Let's start with telephones.
The old phone network was one of the first avatar-based services and you can see the effects of that relationship on the physical design of the devices.
If you look at an old phone, you see that it's not built for fashion or for flexibility. It's built for the most common use case and it is built not for annual replacement, but to minimize the need for repair. It is simple and modular and its internal parts didn't change for decades. It is a very conservative product design, for better and for worse.

The minute that phones stopped being owned by the service, they stopped being service avatars and became normal products. Their shapes went crazy and the manufacturing quality became incredibly cheap, because the entire set of incentives in the design of the device was different.

As we move back to a world of more service avatars, we can see this pattern repeating itself.
Municipal service avatars, the familiar Internet of Things devices such as smart electricity meters and networked parking meters that are being deployed by governments and utilities in large quantities, are very conservative for all the same reasons as the original phones. That's not so surprising.

What's surprising is that because the designers of the Call-A-Bike bicycles had many of the same design constraints--constraints that are inherently imposed by the economics of centrally-controlled services--they made the same kinds of decisions. The Call-A-Bike bikes are different from any other bike on earth, but because they are robust, overdesigned, and easily repaired, they may also be the most conservative.
Does this mean that this is the case for any service avatar design? That the design philosophy has to be ultraconservative?

No, but the other direction is not pretty, either.
Before the advent of LCD TVs, the replacement cycle of a CRT-based TV was on the order of 10-15 YEARS. Today, as the price of LCDs drops on the order of 20% per year, people are replacing their TVs much more quickly.

This affects how the TV is designed and built. As prices fall, margins shrink and the build quality starts to go down because there's an expectation that consumers will replace the device soon.
Vizio, a low-end TV maker, now regularly tells people that they must replace their TVs if those TVs are older than 12 months. Instead of a 15 year replacement cycle, Vizio is working on a 12 MONTH replacement cycle for TVs.
In other words, like the Garfield phone, when you buy the avatar of a service, you are just buying a frame. Thus, the design incentives are to make it as cheap as possible with gimmicks, because the makers know there is no real value in the avatar, it's all in the service.

Neither of these options is appealing to me. You either get conservative or disposable. That's a bad choice. If we had a Zara clothes subscription service, would their choices be to make clothes that were either built like tough work clothes or made of paper?
I hope not.
As I said in the opening, the physicality of objects matters. I think that the answer is for us as designers to reinvent business models. We are the ones who have the tools to satisfy consumers' desire for self-expression, elegance, variety and functionality, while still making products that are designed to be useful for many years.

It's the beginning of a profoundly new world, with these emerging technologies shaping the objects in our world, our relationship to those objects and how those objects are changing our expectations.
Because we are interaction designers, we will be the people designing the devices, the services and the world. We have a great responsibility.
We, those who grew up on the net and who design it, will be the ones who create ubiquitous computing, not the roboticists or network engineers, and ubicomp will fundamentally change the world and us along with it. Like Jon said yesterday, it is our responsibility to use our knowledge of people and technology to create new business models, to start companies, to take huge risks, and to be thoughtful about the implications of what we're doing without ever forgetting that we have no idea what's going to happen next.

Thank you.

Web Directions South graciously invited me to keynote their 2011 conference. I took the title of the conference somewhat literally and decided to roll up a bunch of themes that have been rattling around my head, and my presentations, to talk about what direction the Web is going, as it relates to ubiquitous computing. I also wanted to touch on the fact that as designers we create technology and, although we understand how it works, we generally don't know what it means. I tried to provide some ideas and some guidance about that.

You can download a 1MB PDF of my presentation with slides and a full transcript.

Here it is on Slideshare (click through and look at the speaker notes to see the transcript):

And here on Scribd:

Unintended Consequences: design [in|for|and] the age of ubiquitous computing

Here's the full transcript:

Good morning! Thank you very much for inviting me. I've heard great things about this event for years and it's an honor to be here. Today I'll be talking about ubiquitous computing and, very broadly speaking, design.
First, let me tell you a bit about myself. I'm a user experience designer. I was one of the first professional Web designers in 1993. I've worked on the design of hundreds of web sites and many digital consumer products. I also regularly work with companies to help them create more user centered design cultures so they can make better products themselves.
I sat out the first dotcom crash writing a book based on the work I had been doing. It's a cookbook of user research methods.
And in 2001 I co-founded a design and consulting company called Adaptive Path.
...and three years later I left it, and I left the Web altogether, to found a company with Tod E. Kurt called ThingM in 2006.
We weren't sure what we were going to be, but it turned out that we're a micro-OEM. We design and manufacture a range of smart LEDs for architects, industrial designers and hackers.
This talk is based on my book on ubiquitous computing user experience design. It came out last September and it's called "Smart Things" and it's published by Morgan Kaufmann.
I want to start with a little history. I love the history of technology. This example comes from Harold Innis, a political economist and Marshall McLuhan's mentor, who wrote about technologies and empires. He has an interesting take on papyrus. According to him, it nearly brought down the Ancient Egyptian empire, and ended up changing it forever. Before papyrus, writing in ancient Egypt was the process of slowly inscribing information permanently on immobile things like obelisks and tomb walls. Information moved slowly, formally. It was easily controlled and constrained.

When papyrus was invented, it seemed like a great idea for those in power. The pharaoh could administer his empire from a central location and wouldn't have to rely on messengers. Now he could send lots of precise instructions and scribes could write down complex ideas, such as those about geometry. But papyrus is not stone. It's easier to write on, orders of magnitude easier. So, people wrote more. A lot more. They were writing so much that they needed a less formal, less florid writing system, and more people learned to read and write. Suddenly, and by suddenly I mean over the course of hundreds of years, this meant that knowledge, and the control that comes with it, could no longer be centrally controlled. People started to get strange ideas. They started to ask why it was only the Pharaoh who got to go to heaven. Scribes, the nerds of their era, were suddenly quite powerful. Surprisingly powerful. Dangerously powerful.

The Pharaoh--and I can't remember which dynasty this was, maybe the 19th?--decided that this was really endangering the stability of the Empire, which was under a lot of stress anyway. He needed to do something drastic. He made all the scribes report directly to him. They were elevated to the same level as priests and the position became hereditary and bureaucratic. No one else was allowed to write. Amazingly, this worked, and the spread of literacy was brought back under central control.
The interesting thing is that the people who invented papyrus did not create it to threaten Egypt. Quite the contrary. And the scribes, they were just producing content. Moving symbols around. They were not intending to undermine their government.

No one involved intended to nearly topple Egypt with papyrus. There was nothing inherent in the technology that could have predicted this. No, it's that technology always, always has unintended consequences.
We who make technology have a strange perspective on its role in the world. We feel that because we make it, we understand it. We like to think we can predict where it will go and what it will do.

The problem is that our perspective is tiny and incremental. We usually miss the real deeply transformative change that happens outside our frame of reference. Often it's the people who create a technology that are the most surprised by its effects.

These are two small pieces of Scott Weaver's toothpick sculpture of the Bay Area.
The whole thing looks roughly like this. It took him 30 years and a bazillion toothpicks.

As technologists, as human beings, really, we are great at seeing the details, but in many ways we're not cognitively equipped to see the whole. We're terrible at seeing emergent phenomena that come from the confluence of thousands of small things. Big social waves brought on by technology have to be nearly on top of us before we see them.

We're currently in the upslope to such a shift brought on by something familiar, something that we may think we have a handle on, but which is creating deep social shifts we couldn't have predicted.
I'm of course talking about Moore's Law, since that's where all conversations about the implication of digital technology start. When people talk about Moore's Law, it's often in the context of maximum processing power. But it's actually something different. It's actually a description of the cost of processing power. It's a model of how much more processing power we can fit into a single chip priced at a predictable price point this year than we could last year. This means that it's not just that processors are getting more powerful, it's that PROCESSING is getting cheaper.

For example, at the beginning of the Internet era we had the 486 as the state of the art and it cost $1500 in today's dollars. It's the processor that the Web was built for and with. Today, you can buy that same amount of processing power for 50 cents, and it uses only a fraction of the energy. That decrease in price is the same orders of magnitude drop as the increase in speed. This is not a coincidence, because both are the product of the same underlying technological changes.

What this means in practice is that embedding powerful information processing technology into anything is quickly approaching becoming free.
We see this most readily as a proliferation and a homogenization of digital devices because virtually any device can now do what every other device does. This is why we're seeing all of this churn in form factors, since the consumer electronics industry is trying to figure out how they can sell yet one more screen of a different size. Four years ago it was smart phones, three years ago it was all netbooks, two years ago it was tablets, now it's 7-inch tablets and connected TVs. They're all essentially the same device in different form factors.
That's fine, but it's the most primitive of the transitions that's happening.
Simultaneously, the number of wireless networks in the world grew by several orders of magnitude.

This is a video by Timo Arnall that envisions how saturated our environment is with networks, and it's not even counting the mobile phone network, which covers just about everything. This means that virtually any device, anywhere can share data with the cloud at any time. People right now are excited about moving processing and data storage to the cloud and treating devices as terminals. That's certainly interesting, but it's also just the tip of the iceberg. That's like saying the steam engine is really great for pumping water out of mines. Yes, it's good at that, and also at creating the industrial revolution.
It is thus no longer unthinkable to have an everyday object use an embedded processor to take a small piece of information--say the temperature, or the orientation of a device, or your meeting schedule--and autonomously act on it to help the device do its job better. Information processing is now part of the set of options we can practically consider when designing just about any object.

If you look at what happened when the price of writing fell, or when extracting aluminum became two orders of magnitude cheaper in the late 19th century, or when electric motors became significantly cheaper and smaller in the 1920s, you see dramatic material and societal change. When something becomes cheap enough, when cost passes a certain tipping point, it quickly joins the toolkit of things we create our world with.
In other words, information has become a material to design with.

And with that, we have entered the world of ubiquitous computing, the world Mark Weiser roughly described.
Because we have information as a design material, we no longer think it's crazy to have a processor that creates behavior in a toy, or for a bathroom scale to connect to a cloud service, or for shoes to collect telemetry.

This capability of everyday objects to make sophisticated autonomous decisions and act using arbitrary information is new to the world, and it is as deep an infrastructural change in our world as electrification, steam power, and mechanical printing. Maybe it's as big of a deal as bricks. Seriously, it's a huge change in how the world works, and we're just at the beginning of it.
Today it's relatively simple to make a device sense the world with a great deal of precision.

There are thousands of sensors that convert states of the world into electrical signals that can be manipulated as information. This also includes sensors that sense human intention. We call these "buttons", "levers", "knobs" and so on.
Our things can make physical changes in the world based on input. Devices made from the perspective of treating information as a design material can autonomously affect the world in a way that no previous material was capable of.
Information can be used to store knowledge about the state of the world and act on it later. This could be just a single piece of data.
Or it can encode very sophisticated knowledge about the world. This is a Blendtec programmable kitchen blender. With it you can program a specific sequence of blender power, speed and duration and associate that sequence with a button on the blender. It allows you to embed experience and knowledge about food processing into the tool, which can then produce that as a behavior, rather than requiring the operator to have that knowledge and develop the experience.
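The core idea, a button bound to a stored sequence of steps that the machine replays, can be sketched in a few lines. The steps below are invented for illustration, not an actual blender recipe:

```python
# A blend "program" is just data: a named sequence of (speed, duration)
# steps. Pressing the button replays the sequence, so the operator's
# knowledge lives in the tool. All values here are made up.

SMOOTHIE = [  # (speed as a fraction of maximum, seconds to hold it)
    (0.3, 5),   # low speed to pull ingredients down into the blades
    (0.7, 10),  # ramp up to break the ice
    (1.0, 15),  # full power to finish
]

def run_program(program):
    """Replay a stored blend sequence; returns the step log a motor
    controller would act on."""
    log = []
    for speed, seconds in program:
        log.append(f"run at {speed:.0%} for {seconds}s")
    return log

for step in run_program(SMOOTHIE):
    print(step)
```

Because the program is data rather than operator skill, it can be duplicated exactly across every blender in every store, which is the point the next slide makes about Jamba Juice.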

Why do this? Well, if you're Jamba Juice, which is a large US smoothie chain, your business depends on such programmable blenders so your staff don't have to be trained in the fine points of blending and your product is always consistent. Your profit margins depend on knowledge that's encoded into your blenders, knowledge that's accessed with a single button.
This is the control panel of Blendtec's home blender. Blenders used to have buttons for different speeds. They described WHAT you were doing. Now, with embedded knowledge, it's about the desired end result. It's about WHY. The software handles the what.
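The idea of embedding blending knowledge as a stored program can be sketched in a few lines. Everything here is hypothetical--the step format and units are invented for illustration, not Blendtec's actual scheme:

```python
from dataclasses import dataclass

@dataclass
class Step:
    speed: int      # motor speed setting (hypothetical units, 1-10)
    seconds: float  # how long to hold that speed

# A "smoothie" preset: food-processing knowledge encoded as data and
# bound to a single button, so the operator doesn't need the expertise.
SMOOTHIE = [Step(3, 5), Step(7, 10), Step(10, 20), Step(5, 5)]

def run_program(steps):
    """Simulate executing a blend cycle; returns the total run time."""
    total = 0.0
    for step in steps:
        # a real blender would drive the motor at step.speed here
        total += step.seconds
    return total

print(run_program(SMOOTHIE))  # 40.0
```

The point is that the preset is data, not hardware: the expertise lives in the sequence, and the button just names it.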
One of the most transformative qualities of information is that it can be duplicated exactly and transmitted flawlessly. This has already changed the music and video industry forever, as we know.

But it also means that device behavior can be replicated exactly. We've become acclimated to it, but--stepping back--the idea of near-exact replication in a world full of randomness and uncertainty is a pretty amazing thing, and is a core part of what makes working with information as a material so powerful.

Image: N-Trophy, 2000-2003, Kelly Heaton, Feldman Gallery: http://www.feldmangallery.com/pages/exhsolo/exhhea03.html
Finally, and most profoundly, things made with information do more than just react; they can have behavior.

Information enables behavior that's orders of magnitude more complex than possible with just mechanics, at a fraction of the cost. This is a modern small airplane avionics system. It consists of a bunch of small fairly standard computers running special software. It's a bit like a flight simulator that actually flies.
Found on: http://www.vansairforce.com/community/showthread.php?t=51435
Compare that to a traditional gyroscopic autopilot, which is what it replaced. Every component is unique, it does very little, and to change its behavior you have to completely reengineer it.

When you make something with information, you enable that thing to exhibit behaviors that are vastly more sophisticated than what was possible with any previous material.
That is the wave that's basically on top of us.
So what can we as designers do in this situation?
Well, we're possibly the luckiest ones.

For the last 20 years we've been building a digital representation of the world on the Internet. We call it the Web, and if you look at it as a unit, it's a rough and unorganized, but fairly complete, model of most things in the world and how they interact.

Until now, however, it was disjoint from the thing it was modeling. We left it up to people to make the connection between this map of the world and the world itself. We had to resort to things like stickers to tell people in the real world that a given object, or location, had an information shadow in the cloud.
But that's quickly changing.
Here's Toyota and Salesforce's plan for having your car continuously embedded in both Toyota corporate's network and your social network. The factory can update the car firmware remotely, and the car can text you when it's done charging. The information shadow of the object, its representation in the cloud, and the object itself have been glued together.
For Web designers this is great news. As the model of the world and the world merge, as the map and the territory become increasingly intertwined, who knows the most about the map? It's us. We've been swimming in it longer than anyone else. And as things are increasingly made using information as a material, thanks to the inclusion of cheap processing and networking, we're the ones who know how to design for it.
Colliding galaxies, NASA
Because we're way ahead of the curve in terms of figuring out how digital things should talk to each other.

Everything that communicates needs to do so in some standard way, and increasingly that way looks a lot like the Web. Here's a slide from a project by Vlad Trifa and Dominique Guinard of ETH Zurich. They've built a middleware layer that makes every physical object look basically like a Web site. They call it, appropriately, the Web of Things. It doesn't sound particularly farfetched. They're just applying stable technical standards that were developed when Web servers were as powerful as today's smart TVs to things like, well, smart TVs.

This allows us to transfer our skills easily, since we can now mash up objects the way we mash up web sites.
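A minimal sketch of that idea: a lamp modeled as a handful of web-style resources with GET and PUT semantics. The class and paths here are invented for illustration, not Trifa and Guinard's actual API:

```python
# Hypothetical sketch of the Web of Things idea: a lamp exposed as a
# tiny set of web-style resources, addressed by path like a web site.
class WebThing:
    def __init__(self, name):
        self.name = name
        self.properties = {"on": False, "brightness": 0}

    def get(self, path):
        # GET /properties/on -> return the current value
        key = path.rsplit("/", 1)[-1]
        return self.properties[key]

    def put(self, path, value):
        # PUT /properties/brightness -> update device state
        key = path.rsplit("/", 1)[-1]
        self.properties[key] = value
        return value

lamp = WebThing("desk-lamp")
lamp.put("/properties/on", True)
lamp.put("/properties/brightness", 80)
print(lamp.get("/properties/brightness"))  # 80
```

Once a device answers to paths and verbs like this, composing it with other devices looks exactly like composing web services.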
That's a way of treating devices from afar as you would Web sites, but people's use of devices close by is also becoming more Web-like.

When devices are used to access online services people begin to see through them to the online world they provide access to, rather than looking at them as tools in their own right. In many
ways we no longer think of experiences we have on devices as being "online" or "offline," but as services that we can access in a number of different ways, unified by brand identity and continuity of experience. Our expectation is now that it's neither the device nor the software running on it that's the locus of value, but the service that device and software provide access to.
These devices become what I call "service avatars." A camera becomes a really good appliance for taking photos for Flickr, while a TV becomes a nice Flickr display that you don't have to log into every time, and a phone becomes a convenient way to take your Flickr pictures on the road.

Thus, the service and the device become increasingly inseparable and we who create the services effectively control the devices.
For example, you can now get Netflix on virtually any terminal that has a screen and a network connection. You can pause a Netflix movie on one terminal and then unpause it on another.
Because to the Netflix customer, any device used to watch a movie on Netflix is just a hole in space to the Netflix service. It's a short-term manifestation of a single service. The value, the brand loyalty, and the focus are on the service, not the frame around it. The technology exists to enable the service, not as an end in itself.
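The pause-on-one-terminal, resume-on-another behavior falls out naturally once playback state lives in the service rather than on any device. A toy sketch, with all names hypothetical and no claim about Netflix's actual architecture:

```python
# Sketch: playback position is stored by the service, so every device
# is an interchangeable window onto the same state.
class StreamingService:
    def __init__(self):
        self.positions = {}  # (user, title) -> seconds watched

    def pause(self, user, title, position):
        # any terminal reports where the viewer stopped
        self.positions[(user, title)] = position

    def resume(self, user, title):
        # any other terminal picks up from the same point
        return self.positions.get((user, title), 0)

svc = StreamingService()
svc.pause("alice", "some-movie", 1325)    # paused on the TV
print(svc.resume("alice", "some-movie"))  # 1325, resumed on the phone
```

The devices hold no state worth keeping; the service does, which is why the service, not the terminal, collects the loyalty.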

This is one way that objects in the world and the digital online map are becoming the same thing, a thing that we, as interaction designers, control.
Here's a telling ad from Amazon for the Kindle, which is one of the purest examples of a service-avatar-based user experience. This ad is saying "Look, use whatever avatar you want. We don't care, as long as you stay loyal to our service. You can buy our specialized device, but you don't have to."
Jeff Bezos is now even referring to it in these terms.

This leads to another experience design conclusion. The core of the product is not the web site you're designing, or the product you're designing--it's not any of the avatars of the service. The core is the service that lies underneath. The avatars reflect that service, they deliver the product in context-appropriate ways, and their design is very important since they are how people experience the service, but the most important part of the design is the service itself.

Thus, when we are designing FOR the Web, we are increasingly designing for the world.
So what's the upshot of all of this? How do these pieces fit into place?
It's still pretty early, and--like I said, we're terrible at identifying emergent phenomena--so we don't really know what this ubicomp elephant looks like. We do, however, have some pointers to what kinds of changes we could see.
Source: Banksy's elephant.
For example, what happens when you mix information shadows and service avatars? You get a blurring between what's a product and what's a service.

When you sign up with a car sharing company like Flexicar or GoGet you become a subscriber to their service.

Each specific car is an avatar of its respective service, actively connected to the service at all times. You can use it any time you want, but you can only open the car and start the engine if the service allows it. Your relationship with these cars becomes something different from either renting a car or owning one, sharing elements of both. It's a new kind of relationship that we don't yet have a good word for. And it's a relationship created by the capabilities of underlying technologies that didn't exist or were impractical 20 years ago.
This is the German Call-a-Bike program, run by the rail service. You need a bike, you find one of these bikes, which are usually at major street corners. You use your mobile phone to call the number on the bike. It gives you a code that you punch in to unlock the bike lock. You ride the bike around and when you've arrived, you lock it. The amount of time you rode it automatically gets billed to your phone, by the minute. Each bike is an avatar of the Call-A-Bike service.
Photo CC by probek, found on Flickr.
Here's another example that points to some exciting possibilities and that also straddles this model of not quite ownership and not quite rental. Bag, Borrow or Steal is a designer purse subscription site. It works like Netflix, but for really expensive handbags.
It's fashion by subscription. From a user-centered design perspective, it's great. Here's a class of infrequently-used, highly desired, expensive objects whose specific instantiation changes with the seasons. You don't want a specific bag as much as you want whatever the current appropriate thing to fill the dotted line is, but actually keeping up with that fashion is expensive.

This service lets you own that bag possibility space without actually owning a single bag.
Photo CC by bs70, Flickr
Here's another one called Rent the Runway that has expanded this idea to dresses and accessories.
How long until you get a subscription to Zara and, instead of buying your clothes, you just pay a monthly fee to get whatever is seasonal for your type of work, in your part of the world, at your price point?
We already have Exactitudes and people seem quite comfortable with it. Why not turn it into a subscription business model for Zara?
Another effect, and one which may be the most profound of all, is how our increasing reliance on embedded algorithms shifts relationships of authority and responsibility. This isn't necessarily bad--I, for one, am happy to let Google Maps plot routes for me since it only gets it spectacularly wrong every once in a while--but the more we embed sensors in our world and use automatically processed information to make material changes in the world, the more power we implicitly give algorithms and the more authority we give their designers.

For example, San Francisco has instituted a dynamic parking pricing system called SFPark. Sensors that look like speed bumps are embedded in the pavement. They sense whether a car is in a given parking space or not. This information is uploaded to the cloud, where three things happen to it: it serves as the data source for an app that shows drivers where there are empty spaces, it tells meter maids where there are cars with expired meters, and, most interestingly, it uses the parking frequency data to adjust parking prices dynamically. The stated goal is that the algorithm will price the parking so that there are always two available spaces on every block. Theoretically, a spot in a busy part of town that costs 50 cents an hour at 5AM may cost $50 an hour by 1PM. The people who run this program in San Francisco understand the potential danger of letting such an algorithm run completely free, and they've intentionally limited both the price range and how often it changes, but the fact that they felt they had to do that shows that a public negotiation with algorithms that control the world has already begun. You can see a similar negotiation happening with smart electrical meter pricing.
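The shape of such a deliberately constrained pricing algorithm can be sketched like this. All the numbers, thresholds, and names are hypothetical, not SFPark's actual values:

```python
def adjust_price(current, occupancy, lo=0.25, hi=6.00, max_step=0.50):
    """One pricing update: nudge the hourly rate toward a target
    occupancy, but clamp both the step size and the absolute price,
    the way SFPark intentionally limits its algorithm.
    (All numbers are hypothetical.)"""
    target = 0.80  # aim to leave roughly 20% of spaces free
    if occupancy > target:
        proposed = current + max_step          # block too full: raise
    elif occupancy < target - 0.20:
        proposed = current - max_step          # block too empty: lower
    else:
        proposed = current                     # close enough: hold
    return min(max(proposed, lo), hi)          # hard floor and ceiling

print(adjust_price(2.00, 0.95))  # 2.5: block nearly full, price rises
print(adjust_price(0.25, 0.40))  # 0.25: already at the floor
```

The clamps in the last line are the interesting part: they are the human-imposed limits on what the algorithm is allowed to do, which is exactly the negotiation described above.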
This kind of negotiation is happening all the way to the personal level, down to individuals and their relationship with themselves.

Right now the Quantified Self movement is quite popular in the San Francisco Bay Area. People are using a wide variety of sensors to measure things about themselves so that they can optimize their bodies and lives. Here's the cloud-connected pedometer from Fitbit, Bodymedia's multi-sensor cuff, and the sleep sensor from Zeo. They're all designed to collect data about you, then process it, perhaps share it, and visualize it. They're great examples of service avatars made with information as a material. But there's something about them that unsettles me.

At their core, they're shifting intrinsic rewards--the positive internal drive for being healthier, getting better sleep, being more fit--to extrinsic rewards: making numbers go up. But those extrinsic rewards are controlled by algorithms, rather than their owners' judgment. What these products are saying, in effect, is that we can become the people we want to be by giving up some of the control of our lives to these digital devices. Perhaps that's true--people depend on a lot of tools--but what results is a hybrid between a person with goals and a set of algorithms that purports to tell them whether those goals have been achieved. This is likely to have many unintended consequences. We trust algorithms and sensors because they look objective, but are they? How do we know?
This is the Water Pebble. It aims to reduce water usage by timing your shower and telling you when you hit your designated shower time. The way it works is that when you first use it, you push a button and take a shower. That sets the baseline. From then on it works like a shower timer. The algorithmic part comes in when, after a while, it starts slowly reducing the amount of time it gives you, so that you progressively build a habit of using less water.

My personal experience with it, however, is that its algorithm for behavior change doesn't match my ability to actually change. It reduced the amount of time it gave me to shower, and I was following along with it, until my change curve deviated from its. Instead of helping me change my behavior, it just sat there in the shower drain blinking red, mocking me for not being good enough. I couldn't reason with it, I couldn't get it to change its algorithm to match my capabilities, so I stopped using it.
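A fixed reduction schedule of the kind described might look like the sketch below (the rate and floor are hypothetical, not the Water Pebble's actual algorithm). Notice that nothing in it consults how the person is actually doing, which is precisely the mismatch:

```python
def next_target(baseline, uses, step=0.05, floor_fraction=0.5):
    """Hypothetical Water-Pebble-style schedule: shave a fixed
    fraction off the allowed shower time on every use, down to a
    floor. The curve is fixed in advance; it never adapts to the
    person's actual progress."""
    target = baseline * (1 - step) ** uses
    return max(target, baseline * floor_fraction)

baseline = 10.0  # minutes, set on the first shower
print(round(next_target(baseline, 0), 2))   # 10.0
print(round(next_target(baseline, 5), 2))   # 7.74
print(round(next_target(baseline, 50), 2))  # 5.0 (hit the floor)
```

The moment your real change curve falls behind this fixed one, the device can only blink red at you; there's no input for "this is too fast for me."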

I'm not saying that we shouldn't enter into these relationships, but that they represent a deep shift in how we relate to the world. We shift our trust and the responsibility of making sense of the world to algorithms more than to our own capabilities. We are likely going to spend the rest of our lives negotiating power relationships with embedded devices, in a way that no people have ever had to before.
And we can expect many unintended consequences. The designers of Facebook, Twitter, YouTube, and text messaging did not, and could not have, predicted a papyrus-level crisis in Egyptian government. And yet they provided the medium through which that revolution happened, largely confirming Ethan Zuckerman's assertion that any technology that can be used to share cute cat pictures can be used to overthrow a government.

We, those who grew up on the net and who design it, will be the ones who create ubiquitous computing, not the roboticists or network engineers, and ubicomp will fundamentally change the world and us along with it. We have tremendous power and enormous responsibility. And it's our responsibility to enjoy ourselves, make great stuff, take huge risks, and be thoughtful about the implications of what we're doing without ever forgetting that we have no idea what's going to happen next.
Thank you.

Adaptive Path invited me to run a workshop at UX Week this year. I was very flattered and used this as an opportunity to unify a lot of the ideas I've been working on over the last couple of years into a single big presentation.

Since this was a workshop, I can't relay what happened in the interactive second half of the day, but here are the slides from the morning. These combine in one deck many of the big concepts I've been working on in the last 4-5 years (such as information shadows, service avatars, and applianceness) with a bunch of thoughts about current technologies and use contexts that I find interesting.

You can download the presentation as a 3MB PDF.

Here is the Slideshare version (click through to see the speaker notes--the slides aren't very interesting without the notes, which are a transcript of the whole talk):

And here it is on Scribd:

Designing Smart Things: user experience design for networked devices

Here's a full transcript:


First, let me tell you a bit about myself. I'm a user experience designer and entrepreneur. I was one of the first professional Web designers in 1993. Since then I've worked on the user experience design of hundreds of web sites. I also consult on the design of digital consumer products, and I've helped a number of consumer electronics and appliance manufacturers create better user experiences and more user centered design cultures.

I sat out the first dotcom crash writing a book based on the work I had been doing. It's a cookbook of user research methods.
In 2001 I co-founded a design and consulting company called Adaptive Path.
Three years later I left it, and the Web behind, and in 2006 founded a company with Tod E. Kurt called ThingM.

We're a micro-OEM. We design and manufacture a range of smart LEDs for architects, industrial designers, and hackers.
I have a new startup called Crowdlight that I'm trying to get off the ground, and I'm currently consulting for the R&D lab of a major consumer electronics manufacturer.
This workshop is based on my book on ubiquitous computing user experience design. It came out last September, it's called Smart Things, and it's published by Morgan Kaufmann.
This is a workshop on user experience design for networked devices, and I mean that in the broadest sense. My focus is not just on designing multi-touch apps for tablets or 10-foot UIs for connected TVs. That kind of screen design is part of it, but from my perspective it's a subset of a larger set of design challenges and possibilities around the design of digital devices that are connected to the internet. My goal is to think about the broader experience design possibilities created when any device becomes connected. This means rethinking the possibilities of many things from the ground up. That, of course, can't be covered in a single-day workshop. This picture, by the way, is of a visual designer, an interaction designer, and an architect mocking up a new kind of clock in a workshop I ran a couple of years ago. What we're going to try to do today is to give you a feel for how the design of connected objects is different from the design of things that you may be familiar with, and to give you some concepts and tools that may help you with that. We will focus less on specific techniques than on thinking about how to deploy concepts, and on how to ask critical questions about these sorts of projects so that you can be a better judge of your own designs and the designs of others.
First, I'd like to set some foundational definitions, and that starts with what it is that we're talking about here. I define all digital connected device design as part of the same larger trend that was identified and named by the late Mark Weiser, then the CTO of Xerox PARC.

More than twenty years ago he envisioned a world that didn't have one big general purpose computer per household, but many computers distributed throughout the environment. He called this trend ubiquitous computing, or ubicomp.

As electronics consumers we're most clearly experiencing this as a proliferation of device form factors. Our general purpose computers now come in many shapes and sizes.

But I'm talking about a deeper change. Our relationship to our environment is fundamentally changing through the embedding of technology throughout our everyday environment. This looks like a typical San Francisco parking meter, but it's actually part of an extensive network of overlapping services.

Streetline Networks is one such system: it connects parking meters to sensors in the pavement that look like speed bumps but actually identify which parking spaces have cars in them, communicating through base stations installed in street lights. This allows the city to know which spaces are open when, so that it can write tickets more efficiently and change the price of parking based on demand. It also allows parkers to get a real-time map of where there are open parking spaces. This isn't some research project; it's an actual system installed in San Francisco. These images are from the pamphlet Streetline published FIVE years ago. A similar system went online in big parts of San Francisco earlier this year.

Now how did we get here? I believe that this is happening because of an intersection of three trends.

I want to start by talking about Moore's Law, since that's where all conversations about the implications of digital technology start. When people talk about Moore's Law, it's often in the context of maximum processing power. But it's actually something different. It's actually a description of the cost of processing power. It's a model of how much more processing power we can fit into a single chip priced at a predictable price point this year than we could last year. This means that it's not just that processors are getting more powerful, it's that PROCESSING is getting cheaper. For example, at the beginning of the Internet era we had the 486 as the state of the art, and it cost $1500 in today's dollars. It's the processor that the Web was built for and with. Today, you can buy that same amount of processing power for 50 cents, and it uses only a fraction of the energy. That decrease in price is a drop of the same orders of magnitude as the increase in speed. This is not a coincidence, because both are products of the same underlying technological changes. What this means in practice is that embedding powerful information processing technology into anything is quickly approaching free.
Here's Mark Weiser's diagram from 15 years ago showing the shift from mainframes to ubiquitous computing. He missed cloud services, so it isn't technically accurate, but it's generally a good model for thinking about how our world is changing because of all of the inexpensive applications for processing. Basically what this is saying is that information processing used to be expensive and had to be limited to special devices, but now it is cheap and can be used in all kinds of novel situations. This means that you can now include powerful processing and networking in almost anything, and start rethinking the design of everything in terms of embedded digital technology. I'll explore the implications of this again later in this talk, but first I want to talk about the other major technological changes that are driving ubicomp.

The other dominant trend right now is of course pervasive data communication. This is an image from Timo Arnall that's envisioning how saturated our environment is with networks, and it's not even counting the mobile phone network, which covers just about everything. This means that virtually any device, anywhere can share data with the cloud at any time. People right now are excited about moving processing and data storage to the cloud and treating devices as terminals. That's certainly interesting, but it's also just the tip of the iceberg.

There are a vast number of networking technologies. They tend to trade off three things: distance, bandwidth, and power. If you maximize any one of them, you minimize the other two. So if you want something that's fast and has long range, it'll require a lot of power. If you want something that uses very little power, it'll have to be either slow or close, or, more likely, both.

Here's what Cisco estimates the trend of wireless data traffic is going to look like. The baseline here is last year. For comparison, to represent the amount of data for 2008 you need a line that's 1/8th as thick as that small line on the left and 2006 was 1/24th as thick. You get the idea. Wireless data has gotten pervasive and, judging by this level of adoption, very cheap.

Which brings me to power, which is not so much a trend as an anti-trend. I'm sure you're familiar with the fact that battery technology has not advanced as fast as processing power, but let me show you exactly how much. In the time that processing power increased by a factor of ten million, battery efficiency increased by a factor of, let's be generous, ten. Probably closer to four. This means that many of the things we can theoretically do with processors, we can't do in practice because of batteries. If there's a brake on the advance of ubiquitous computing, it's power.

The combination of these factors has created a shift away from raw processing power to the application of processing, which has led CPU manufacturers to emphasize different things. Here's a slide from a 2009 talk from Paul Otellini, the CEO of Intel. Notice that instead of talking about numbers going up, processor manufacturing has become all about pushing numbers down. Instead of competing on doing more with more, they are now competing on doing the same with less. Less power, smaller size, and lower cost. What these manufacturers are doing is that they're emphasizing the context-specific use of information processing, rather than raw throughput and they're aggressively creating new classes of processors that enable information processing to happen with smaller amounts of energy.

This new system-on-a-chip from Microchip has about as much processing power as that initial 486, but it also has an onboard video controller that can drive a VGA-class screen, a USB controller for peripherals, a 24-channel analog-to-digital converter for sensors, and a capacitive sensing driver that can drive a touch screen. It costs about $5, uses less power than a keyring LED flashlight, and fits on a chip the size of your fingernail. It's also not unusual. Almost every semiconductor maker makes similar products.

Ok, so that may have seemed like an obvious beginning: sure, processing is cheap, networking is pervasive, and we have specialized chips, but we knew that. True, but revolutions rarely come completely unexpectedly. The pieces are all around for us to see, but it's a set of circumstances that puts them together. I think that we hit the tipping point to ubiquitous computing in 2005. That's the year Apple put out the iPod Shuffle, Adidas launched the adidas_1 shoe and iRobot launched the Roomba Discovery, their second generation model. That was the year that it began to make sense to create devices that compete through information processor-enabled behavior. The Tickle Me Elmo Extreme, which came out in 2006, is a prime example of this. It's a toy that creates its competitive advantage, that justifies its $80 introductory price in a world of $20 plush toys, by using information processing.

It is no longer unthinkable to have an everyday object use an embedded processor to take a small piece of information--say, the temperature, or the orientation of a device, or your meeting schedule--and autonomously act on it to help the device do its job better. Information processing is now part of the set of options we can practically consider when designing just about any object. In other words, information is quickly becoming a material to design with. If you look at what happened when the price of extracting aluminum dropped by two orders of magnitude in the late 19th century, or when electric motors became significantly cheaper and smaller in the 1920s, you see dramatic material and societal change. When something becomes cheap enough, when its cost passes a certain tipping point, it quickly joins the toolkit of things we create our world with. It becomes a design material. This capability of everyday objects to make autonomous decisions and act using arbitrary information is as deep an infrastructural change in our world as electrification, steam power, and mechanical printing. Maybe it's as big a deal as bricks. Seriously, it's a huge change in how the world works, and we're just at the beginning of it.
Now what does this mean for the user experience design of such devices?

First, let me define user experience as it relates to ubicomp. In 2004 Peter Boersma, now of AP's Amsterdam office, defined UX design as a combination of eight disciplines. Interaction design, Information Architecture, marcomm, usability engineering, visual design, information design, copywriting and CS. This is a very accurate description of primarily screen-based experience design.

When working with ubiquitous computing devices, the landscape adds to this. In addition to all of the things Peter mentioned in 2004, there are now considerations of the physical design of devices, what services they connect to, how they operate in space, how they're created as marketable products, and what the technical capabilities of the specific technologies involved are, since these devices can no longer assume a generic computer platform.

In other words, we're moving from a world where the basic building blocks are pixels controlled by a single processor, where the design challenge is how to arrange the pixels appropriately and how to change that arrangement through time, to a world where the basic building blocks are made of atoms and controlled by many processors. In this world, the main challenges are what shape to make out of the atoms and how to get the blocks to talk to each other. This is a picture of literally digital building blocks made by Modrobotics, but the same principle applies when you have a phone, a smart TV, a tablet, and a self-checkout kiosk at the supermarket, and you'd like them all to work together in some way.

There are some big UX trends that I feel are important to understand with our current environment of networked objects.

The result of cheap processing is a shift from generic devices and software to specialized devices and software. When computing was expensive, you had one or two general-purpose devices that had to deal with almost every situation. This necessitated design compromises that resulted in devices and software that could do almost everything, but did none of it well, and UX design was always a set of compromises about creating functionality within the constraints of an OS, an application environment, or a browser. Now that processing is so cheap, this is no longer true. You can now have a high degree of specialization. Your tool is now a tool BOX, a combination of 10, 20, or 30 computing devices and apps that you get for the price of that one expensive device ten years ago. You acquire new functionality as needed, and every device and unit of software has a narrower purpose. This fragmentation then creates a new set of challenges for users, which in turn become challenges for designers. Users no longer have to keep two sets of UI standards in mind, one for the device and operating system and the other for the application. Use is much more direct. You pick up a PSP, you know what it's for. You launch the CNN app and you know what content to expect there. However, it now creates a burden of deciding WHAT to put in your toolbox. We only have so much room in our backpacks and app docks. We're encountering new problems, such as finding which app or device does what we want, not which menu. It's a findability problem on the macro scale (should I buy a Kindle, a Nook, or an iPad?) and on the micro scale (which app actually has a decent algorithm for finding gas stations near me?). I recently saw an internet-connected washing machine that could download new wash cycle apps. Crap. Now I have to make THAT decision too?

To this we add the effect of widely networked devices, which is to move value away from the local environment to a remote one. The lasting legacy of the Web is a shift in people's perception of the value of digital technology from being primarily local to being primarily remote. The Web demonstrated that moving functionality online enables access to more compute power, continuous updates, real-time usage analytics, and (of course) social connections. It also created a shift in people's expectations. Today, most people understand that the experience you see on one device is often a part of something that's distributed throughout the world. There's no longer a need to pack everything into a single piece of software, and there's no expectation that everything will be there. Again, this is great, but it too has created new problems. Perhaps it's temporary, but it's disconcerting for me when an application on my Android phone changes functionality after an overnight update. I'm not used to applications that I own shifting functionality. I don't use it as a net app; it's a note-taking app or something, and it's ON my phone, and suddenly it changes behavior. What if I don't want all of the changes? Our supposedly stable device, the thing we own, becomes a kind of slippery eel of changing functionality, not necessarily with our permission.

If we chart these two trends, two broad classes of digital products emerge. If we follow the general to specific axis, we see a shift to more narrow-function devices that are designed to do a small, specific set of things really well. They primarily differ in what those specific things are. I call these devices appliances. If we follow the local to remote axis, we find general-purpose devices that do roughly the same set of things, and differ primarily in size. They exist to provide access to online services, in a form factor that's appropriate to the context in which they're used. I call these devices terminals. Now, while the digital world as a whole has seen an increase in the kinds of digital devices that exist, the consumer electronics industry has, on the whole, moved in one direction.

Let me look at consumer electronics a little closer, since they're the primary source of digital devices in our lives, and they're undergoing an enormous change that's becoming a huge challenge. Consumer electronics companies used to make appliances that were known as brown goods (in contrast to the white goods in the kitchen and laundry room). These goods had a specific, narrow function. A TV was never going to be anything but a TV, a VCR never anything but a VCR, a computer nothing but a general-purpose computing device. Now consumer electronics is largely the business of building variations on the same thing in different form factors, tuned to different use contexts. The devices themselves are nearly interchangeable in terms of what they can do, but they're tuned to specific ways that people use them, whether it's traveling, sitting in a living room or in focused interaction. This is great, but I think there's a limit to the effectiveness of this. Eventually, we're going to run out of rectangle form factors. We already have four (TVs, tablets, laptops, and phones) and we'll maybe get one or two more. Perhaps watches on one end of the scale and movie screens on the other. The interchangeability of terminals is really becoming a challenge. The reasons for buying one terminal versus another are becoming either about price, which HP just found out about, or they're as technically esoteric as picking among wines. You either have to really know what you're doing, you'll pick the one that's cheapest, or you'll pick the one that seems easiest to buy. UX can play a huge role in this, and it's one of Apple's big advantages, of course, which is how they pull themselves out of the Android phone and connected TV game. Before: narrow-function appliances. Now: using embedded computing, any device with networking and a screen is nearly interchangeable.

Back to this diagram. I think that there's an even larger shift going on where devices are simultaneously specific AND deeply tied to online services. In this model, the service provides the majority of the value, and can be represented either as an inexpensive dedicated hardware device, an app running on a terminal, or anything in between. It's an approach that combines the precision of appliances with the flexibility of terminals to create a fundamentally new class of products that can fill every possible niche where a service may be appropriate. I call these devices service avatars.

As value shifts to services, the devices, software applications and websites used to access it (its avatars) become secondary. A camera becomes a really good appliance for taking photos for Flickr, while a TV becomes a nice Flickr display that you don't have to log into every time, and a phone becomes a convenient way to take your Flickr pictures on the road. Hardware becomes simultaneously more specialized and devalued as users see through each device to the service it represents.

In effect, we now see through networked, service-dependent devices and software to the cloud-based services they represent. We no longer think of these services as being online, but as services that we can access in a number of different ways, unified by brand identity and continuity of experience. This is a fundamental change in our relationship to both devices and software, since the expectation is now that it's neither the device nor the software running on it that's the locus of value, but the service that device and software provide access to. This is how the local-to-remote axis links up with service design to create a new kind of user experience challenge, one that's simultaneously about creating effective local experiences and integrated services across channels.

For example, you can now get Netflix on virtually any terminal that has a screen and a network connection. You can pause a Netflix movie on one terminal and then unpause it on another. This may feel a bit novel, but it also seems natural. Why?

Because to the Netflix customer, any device used to watch a movie on Netflix is just a hole in space to the Netflix service. It's a short-term manifestation of a single service. The value, the brand loyalty, and the focus is on the service, not the frame around it. The technology exists to enable the service, not as an end in itself.
Netflix appliances are created for a single reason: to make it easier to access Netflix. That's what Roku does. It turns every terminal that's not already Netflix enabled into a Netflix terminal. The Boxee box does that for the Boxee service. The new Apple TV does it for iTunes.
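That hole-in-space quality comes from a simple architectural fact: the state lives in the service, not the device. Here is a minimal sketch of the pattern (all names are hypothetical; Netflix's real system is of course far more involved):

```python
# A "service avatar" sketch: the service owns the playback state, so every
# device is an interchangeable window onto it. Names here are invented.

class PlaybackService:
    """The service is the locus of value; devices just read and write it."""

    def __init__(self):
        self._positions = {}  # (account, title) -> seconds watched

    def pause(self, account, title, position):
        # Any avatar reports where the viewer stopped.
        self._positions[(account, title)] = position

    def resume(self, account, title):
        # Any other avatar picks up from the same point.
        return self._positions.get((account, title), 0)

service = PlaybackService()
service.pause("jan", "Metropolis", position=1312)  # paused on the TV
print(service.resume("jan", "Metropolis"))         # resumed on a phone: 1312
```

The device that pauses and the device that resumes never talk to each other; both only talk to the service, which is why any screen works equally well.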

Another example is the Kindle. Here's a telling ad from Amazon for the Kindle, another pure, and largely terminal-based, example of a service-avatar-based user experience. This ad is saying: "Look, use whatever avatar you want. We don't care, as long as you stay loyal to our service. You can buy our specialized device, but you don't have to." I really like the Kindle avatar experience, too. You can read on the phone on your way home, close the app, open it on your laptop and it picks up where you left off on the phone. You don't think of it as two separate things, but as one thing that exists in two places.

Let me give you another example. This is Vitality's Glowcap, which is a wireless network-connected pill bottle appliance that's an avatar to Vitality's service for increasing compliance to medicine prescriptions. When you close the cap, it sends a packet of information through a mobile phone-based base station to a central server and it starts counting down to when you next need to take your medicine. When it's time, it lights up the LED on the top of the bottle. However, the real power is in the packet of data it sends. That packet opens a door to the full power of an Internet-based service. Now Vitality can create sophisticated experiences that transcend a single piece of software or a single device.

For example, another avatar of the Vitality service is an online progress report that can be used interactively or delivered by email. It's like Google Analytics for your medicine.

Health care practitioners get yet another avatar that gives them long-term and longitudinal analytics about compliance across medications and time. To me, this kind of conversation between devices and net services is where the real power of The Internet of Things begins.

Vitality has developed a complete system around this service that includes a social component, and different avatars for patients, patients' families, health care practitioners and pharmacies. Each avatar looks different and has different functionality, but they're perceived, and designed, as a single system.

Another example. Nikeplus started as a service with a couple of very simple avatars: the iPod, a shoe sensor and a Web site. Now the service has morphed to encompass a wide variety of devices, use contexts and uses. They've even gamified the experience, so now you can play a game where you capture territory based on your exercise performance. Once the core value of the service was defined (in this case the automatic collection, analysis and sharing of physical fitness data) and a couple of core use cases were worked through, they could build and extend the platform in a relatively straightforward way into whatever they believed was an appropriate new use context.

There are now many examples of services that have hardware avatars in the physical fitness and health space. There's the Withings connected scale, Green Goose's bike computer, Fitbit's pedometer and Zeo's sleep sensor, all of which depend on their online components to create their core value.

Let me change gears here and introduce a second major concept, machine readable digital identification and tracking, that I think is very important when thinking about how to design ubicomp networked objects. Manufactured things have long had identifying marks, from silversmiths' hallmarks to barcodes. These are the link between the object and information about the object and every object that has one exists simultaneously in the physical world and in the world of data. Photo CC from http://www.flickr.com/photos/dumbledad/298650884/

I call this data the object's information shadow. Until recently, accessing the information shadow was very difficult. The world of objects and the world of information shadows were separated by the difficulty of access. In a store, you didn't know what the barcode meant, the store did, because only the store had the database and the hardware. And even they only knew a small part of what's going on because a barcode only identifies the class of objects, not the individual object.

When Amazon extended ISBN to create their ASIN system they suddenly allowed anyone to reference any product Amazon sells or has ever sold. Tom Coates likened such codes to handles that we can use to grab information shadows and do interesting things with them, such as having conversations about them, or getting more information about them. Amazon has built a large portion of their business around the fact that people point at their objects in a million ways, but at the core of that is always the ASIN. The tipping point here is that we're about to enter a world where we can not just point at objects, but have digital conversations with them by querying their information shadows.
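Coates's "handle" metaphor is quite literal: a unique code like an ASIN is, in effect, a key into a shared database. A tiny sketch (the code and the catalog entry below are invented for illustration, not real Amazon data):

```python
# An information shadow is just structured data keyed by a unique handle.
# The ASIN-style code and the catalog contents here are hypothetical.

information_shadows = {
    "B00EXAMPLE1": {             # a made-up ASIN-like handle
        "name": "Kindle",
        "category": "e-reader",
        "reviews": 4281,
    },
}

def grab_shadow(handle):
    """Pointing at an object = looking up its handle in the shared database."""
    return information_shadows.get(handle, {})

print(grab_shadow("B00EXAMPLE1")["name"])  # prints "Kindle"
```

Everything interesting (reviews, conversations, recommendations) hangs off that one lookup, which is why owning the handle namespace is so valuable.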

For example, wine has a very rich information shadow. There's a huge amount of structured information about where it was made, what it's made of, how it was made, what critics think about it, etc. In addition, every bottle is a social object. There's a community of collectors, aficionados, etc. You can see some of the possibilities when you look at all the information about a single object in Amazon. That, however, is about a class of objects.

We now have the technology to uniquely see the information shadow of every object you're looking at. Each object is unified with its information shadow and you can query it. You can now know where it was made, whether it's a real Gucci, what it's made of, what your friends think of it, how much it sells for on eBay, how to cook it, how to fix it, how to recycle it, whether it will go with your mother's drapes, whatever. Any information that's available about an object can now be available immediately. Source: Yottamark

Until the recent past, there was a fairly clear distinction between an object, a digital representation of that object and the metadata about that object. Now that distinction has sufficiently blurred so that there is a range of objects that exist to varying degrees as information shadows. Some things have dematerialized almost completely. When was the last time you thought of a plane ticket as a physical thing? You obviously can't dematerialize a cantaloupe like that, but the blurring of physical and virtual objects caused by access to information shadows is transforming the world. This is the beginning of mashups between the physical world and the data world.
The previous examples were of relatively static ways of looking at information shadows: a unique ID, whether it's a QR code or an RFID creates a relatively straightforward local experience: one device, generally a terminal, reads an ID and then allows you to view or manipulate the information shadow that ID is applied to.
But the point of the first trend is that processing is becoming cheaper. Information shadows don't have to be static things, and the objects that they're the shadows of don't have to be static. Conceivably, every object that has an information shadow can update it itself. For example, you can check on the status of your Amazon order because hundreds of devices, hundreds of appliances, are being used to track nearly every single atom Amazon is responsible for. Right now they're using barcodes. The FedEx SenseAware smart tag has a bunch of sensors, a GPS and the equivalent of a phone in it for sending data about where a package is and what conditions it's traveling in. It is an appliance for updating the information shadow of the package it's attached to with a wide range of telemetry. When you put any digital appliance together with a network and a cloud-based information shadow server you get [click] the internet of things, at least by my definition.

This is from Green Goose, a sensor platform based here in San Francisco. They sell these stickers that are actually tiny computers with a wireless transmitter and a sensor pack. They create information shadows for things that don't have them already. You create the meaning for the sensors. These are available right now.

This is a stick-on patch that measures temperature and then transmits it to NFC phones. It came out earlier this year. The company, Gentag, claims they're developing patches that can test for pregnancy, the AIDS virus, drugs, allergens and certain types of cancers. In real time.

Here's another one that was just announced by the University of Illinois that can monitor heart activity, brain waves, muscle activity, etc. Again, it transmits the telemetry wirelessly in real time to devices that then transmit it to the cloud.

Ok, so where do we get one of those cloud-based information shadow servers? Well, it just so happens that there are a number of them popping up right now. This is Pachube, a free service that allows any net-connected device to share an arbitrary data stream with any other device. It'll do the buffering, the protocol translation, the analytics, everything. You have it subscribe to an input stream, like an RSS feed, or you subscribe to an output stream, and off you go. It's essentially a platform for creating mashups with physical objects, for connecting information shadows together.

It was used this spring to connect tiny personal digital radiation dosimeters all over Japan to measure radiation levels to a resolution inconceivable before. The service was put together within several days by Haiyan Zhang of IDEO and several other folks, essentially creating a mashup between Google Maps and a thousand different hardware devices. This points to the real power of the combination of device identification and pervasive networking.
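The core of such a hub, stripped of Pachube's real API and infrastructure, is a publish/subscribe relay between feeds. An in-memory toy version (all identifiers here are invented):

```python
# A toy "information shadow server": devices publish datapoints to named
# feeds, and any subscriber (a map overlay, a chart, another device) gets
# them. This is an illustrative sketch, not Pachube's actual API.

class StreamHub:
    def __init__(self):
        self._subscribers = {}  # feed id -> list of callbacks

    def subscribe(self, feed_id, callback):
        self._subscribers.setdefault(feed_id, []).append(callback)

    def publish(self, feed_id, datapoint):
        # A dosimeter, scale, or pill bottle posts a reading...
        for callback in self._subscribers.get(feed_id, []):
            # ...and every subscriber receives it.
            callback(datapoint)

hub = StreamHub()
readings = []
hub.subscribe("tokyo-dosimeter-17", readings.append)    # a map overlay listens
hub.publish("tokyo-dosimeter-17", {"uSv_per_h": 0.12})  # a sensor reports
print(readings)  # [{'uSv_per_h': 0.12}]
```

The radiation map was essentially this pattern at scale: a thousand feeds in, one Google Maps subscriber out.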

What happens when you mix information shadows and service avatars? You get a blurring between what's a product and what's a service. When you can uniquely identify an object and attach it to an online service, you change the business model around the ownership of that object. It no longer has to be owned, but it can be an avatar of the service for as long as you're a subscriber to the service that it's an avatar of. The old phone network is the classic example of this. People did not own their own phones in the US until 1984, when the old phone system was broken up. The phone was your avatar to the system. To set that kind of system up then was incredibly expensive, but now it's much more affordable. We are now seeing what's being called in some circles the rise of the product service system, which is a system based on the delivery of value, rather than the sale of goods. Much of the product service system literature emphasizes sustainable manufacturing, but I think that's only a side effect of the dematerialization of everyday objects into service avatars.

Let me give you a couple of examples. When you buy into a car sharing service such as City Carshare or Zipcar you subscribe to a service. Each car is an avatar of its respective service, actively connected to the service at all times. You can only open the car and start the engine if the service allows it, when the car has your permissions in its information shadow. The car logs whether it's been dropped off at the right location, and how far it's been driven. Your relationship with these cars becomes something different than with rentals and with ownership. It's like having your own car because you have access to it 24 hours a day, 7 days a week, with very little advance notice, but you can't leave your carseat in it, because it's not yours. It's a different kind of relationship.

This is the German Call-a-Bike program, run by the rail service. You need a bike, you find one of these bikes, which are usually at major street corners. You use your mobile phone to call the number on the bike. It gives you a code that you punch in to unlock the bike lock. You ride the bike around and when you've arrived, you lock it. The amount of time you rode it automatically gets billed to your phone, by the minute. Each bike is an avatar of the bicycle service, its state maintained as part of its and your phone's information shadow. See where I'm going? Photo CC by probek, found on Flickr.
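The Call-a-Bike flow can be sketched as a tiny state machine whose state lives entirely in the service; the bike and your phone are just its interfaces. The codes, phone number and per-minute rate below are made up:

```python
import random

# A sketch of a Call-a-Bike-style rental service. The bike's state (locked,
# rented, by whom) lives in the service; the unlock code travels via phone.
# All specific values here are invented for illustration.

class BikeService:
    def __init__(self):
        self._rentals = {}  # bike id -> {"rider": ..., "code": ...}

    def request_unlock(self, bike_id, rider):
        """Rider calls the number on the bike; the service issues a code."""
        code = f"{random.randrange(10000):04d}"
        self._rentals[bike_id] = {"rider": rider, "code": code}
        return code

    def try_unlock(self, bike_id, code):
        rental = self._rentals.get(bike_id)
        return rental is not None and rental["code"] == code

    def lock(self, bike_id, minutes_ridden, rate_per_minute=0.08):
        """Locking ends the rental and bills the rider's phone account."""
        rental = self._rentals.pop(bike_id)
        return rental["rider"], minutes_ridden * rate_per_minute

service = BikeService()
code = service.request_unlock("bike-42", rider="+49-170-555-0123")
assert service.try_unlock("bike-42", code)       # rider punches in the code
rider, charge = service.lock("bike-42", minutes_ridden=25)
print(rider, charge)
```

Note that the bike itself needs almost no intelligence: it only has to verify a short code, while the interesting state (who, where, for how long, billed to whom) is part of its information shadow.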

Here's another example that points to some exciting possibilities. Bag, Borrow or Steal is a designer purse subscription site. It works like Netflix, but for really expensive handbags.

It's fashion by subscription. From a user-centered design perspective, it's great. Here's a class of infrequently-used, highly desired, expensive objects whose specific instantiation changes with the seasons. You don't want a specific bag as much as you want whatever the current appropriate thing to fill the dotted line is, but actually keeping up with that fashion is expensive. This service, btw, is also about five years old. Photo CC by bs70, Flickr
Here's another one called Rent the Runway that has expanded this idea to dresses and accessories.

How long until you get a subscription to the Gap and, instead of buying your clothes, you just pay a monthly fee to get whatever is seasonal for your type of work in your part of the world at your price point? We already have Exactitudes and people seem quite comfortable with it. Why not turn it into a subscription business model for the Gap?

My goal in this review was to describe the general lay of the land in ubiquitous computing. We're working in a complex environment with a number of interrelating factors, each of which represents both an opportunity for innovation and a challenge to the status quo in the design of consumer electronics. Things are moving fast and shifting our view of the world as they go. In effect, what we're seeing is the discovery of a new design material. Networked information processing is changing from being a special thing that certain specialized devices do, to being a core building block, like plastic or aluminum, and a basic manufacturing process, like standardization in the creation of anything. This is a huge and fundamental change, and we're just at the beginning of it. All of these other things are just symptoms of that one deep shift that we're going to see play out for the rest of our lives.
Now I'd like to do a tour of technologies that are maturing quickly into commercial-grade solutions and which have powerful capabilities. As we all know, there are a million new technologies appearing all the time, and the vast majority of them go nowhere because the technology is not as easy to implement, or as powerful, as initially advertised. Many just have bad timing. There were a bunch of very clever things that CD-ROM makers did with their drives just before that medium became obsolete.

Gartner tracks these in the form of their hype cycle, and you can watch as various things go on it and fall off it. It describes the process technologies take in the public eye as a kind of hero's journey. Here's one from 2009. The technologies I want to show you are in no particular order, but they're ones that have either made it past the Trough of Disillusionment or that I believe will be able to make the jump to the Slope of Enlightenment: a short tour of ubiquitous computing technologies that are getting significant market penetration. You're probably familiar with some of these.

NFC, or near-field communication, and RFID are related technologies. RFID is essentially a one-way version of NFC. It assumes a dumb device on one end and a smart one on the other. NFC assumes two smart devices. You're all familiar with the basic idea of RFIDs, since they're the things in your building access cards and have been around for many years. The big plus of RFIDs is that they can inexpensively and uniquely identify virtually any object so you can get at its information shadow. The downsides are that they can be replaced pretty easily with QR codes and other optically-readable and even cheaper identification schemes and, most importantly, that the range of action is very small. This is another image by Timo Arnall and the design firm BERG in London that shows the shape of the active reading area on an RFID reader. You can see it's on the order of inches, which is typical for the technology. NFC is about the same. The nice thing about NFC is that it allows touch-based interaction: you can exchange data by just touching two devices. That way you can use your phone, which is a trusted personal device, to introduce your TV to your camera or to replace your credit card. NFC is going to be built into a lot of phones in the next year with the intent of it becoming the basis of a new form of payment. We'll see, but it'll get the technology out there.

Fast processing has enabled the practical deployment of algorithms that can understand the content of non-textual digital data, so an image is not just seen as a bunch of pixels, but as a collection of meaningful objects. This is an incredibly hard problem that AI has been trying to solve for decades, but there's some real headway being made. Recognizing that there's a face in an image is now standard on most cameras, and there's a lot of progress being made in terms of recognizing whose face it is. Identifying brand logos is pretty standard, as is recognizing specific landmarks that appear in photos. The same kind of unique fingerprint extraction is happening in audio. Google has an audio API that is very good at deciphering what you're actually trying to say, or what song is playing, or what movie is playing, etc. We're not yet at the point that the software can tell the difference between a fluffy white dog and a snowscape, which is trivial for people, but it's getting closer.

One particularly interesting application is automatic face beautification. If software can identify facial features, and it has an idealized model of what people's faces look like, it can adjust photos and videos to make the face it recognizes match the idealized model without affecting the surrounding area. Thus, as far as anyone but the people who see you in person is concerned, you can always have beautiful skin and perfect features.

I talked a little about systems on a chip earlier. Putting the equivalent of a set of different kinds of processors on a single chip is a relatively new, but robust and popular, chipmaking philosophy. The microprocessor manufacturers have things called cores that are like object-oriented chip descriptions. You want an ARM processor, Nvidia video, digital-to-analog conversion and a touch screen driver on a single piece of silicon? A chip fab will make you a single chip that has all of those components. You no longer have to go with what Intel or AMD will sell you. That's why Apple now has their own custom processors manufactured. The chip that's in your iPad or iPhone is not an Intel chip, it's made exclusively for Apple so that they can get exactly the functionality they want at low power and, incidentally, control the user experience all the way down to the silicon, so iOS doesn't run on anyone else's processors.

Tiny LED and MEMS-based projectors are coming out on a regular basis. You can now include a video projector in something the size of a sugar cube.

They still have heat and brightness issues, but that's improving every year. The great thing about these is that they mean that you can put small images everywhere. You can turn any surface into a display. Some even have motion tracking, so you can turn any surface into a multi-touch display. I've seen phones that have projectors built into them, but I think that the potential is much greater than that.

If you think that Moore's Law has made CPUs fast, you should check out GPUs, graphics processing units. That on the bottom is Moore's Law. In terms of raw processing power, in this case measured in gigaflops, they kill normal CPUs. The reason that we don't just use them for all processing is that they're designed as highly parallel processing machines. Writing parallel processing code is difficult and tasks that aren't easily parallelizable won't run faster on a GPU than on a CPU. http://www.r-bloggers.com/cpu-and-gpu-trends-over-time/

The upside is that it's possible to do all kinds of things with graphics, from layering generated scenes onto the existing world, which is what a lot of augmented reality applications do, to creating sophisticated visual effects for interfaces. Taking the Hidden Middle philosophy, this means that it's possible to take what were state of the art graphics five years ago and incorporate them for a small fraction of the cost and power consumption, and we can assume that this will continue in the future. This is also the technology that will allow a lot of the content retrieval techniques to be applied in a general sense, so that your phone for example will recognize who in all your photos appears multiple times and will offer to cluster such photos or videos together. Or it can recognize voice patterns and identify who is at a table talking. Or know the shape of the room you're in based on the echoes sound makes. That's what the Color application that got all the venture capital is aiming for, as I understand it. They're planning to collect a bunch of data about you, images, sound, location and use these algorithms to figure out who was where when so as to create an automatically generated social graph.

Microsoft's Kinect is a big hit, and rightly so; the technology is clever and the implementation is pretty great. The Wii, its predecessor, did a great job, too. I doubt that we'll be running around like dancing chickens to control our devices in a general way soon, but I do believe that gestural input is here to stay. We do not yet have a stable vocabulary of meaningful gestures or the technology to recognize them under all circumstances, but I think there's great potential there. Apple is very carefully and slowly introducing multitouch into its operating systems and I think that's a good approach. I think that gestural input, or a kind of virtual direct onscreen manipulation, is a great fit for 10-foot interaction, while accelerometer-detected gestures with objects are a good way to give individual small objects input.

I also really like the multi-person experience of large multi-touch interactive surfaces, since they allow for the use of many small screen avatars at arbitrary sizes, and I think there are interesting possibilities when this technology is mixed with picoprojectors. This is Stimulant Design's wall-size multitouch display for HP.

Increasingly, digital reactivity is being included in special materials. This is probably the most speculative technology here, since there are actually many technologies that fall under the umbrella of smart materials. These range from wall panels with embedded LEDs to paper that changes color based on how electricity is applied, to shape memory alloys that change shape based on how much heat is applied, to ceramic sandwich flooring that generates electricity as you walk on it. This is a table by lighting designer Ingo Maurer.

This is a luminescent fabric from lumigram. From http://www.lumigram.com/

Since batteries are so inefficient, people have been looking at other ways to store energy. One is ultracapacitors, which use an entirely different method to store energy than do batteries. They recharge very quickly, on the order of seconds rather than hours, they can be coupled with an energy harvesting system that takes vibration energy or heat energy and trickles it into the battery, and they can store about half the amount of charge a conventional battery of the same size can store. The problem is that they're still really expensive and the price doesn't seem to be falling very quickly.

Finally we have cloud-based services, which make virtually any kind of processing you can do locally with a computer available remotely, usually orders of magnitude faster and more efficiently than you could with a local device. This is what has enabled the explosion of so many startups in the last couple of years, because it means that many hard problems are solved by just paying someone per transaction to solve them. This, more than just connectivity between arbitrary devices, is what enables ubiquitous computing to have crossed the tipping point to viability. It certainly still has its problems (the interdependence of so many services means that you're entrusting a lot of the back end powering your user experience to people you don't personally know), but it enables rapid deployment and iteration of ideas. Image Source: Cloud Connectivity and Embedded Sensor Networks

This is a bit of a grab bag of ideas, but I think it's a good toolkit to start thinking about how these technologies can be used in designs to create profoundly new experiences.

Finally I want to list a number of application domains where there's a lot of interesting work going on. Again, this list is idiosyncratic to my perspective, but these are the domains where I think some of the most interesting work is happening.

Wearable computing is the idea of using clothing or jewelry as a part of your computing ensemble. In sports people regularly wear specialized clothing or devices that collect data about their performance. These are Ruth Kikin-Gil's buddy beads.

This is one of my favorite ubicomp products from the toy world. It's called Clickables and it's a product from a Hong Kong company called TechnoSource. It's part of Disney's Fairies initiative. Source: Disney Clickables

Here's one of the ways that works: when two kids put their Clickables bracelets together, their avatars link up in Pixie Hollow, the online social network associated with the Fairies brand. This bridges the physical world of kids with their social network in a transparent and familiar way. All of the products in this line have such an online-offline existence. Another example: when you get one of the charm bracelets and you touch the charms to the USB-connected jewelry box, your fairy avatar gets a version of the same charm.

I think of appliances as a specialized kind of furniture, and the appliance business has been working hard, although not particularly successfully, at creating ubiquitous computing devices. This is a series of Internet connected appliances by Salton, the people who brought you the George Foreman Grill. It was an experiment from about five years ago that points to some interesting ideas, but never quite got the UX right. For example, the microwave has a barcode scanner built in: when you scan some food, it goes out to their server, gets the cooking instructions and programs itself. That's nice, but how hard is it to read the back of a box and type in one number?
I think that what's more interesting is expanding the notion of furniture and our understanding of appliances by incorporating digital technology into them. This is Jean-Louis Frechin's bookshelf that's also a mirror of your text message feed.

Speaking of furniture, cars are a kind of room that moves around, full of technologically augmented furniture. People expect them to have lots of technology in them, and I think that auto companies, which have effectively been making ubicomp devices for twenty years, are finally starting to figure that out.

Cars and mobility and the relationship people have to cities are also a large area of development. IBM's Smarter Cities initiative is a ubicomp initiative. They're treating cities as a mesh of different kinds of networks: social, infrastructural, financial. And they're looking for ways to inject technology into those networks at a massive scale to create what are essentially new kinds of utility services based on ubiquitous computing.

One byproduct of all of this information display embedded in the environment is the treatment of information display as an esthetic experience. I think this is going to go further and we're going to use electronics as pure decoration, as has happened with pretty much every material before it. You can call it Vegasification, but I think it's pretty exciting that the surfaces of our world can begin to shimmer, move and react just to be beautiful. Image: UFO by Cinimod Studio and Peter Coffin, 2009

Moving from things to people, one of the major current uses of ubicomp is in technologies for behavior change. These come in many flavors, from Green Goose's original model, which converted the bike miles you ride into the dollars you would have spent driving your car, to a lot of health-related products. Here's Bodymedia. They tell you the technology can help you go from couch potato to hot potato. How is it going to do that? It's a combination of quantified-self tracking and gamification, where they create a set of game-like extrinsic rewards based on automated sensing of body state. There are other, more subtle interventions.
This is the Water Pebble. It aims to reduce water usage by timing your shower and telling you when you hit your designated shower time. The computational part comes in when, after a while, it starts slowly reducing the amount of time it gives you, so that you progressively build a habit of using less water.
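The Water Pebble's nudging strategy can be sketched in a few lines. This is purely illustrative: the real device's firmware isn't public, and the 3% reduction rate and 4-minute floor here are assumptions, not its actual parameters.

```python
# Hypothetical sketch of the Water Pebble's adaptive timer
# (reduction rate and floor are assumptions, not the real firmware).
def next_target(current_target_s, reduction=0.03, floor_s=240):
    """Shrink the allotted shower time slightly each use, down to a floor."""
    return max(floor_s, current_target_s * (1 - reduction))

target = 600.0  # start with a 10-minute shower
for _ in range(30):  # each shower's target is a bit shorter than the last
    target = next_target(target)
```

The key design idea is that no single reduction is noticeable; the habit change comes from the accumulation of tiny, sub-perceptual adjustments.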
The Asthmapolis project uses GPS to track where people use their inhalers. This is designed to give users some information about when and where they typically have asthma attacks, so that they can change their behavior, while at the same time producing heat maps of high-asthma areas for health researchers. A lot of personal health technology is essentially behavior change technology, and the same kinds of ideas apply to it as to physical fitness or eco-consciousness.
I wanted to finish with the ubicomp initiative that has probably had the most investment in the last five years and with which you're probably most familiar. This is, broadly speaking, the push to use ubicomp technologies to provide access to infrastructure services. This means paying for things, opening doors, tracking utility usage, etc. This is already a pretty embedded part of our world, and is becoming increasingly so. That's an early Nokia NFC payment image, the Bay Area Clipper card and a ZigBee smart meter home visualization device. It also points to how these technologies insinuate themselves into life. They usually don't arrive as a big splashy new device with a UX that must be mastered to get at the awesome new functionality, but as an incremental digitization of everyday things that creeps along until, suddenly, mint.com can give you instant access to the information shadow of all of your financial transactions at a level you didn't know was possible, but which was happening all along.
This list, and this presentation in general, is absolutely not exhaustive. I wanted to give you all an overview of the possibilities, the uses and the challenges of the technology, and to broaden the focus a bit beyond classic consumer electronics paths. As we go through the ideation and vision definition process, we may well end up in the expected places because of other constraints, but I want to encourage all of you to think about not just the products we're making or going to be making, but the ecosystem of devices and services that all of these products exist in. The more we can design something that fits into a larger ecosystem, and perhaps defines a novel and valuable new part of that ecosystem, the more successful we'll be.
The internet refrigerator has been reinvented approximately 50 times in the last 15 years, but has yet to get ANY traction at all in the market. Why?

What made the iPod successful? Unlike smart fridges, MP3 players were a known success, but why did none of them become a runaway hit the way the iPod did?
So how do we invent artifacts from the near future? How do we mitigate the risks so that we're more likely to make iPods than internet fridges? There's no obvious way to do that successfully, of course, and this whole field is so new that there are no best practices. This afternoon is an opportunity for all of us to explore some of these ideas together and see what happens.
First, let me plug a book that you may have already read. It's Bill Buxton's Sketching User Experiences. If you only read one UX book this year, this is the one. It's a great description of a way of thinking about how to create novel user experiences.
It starts with ideation. I'm not going to give you instructions about how to ideate, but the idea is to have lots of ideas. This is Martino Gamper's 100 chairs in 100 days project, where he made 100 chairs from parts he found on the street, one per day.
One technique is extrapolation. That's where you take a piece of data that you have and project it to get something new. One effective form of extrapolation is to multiply something by 10 or 100. That's what Weiser was doing in 1988: what if we took this computer that's on our desk and said we're going to have the equivalent of 10 of them? How would that change our lives? What if that camera becomes 1/100 as cheap? What if we have 1000 computers embedded in that wall? Another extrapolation technique is projection across demographics and time. For example, the folks whose 20s were spent on Facebook and in World of Warcraft are probably going to have different expectations around personal information and narrative than people from earlier generations. Let's map the attitudes and behaviors of kids on 4chan to the office environment of 2020. What does that look like? We don't know for sure, but when doing ideation, we can assume that some mapping will occur and use that as the basis for identifying problems people may be experiencing that can be solved with technology. Image: N-Trophy, 2000-2003, Kelly Heaton, Feldman Gallery: http://www.feldmangallery.com/pages/exhsolo/exhhea03.html
A second technique is thinking about different scales. Computers traditionally have interfaces that are person-scale, but there's no reason that has to be the case. At PARC under Weiser they defined the tab, pad and board as names for the scales of the devices they were developing. The iPad is an homage to that. From Flickr: watch by funadium, box by ubermichael, phone booth by rastrus, room by bigpinkcookie
This is the scale I've been using. It's a set of definitions to talk about granularity, and it helps us identify what works and doesn't work at various scales. Screens don't work when you approach the covert scale, which is why wrist TVs have never taken off. Buttons don't work well at the environmental scale and above, because they're too small relative to the object. You probably can't make anything that's designed to be immediately social at anything above the environmental level.

One of the biggest challenges in designing service avatars is moving an experience from one avatar to another. We've probably all had the experience where a piece of data is on our phone and getting it from there to our laptop seems impossible. You look at your phone and say: but it's just an inch away. Can't my phone just borrow that big screen so I can continue doing this thing I was doing?
That's what Pardha Pyla and Manas Tungare called a task disconnect. In your mind you have an idea of a single thing that you're doing, and you want to continue doing that thing with whatever tool is available. Spending the extra 10 seconds or 20 seconds or 5 minutes recreating your mental state in a new environment completely breaks your flow of thought. You've just experienced a task disconnect. How do you manage that transition? Pyla and Tungare give some very general guidelines in the paper where they describe the idea, but there's no clear answer.
To bring up Amazon again, when it brings you to whatever page you were last looking at in a different avatar, that's managing a task disconnect.
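The underlying mechanics of that kind of cross-avatar resume can be sketched very simply: every device writes the user's last position to shared server-side state, and any other device reads it back before continuing. This is an illustrative toy, not how Amazon actually implements its sync; the dictionary stands in for real server-side storage.

```python
# Minimal sketch of carrying task state across a service's avatars.
# The dict stands in for server-side storage shared by all devices.
state = {}

def save_position(user, book, page):
    """Record where this user left off, keyed by (user, book)."""
    state[(user, book)] = page

def resume_position(user, book, default=1):
    """Pick up where any other avatar left off, or start at the default."""
    return state.get((user, book), default)

save_position("alice", "smart-things", 42)       # done on the phone
page = resume_position("alice", "smart-things")  # read on the e-reader
```

The design point is that the task's state lives with the service, not with any one device, so crossing between avatars never requires the user to reconstruct it.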
Another, richer example. When you want to reserve your car from Zipcar, you see through the browser to the Zipcar service on the other side. As you walk, you can check the status and location of your car with an iPhone app. When you get there, your key fob works and the car opens. At no time, theoretically, has the service interrupted your flow of thought around the process of getting your car. You never say: oh, wait, now I have to do this other thing with Zipcar because I'm now using my phone. It's a very smooth experience precisely because they have managed the task disconnects well.
So how do you know where you need to manage those? Well one service design tool that's gaining popularity is the Swim Lane diagram. This one is by Iza Cross, who was a student until recently at the Savannah College of Art and Design's service design program. This type of diagram maps avatars of a service to what those avatars mean in terms of customer actions, service actions and back end technologies. I think it's probably the most useful of all of the service design tools right now in terms of understanding ubicomp UX design and it's a good way to organize what you know about the service you're designing. A task disconnect between two avatars happens when a task crosses between two of the vertical bars. Source: http://www.izacross.com/portfolio/attmobile

Thank you.

Tish Shute of Ugotrade generously invited me to present at Augmented Reality Event 2011 yesterday in a session on augmented reality user experience. My time slot was relatively short, and she challenged me to talk outside of the usual topics, so I chose to talk about something that's been interesting me for a long time: the use of non-visual senses for communicating information about the information shadows around us. In the process, I humbly decided to rename "augmented reality" (because I'm somewhat obsessed with terminology). My suggested replacement term is somatic data perception. Also, as an intro to my argument, I decided to do a back of the envelope calculation for the bandwidth of foveal vision, which turns out to be pretty low.

Here is the Slideshare version:

Scribd, with note:
Somatic Data Perception: Sensing Information Shadows

You can download the PDF(530K).

Here's the transcript:

Good afternoon!


First, let me tell you a bit about myself. I'm a user experience designer and entrepreneur. I was one of the first professional Web designers in 1993. Since then I've worked on the user experience design of hundreds of web sites. I also consult on the design of digital consumer products, and I've helped a number of consumer electronics and appliance manufacturers create better user experiences and more user centered design cultures.


In 2003 I wrote a how-to book of user research methods for technology design. It has proven to be somewhat popular, as such books go.


Around the same time as I was writing that book, I co-founded a design and consulting company called Adaptive Path.


I wanted to get more hands-on with technology development, so I founded ThingM with Tod E. Kurt about five years ago.


We're a micro-OEM. We design and manufacture a range of smart LEDs for architects, industrial designers and hackers. We also make prototypes of finished objects that use cutting-edge technology, such as our RFID wine rack.


I have a new startup called Crowdlight.


[Roughly speaking, since we filed our IP, Crowdlight is a lightweight hardware networking technology that divides a space into small sub-networks. This can be used in AR to provide precise location information for registering location-based data onto the world, but it's also useful in many other ways for layering information in precise ways onto the world. We think it's particularly appropriate for The Internet of Things, for entertainment for lots of people, and for infusing information shadows into the world.]


This talk is based on a chapter from my new book. It's called "Smart Things" and it came out in September. In the book, I describe an approach for designing digital devices that combine software, hardware, physical and virtual components.

Augmented reality has a name problem. It sets the bar very high and implies that you need to fundamentally alter reality or you're not doing your job.
This in turn implies that you have to capture as much reality as possible, that you have to immerse people as much as possible.


This leads naturally to trying to take over vision, since it's the sense through which we most perceive the world around us. If we were bats, we would have started with hearing; if we were dogs, smell. But we're humans, so for us reality is vision.


The problem is that vision is a pretty low bandwidth sense. Yes. It's possibly the highest bandwidth sense we have, but it's still low bandwidth.


This morning I decided to do a back of the envelope estimate of how much bandwidth we have in our vision. This is a back of the envelope estimate by a non-scientist, so excuse it if it's way off. Anyway, I started with the fovea, which typically has between 30,000 and 200,000 cones in it.


To compensate, our eyes move in saccades which last between 20ms and 200ms, or 5 to 50 times per second.


So this leads to a back of the envelope calculation of eye bandwidth between 100 bits per second and 10K bits per second.
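The arithmetic behind that range can be reproduced in a few lines. The saccade rates come from the figures above (20-200 ms per saccade); the bits-per-fixation values are my own assumption, chosen only to show how the quoted 100 bit/s to 10 kbit/s envelope falls out.

```python
# Back-of-envelope reproduction of the foveal bandwidth range.
# Saccade rates follow from the 20-200 ms durations quoted above;
# the bits-per-fixation figures are an assumption for illustration.
saccades_per_sec = (5, 50)       # one saccade every 200 ms ... every 20 ms
bits_per_fixation = (20, 200)    # assumed usable information per fixation

low = saccades_per_sec[0] * bits_per_fixation[0]
high = saccades_per_sec[1] * bits_per_fixation[1]
print(low, "to", high, "bits per second")
```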

That's many orders of magnitude slower than a modern front-side bus.

The brain deals with this through a series of ingenious filters and adaptations to create the illusion of an experience of all reality, but at the core there's only a limited amount of bandwidth available and our visual senses are easily overwhelmed.

In the late 70s and early 80s a number of prominent cognitive scientists measured all of this and showed that, roughly speaking, you can perceive and act on about four things per second. That's four things period. Not four novel things that just appeared in your vision system--that takes much longer--or the output of four new apps that you just downloaded. It's four things, total.

That's a long digression, but it leads to my main point, which is that augmented reality is the experience of contextually appropriate data in the environment. And that experience not only can, but MUST, use every sense available.


Six years ago I proposed using actual heat to display data heat maps. This is a sketch from my blog at the time I wrote about it. The basic idea is to use a Peltier junction in an armband to create a peripheral sense of data as you move through the world. You can hook it up to Wi-Fi signal strength, or housing prices, or crime rates, or Craigslist apartment listings, and as you move through the world, you can feel whether you're getting closer to what you're looking for because your arm actually gets warmer. This allows you to use your natural sense filters to determine whether something is important. If it's hot, it will naturally pop into your consciousness, but otherwise it will only be there if you want it, and you can check in while doing something else, just as you gauge which direction the wind is blowing by which side of your face is cold. You're not adding information to your already overstuffed primary sense channels.
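The mapping at the heart of the armband idea is just a linear scale from a data value to a heater drive level. This is a hypothetical sketch, not a real device driver; the Wi-Fi signal range in dBm and the 0-255 drive scale are assumptions for illustration.

```python
# Hypothetical sketch of the heat-display armband: map a data value
# (here Wi-Fi signal strength in dBm; ranges are assumptions) to a
# Peltier drive level, so "closer to what you want" feels warmer.
def peltier_level(value, lo, hi, max_level=255):
    """Linearly map value in [lo, hi] to a 0..max_level drive setting."""
    frac = (value - lo) / (hi - lo)
    frac = min(1.0, max(0.0, frac))  # clamp out-of-range readings
    return round(frac * max_level)

level = peltier_level(-60, lo=-90, hi=-30)  # mid-strength signal
```

Any scalar data feed could be swapped in for the signal strength; the point is that the output lands on a low-attention sense channel instead of competing for vision.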

If AR is the experience of any kind of data by any sense, then we have the option of associating secondary data with secondary senses to create hierarchies of information that match our cognitive abilities.

For me, augmented reality is the extension of our senses into the realm of information shadows, where physical objects have data representations that can be manipulated digitally as we manipulate objects physically. To me this goes further than putting a layer of information over the world, like a veil. It's about enhancing the direct experience of the world, not replacing it, and doing it in a way that's neither completely in the background, like ambient data weather, nor about taking over our attention.

So what I'm advocating for is a change in language away from "augmented reality" to something more representative of the whole experience of data in the environment. I'm calling it "Somatic Data Perception," and I'll close with a challenge to you: as you're designing, think about what data is secondary and which senses are secondary, and how the two can be brought together.

Thank you.


