
The fantastic folks at Interaction South America invited me to be the closing keynote of their 2011 conference. I took the opportunity to revisit themes of the relationship between products and services I had talked about before, focusing on the effects that the servicization of products has on the shape of products and trying to define some specific interaction design challenges associated with designing service avatars.

PDF
The slides and transcript (1.8M PDF)

Slideshare
Click through to see the transcript in the notes.

Scribd
Products are Services, how ubiquitous computing changes design

Presentation Transcript

Good evening

Thank you for inviting me. Today I'm going to talk about how products and services are merging as a result of cheap processing and widespread networking, and how these technologies are changing everything from our relationships to everyday objects, down to the shapes of the objects themselves.
First, let me tell you a bit about my background. I'm a user experience designer. I was one of the first professional Web designers in 1993, when I was lucky enough to be present for the birth of such things as the online shopping cart and the search engine. This is the navigation for a hot sauce shopping site I designed in 1994.
I'm proud of the fact that 16 years later they were still using the same visual identity.

Here's one of my UI designs for the advanced search for HotBot, an early search engine, from 1997. If you're wondering why Google's front page was so stripped down, I think it was because we did this.

I also helped in the design of hundreds of other sites.

And in 2001 I co-founded a design and consulting company called Adaptive Path.
I sat out the first dotcom crash writing a book based on the work I had been doing. It's a cookbook of user research methods.
I left the Web behind in 2004 and founded a company with Tod E. Kurt called ThingM in 2006.
We're a micro-OEM. We design and manufacture a range of smart LEDs for architects, industrial designers and hackers. We've also done a range of prototypes using advanced technology. Here's an RFID wine rack we did in 2007. It shows faceted metadata about wine projected directly onto the bottles.
Because self-funded hardware startups are expensive, I've simultaneously been consulting on the design of digital consumer products. Here are some for Yamaha, Whirlpool and Qualcomm.

I even still do some strategic web design as a user experience director. Here's the homepage for credit.com, who were great clients a couple of years ago.

The last couple of years my clients have been large consumer electronics companies. I can't tell you who they are or give you any details about the projects.

This talk is based on my most recent book, which is on ubiquitous computing user experience design. The book is called "Smart Things" and it's published by Morgan Kaufmann.

Three days ago, BERG London, which is a design consultancy, released this product. It's called Little Printer, and that's all it is. It's a little printer. It doesn't connect to a specific device. Instead, it connects to the cloud to print things from Twitter, FourSquare, The Guardian newspaper, etc. It doesn't need to be plugged into a network connection and it doesn't have an interface that looks like anything we're familiar with. It's not designed to print out your Word document. Instead, it's designed to give you a feeling of what is happening in your digital world. They describe it as more like a family member than a tool. What does that mean? Is it a joke? It's not a joke. They're totally serious.

We're going to see many more objects like this, digital things that don't look or behave like the computers we're familiar with. Tonight, I want to talk about the underlying forces that are coming together to create them and I want to encourage you to start thinking about interaction design not as something that happens on boxes with screens, but as something that brings together the physical and the digital.

I want to start by talking about unboxing. Many of you have probably seen unboxing videos or followed along a sequence of photographs as someone unwraps a device for the first time. Here's an intentionally old unboxing sequence I found on Flickr. It's from 2007.
Let's step back and think a bit about why this is interesting.
Unboxing is the documentation of the intimate experience of savoring the first time a person got to physically use, to touch, to own their precious new device. You, the viewer, got the vicarious thrill of seeing someone else's intimate experience.

The act of unboxing is a kind of a devotional act to the physical form of a digital object. We have grown up in a world where the physicality of objects matters. We want there to be meaning in the form of an object, in how it looks and feels. We want to experience it with our hands, not just our eyes. We want to know what the skin feels like, how heavy it is. Is it warm, cold, hard, soft? These things matter.

Photo: Brian Yeung

Five years ago, when that first set of photos was taken, the form factor of devices was still very important. We were at the peak of form factor experimentation. The basic value of mobile phones had been established and handset makers began to compete on the physical experience of their devices. The way that the device was shaped, how you held it, how it looked mattered.

This is the Nokia 7280 and the Philips Xelibri 4, both of which come from this era.

However, something happened along the way. The unboxing became pretty boring.
Today, when we look at unboxing images for the latest products, they all look basically the same. They're black rectangles in various sizes. Sure, each Android handset manufacturer has their own Android skin to make their black rectangle look different, but ultimately the physical objects are all trending toward the same size and shape.

Why? What happened in the last five years to change objects from these different, complex, sensuous shapes into flat black rectangles that all do the same thing?
What happened is that our objects have become less important than the services they represent. This shift in value, from physical objects to networked services, is huge and profound. It means that many of the physical things we've taken for granted are rapidly changing, new things are being created and our relationship to our world is rapidly shifting.

The shift of device focus to services represents a shift in the way that we relate to our things akin to what happened during electrification.
If you've ever used a wind-up record player or a treadle sewing machine, you know the wonder of the experience of a machine that's doing something complex, but doing it completely without electricity or gasoline. Those two substances, electricity and gasoline, are like modern magic. You don't really experience how they work directly. You can only see the effects that they have, so our relationship to electrical and gasoline-powered devices has an inherent leap of faith that somehow, somewhere inside the windings of a motor or in the pistons of an engine this invisible magic happens and the device works.
When you see a complex device that works on purely mechanical means, one that requires no magic substance, there's a feeling of incredible wonder, since your dependence on assuming the magic of electricity and gas is revealed.

That feeling is exactly the feeling our children will have about objects that aren't connected to the network. Our children will say, "Wait, you mean your cars didn't automatically talk to the net?" "How did they tell you when to fill them up?"

The simplest place to start thinking about this change is by looking at how expectations for user experiences on networked devices have shifted in the recent past.
When information processing and networking were expensive, computers had to be general purpose devices that had to deal with almost every situation. All the value was local. It was in the machine in front of you. That one tool was designed to cover every possible situation.

The software that ran on these computers also had to cover every possibility. The tools had to be completely generic and cover every imaginable use case.
However, that's no longer the case. Today processing is cheap. Our generic tools have become fragmented: they've been broken into pieces, and rather than buying one generic tool, you now have a tool BOX for the same price as that one expensive device ten years ago.

That device is also not isolated. Widespread networking and the Web created a shift in people's expectations. Today, most people understand that the experience you see on one device is often a part of something that's far away, that's connected to the world through some kind of digital back channel. There's no longer a need to pack all possible functionality into a single piece of software, and there's no expectation that everything will be there.
Moreover, we are increasingly accepting that the experience we get when we pick up a device and start an app may not be like the experience we had last time. The content or the functionality of a device is no longer stable, it's fluid and it's often not under our control. The device is no longer the container of the experience, but a window into it.
In other words, widespread networking has shifted our expectation of value from the device to the information that it contains, from the local to the remote.

If we take those shifts to their logical conclusions, we see that as information moves to the network, an individual device is no longer the sole container of the information. The information, and the value it creates, primarily lives in online services.
Devices become what I call "service avatars." A service avatar is a representative of a service, and a conduit for a service. You can give the device away without giving away the service. You can change it without changing the service. You can turn it off without turning off the service. None of that was true when the value was local.
For example, let's look at digital photography. If we take Flickr as our service, we see that a camera becomes a good tool for taking photos for Flickr, a TV becomes a high resolution Flickr display, and a phone becomes a convenient way to take your Flickr pictures on the road.

We now increasingly see THROUGH devices and software to the cloud-based services they represent. We no longer think of these products as being places we visit online, but services that we can access in a number of different ways, unified by brand identity and continuity of experience. We used to think of the Internet as a place we visit; now we think of it like the atmosphere, as something that is always around us. We don't have to visit it. In fact, we're surprised when we don't have it.

For example, you can now get Netflix on virtually any device that has a screen and a network connection. You can pause a Netflix movie on one device and then unpause it on another.

Because to the Netflix customer, any device used to watch a movie on Netflix is just a hole in space to the Netflix service. It's a short-term manifestation of a single service. The value, the brand loyalty, and the focus is on the service, not the frame around it. The technology exists to enable the service, not as an end in itself.
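The pause-on-one-device, unpause-on-another behavior comes from keeping playback state in the service rather than on any single device. Here's a minimal sketch of that pattern; the class and method names are my own invention, not Netflix's real API:

```python
# Illustrative sketch: the playback position lives in the service,
# not on any device, so any avatar can resume it.

class StreamingService:
    """Holds per-user playback state centrally."""

    def __init__(self):
        self._positions = {}  # (user, title) -> seconds watched

    def pause(self, user, title, seconds):
        # Any device reports progress back to the service on pause.
        self._positions[(user, title)] = seconds

    def resume(self, user, title):
        # Any other device asks the service where to pick up.
        return self._positions.get((user, title), 0)

service = StreamingService()
service.pause("ana", "Some Movie", seconds=1325)  # paused on the TV
print(service.resume("ana", "Some Movie"))        # resumed on a phone: 1325
```

The device is just a window: swap the TV for a phone and the experience continues, because nothing that matters was stored on the TV.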

Netflix appliances are created for a single reason: to make it easier to access Netflix. That's what Roku does. It turns any device that's not already Netflix enabled into a Netflix avatar. The Boxee box does that for the Boxee service.

Here's a telling ad from Amazon for the Kindle, which is one of the purest examples of a service avatar based user experience. This ad is saying "Look, use whatever avatar you want. We don't care, as long as you stay loyal to our service. You can buy our specialized device, but you don't have to."

Jeff Bezos is now even referring to Kindle Fire in exactly these terms.

Facebook and HTC have now partnered to make a Facebook-specific phone from the ground up. If Facebook is the primary service you use on the Net, why not have a specialized device for it?

My favorite example of a dedicated hardware avatar is still Vitality Glowcaps, which is a wireless network-connected pill bottle that's an avatar to Vitality's service for increasing compliance to medicine prescriptions. When you close the cap, it sends a packet of information through a mobile phone-based base station to a central server and it starts counting down to when you next need to take your medicine. When it's time, it lights up the LED on the top of the bottle. That glow is the simplest output as an avatar of the Vitality service. The real power is in the packet of data it sends. That packet opens a door to sophisticated experiences that transcend a single piece of software or a single device.
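The division of labor described here, a tiny event from the avatar and all the logic in the service, can be sketched like this. These names and the eight-hour interval are assumptions for illustration, not Vitality's actual protocol:

```python
# Sketch of the avatar/service split: the cap sends one small event,
# and the countdown logic lives entirely server-side.
import datetime

def cap_closed_event(bottle_id):
    # The avatar's only job: report that the cap was closed, and when.
    return {"bottle": bottle_id, "closed_at": datetime.datetime.now()}

class ComplianceService:
    def __init__(self, dose_interval_hours=8):
        self.interval = datetime.timedelta(hours=dose_interval_hours)
        self.last_dose = {}

    def record(self, event):
        self.last_dose[event["bottle"]] = event["closed_at"]

    def next_dose_due(self, bottle_id):
        # The server, not the bottle, decides when the LED should glow.
        return self.last_dose[bottle_id] + self.interval
```

Because the event lands on a server, the same packet can feed the glowing cap, the email report, and the practitioner analytics without the bottle knowing any of that exists.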

For example, another avatar of the Vitality service is an online progress report that can be used interactively or delivered by email. It's like Google Analytics for your medicine.

Health care practitioners get yet another avatar that gives them long-term and longitudinal analytics about compliance across medications and time.
To me, this kind of conversation between devices and net services is where the real power of The Internet of Things begins.

Vitality has developed a complete system around this service that includes a social component, and different avatars for patients, patients' families, health care practitioners and pharmacies. Each avatar looks different and has different functionality, but they're perceived, and designed, as a single system.

Our ability to digitally track individual objects, like pill bottle caps, and connect them to the internet is creating a profound change in our physical world. We can now take what we've learned in the last ten years about creating networked experiences and apply it to physical objects.

Today we have the technical ability to uniquely identify and track even the most disposable objects. This is a melon that's uniquely tracked using a sticker from a company called Yottamark. Their service tracks each individual melon back to the farm where it was grown, through every warehouse and truck. You can use this to check that it's fresh, that it was kept in appropriate conditions, and that the farm is genuinely the organic farm that's advertised.
Once you know what kind of melon it is, you can also automatically find out how to cook it, how to compost it, what recipes work well with it, what your friends think about it, etc. In other words, you can do the things with it that are familiar from digital content, but now with physical objects.
Source: Yottamark

I call this cluster of data on the internet about a specific thing that object's information shadow. Every object and every person casts an information shadow onto the internet, onto the cloud.
In a very real sense, once you can identify each individual melon, it becomes the avatar of a melon service that provides information to you as a consumer, allows the store to understand their logistics, and allows the farmer to understand patterns of production and consumption. In the same way that data about yourself changes your behavior, as Chloe talked about yesterday, data about the objects in the world changes the world.

Wrapping your brain around what this means can be difficult, so let me give you an example.

When you buy into a car sharing service such as City Carshare, Zip Car or Zazcar in São Paulo, you subscribe to a service. Each car is an avatar of its respective service, actively connected to the service at all times. You can only open the car and start the engine if the service allows it. The car logs whether it's been dropped off at the right location, and how far it's been driven. All of that is transparent to you, the subscriber.
It's a lot like having your own car. It's available 24 hours a day and you can just book one, get in it and go. However, your relationship to it is different than having your own car.
Instead of a car, what you have is a car possibility space that's enabled by realtime access to that car's information shadow.

This is the German Call-a-Bike program, run by the rail service. You need a bike, you find one of these bikes, which are usually at major street corners. You use your mobile phone to call the number on the bike. It gives you a code that you punch in to unlock the bike lock. You ride the bike around and when you've arrived, you lock it. The amount of time you rode it automatically gets billed to your phone, by the minute.

Each bike is an avatar of the bicycle service. Instead of a bicycle, you are now interacting with a transportation service that exists in the form of bicycles. You are not getting a thing, but the effect that the thing produces.

Here's another example that points to some exciting possibilities. Bag, Borrow or Steal is a designer purse subscription site. It's a service for expensive handbags. You don't normally carry a super expensive handbag all the time. You want it for a weekend, or for a couple of days. Through this service, you subscribe and get the latest purse delivered to you. You use it for a couple of days, or for however long you want, and mail it back. Next time, they'll send you another one.

Again, what you own is not an object, but a possibility space.

Here's another one called Rent the Runway that also does dresses and accessories.

How long until you get a subscription to Zara, and instead of buying your clothes you just pay a monthly fee to get whatever is seasonal for your type of work, in your part of the world, at your price point?
We already have Exactitudes and people seem quite comfortable with it. Why not turn it into a subscription business model for clothes?

For me, the process of creating a successful product is not limited to creating great visual experiences, or efficient, clear interfaces, but understanding how to make products fit into people's lives today and tomorrow.
When designing service avatars, a number of different design disciplines (service design, industrial design, visual design, even branding) come together and affect how we interact with avatars.
Since this is an interaction design conference, I wanted to identify some issues with service avatar interaction design to give you a feel for what the challenges and interesting opportunities are.

The first challenge is figuring out what an avatar won't do. When anything can do anything, when any avatar can computationally perform the same action as every other, you get a kind of design vertigo. What should THIS product do? What makes it different from that one?
A watch is a 20 centimeter interface, a phone is a 50 centimeter user interface, a TV is a 3 meter UI. They're completely different, but app designers, people who are making these terminals into avatars, are tasked with designing a consistent experience across all scales.
It's a nightmare.
To me, this means that one of the biggest service avatar interaction design challenges is deciding what a given device is NOT going to do.

But saying no is really hard. As Chloe talked about yesterday, consumer electronics companies add the equivalent of a tablet PC to the front of a refrigerator because it's technically easy. The problem is that they don't think through how this computer will make the refrigerator a better REFRIGERATOR.
If we think in terms of networked devices, we encounter the question of how a service avatar of an online service can make this fridge better. When Chloe presented her idea, she was absolutely correct in focusing on having the fridge know what food is in it so that it can become the avatar of an online grocery store service. The key insight is to create a service that focuses on what the fridge does, not what a computer can do. The challenge is to make the fridge an avatar of the service, not another general purpose computer that has to be managed.
As we've seen, no consumer electronics company has managed to do this successfully. I've been thinking about this for a long time and joking about it as a repeated failure. However, as I was writing this in the hotel room today, I realized that there is a model for this service that might just work.

It's the Hotel Mini-bar. So, Chloe, if we can figure out how to convert this model to something that everyone will want to have in their house, we've got a huge business waiting for us. Let's talk.

More practically, the Nest thermostat is a smart home thermostat that's an avatar of their online service. Yes, as a computer it's probably computationally the equivalent of an iPod Nano, but they're not trying to make another random small computer stuck to your wall. Instead, it's a networked thermostat. It doesn't do ANYTHING except try to be the best way to keep comfortable and save energy, using its status as a service avatar to do that.
They could have made it an invisible box that you control through an Android app, or a tablet that hangs in your hallway, but why? It's much easier to think of it as a thermostat. It's focused on the context in which it's used.

They also have other avatars for the same service. Each one is focused on maximizing the value that's possible in the context in which it's used. What is good about a computer with a high resolution screen? Well, you can use the large screen to see a complex schedule on it. The designers used the affordances that are available in the way that makes sense given what people want to do in the context in which the avatars are going to be used. It sounds like straightforward user centered design, but it's surprisingly hard to work out what the right context is, and where the right places to say no are, given everything that's possible.

A second key interaction challenge is how to manage service avatars' ability to behave on their own. When you had an unconnected computer on your desk, or a simple feature phone, you were pretty sure you knew what it was doing most of the time. The more connected a device, the more it does things without asking you, without you knowing. Designing interactions with devices that have their own behaviors is quickly becoming a significant interaction design challenge.
Let me give you a simple example. This is the Water Pebble. It's a shower timer that aims to reduce water usage. When you first use it, you push a button and take a shower. From then on it glows green while your shower time is fine, yellow when you're almost done, red when you should stop, and blinking red when you're really over. The interesting part is that, after a while, it starts slowly reducing the amount of time it gives you so that you progressively build a habit of using less water.
My personal experience with it, however, is that its algorithm for behavior change doesn't match my ability to actually change. It reduced the amount of time it gave me to shower, and I was following along with it, until my change curve deviated from its own. Instead of helping me change my behavior, it just sat there in the shower drain, blinking red and mocking me for not being good enough. I couldn't reason with it, I couldn't get it to change its algorithm to match my capabilities, so I stopped using it.
The interaction design challenge is how to let a user negotiate with a device that's making decisions for them. This is a simple ubiquitous computing device, but what if it were a service avatar that controlled the actual amount of water I used? I would now need to negotiate with the service itself.
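One way an algorithm like this could track the person instead of a fixed curve is to tighten the target only after the user actually meets it. This is purely my speculation about how such a device might work, not the Water Pebble's real firmware; the step size and floor are invented for illustration:

```python
# Sketch of a "negotiable" behavior-change algorithm: the target only
# shrinks when the user kept up with the previous target, so the
# device's curve never runs ahead of the person's actual change.

def next_target(current_target, actual_time, step=0.05, floor=180):
    """Return the next shower-time target in seconds."""
    if actual_time <= current_target:
        # User kept up: shrink the allowance a little (5% per shower here).
        return max(floor, current_target * (1 - step))
    # User fell behind: hold steady instead of blinking red at them.
    return current_target
```

A step further would be letting the user adjust `step` themselves, which is exactly the kind of negotiation the unmodifiable puck didn't allow.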

You can see how iRobot solved this with their Roomba robotic vacuum. They initially gave you four different ways, four different buttons, for selecting what kind of mission the Roomba was supposed to go on. Of course the robot can do much more than that, but they watched people use the robots and determined what kinds of activity were most requested, what kind of behavior you could expect from the algorithm.

Then they revised it based on further research, essentially down to one button. That's not minimalism for the sake of minimalism, it's saying no to functionality based on an understanding of context.

The next interaction design challenge is how to interact with data streams, rather than data files. Traditional computer devices produce files, and over the last 30 years we've developed a number of different mechanisms for dealing with them. Today's file browsers resemble search engines more than they do the original Mac Finder, and that kind of works. It's not great, but it's functional.
Service avatars, because they're autonomous networked devices, do not produce files. Their basic unit of data in a service is the data stream. They produce continuous streams of information, rather than single units of information. Think of it as a change from a world of static Web pages to dynamically generated sites. It's a completely different design philosophy.
Here's Pachube, an online data brokerage for what I would call service avatars. Each one of the 80,000 devices is producing a continuous real time stream of data.
How do you manage one of these? How do you manage twenty?
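One plausible answer, sketched here with invented feed names (not Pachube's actual API), is to treat each stream as a rolling window you summarize, rather than a file you open:

```python
# Stream-first interaction sketch: there is no "file" to open, only a
# continuous feed that you subscribe to and summarize at a glance.
from collections import deque

class StreamSummary:
    """Keeps a fixed-size window over a continuous feed."""

    def __init__(self, window=100):
        self.window = deque(maxlen=window)

    def push(self, reading):
        self.window.append(reading)

    def trend(self):
        # Reduce the raw stream to one glanceable number.
        return sum(self.window) / len(self.window)

# Twenty devices become twenty summaries, not twenty raw firehoses.
feeds = {name: StreamSummary() for name in ["power_meter", "bike_dock"]}
feeds["power_meter"].push(230.0)
feeds["power_meter"].push(250.0)
print(feeds["power_meter"].trend())  # 240.0
```

The design work is in choosing the summary, since nobody wants to read 80,000 raw feeds any more than they want to read every line of their bank statement.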

I think that the financial industry is a great place to look for models for dealing with data streams. Money is one of the oldest services with lots of well known service avatars from credit cards to ATMs to online shopping. There are a lot of good services out there that have very good interactions with streams of money. Mint.com collects the output of a number of different financial data streams and gives you lots of ways to see trends and to control what happens where.
Let's think of streaming video subscriptions. When people are subscribed to twenty different streaming video services, how do you help them manage that? Perhaps the answer is that we should start interacting with all service data like we interact with money.

Finally, we hit the last major interaction problem, which is that these devices can technically work together well, but in practice they're all separate. How can you design these avatars so they use their power and work together to make your life easier? How can you bridge devices to create a single experience that crosses multiple devices?

We're now starting to make headway, most notably in what's called "second screen" user interfaces. The TRON: Legacy Blu-ray, for example, has a companion app that listens to the soundtrack and synchronizes interactive content on a second device along with the movie. These are essentially two avatars for the same service, which is the delivery of TRON: Legacy. This is the beginning of multi-device, multi-screen user experiences.
Again, we're at the start of figuring out how interactions can span multiple devices that are simultaneously working together. Very soon, as we have toolboxes of devices rather than individual all-purpose devices, we're going to have to hook them together, and that's a fantastic interaction design challenge.

The last thing I want to talk about is the most speculative. I want to talk about the shape of service avatars.
Shape is a key component of the user experience. I'm really interested in how the physical shapes of objects change when they use new technologies, and I think we're about to see a big shift in the shapes of the objects in our world.

Let's start with telephones.
The old phone network was one of the first avatar-based services and you can see the effects of that relationship on the physical design of the devices.
If you look at an old phone, you see that it wasn't built for fashion or for flexibility. It was built for the most common use case, and it was built not for annual replacement but to minimize the need for repair. It was simple and modular, and its internal parts didn't change for decades. It was a very conservative product design, for better and for worse.

The minute that phones stopped being owned by the service, they stopped being service avatars and became normal products. Their shapes went crazy and the manufacturing quality became incredibly cheap, because the entire set of incentives in the design of the device was different.

As we move back to a world of more service avatars, we can see this pattern repeating itself.
Municipal service avatars, the familiar Internet of Things devices such as smart electricity meters and networked parking meters that are being deployed by governments and utilities in large quantities, are very conservative for all the same reasons as the original phones. That's not so surprising.

What's surprising is that because the designers of Call-A-Bike bicycles had many of the same design constraints, constraints that are inherently imposed by the economics of centrally-controlled services, they made the same kinds of decisions. The Call-A-Bike bikes are different than any other bike on earth, but because they are robust, overdesigned, and easily repaired, they may also be the most conservative.
Does this mean that this is the case for any service avatar design? That the design philosophy has to be ultraconservative?

No, but the other direction is not pretty, either.
Before the advent of LCD TVs, the replacement cycle of a CRT-based TV was on the order of 10-15 YEARS. Today, as the price of LCDs drops on the order of 20% per year, people are replacing their TVs much more quickly.

This affects how the TV is designed and built. As prices fall, margins shrink and the build quality starts to go down because there's an expectation that consumers will replace the device soon.
Vizio, a low-end TV maker, now regularly tells people that they must replace their TVs if those TVs are older than 12 months. Instead of a 15 year replacement cycle, Vizio is working on a 12 MONTH replacement cycle for TVs.
In other words, like the Garfield phone, when you buy the avatar of a service, you are just buying a frame. Thus, the design incentives are to make it as cheap as possible with gimmicks, because the makers know there is no real value in the avatar, it's all in the service.

Neither of these options is appealing to me. You either get conservative or disposable. That's a bad choice. If we had a Zara clothes subscription service, does this mean that their choices would be to build clothes that were built like tough work clothes or made of paper?
I hope not.
As I said in the opening, the physicality of objects matters. I think that the answer is for us as designers to reinvent business models. We are the ones who have the tools to satisfy consumers' desire for self-expression, elegance, variety and functionality, while still making products that are designed to be useful for many years.

It's the beginning of a profoundly new world, with these emerging technologies shaping the objects in our world, our relationship to those objects and how those objects are changing our expectations.
Because we are interaction designers, we will be the people designing the devices, the services and the world. We have a great responsibility.
We, those who grew up on the net and who design it, will be the ones who create ubiquitous computing, not the roboticists or network engineers, and ubicomp will fundamentally change the world and us along with it. Like Jon said yesterday, it is our responsibility to use our knowledge of people and technology to create new business models, to start companies, to take huge risks, and to be thoughtful about the implications of what we're doing without ever forgetting that we have no idea what's going to happen next.

Thank you.

Web Directions South graciously invited me to keynote their 2011 conference this year. I took the title of the conference somewhat literally and decided to roll up a bunch of themes that have been rattling around my head, and my presentations, to talk about what direction the Web is going as it relates to ubiquitous computing. I also wanted to touch on the fact that as designers we create technology, and although we can understand how it works, we generally don't know what it means. I tried to provide some ideas and some guidance about that.

You can download a 1MB PDF of my presentation with slides and a full transcript.

Here it is on Slideshare (click through and look at the speaker notes to see the transcript):

And here on Scribd:

Unintended Consequences: design [in|for|and] the age of ubiquitous computing

Here's the full transcript:

Good morning! Thank you very much for inviting me. I've heard great things about this event for years and it's an honor to be here. Today I'll be talking about ubiquitous computing and, very broadly speaking, design.
First, let me tell you a bit about myself. I'm a user experience designer. I was one of the first professional Web designers in 1993. I've worked on the design of hundreds of web sites and many digital consumer products. I also regularly work with companies to help them create more user-centered design cultures so they can make better products themselves.
I sat out the first dotcom crash writing a book based on the work I had been doing. It's a cookbook of user research methods.
And in 2001 I co-founded a design and consulting company called Adaptive Path.
...and three years later I left it, and I left the Web altogether, to found a company with Tod E. Kurt called ThingM in 2006.
We weren't sure what we were going to be, but it's turned out that we're a micro-OEM. We design and manufacture a range of smart LEDs for architects, industrial designers and hackers.
This talk is based on my book on ubiquitous computing user experience design. It came out last September and it's called "Smart Things" and it's published by Morgan Kaufmann.
I want to start with a little history. I love the history of technology. This example comes from Harold Innis, a political economist and Marshall McLuhan's mentor, who wrote about technologies and empires. He has an interesting take on papyrus. According to him, it nearly brought down the Ancient Egyptian empire, and ended up changing it forever. Before papyrus, writing in ancient Egypt was the process of slowly inscribing information permanently on immobile things like obelisks and tomb walls. Information moved slowly and formally. It was easily controlled and constrained.

When papyrus was invented, it seemed like a great idea for those in power. The pharaoh could administer his empire from a central location and wouldn't have to rely on messengers. Now he could send lots of precise instructions, and scribes could write down complex ideas, such as those about geometry. But papyrus is not stone. It's easier to write on, orders of magnitude easier. So, people wrote more. A lot more. They were writing so much that they needed a less formal, less florid writing system, and more people learned to read and write. Suddenly, and by suddenly I mean over the course of hundreds of years, this meant that knowledge, and the control that comes with it, was no longer centrally controlled. People started to get strange ideas. They started to ask why it was only the Pharaoh who got to go to heaven. Scribes, the nerds of their era, were suddenly quite powerful. Surprisingly powerful. Dangerously powerful.

The Pharaoh--and I can't remember which dynasty this was, maybe the 19th?--decided that this was really endangering the stability of the Empire, which was under a lot of stress anyway. He needed to do something drastic. He made all the scribes report directly to him. They were elevated to the same level as priests, and the position became hereditary and bureaucratic. No one else was allowed to write. Amazingly, this worked, and the threat that literacy posed was contained.
The interesting thing is that the people who invented papyrus did not create it to threaten Egypt. Quite the contrary. And the scribes, they were just producing content. Moving symbols around. They were not intending to undermine their government.

No one involved intended to nearly topple Egypt with papyrus. There was nothing inherent in the technology that could have predicted this. No, it's that technology always, always has unintended consequences.
We who make technology have a strange perspective on its role in the world. We feel that because we make it, we understand it. We like to think we can predict where it will go and what it will do.

The problem is that our perspective is tiny and incremental. We usually miss the real, deeply transformative change that happens outside our frame of reference. Often it's the people who create a technology who are the most surprised by its effects.

These are two small pieces of Scott Weaver's toothpick sculpture of the Bay Area.
The whole thing looks roughly like this. It took him 30 years and a bazillion toothpicks.

As technologists, as human beings, really, we are great at seeing the details, but in many ways we're not cognitively equipped to see the whole. We're terrible at seeing emergent phenomena that come from the confluence of thousands of small things. Big social waves brought on by technology have to be nearly on top of us before we see them.

We're currently in the upslope to such a shift brought on by something familiar, something that we may think we have a handle on, but which is creating deep social shifts we couldn't have predicted.
I'm of course talking about Moore's Law, since that's where all conversations about the implications of digital technology start. When people talk about Moore's Law, it's often in the context of maximum processing power. But it's actually something different. It's actually a description of the cost of processing power. It's a model of how much more processing power we can fit into a single chip priced at a predictable price point this year than we could last year. This means that it's not just that processors are getting more powerful, it's that PROCESSING is getting cheaper.

For example, at the beginning of the Internet era we had the 486 as the state of the art, and it cost $1500 in today's dollars. It's the processor that the Web was built for and with. Today, you can buy that same amount of processing power for 50 cents, and it uses only a fraction of the energy. That decrease in price is a drop of the same orders of magnitude as the increase in speed. This is not a coincidence, because both are the product of the same underlying technological changes.

What this means in practice is that embedding powerful information processing technology into anything is quickly approaching becoming free.
We see this most readily as a proliferation and a homogenization of digital devices because virtually any device can now do what every other device does. This is why we're seeing all of this churn in form factors, since the consumer electronics industry is trying to figure out how they can sell yet one more screen of a different size. Four years ago it was smart phones, three years ago it was all netbooks, two years ago it was tablets, now it's 7-inch tablets and connected TVs. They're all essentially the same device in different form factors.
That's fine, but it's the most primitive of the transitions that's happening.
Simultaneously, the number of wireless networks in the world grew by several orders of magnitude.

This is a video by Timo Arnall that envisions how saturated our environment is with networks, and it's not even counting the mobile phone network, which covers just about everything. This means that virtually any device, anywhere can share data with the cloud at any time. People right now are excited about moving processing and data storage to the cloud and treating devices as terminals. That's certainly interesting, but it's also just the tip of the iceberg. That's like saying the steam engine is really great for pumping water out of mines. Yes, it's good at that, and also at creating the industrial revolution.
It is thus no longer unthinkable to have an everyday object use an embedded processor to take a small piece of information--say the temperature, or the orientation of a device, or your meeting schedule--and autonomously act on it to help the device do its job better. Information processing is now part of the set of options we can practically consider when designing just about any object.

If you look at what happened when the price of writing fell, or when extracting aluminum became two orders of magnitude cheaper in the late 19th century, or when electric motors became significantly cheaper and smaller in the 1920s, you see dramatic material and societal change. When something becomes cheap enough, when cost passes a certain tipping point, it quickly joins the toolkit of things we create our world with.
In other words, information has become a material to design with.

And with that, we have entered the world of ubiquitous computing, the world Mark Weiser roughly predicted twenty years ago.
Because we have information as a design material, we no longer think it's crazy to have a processor that creates behavior in a toy, or for a bathroom scale to connect to a cloud service, or for shoes to collect telemetry.

This capability of everyday objects to make sophisticated autonomous decisions and act using arbitrary information is new to the world, and it is as deep an infrastructural change in our world as electrification, steam power, and mechanical printing. Maybe it's as big of a deal as bricks. Seriously, it's a huge change in how the world works, and we're just at the beginning of it.
Today it's relatively simple to make a device sense the world with a great deal of precision.

There are thousands of sensors that convert states of the world into electrical signals that can be manipulated as information. This also includes sensors that sense human intention. We call these "buttons", "levers", "knobs" and so on.
Our things can make physical changes in the world based on input. Devices made from the perspective of treating information as a design material can autonomously affect the world in a way that no previous material was capable of.
Information can be used to store knowledge about the state of the world and act on it later. This could be just a single piece of data.
Or it can encode very sophisticated knowledge about the world. This is a Blendtec programmable kitchen blender. With it you can program a specific sequence of blender power, speed and duration and associate that sequence with a button on the blender. It allows you to embed experience and knowledge about food processing into the tool, which can then produce that as a behavior, rather than requiring the operator to have that knowledge and develop the experience.

Why do this? Well, if you're Jamba Juice, a large US smoothie chain, your business depends on such programmable blenders so your staff don't have to be trained in the fine points of blending and your product is always consistent. Your profit margins depend on knowledge that's encoded into your blenders, knowledge that's accessed with a single button.
This is the control panel of Blendtec's home blender. Blenders used to have buttons for different speeds. They described WHAT you were doing. Now, with embedded knowledge, it's about the desired end result. It's about WHY. The software handles the what.
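That shift from WHAT to WHY can be sketched as data: a "button" is just a named sequence of steps, and everything the machine needs to know is encoded in it. Here's a hypothetical Python sketch, not Blendtec's actual firmware; the program names, speeds, and durations are all invented.

```python
from dataclasses import dataclass

@dataclass
class BlendStep:
    speed: int      # blender speed setting, e.g. 1-10
    seconds: float  # how long to hold this speed

# A button is a named sequence of steps: the expert's knowledge of
# blending, encoded as data the machine can replay exactly every time.
PROGRAMS = {
    "smoothie": [BlendStep(3, 5), BlendStep(7, 20), BlendStep(10, 10)],
    "soup":     [BlendStep(2, 10), BlendStep(9, 90)],
}

def total_runtime(program_name: str) -> float:
    """The firmware can derive facts like total cycle time from the
    encoded program, with no operator judgment required."""
    return sum(step.seconds for step in PROGRAMS[program_name])
```

Pressing the "smoothie" button just replays the encoded steps; the operator needs no blending knowledge at all, which is exactly the Jamba Juice point.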
One of the most transformative qualities of information is that it can be duplicated exactly and transmitted flawlessly. This has already changed the music and video industry forever, as we know.

But it also means that device behavior can be replicated exactly. We've become acclimated to it, but--stepping back--the idea of near-exact replication in a world full of randomness and uncertainty is a pretty amazing thing, and is a core part of what makes working with information as a material so powerful.

Image: N-Trophy, 2000-2003, Kelly Heaton, Feldman Gallery: http:// www.feldmangallery.com/pages/exhsolo/exhhea03.html
Finally, and most profoundly, things made with information do more than just react, they can have behavior.

Information enables behavior that's orders of magnitude more complex than possible with just mechanics, at a fraction of the cost. This is a modern small airplane avionics system. It consists of a bunch of small fairly standard computers running special software. It's a bit like a flight simulator that actually flies.
Found on: http://www.vansairforce.com/community/showthread.php?t=51435
Compare that to a traditional gyroscopic autopilot, what it replaced. Every component is unique, it does very little, and to change its behavior you have to completely reengineer it.

When you make something with information, you enable that thing to exhibit behaviors that are vastly more sophisticated than what was possible with any previous material.
That is the wave that's basically on top of us.
So what can we as designers do in this situation?
Well, we're possibly the luckiest ones.

For the last 20 years we've been building a digital representation of the world on the Internet. We call it the Web, and if you look at it as a unit, it's a rough and unorganized, but fairly complete, model of most things in the world and how they interact.

Until now, however, it was disjoint from the thing that it was modeling. We left it up to people to make the connection between this map of the world with the world itself. We had to resort to things like stickers to tell people in the real world that a given object, or location, had an information shadow in the cloud.
But that's quickly changing.
Here's Toyota and Salesforce's plan for having your car continuously embedded in both Toyota corporate's network and your social network. The factory can update the car firmware remotely and the car can text you when it's done charging. The information shadow of the object, its representation in the cloud, and the object have been glued together.
For Web designers this is great news. As the model of the world and the world merge, as the map and the territory become increasingly intertwined, who knows the most about the map? It's us. We've been swimming in it longer than anyone else. And as things are increasingly made using information as a material, thanks to the inclusion of cheap processing and networking, we're the ones who know how to design for it.
Colliding galaxies, NASA
Because we're way ahead of the curve in terms of figuring out how digital things should talk to each other.

Everything that communicates needs to do so in some standard way, and increasingly that way looks a lot like the Web. Here's a slide from a project by Vlad Trifa and Dominique Guinard of ETH Zurich. They've built a middleware layer that makes every physical object look basically like a Web site. They call it, appropriately, The Web of Things. It doesn't sound particularly farfetched. They're just applying stable technical standards that were developed when Web servers were as powerful as today's smart TVs to things like, well, smart TVs.

This allows us to transfer our skills easily, since we can now mash up objects how we mash up web sites.
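As a toy illustration of that idea, here's a hypothetical Python sketch, not Trifa and Guinard's actual middleware: each device exposes its state at web-style paths, and a "mashup" composes two physical objects exactly the way two web APIs would be composed. The device names, paths, and values are invented, and the HTTP layer is stubbed in memory.

```python
# Each "device" exposes its state at web-style paths, the way a URL
# returns a page. In a real Web of Things setup these would be HTTP
# GETs against a tiny embedded web server on the object itself.
thermostat = {"/temperature": lambda: 18.5, "/setpoint": lambda: 20.0}
window = {"/open": lambda: True}

def get(device, path):
    """Stand-in for an HTTP GET against one of a device's resources."""
    return device[path]()

def heating_wasted():
    """A mashup of two objects, composed like two web APIs:
    is the room being heated while the window is open?"""
    heating_on = get(thermostat, "/temperature") < get(thermostat, "/setpoint")
    return heating_on and get(window, "/open")
```

The point is that once objects speak the Web's idiom, combining them needs no new skills: it's the same request-and-compose pattern Web designers have used for years.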
That's a way of treating devices from afar as you would Web sites, but people's use of devices close by is also becoming more Web-like.

When devices are used to access online services people begin to see through them to the online world they provide access to, rather than looking at them as tools in their own right. In many
ways we no longer think of experiences we have on devices as being "online" or "offline," but as services that we can access in a number of different ways, unified by brand identity and continuity of experience. Our expectation is now that it's neither the device nor the software running on it that's the locus of value, but the service that device and software provide access to.
These devices become what I call "service avatars." A camera becomes a really good appliance for taking photos for Flickr, while a TV becomes a nice Flickr display that you don't have to log into every time, and a phone becomes a convenient way to take your Flickr pictures on the road.

Thus, the service and the device become increasingly inseparable and we who create the services effectively control the devices.
For example, you can now get Netflix on virtually any terminal that has a screen and a network connection. You can pause a Netflix movie on one terminal and then unpause it on another.
Because to the Netflix customer, any device used to watch a movie on Netflix is just a hole in space to the Netflix service. It's a short-term manifestation of a single service. The value, the brand loyalty, and the focus is on the service, not the frame around it. The technology exists to enable the service, not as an end in itself.
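A minimal sketch of what "a hole in space to the service" implies for state: playback position belongs to the service, not the device, so any avatar can pick up exactly where another left off. This is illustrative Python under my own naming, not Netflix's actual API.

```python
class StreamingService:
    """The service, not the device, is the locus of state: pause on
    one screen, resume on another. (Illustrative sketch only.)"""

    def __init__(self):
        self._positions = {}  # (user, title) -> seconds watched

    def pause(self, user, title, position, device):
        # The device reports where playback stopped; *which* device
        # it was is deliberately irrelevant to the stored state.
        self._positions[(user, title)] = position

    def resume(self, user, title, device):
        # Any avatar of the service picks up where another left off.
        return self._positions.get((user, title), 0)
```

Note that `device` is accepted and ignored in both calls: that's the design choice the service-avatar model implies.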

This is one way that objects in the world and the digital online map are becoming the same thing, a thing that we as interaction designers, control.
Here's a telling ad from Amazon for the Kindle, which is one of the purest examples of a service avatar based user experience. This ad is saying "Look, use whatever avatar you want. We don't care, as long as you stay loyal to our service. You can buy our specialized device, but you don't have to."
Jeff Bezos is now even referring to it in these terms.

This leads to another experience design conclusion. The core of the product is not the web site that you're designing, or the product you're designing--it's not any of the avatars of the service. The core is the service that lies underneath. The avatars reflect that service, they deliver the product in context-appropriate ways, and their design is very important since they are how people experience the service, but the most important part of the design is the service itself.

Thus, when we are designing FOR the Web, we are increasingly designing for the world.
So what's the upshot of all of this? How do these pieces fit into place?
It's still pretty early, and--like I said, we're terrible at identifying emergent phenomena--so we don't really know what this ubicomp elephant looks like. We do, however, have some pointers to what kinds of changes we could see.
Source: Banksy's elephant.
For example, what happens when you mix information shadows and service avatars? You get a blurring between what's a product and what's a service.

When you sign up with a car sharing company like Flexicar or GoGet you become a subscriber to their service.

Each specific car is an avatar of its respective service, actively connected to the service at all times. You can use it any time you want, but you can only open the car and start the engine if the service allows it. Your relationship with these cars becomes something different than either renting a car or owning one, sharing elements of both. It's a new kind of relationship that we don't yet have a good word for. And it's a relationship that's created by the capabilities of underlying technologies that didn't exist or were impractical 20 years ago.
This is the German Call-a-Bike program, run by the rail service. You need a bike, you find one of these bikes, which are usually at major street corners. You use your mobile phone to call the number on the bike. It gives you a code that you punch in to unlock the bike lock. You ride the bike around and when you've arrived, you lock it. The amount of time you rode it automatically gets billed to your phone, by the minute. Each bike is an avatar of the Call-A-Bike service.
Photo CC by probek, found on Flickr.
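The Call-a-Bike flow just described can be sketched as service logic: a code is issued per bike, the code unlocks it, and locking it again closes the billing session. A hypothetical Python sketch; the per-minute rate and the four-digit code format are my invention, not Deutsche Bahn's.

```python
import random

class BikeService:
    """The Call-a-Bike loop: call for a code, punch it in to unlock,
    lock up to stop the meter. (Sketch only; rate and code format
    are invented.)"""
    RATE_PER_MINUTE = 0.08  # hypothetical, in euros

    def __init__(self):
        self._codes = {}  # bike_id -> currently valid unlock code
        self._rides = {}  # bike_id -> minute the ride started

    def request_code(self, bike_id):
        """What happens when you call the number printed on the bike."""
        code = f"{random.randrange(10000):04d}"
        self._codes[bike_id] = code
        return code

    def unlock(self, bike_id, code, minute):
        """Punching the code into the bike lock starts the ride."""
        if self._codes.get(bike_id) != code:
            return False
        self._rides[bike_id] = minute
        return True

    def lock(self, bike_id, minute):
        """Locking the bike ends the ride and returns the charge."""
        started = self._rides.pop(bike_id)
        return (minute - started) * self.RATE_PER_MINUTE
```

The bike itself holds almost no state; it's an avatar, and the ownership-shaped questions (who may ride, what it costs) all live in the service.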
Here's another example that points to some exciting possibilities and that also straddles this model of not quite ownership and not quite rental. Bag, Borrow or Steal is a designer purse subscription site. It works like Netflix, but for really expensive handbags.
It's fashion by subscription. From a user-centered design perspective, it's great. Here's a class of infrequently-used, highly desired, expensive objects whose specific instantiation changes with the seasons. You don't want a specific bag as much as you want whatever the current appropriate thing to fill the dotted line is, but actually keeping up with that fashion is expensive.

This service lets you own that bag possibility space without actually owning a single bag.
Photo CC by bs70, Flickr
Here's another one called Rent the Runway that has expanded this idea to dresses and accessories.
How long until you get a subscription to Zara, and instead of buying your clothes, you just pay a monthly fee to get whatever is seasonal for your type of work in your part of the world at your price point?
We already have Exactitudes and people seem quite comfortable with it. Why not turn it into a subscription business model for Zara?
Another effect, and one which may be the most profound of all, is how our increasing reliance on embedded algorithms shifts relationships of authority and responsibility. This isn't necessarily bad--I, for one, am happy to let Google Maps plot routes for me since it only gets it spectacularly wrong every once in a while--but the more we embed sensors in our world and use automatically processed information to make material changes in the world, the more power we implicitly give algorithms and the more authority we give their designers.

For example, San Francisco has instituted a dynamic parking pricing system called SFPark. Sensors that look like speed bumps are embedded in the pavement. They sense whether a car is in a given parking space or not. This information is uploaded to the cloud, where three things happen to it: it serves as the data source for an app that shows drivers where there are empty spaces, it tells meter maids where there are cars with expired meters, and--most interestingly--it uses the parking frequency data to adjust parking prices dynamically. Their stated goal is that the algorithm will price the parking so that there are always two available spaces on every block. Theoretically, a spot in a busy part of town that costs 50 cents an hour at 5AM may cost $50 an hour by 1PM. The people who run this program in San Francisco understand the potential danger of letting such an algorithm run completely free, and they've intentionally limited both the price range and how often it changes, but the fact that they felt they had to do that shows that a public negotiation with algorithms that control the world has already begun. You can see a similar negotiation happening with smart electrical meter pricing.
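The bounded negotiation described above can be sketched as a pricing rule: move the hourly price toward a target number of free spaces, but clamp both the step size and the overall range so the algorithm can never run away. All the numbers here are hypothetical, not SFPark's published parameters.

```python
def adjust_block_price(current_price, occupied, capacity,
                       target_free=2, step=0.25,
                       floor=0.25, ceiling=6.00):
    """One bounded adjustment cycle: raise the hourly price when fewer
    than `target_free` spaces are open on the block, lower it when more
    are, and clamp the result so the algorithm stays on a leash.
    (Illustrative numbers only.)"""
    free = capacity - occupied
    if free < target_free:
        proposed = current_price + step
    elif free > target_free:
        proposed = current_price - step
    else:
        proposed = current_price
    return max(floor, min(ceiling, proposed))
```

The interesting design decision is not the adjustment itself but the clamps: the floor, ceiling, and fixed step are exactly the kind of limits the SFPark operators felt they had to impose before handing pricing over to an algorithm.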
This kind of negotiation is happening all the way to the personal level, down to individuals and their relationship with themselves.

Right now the Quantified Self movement is quite popular in the San Francisco Bay Area. People are using a wide variety of sensors to measure things about themselves so that they can optimize their bodies and lives. Here's the cloud-connected pedometer from Fitbit, Bodymedia's multi-sensor cuff, and the sleep sensor from Zeo. They're all designed to collect data about you, then process it, perhaps share it, and visualize it. They're great examples of service avatars made with information as a material. But there's something about them that unsettles me.

At their core, they're shifting intrinsic rewards, the positive internal drive for being healthier, getting better sleep, being more fit, to extrinsic rewards--making numbers go up. But those extrinsic rewards are controlled by algorithms, rather than their owners' judgment. What these products are saying, in effect, is that we can become the people we want to be by giving up some of the control of our lives to these digital devices. Perhaps that's true--people depend on a lot of tools--but what results is a hybrid between a person with goals and a set of algorithms that purports to tell them whether those goals have been achieved. This is likely to have many unintended consequences. We trust algorithms and sensors because they look objective, but are they? How do we know?
This is the Water Pebble. It aims to reduce water usage by timing your shower and telling you when you hit your designated shower time. The way it works is that when you first use it, you push a button and take a shower. That sets the baseline. From then on it works like a shower timer. The algorithmic part comes in when, after a while, it starts slowly reducing the amount of time it gives you, so that you progressively build a habit of using less water.

My personal experience with it, however, is that its algorithm for behavior change doesn't match my ability to actually change. It reduced the amount of time it gave me to shower, and I was following along with it, until my change curve deviated from its. Instead of helping me change my behavior, it just sat there in the shower drain, blinking red and mocking me for not being good enough. I couldn't reason with it, I couldn't get it to change its algorithm to match my capabilities, so I stopped using it.
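For concreteness, here's the Water Pebble's behavior-change loop as I understand it, sketched in Python: the first shower sets a baseline, and each later allowance shrinks by a fixed fraction down to some floor. The reduction rate and the floor are my guesses, not the product's actual values.

```python
def shower_allowance(baseline_seconds, shower_number, reduction=0.98,
                     minimum_seconds=120):
    """Allowed shower time for the nth shower after the baseline:
    shrink by a fixed fraction each time, never below a floor.
    (Reduction rate and floor are guesses, not published values.)"""
    allowance = baseline_seconds * (reduction ** shower_number)
    return max(minimum_seconds, allowance)
```

The failure mode I describe follows directly from the shape of this function: the curve is fixed and monotonic, so when my ability to change flattened out, the allowance kept shrinking anyway, with no way to renegotiate.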

I'm not saying that we shouldn't enter into these relationships, but that they represent a deep shift in how we relate to the world. We shift our trust and the responsibility of making sense of the world to algorithms more than our own capabilities. We are likely going to spend the rest of our lives negotiating power relationships with embedded devices, in a way that no people ever have before.
And we can expect many unintended consequences. The designers of Facebook, Twitter, YouTube and text messaging did not predict, and could not have predicted, a new papyrus-level crisis in the Egyptian government. And yet they provided the medium through which that revolution happened, largely confirming Ethan Zuckerman's assertion that any technology that can be used to share cute cat pictures can be used to overthrow a government.

We, those who grew up on the net and who design it, will be the ones who create ubiquitous computing, not the roboticists or network engineers, and ubicomp will fundamentally change the world and us along with it. We have tremendous power and enormous responsibility. And it's our responsibility to enjoy ourselves, make great stuff, take huge risks, and be thoughtful about the implications of what we're doing without ever forgetting that we have no idea what's going to happen next.
Thank you.

I was again honored to be asked to speak at GigaOM Mobilize this year. Last year I spoke about service avatars, but when GigaOM's Surj Patel asked me to talk about the future of the Internet of Things, I was somewhat stymied. Looking over what happened since last year, I haven't seen many major technological or commercial events that point in any particular direction (except the acquisition of Pachube, which is more validation of the value of open M2M communication than a trend in itself).
However, when I thought about it, I saw that there was a trend. More than anything about technology or its use, it was a set of small steps in a variety of industries that added up to more than the sum of their parts. Specifically, I saw that six factors were pointing toward the beginning of an entrepreneurial ecosystem for hardware, one with many of the key elements of Lean Startups, which have proven to be a very successful model for creating new products and companies.

You can download a 720K PDF of the presentation and transcript. GigaOM's Matthew Ingram also covered my talk.

Here it is on Slideshare (click through and look at speaker notes to get the transcript):

Here it is on Scribd:

The Internet of Things to Come: elements of a ubiquitous computing innovation ecosystem

The transcript is as follows:

Good morning. Thank you for inviting me. It's an honor to be back. Today I'm going to be talking about the Internet of Things. Or, more specifically, how I believe that the landscape in which we create ubiquitous computing devices, such as the things that we call The Internet of Things, is about to fundamentally change.

First, let me tell you a bit about myself. I'm a user experience designer. I was one of the first professional Web designers in 1993. I've worked on the design of hundreds of web sites and many digital consumer products. I also regularly work with companies to help them create more user-centered design cultures so they can make better products themselves.

I sat out the first dotcom crash distilling my experience into a cookbook of user research methods.

And in 2001 I co-founded a design and consulting company called Adaptive Path.

...and three years later I left it, and I left the Web altogether, to found a company with Tod E. Kurt called ThingM in 2006.

We weren't sure what we were going to be, but it's turned out that we're a micro-OEM. We design and manufacture a range of smart LEDs for architects, industrial designers and hackers.

Last year my book on ubiquitous computing user experience design was published. It's called "Smart Things" and it's published by Morgan Kaufmann.

I also organize an annual summit of people developing hardware design tools for non-engineers. It's called Sketching in Hardware and it's this event that's probably most influenced the talk I'm going to give today.

Talking about The Internet of Things is a challenge because there are so many different definitions. This is Time Magazine's illustration of the Internet of Things for their "Best Inventions of 2008" edition. I love this illustration because it makes no sense no matter how you think about it, which is actually quite an accurate representation of how confusing the many definitions of the Internet of Things are right now.

Gartner has put it on their hype cycle, which means that all kinds of people are describing what they're doing as The Internet of Things, regardless of what they're actually doing.

I can't describe to you what the Internet of Things is, or is going to be. What I can do is tell you what components I think are going to be in it, and how I think it's going to come about.
Its components will likely include digital identification through RFIDs or other technologies. These can, for example, tell you where your food was grown...
Source: Yottamark

It will almost certainly include pervasive networking that's used to collect telemetry generated by a wide variety of devices and then store and process it in the cloud. This will, for example, allow your car to be continuously embedded in both Toyota's corporate network and your social network.
It will also include embedded sensing in everyday objects that connects them to the cloud. This is Green Goose's sensor platform. They're based here in San Francisco and they put a sensor, a processor, a battery and a wireless communication chip into a puffy sticker that you can put on anything.

Combining these components into services will challenge existing industries. Car sharing, for example, is so heavily dependent on Internet of Things technologies it's hard to imagine it working otherwise.

The Internet of Things will then likely revolutionize infrastructures such as meter reading and parking collection, changing how we relate to our world on the scale of cities.
These are undeniably all pieces of the Internet of Things elephant.
The problem is that we're still pretty blind about what the elephant really looks like.
We're about to find out a lot more. There are a number of pieces that have recently begun to fall into place which I believe will create an ecosystem for the rapid development of Internet of Things products, technologies and companies, and that's what I'd like to talk about today.

Here are the six things I think make up this ecosystem.

I'm going to start with the nerdiest stuff first. Semiconductor manufacturers are putting increasingly more functionality on chips. Things that used to take five chips, as this diagram from Renesas Electronics shows, can now be done on one chip. This has all kinds of benefits from an assembly standpoint, but it also has an additional benefit. It creates an abstraction layer around a unit of functionality, in this case an LCD driver, to create a single building block that's meaningful in human terms, rather than just electronic terms.

This is the start of object-oriented hardware. Each block is an atom of functionality that communicates with other blocks over a local network.

One block can do all of the work to connect to any phone network in the world.

Another is a complete GPS system.

Yet another is a multiaxis accelerometer that does the necessary math to clean up the signal.
This abstraction of knowledge into silicon means that rather than starting from basic principles of electronics, designers can focus on what they're trying to create, rather than which capacitor to use or how to tell the signal from the noise.
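The "object-oriented hardware" idea above can be sketched in software terms: each chip is an object that hides its internal electronics behind a small, human-meaningful interface and answers over a shared local bus. This is an illustrative analogy only; the class names, bus protocol, and canned values below are my invention, not any real part's API.

```python
class Module:
    """A block of functionality addressable on a shared local bus."""
    def __init__(self, bus, address):
        self.bus = bus
        self.address = address

class GPSModule(Module):
    """A complete GPS system reduced to one high-level question."""
    def position(self):
        return self.bus.read(self.address, "fix")

class Accelerometer(Module):
    """The chip itself does the signal cleanup and the math."""
    def tilt_degrees(self):
        return self.bus.read(self.address, "tilt")

class FakeBus:
    """Stand-in for I2C/SPI; returns canned values for the demo."""
    def read(self, address, register):
        return {"fix": (-23.55, -46.63), "tilt": 12.5}[register]

bus = FakeBus()
gps = GPSModule(bus, address=0x42)
accel = Accelerometer(bus, address=0x1D)
print(gps.position())        # (-23.55, -46.63)
print(accel.tilt_degrees())  # 12.5
```

The point of the analogy: the designer asks for a position or a tilt angle, and never sees a capacitor.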

Assembling electronics has gotten very cheap. It's not just that it's cheap to ship work to Asian factories; it's gotten surprisingly inexpensive to assemble hardware in medium-sized runs yourself. Not ten units, which you can do by hand, and not a million, which requires a serious setup, but, say, 1,000 or 5,000. This puts making small-run electronics at cottage-industry scale and brings it back closer to the hands of designers.
This is one of SparkFun Electronics' pick-and-place machines.

This is Adafruit's; they work out of a loft in New York. (Source: Adafruit)

This is DIYDrones', the manufacturing company that Chris Anderson of Wired Magazine runs in his spare time. These are small companies that are nevertheless big enough that they decided to make their own electronics, because it's now a reasonable business decision.

After twenty years of the Web, there's a lot of familiarity with it, and each new generation of designers and developers is more immersed in Web-like ideas. They increasingly think of digital technology as inherently anchored to the cloud and intuitively understand the possibilities that networked connections provide. There are now embedded hardware products that will do all of the provisioning of a service in the cloud once a connection is made. I grabbed this image from Arrayent, a company that makes a little hardware blob that connects virtually anything (in this case, a smoke detector) to their cloud service.

Moreover, there are now services such as Pachube, which was recently acquired, that allow an arbitrary data stream from any net-connected device to be shared with any other device. Pachube will do the buffering, the protocol translation, the analytics, everything. One device publishes an output stream; another device subscribes to it. It's a system that has its roots in Web mashups, now mapped to hardware.
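The pattern Pachube offers as a cloud service can be shown with a minimal in-process sketch: one device publishes readings to a named stream, and any number of others subscribe to it. The class and stream names here are illustrative only, not Pachube's actual API.

```python
from collections import defaultdict

class StreamBroker:
    """Tiny publish/subscribe hub: the role a service like Pachube
    plays in the cloud, reduced to a single process."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, stream, callback):
        # A device registers interest in a named data stream.
        self.subscribers[stream].append(callback)

    def publish(self, stream, value):
        # A device pushes a reading; every subscriber receives it.
        for callback in self.subscribers[stream]:
            callback(value)

broker = StreamBroker()
received = []
# One "device" subscribes to a temperature feed...
broker.subscribe("office/temperature", received.append)
# ...and another publishes readings to it.
broker.publish("office/temperature", 21.5)
broker.publish("office/temperature", 22.0)
print(received)  # [21.5, 22.0]
```

A real service adds the buffering, protocol translation, and analytics mentioned above; the pub/sub contract between devices is the same.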

One of the most exciting changes is the movement of hardware development tools online. Hardware development used to be a solitary activity done in a lab with an oscilloscope and a soldering iron. Now it's becoming increasingly a social activity thanks to a new generation of online tools.
Upverter, a Y Combinator-funded startup that just launched its beta, is a product that integrates electronic design with social collaboration. It's like SourceForge or GitHub for hardware.

This is Fritzing, an open source project for online social hardware design. They will even print the circuit board for you and mail it to you.

Once you have social collaboration and the publishing and subscription of designs, schematics and code, you have the equivalent of View Source for hardware design. That, in turn, means that designers no longer have to start from scratch or from electronics textbooks, or worry about asking noob questions on discussion boards. It's a model taken directly from how the Web grew.

The Arduino platform is probably the most mature and successful product to have come out of this type of collaborative technology environment.

It has become the reference platform that people extend to accomplish specific things. Here's the Ardupilot drone controller from the DIYDrones folks.

Here's one from All Power Labs that's used for precisely controlling an alternative energy gasifier unit.

Here's Google's Open Accessory development platform. It's also based on the Arduino.
There were microcontroller platforms before, but the Arduino's popularity and flexibility make it the Linux of Internet of Things hardware. It is not the killer app, but it forms the bedrock on which applications are built.

The final component of the ecosystem is probably the most important and least developed. It's a marketing and distribution mechanism that allows people to sell hardware in low volumes so that they can gauge interest and generate operating income.

Kickstarter, in this instance, acts like a group buying site for products that don't exist yet, giving developers feedback about the popularity of their idea and teaching them how to position it for a market before they've made a single product.

Etsy allows the sale of very small-run electronic products.

Even fab.com, which sells limited-edition, high-design products like rugs and backpacks, sells low-run electronics.

These channels are immature, but they're becoming increasingly popular. In effect, they're doing an end run around the traditional consumer electronics ecosystem to address the long tail of electronics buyers. That also happens to be where much of the greatest innovation happens.

We've seen a combination like this before: inexpensive infrastructure technologies coupled with online collaboration systems and a deployment and distribution mechanism that allows for rapid iteration with low overhead.

It's the core of the Lean Startup philosophy that's proven so successful in creating a bunch of new companies and services. If we look at Eric Ries' definition of what makes a lean startup, we can see all the pieces in this new ecosystem.

The tools are free and open. The costs for testing and assembly are low.

Object oriented hardware and social tools enable rapid iterative development, while cloud computing allows for rapid deployment of associated services.

Although they're immature, we're getting more and more low-volume sales channels to test out ideas. I've singled out Kickstarter because, in addition to sales, it provides feedback before there are any sales at all, which is even more in line with the lean startup philosophy.

In the end what I am describing here is not the Internet of Things, or ubiquitous computing, but it is the innovation ecosystem that will lead to the Internet of Things.

Thank you.

[I'm speaking at Web Directions South in Sydney in October. Here's the abstract for the talk I plan to give there.]

Let's start with the assumption that computing and networking are as cheap to incorporate into product designs as plastic and aluminum. Anything can tweet, everything knows about everything. The cloud extends from smart speed bumps to exurban data systems, passing through us in the process. We're basically there technologically today, and over the next [pick a date range] years, we'll be there distribution-wise.

Here's the issue: now that we have this power, what do we do with it? Yes we can now watch the latest movies on our phones while ignoring the rest of the world (if you believe telco ads) and know more about peripheral acquaintances than we ever wanted. But, really, is that it? Is it Angry Birds all the way down?

Of course not. Every technology's most profound social and cultural changes are invisible at the outset. Cheap information processing and networking technology is a brand new phenomenon, culturally speaking, and quickly changing the world in fundamental ways. Designers align the capabilities of a technology with people's lives, so it is designers who have the power and responsibility to think about what this means.

This talk will discuss where ubiquitous computing is today, some changes we can already see happening, and how we can begin to think about the implications of these technologies for design, for business and for the world at large.

Tish Shute of Ugotrade generously invited me to present at Augmented Reality Event 2011 yesterday in a session on augmented reality user experience. My time slot was relatively short, and she challenged me to talk outside of the usual topics, so I chose to talk about something that's been interesting me for a long time: the use of non-visual senses for communicating information about the information shadows around us. In the process, I humbly decided to rename "augmented reality" (because I'm somewhat obsessed with terminology). My suggested replacement term is somatic data perception. Also, as an intro to my argument, I decided to do a back of the envelope calculation for the bandwidth of foveal vision, which turns out to be pretty low.

Here is the Slideshare version:

Scribd, with notes:
Somatic Data Perception: Sensing Information Shadows

You can download the PDF (530K).

Here's the transcript:

Good afternoon!


First, let me tell you a bit about myself. I'm a user experience designer and entrepreneur. I was one of the first professional Web designers in 1993. Since then I've worked on the user experience design of hundreds of web sites. I also consult on the design of digital consumer products, and I've helped a number of consumer electronics and appliance manufacturers create better user experiences and more user centered design cultures.


In 2003 I wrote a how-to book of user research methods for technology design. It has proven to be somewhat popular, as such books go.


Around the same time as I was writing that book, I co-founded a design and consulting company called Adaptive Path.


I wanted to get more hands-on with technology development, so I founded ThingM with Tod E. Kurt about five years ago.


We're a micro-OEM. We design and manufacture a range of smart LEDs for architects, industrial designers and hackers. We also make prototypes of finished objects that use cutting-edge technology, such as our RFID wine rack.


I have a new startup called Crowdlight.


[Roughly speaking, since we filed our IP, Crowdlight is a lightweight hardware networking technology that divides a space into small sub-networks. This can be used in AR to provide precise location information for registering location-based data onto the world, but it's also useful in many other ways for layering information in precise ways onto the world. We think it's particularly appropriate for The Internet of Things, for entertainment for lots of people, and for infusing information shadows into the world.]


This talk is based on a chapter from my new book. It's called "Smart Things" and it came out in September. In the book, I describe an approach for designing digital devices that combine software, hardware, physical and virtual components.

Augmented reality has a name problem. It sets the bar very high and implies that you need to fundamentally alter reality or you're not doing your job.
This in turn implies that you have to capture as much reality as possible, that you have to immerse people as much as possible.


This leads naturally to trying to take over vision, since it's how we most perceive the world around us. If we were bats, we would have started with hearing; if we were dogs, smell. But we're humans, so for us reality is vision.


The problem is that vision is a pretty low-bandwidth sense. Yes, it's possibly the highest-bandwidth sense we have, but it's still low bandwidth.


This morning I decided to do a back-of-the-envelope estimate of how much bandwidth we have in our vision. This is a back-of-the-envelope estimate by a non-scientist, so excuse it if it's way off. Anyway, I started with the fovea, the small central patch of the retina where our vision is sharpest, which typically has between 30,000 and 200,000 cones in it.


To compensate for that tiny high-resolution window, our eyes move in saccades that last between 20ms and 200ms, or 5 to 50 times per second.


So this leads to a back-of-the-envelope estimate of eye bandwidth of between 100 bits per second and 10,000 bits per second.

That's many orders of magnitude slower than a modern front-side bus.

The brain deals with this through a series of ingenious filters and adaptations to create the illusion of an experience of all reality, but at the core there's only a limited amount of bandwidth available and our visual senses are easily overwhelmed.

In the late 70s and early 80s a number of prominent cognitive scientists measured all of this and showed that, roughly speaking, you can perceive and act on about four things per second. That's four things, period. Not four novel things that just appeared in your visual system--that takes much longer--or the output of four new apps that you just downloaded. It's four things, total.

That's a long way around to my main point, which is that augmented reality is the experience of contextually appropriate data in the environment. And that experience not only can, but MUST, use every sense available.


Six years ago I proposed using actual heat to display data heat maps. This is a sketch from my blog at the time I wrote about it. The basic idea is to use a Peltier junction in an arm band to create a peripheral sense of data as you move through the world. You could hook it up to Wi-Fi signal strength, or housing prices, or crime rate, or Craigslist apartment listings, and as you move through the world, you could feel whether you're getting warmer to what you're looking for, because your arm actually gets warmer. This lets you use your natural sensory filters to decide whether the information is important. If it's hot, it will naturally pop into your consciousness; otherwise it's only there if you want it, and you can check in while doing something else, just as you gauge which direction the wind is blowing by which side of your face is cold. You're not adding information to your already overstuffed primary sense channels.
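The heat-display idea reduces to a mapping from an arbitrary data value onto a Peltier drive level, so that a stronger signal literally feels warmer. A minimal sketch, with the Wi-Fi RSSI range and the 0-255 PWM scale as illustrative assumptions of mine:

```python
def peltier_level(value, lo, hi, max_level=255):
    """Linearly map a data value in [lo, hi] to a drive level in
    [0, max_level], clamping out-of-range readings."""
    clamped = max(lo, min(hi, value))
    return round((clamped - lo) / (hi - lo) * max_level)

# Wi-Fi RSSI typically runs from about -90 dBm (weak) to -30 dBm (strong).
print(peltier_level(-90, -90, -30))  # 0   (cool: nothing nearby)
print(peltier_level(-60, -90, -30))  # 128 (warmish: getting closer)
print(peltier_level(-30, -90, -30))  # 255 (hot: you're on top of it)
```

Swap the input for housing prices or crime rate and the arm band works the same way; only the `lo`/`hi` calibration changes.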

If AR is the experience of any kind of data through any sense, then we have the option to associate secondary data with secondary senses, creating hierarchies of information that match our cognitive abilities.

For me, augmented reality is the extension of our senses into the realm of information shadows, where physical objects have data representations that can be manipulated digitally as we manipulate objects physically. To me this goes further than putting a layer of information over the world, like a veil. It's about enhancing the direct experience of the world, not replacing it, and doing so in a way that neither recedes completely into the background, like ambient data weather, nor takes over our attention.

So what I'm advocating for is a change in language away from "augmented reality" to something that's more representative of the whole experience of data in the environment. I'm calling it "Somatic Data Perception," and I'll close with a challenge to you. As you're designing, think about what is secondary data and what are secondary senses, and how the two can be brought together.

Thank you.
