Results tagged “terminology”

Tish Shute of Ugotrade generously invited me to present at Augmented Reality Event 2011 yesterday in a session on augmented reality user experience. My time slot was relatively short, and she challenged me to talk outside of the usual topics, so I chose to talk about something that's been interesting me for a long time: the use of non-visual senses for communicating information about the information shadows around us. In the process, I humbly decided to rename "augmented reality" (because I'm somewhat obsessed with terminology). My suggested replacement term is somatic data perception. Also, as an intro to my argument, I decided to do a back of the envelope calculation for the bandwidth of foveal vision, which turns out to be pretty low.

Here is the Slideshare version:

Scribd, with note:
Somatic Data Perception: Sensing Information Shadows

You can download the PDF (530K).

Here's the transcript:

Good afternoon!


First, let me tell you a bit about myself. I'm a user experience designer and entrepreneur. I was one of the first professional Web designers in 1993. Since then I've worked on the user experience design of hundreds of web sites. I also consult on the design of digital consumer products, and I've helped a number of consumer electronics and appliance manufacturers create better user experiences and more user-centered design cultures.


In 2003 I wrote a how-to book of user research methods for technology design. It has proven to be somewhat popular, as such books go.


Around the same time as I was writing that book, I co-founded a design and consulting company called Adaptive Path.


I wanted to get more hands-on with technology development, so I founded ThingM with Tod E. Kurt about five years ago.


We're a micro-OEM. We design and manufacture a range of smart LEDs for architects, industrial designers and hackers. We also make prototypes of finished objects that use cutting-edge technology, such as our RFID wine rack.


I have a new startup called Crowdlight.


[Roughly speaking, since we filed our IP, Crowdlight is a lightweight hardware networking technology that divides a space into small sub-networks. This can be used in AR to provide precise location information for registering location-based data onto the world, but it's also useful in many other ways for layering information in precise ways onto the world. We think it's particularly appropriate for The Internet of Things, for entertainment for lots of people, and for infusing information shadows into the world.]


This talk is based on a chapter from my new book. It's called "Smart Things" and it came out in September. In the book, I describe an approach for designing digital devices that combine software, hardware, physical and virtual components.

Augmented reality has a name problem. It sets the bar very high and implies that you need to fundamentally alter reality or you're not doing your job.
This in turn implies that you have to capture as much reality as possible, that you have to immerse people as much as possible.


This leads naturally to trying to take over vision, since it's the sense through which we most perceive the world around us. If we were bats, we would have started with hearing; if we were dogs, smell; but we're humans, so for us reality is vision.


The problem is that vision is a pretty low bandwidth sense. Yes, it's possibly the highest bandwidth sense we have, but it's still low bandwidth.


This morning I decided to do a back of the envelope estimate of how much bandwidth we have in our vision. This is a back of the envelope estimate by a non-scientist, so excuse it if it's way off. Anyway, I started with the fovea, which typically has between 30,000 and 200,000 cones in it.


To compensate for the fovea's narrow coverage, our eyes move in saccades, which last between 20ms and 200ms, or 5 to 50 times per second.
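The saccade arithmetic can be checked with a one-liner: the rate is simply the reciprocal of the duration. A quick sanity-check sketch of my own, not part of the talk:

```python
def saccades_per_second(duration_ms):
    """Fixation rate implied by a given saccade duration in milliseconds."""
    return 1000.0 / duration_ms

# A 200 ms saccade implies about 5 per second; a 20 ms saccade, about 50.
print(saccades_per_second(200))  # 5.0
print(saccades_per_second(20))   # 50.0
```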


So this leads to a back of the envelope calculation of eye bandwidth between 100 bits per second and 10K bits per second.

That's around 4 orders of magnitude slower than a modern front-side bus.

The brain deals with this through a series of ingenious filters and adaptations to create the illusion of an experience of all reality, but at the core there's only a limited amount of bandwidth available and our visual senses are easily overwhelmed.

In the late 70s and early 80s a number of prominent cognitive scientists measured all of this and showed that, roughly speaking, you can perceive and act on about four things per second. That's four things, period. Not four novel things that just appeared in your visual system--that takes much longer--or the output of four new apps that you just downloaded. It's four things, total.

That was a long digression on the way to my main point, which is that augmented reality is the experience of contextually appropriate data in the environment. And that experience not only can, but MUST, use every sense available.


Six years ago I proposed using actual heat to display data heat maps. This is a sketch from my blog at the time I wrote about it. The basic idea is to use a Peltier junction in an arm band to create a peripheral sense of data as you move through the world. You can have it hooked up to WiFi signal strength, or housing prices, or crime rates, or Craigslist apartment listings, and as you move through the world you can feel whether you're getting warmer to what you're looking for, because your arm actually gets warmer. This allows you to use your natural sense filters to determine whether it's important. If it's hot, it will naturally pop into your consciousness, but otherwise it will only be there if you want it, and you can check in while doing something else, just as you gauge which direction the wind is blowing by which side of your face is cold, without adding information to your already overstuffed primary sense channels.
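A minimal sketch of how such a mapping might work, assuming a hypothetical heat_level helper (the real device would drive the Peltier junction from a microcontroller; the names and ranges here are illustrative):

```python
def heat_level(value, lo, hi):
    """Map a data reading (say, WiFi signal strength in dBm) onto a
    0.0-1.0 thermal drive level, clamped so outlier readings don't
    overheat the arm band."""
    if hi == lo:
        return 0.0
    fraction = (value - lo) / (hi - lo)
    return min(1.0, max(0.0, fraction))

# A mid-range signal produces a moderately warm band; readings outside
# the expected range are clamped to fully off or fully on.
print(heat_level(-50, -90, -30))  # roughly 0.67
```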

If AR is the experience of any kind of data through any sense, then we have the option of associating secondary data with secondary senses, creating hierarchies of information that match our cognitive abilities.

For me, augmented reality is the extension of our senses into the realm of information shadows, where physical objects have data representations that can be manipulated digitally as we manipulate objects physically. To me this goes further than putting a layer of information over the world, like a veil. It's about enhancing the direct experience of the world, not replacing it, and doing it in a way that's neither completely in the background, like ambient data weather, nor about taking over our attention.

So what I'm advocating for is a change in language away from "augmented reality" to something that's more representative of the whole experience of data in the environment. I'm calling it "Somatic Data Perception," and I'll close with a challenge to you: as you're designing, think about what is secondary data and what are secondary senses, and how the two can be brought together.

Thank you.

I was doing some writing for my upcoming Device Design Day talk and started to make a list of two common kinds of smart things that I've been seeing out in the world. For lack of better terminology, I'm calling these appliances and terminals. I haven't yet processed all of these ideas, but here is an initial stab at distinguishing two major classes of smart thing.

Appliances vs. terminals:

  • Most functionality is: local for appliances, remote for terminals.
  • Technical capabilities: appliances are narrow (technology is only included if it supports the core purpose); terminals are broad (many possible sensors and actuators are included in case they're needed by a service).
  • Effectiveness: appliances are high (they're very good at the small number of things they do); terminals are low (they're OK at many things).
  • Interface complexity: appliances are low (a narrow vision means the interface is relatively straightforward); terminals are high (the general-purpose nature of the devices means the burden of efficacy falls on the interface design).
  • A group of them that is interoperating is called: an ensemble for appliances, a service for terminals.
  • A single member of the group is called: an instrument for appliances, an avatar for terminals.
  • Barriers to interoperability: high for appliances, unless they're designed to work together from the start; theoretically low for terminals, since they're designed to be avatars of the same service, but high in practice, because cross-avatar UX is still in its infancy.
  • Distinguished from each other by: specific function for appliances, size for terminals.
  • Strength of links between linked devices: low for appliances (connecting appliances that aren't designed to be connected is difficult); theoretically high for terminals (service avatars should easily communicate, but that's not often the case in practice).
  • Examples: appliances include digital pedometers, Internet-connected bathroom scales, networked parking meters, cars, Nike+iPod, and cameras; terminals include smart phones, netbooks, laptops, and connected TVs.


As has been obvious in the recent past, I've been a bit focused on how and why disciplines, especially disciplines relating to ubiquitous computing, are named what they are. I'm not a language precision pedant most of the time--words mean what we want them to mean, when we want them to mean those things, and to the people we want to understand them--but the titles of large ideas have a particularly strong impact on how we think about them. They, in effect, set agendas. If scientists had called Global Warming something else, say "Global Weather Destabilization," that would have changed a lot of our expectations for it. People wouldn't nitpick about whether one degree is a lot or a little, or whether an unusually cold winter in Michigan means that it's all a sham.

Similarly, what we call the disciplines we involve ourselves in sets a lot of expectations for their agendas. Lately, I've been thinking about why "ubiquitous computing" has such problems as a name. When I talk about it, people either dismiss it as a far-future pipe dream or as an Orwellian vision of panoptic control and dominance. I don't see it as either. I've never seen it as an end point, but as the name of a thing to examine and participate in, a thing that's changing as we examine it, but one that doesn't have an implicit destination. I see it as analogous to "Physics" or "Psychology," terms that describe a focus for investigation rather than an agenda.

Why don't others see it the same way? I think it's because the term is fundamentally different: it has an implied infinity in it. Specifically, the word "ubiquitous" implies an end state, something to strive for, something that's the implicit goal of the whole project. That's of course not how most people in the industry look at it, but that's how outsiders see it. As a side effect, the infinity in the term means that it simultaneously describes a state that practitioners cannot possibly attain ("ubiquitous" is like "omniscient"--it's an absolute that is impossible to achieve) and a utopia that others can easily dismiss. It's the worst of both worlds. Anything that purports to be a ubiquitous computing project can never be ubiquitous enough, so the field never gets any traction. The mobile phone? That's not ubiquitous computing because it's not embedded in every aspect of our environment and doesn't completely fade into the background. A TiVo can't be ubiquitous computing because it requires a special metaphor to explain it. The adidas_one shoe isn't ubicomp because it doesn't network.

The problem is not with the products, it's with the expectations that the term creates.

I see this problem with a lot of terms: artificial intelligence has "intelligence" as part of it, so nothing can be AI until it looks exactly like what we would call intelligence. Machine learning, that's not AI because it's just machines doing some learning. That's not intelligence. Pervasive computing can't exist until we have molecule-sized computers forming utility clouds, because nothing can be pervasive enough until then. Ambient intelligence is an amazingly bad term using this metric: TWO words with implied infinities.

As Liz (Goodman, my wife and fellow ubicomp researcher ;-) points out, when these terms are coined, they are created with a lot of implicit hope, with excitement and potential designed to attract people to the potential of the ideas. But after the initial excitement wears off (think AI in the 1970s) they create unmeetable expectations as the initial surge of ideas gives way to the grind of development, and setbacks mean that the results are never as ubiquitous, intelligent, pervasive, or whatever, as observers had been led to believe. AI was doomed to be a joke for a decade (or more) before they renamed themselves something that implicitly promised less, so they could deliver more.

So what to do about this? Well, I've done a couple of things: I've used one term ("ubiquitous computing") rather than creating ever more elaborate terms to describe the same thing, and I've tried to use it to describe the past as well as the future. In my past couple of lectures I've been arbitrarily setting the beginning of the era of everyday ubicomp as having started in 2005. It's not something in the future, it's something that's in the past and today. Is that a losing battle? Do we need to rename "ubicomp" something like "embedded computing product design," something that promises less so that it can deliver more? Maybe. I still like the implicit promise in the term and its historical roots, but I recognize that as long as it has an infinity in part of its term, there will always be misunderstandings. Some people (like the folks in New Songdo City) will actually try to create the utopian vision, and invariably fail. Some will criticize the field for even trying, while at the same time doing the same thing under a different name.

Me, I'm going to keep calling it "ubiquitous computing" or "ubicomp" until it's either clear that the costs of sticking with the name outweigh the benefits I believe it has, or until a better term, one that's less likely to let everyone down, comes along.

(the title of the blog post references Finite and Infinite Games, a book I've never read, but which friends of mine tell me is quite good)

[2/18/09 Update: Michiel asked me (in email, because I have blog comments turned off) what I thought about "The Internet of Things" as a term. I've written about it before and I think it's a pretty good term. It's not as unbounded as the terms I mentioned. "Internet" is something people are familiar with and "things" is a large set, but not an infinite one. There's some internal confusion because "the internet" is seen as ephemeral, and it's hard to imagine how that ephemeral idea translates to the very literal world of "things." Likewise, there's an implication that all things will become part of this new internet, which is also potentially confusing. However, those criticisms aside, I don't think it's a bad term, but only if it's defined well and used precisely. I don't think it's exactly the same idea as ubiquitous computing, for example, since I see it as more about individual object identification and tracking, rather than smart environments, or ambient displays. If it starts to be yet another synonym for ubicomp, its value will diminish.

William sent me the following note:
Interesting observations! Two related bits:

One is Martin Fowler's thinking on "semantic diffusion":

http://martinfowler.com/bliki/SemanticDiffusion.html
http://martinfowler.com/bliki/FlaccidScrum.html

Another was a recent conversation I had at the Prediction Markets conference with an econ professor. He mentioned that the incentives are such that whenever a term develops a positive value, people attach themselves to it until its value swings negative. I think that basic model is too simple, but from it you can develop a richer model that explains a lot of what people get up to with terms.

I like the economic idea, though I agree that it (feels) too simple. ]

One of the reasons I haven't posted to this blog in months (and likely won't post anything original to it for months more) is because most of my time is spent writing my ubicomp user experience design book. The chapter I'm currently working on touches on service design, so I decided to do a little research about it. Three days and several hundred papers later, I think I've sorted out some parts of it, which turned into two sidebars for the current chapter. I present the sidebars to you in their raw, first-draft form because I think they may be useful (and continue my obsession with clearly defining and understanding the terms we use).

Sidebar: Software services vs. end-user services

Defining what people mean by service often means wading through a lagoon of terminology. There are two fundamental ways of looking at a service: from the perspective of the technology and from the user experience perspective. They share the core concept that a service is something atomic and coherent: something seen as a single unit from which other units are built.

That's where the two concepts diverge:


  • From the technical perspective, a service is an atomic unit of functionality, something like a superset of a well-constructed object in object-oriented programming. This is the meaning of the term as used in definitions of things like Service-Oriented Architecture (SOA): "Services are collections of capabilities" (Footen and Faust, 2008).
  • From the user experience perspective, a service is an atomic unit of activity. It is the set of elements that an end user would connect when describing something that helps accomplish a specific goal: "A chain of activities that form a process and have value for the end user" (Saffer, 2006).

Some of the confusion about the definition of "service" comes from the fact that end-user services may be composed of a number of software services, so service designers look at them as unified experiences, whereas software architects look at them as combinations of things they consider to be different. Inverting the definition also causes confusion, since a single software service (such as file storage) may take part in a number of end-user experiences, each of which is perceived as a different service by its customers.
Additional confusion arises because the concepts of service design overlap with those of brand management, which also attempts to unify user experiences across a range of technologies (or touchpoints).
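The many-to-many relationship between the two kinds of service can be made concrete in a few lines (the service names here are purely illustrative):

```python
# Each end-user service (the designer's unit) is composed of several
# software services (the architect's unit), and one software service
# can take part in several end-user services.
end_user_services = {
    "photo sharing":   {"file storage", "authentication", "image resizing"},
    "document backup": {"file storage", "authentication"},
}

def end_user_services_using(software_service):
    """All end-user services that depend on a given software service."""
    return {name for name, parts in end_user_services.items()
            if software_service in parts}

# File storage is one software service, but it takes part in two distinct
# end-user experiences, each perceived as a different service.
print(end_user_services_using("file storage"))
```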

Sidebar: Top-down, holistic service design

While doing research for this chapter, I came across a number of similar concepts in different disciplines. The idea is that design should be vertically integrated, that every product (more or less) is part of a larger system and needs to be designed within the context of that system. The extreme example is Disneyland, where Disney controls virtually every aspect of a visitor's engagement with the world. All of these ideas share the core philosophy that there isn't a single path that ends with a product being purchased and consumed, but an ongoing relationship between users and organizations that is maintained through engagement with a range of designed experiences (which could be tangible products, media messages, environments or personal interactions). This top-down holistic design philosophy is comparable to that advocated by cybernetics and systems science in the mid-20th century, now updated for modern technologies and business contexts. Space does not permit a detailed discussion of all the different approaches in current use, but I wanted to briefly mention them and identify what I see as their key differences.
  • Product-Service Systems (Mont, 2004) emphasize the potential of efficiencies created by designing products and services together, especially ecological efficiencies.
  • Service Science Management and Engineering, aka SSME (with D sometimes added for design) or service science (Maglio et al, 2006) is IBM's approach to creating a systematic discipline for understanding and building systems that encompass people, technology, organizations and shared information.
  • Service design (Blomberg and Evenson, 2006) is a term used in the design world to describe a practice that designs products in the context of the key value that the organization creating the product intends to provide the end-user.
  • Service Blueprinting (Bitner et al, 2008) is a notational technique for visualizing the relationship between service components.
  • Integrated Marketing Communication (IMC) (Schultz and Kitchen, 1997) is an approach that ties together all communications between an organization and its audience into a single unified strategy. If products and services are considered to be a type of communication, then this approach includes them, too.
  • The Elements of User Experience (Garrett, 2000) is a conceptual system for interaction designers that places a range of design practices in a unified user experience model.
  • Transmedia storytelling (Jenkins, 2006) describes the practice of creating a unified experience across a number of media and products. Like IMC, it's pretty far from the core focus of technology in much service discussion, but I believe there's a relationship. Stories aren't services, but storytelling is, and since digital technology plays such a large role in contemporary storytelling, there's a practical connection as well.

[Basically, in these two sidebars I'm saying that there's one elephant, it's not really a new elephant, but it may be a newly-relevant elephant, and all of these different terms are descriptions for different parts of a single elephant.]

[1/29/09 Update: after a request, I figured I'd post a mini-bibliography to this. Here are all of the books and papers I managed to get into Zotero as somehow related to the topic, though they're not all the papers and books I looked at]

Mini-bibliography of service [design|system|science|development]


Bitner, M. J., A. L. Ostrom, and F. N. Morgan. 2008. Service Blueprinting: A Practical Technique for Service Innovation. California Management Review 50, no. 3: 66.

Blomberg, J., and S. Evenson. 2006. Service innovation and design. In Conference on Human Factors in Computing Systems, 28-31. ACM New York, NY, USA.

Carbone, L. P., and S. H. Haeckel. 1994. Engineering Customer Experiences. Marketing Management 3, no. 3: 8-19.

Erl, Thomas. 2007. SOA.

Footen, John, and Joey Faust. 2008. The Service-Oriented Media Enterprise.

Gillespie, B. 2008. Service Design via the Global Web: Global Companies Serving Local Markets. Design Management Review 19, no. 1: 44.

Glushko, R. J. Designing Service Systems by Bridging the “Front Stage” and “Back Stage”.

Holmlid, S., and S. Linköping. Interaction Design and Service Design: Expanding a Comparison of Design Disciplines.

Jonas, W., N. Morelli, and J. Münch. Designing a product service system in a social framework–methodological and ethical considerations.

Maffei, S., and B. Mager. Innovation through Service Design: From Research and Theory to a Network of Practice. A Users' Driven Perspective.

Maglio, P. P., S. Srinivasan, J. T. Kreulen, and J. Spohrer. 2006. Service systems, service scientists, SSME, and innovation. Communications of the ACM 49, no. 7: 81-85.

Mont, O. 2004. Product-service systems: Panacea or myth. The International Institute for Industrial Environmental Economics (IIIEE), Lund University: Lund, Sweden: 233.

Mont, O. K. 2002. Clarifying the concept of product–service system. Journal of Cleaner Production 10, no. 3: 237-245.

Morelli, N. 2002. The Design of Product Service Systems from a Designer's Perspective. Common Ground 2002.

Pires, G., P. Stanton, and J. Stanton. 2004. The Role of Customer Experiences in the Development of Service Blueprints. In ANZMAC 2004 Conference.

Schultz, D. E., and P. J. Kitchen. 1997. Integrated Marketing Communications in US Advertising Agencies: An Exploratory Study. Journal of Advertising Research 37, no. 5: 7-18.

[1/29/09 Update 2: Jeff Howard pointed me to a comprehensive annotated bibliography of service design that he has compiled. Thanks, Jeff!]

PICNIC

I spent last week in Amsterdam at the PICNIC conference. Vlad Trifa of SAP/ETH invited me to present at an Internet of Things special session he organized, and it was one of the highlights of the conference. His timing was impeccable, with the session arriving just days after Cisco's Internet of Things consortium (IPSO Alliance) announcement (which, tangentially, now canonizes that term as yet another name for ubiquitous computing, though, as a term, you could certainly do worse and you could argue that it's the sub-1m granularity of ubicomp). It was great to share the tiny cafe stage with folks representing a wide range of organizations from giant conglomerates and emerging players to other fledgling startups.

I gave a talk entitled Shadows and Manifestations (440K PDF) (Matt Jones has posted a video of the talk on Vimeo--thanks Matt!) that focused on several of the ways I've been thinking about ubicomp UX design (and, by extension, Internet of Things UX design). If you've collected the whole set, there's little that's totally new. I have expanded my thinking on information processing as a material, with some historical parallels to other materials, and I have included some newer thinking about the implications of digital identification technology and ubicomp.

The most interesting result of the session for me was the high degree of similarity between the various ideas. This could just be a product of Vlad's curatorial process, but there were uncanny resonances between a number of the ideas in the presentations, and many ideas came up repeatedly. My presentation was roughly in the middle of the session, and I spent the whole first half frantically updating my slides in response to what others were saying. It was clear that in this group it wasn't necessary to talk in detail about ubicomp as an emergent property of the economics of CPU prices, about how devices become intimately coupled with services, or about how networks of smart things generate whole new universes of services.

The Internet of Things and Directories

One of the ideas that emerged in multiple presentations and conversations is a device information brokerage and translation service. The idea is that a central service brings together information generated by all of these smart devices in a standard way and in a predictable location, to facilitate mashups between various devices. Violet, tikitag, OpenSpime and Pachube, all of whom were represented, essentially share this idea.


This got me thinking about other such systems I've seen, and I realized that I've seen this pattern at least twice before: in Internet hostname resolution and in P2P file sharing. For host resolution, before DNS there was HOSTS.TXT, a canonical file that stored the addresses of all the computers on the Internet. Eventually this became untenable, and the distributed system we know today was devised. In DNS there is no single central authority, but distributed authorities and protocols that extract the canonical answer from a web of connections. Similarly, the P2P file sharing world started with Napster, which had a single server (or service) that knew where all the files were and redirected queries in a top-down way. The deficiencies in that approach gave way to the distributed indexes of Gnutella. We're now seeing the same thing with BitTorrent, which relies on trackers to connect someone who has data with someone who wants it, but is moving to a distributed, trackerless model.

The unifying pattern here is:


  1. Create a service that runs on a number of devices
  2. Create a central phonebook so that those devices can find each other
  3. Create a distributed phonebook that is as distributed as the devices it indexes
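Step 2 of that pattern, the central phonebook, amounts to a single registry that every device registers with and queries. A minimal sketch with hypothetical names (HOSTS.TXT and Napster's index both boil down to this shape):

```python
class CentralRegistry:
    """A centralized device phonebook: one authority holds every
    device's address. This is the bottleneck and single point of
    failure that step 3 eventually distributes away."""

    def __init__(self):
        self._addresses = {}

    def register(self, device_id, address):
        self._addresses[device_id] = address

    def lookup(self, device_id):
        # Unknown devices return None rather than raising.
        return self._addresses.get(device_id)

registry = CentralRegistry()
registry.register("bathroom-scale-42", "10.0.0.7")
print(registry.lookup("bathroom-scale-42"))  # 10.0.0.7
```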

Now the question becomes: what happens between steps 2 and 3? Why is there a repeated, emergent pattern that such systems go through? I have a theory in two parts:

  1. A crisis happens if the service is successful. The centralized server model becomes too resource-constrained (i.e. it's overloaded beyond a "reasonable" cost of upgrading).
  2. This is a necessary evolution. Systems that start out with distributed indexes are significantly more complex than centralized ones. This requires a lot of implementation overhead on the part of the server and device designers and is brittle because it assumes feature priorities that may not match actual needs. In other words, if there isn't a perceived need, people don't want to write a bunch of code for scaling a service they aren't sure about. Moreover, it's not clear what the system should do, and initial assumptions are notoriously error-prone. I've seen a number of protocols that attempt to abstract a problem before it's clear what the problem is.

Google Protocol Buffers as a device communication standard

Finally, this made me think back to a discussion I started having with Bjoern, Tod and some of the other Sketching folks, which is the use of Google's new Protocol Buffers as a meta-protocol for devices to speak to each other. The point of Protocol Buffers is that they are simultaneously flexible and lightweight, which is valuable both when you're moving huge amounts of data around AND when you have very little processing power.

Google lists as advantages that:


  • [Protocol buffers] are simpler
  • are 3 to 10 times smaller
  • are 20 to 100 times faster
  • are less ambiguous
  • generate data access classes that are easier to use programmatically

Google uses them to reduce the amount of code they have to throw around between services on the back end, but I thought that this describes many of the same constraints that small devices have. So my (super nerdy, sorry designers) question for all of the folks working on brokerage services: why not support/encourage the use of Protocol Buffers as the preferred format-independent data interchange mechanism?
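To make the idea concrete, a device-to-broker message might be declared like this (a hypothetical schema of my own, in the proto2 syntax current at the time; the field names and numbers are illustrative, not from any real service):

```protobuf
// A small device reporting one sensor value to a brokerage service.
message SensorReading {
  required string device_id = 1;  // stable identifier for the device
  required int64 timestamp = 2;   // seconds since the epoch
  optional string quantity = 3;   // e.g. "temperature" or "wifi_rssi"
  optional double value = 4;      // the reading itself
}
```

The generated encoders produce a compact binary blob, which is exactly the property that matters both in a data center and on a memory-constrained device.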

[Tangentially, "Protocol Buffers" is a terrible name; it is simultaneously generic and overly specific and I think that the name will significantly hurt its adoption.]

I just got a pamphlet inviting me to the 2007 Semantic Technology conference, which has a curious illustration on page 3.

[photo of the pamphlet's illustration]

The illustration shows the "evolution" of the Internet, really the Web, since what its creators actually show is how "Web 2.0" becomes "Web 4.0." Or something. Basically, I read it as a recasting of classical hard AI, opportunistically couched in the language of modern Web development. You can see that there's an arrow pointing to the upper right (connecting, somewhat confusingly, Web 1.0 and Web 4.0 while bypassing Web 2.0 and Web 3.0), which reads "Agent Webs that Know, Learn & Reason as Humans Do."

This is all happening along two primary axes, "Increasing Social Connectivity" and "Increasing Knowledge Connectivity & Reasoning." The first one is clear: it's the primary driver of the flowering of Web 2.0. People are social, so the information they use can be social, too. The second one seems reasonable as a label--yes, we are increasing the amount of data that's available to us, so we're probably increasing the amount of knowledge. "Reasoning," however, assumes a lot. If you look at the Web 1.0 and Web 2.0 clusters, I don't know if "Enterprise Portals" actually exhibit appreciably more "reasoning" than "Databases," as the graph seems to imply.

But this is nitpicking. The interesting thing for me about this graph is how it misses specialized devices almost entirely. "Blogjects" and "Spimes" show up in Web 4.0, yet mobile phones don't show up in Web 1.0/2.0 at all, much less fuzzy logic rice cookers. My biases are well known, but if we're to read the projected dates, it appears that "Artificial Intelligence" will show up before ubicomp. I think that's wishful thinking. AI has been 10 years away for 50 years. Devices that employ a limited understanding of semantic relationships between objects in the world are much more likely to appear before reasoning "Intelligent agents" or "bots," and they will look little like top-down models of human cognition. They're going to be like the Roomba, much closer to insects, behaving as "irrationally" as insects do while functioning much more effectively than systems that try to reason. They will most definitely be part of the evolution of the Internet, too, but it'll be the Internet of Things, which will project the Semantic Web into everyday life rather than leaving it inside some networked abstraction, as I feel this chart implies.

After all of the observation- and analysis-based discussions of terminology on this blog, I decided to do a little experiment to see if there was any data that could be collected. To get an idea of how much people used which term, I first thought that I should just tally the Google references to each of the terms that refer to ubicomp and related concepts. Then I realized that that technique suffered from the unpredictable nature of the Google estimation algorithm and it didn't recognize that some of the terms have been around a lot longer than others, so there are probably more documents that refer to them, even if they're not as popular in the field today.

I decided that a better way to do this would be to buy some search engine keywords (specifically "ubiquitous computing," "ubicomp," "pervasive computing" and "ambient intelligence") and watch how many times the keyword was searched-for and how many times people clicked on it.

Here are the results from 11 days of keyword placement on Google:

OK, what do we see? I'm going to treat clicks as active interest and impressions as a kind of semi-active interest, though I acknowledge this is projecting a lot on the audience and may be conflating several factors in terms of audience composition and intention. In this analysis, "ambient intelligence" is a nonstarter in terms of active interest, although as many people searched for it as searched for "ubicomp." Maybe it's just an academic term and my ad for ThingM (there needs to be something people can click on ;-) wasn't interesting to academics. Still, it's interesting to note that no one clicked on an ad that mentioned "ubicomp" when they had searched for "ambient intelligence."

The next finding, though, I think is the most interesting: "pervasive computing" was searched on about as much as "ubiquitous computing" (actually a little more), but clicked on significantly less. I was surprised it was searched for that much, and the lower click rate puzzles me. It suggests different levels of interest, or different audiences, between the terms. Moreover, the cost-per-click on it is significantly higher, which means other people are trying to buy the term (I don't think I had any competition for any of the other terms).

Anyway, out of this small thicket of numbers come more questions than answers. Some things we can extract reliably: "ubiquitous computing" and "pervasive computing" are roughly equally popular in terms of interest. "Ambient intelligence" is not so popular. "Ubiquitous computing" creates more active interest than any of the other terms, and "ubicomp" is not in nearly as active use (despite its greater popularity as a tag on del.icio.us; for comparison: ubiquitous computing).
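For anyone who wants to repeat the exercise, the comparison above boils down to computing a click-through rate (clicks divided by impressions) per term and ranking the terms by it. Here's a minimal sketch; the counts below are invented for illustration, not the actual numbers from my 11-day experiment:

```python
# Hypothetical impression/click counts per search term.
# These numbers are made up for illustration; they are NOT the
# real data from the keyword-buying experiment described above.
stats = {
    "ubiquitous computing": {"impressions": 1200, "clicks": 24},
    "pervasive computing":  {"impressions": 1300, "clicks": 10},
    "ubicomp":              {"impressions": 400,  "clicks": 6},
    "ambient intelligence": {"impressions": 400,  "clicks": 0},
}

def ctr(impressions, clicks):
    """Click-through rate as a fraction; 0 if there were no impressions."""
    return clicks / impressions if impressions else 0.0

# Rank terms by CTR, highest (most "active interest") first.
for term, s in sorted(stats.items(), key=lambda kv: -ctr(**kv[1])):
    print(f"{term:22s} CTR = {ctr(**s):.1%}")
```

The point of using the rate rather than raw clicks is that it separates "lots of people search for this" (impressions) from "people who search for it actually care" (clicks), which is exactly the distinction the "pervasive computing" result turns on.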

Continuing my project of observing how terminology shifts to describe the process of researching and designing the user experience of ubiquitous computing, I noticed a blurb in the latest issue of the IDSA's "design perspectives" newsletter. In it, they note a new service launched by RAHN, Inc., which RAHN calls "Quantitative Ethnographics (QE)." They claim this "integrates performance metrics into the analysis and illustrates innovation's positive impact on a prospective client's customer."

Apart from the error of assuming a "positive impact" before starting research, it's interesting to me how RAHN seems to be using the current vogue for the use of "ethnographics" as a term to describe user research, but modifying it by using the language of measurement (presumably because numbers and figures look better in client reports). Measurement--and the "finding of an average" that it implies--is kind of the opposite of the goal of traditional ethnography, which aims to describe culture in its complexity. That doesn't actually seem to be the point anymore. "Ethnographics" has come to mean "we go onsite and look at people." It has ceased to have the meaning it once had as an anthropological practice, and has been repurposed by the design community.

Is this a good thing? I don't know, but it's a thing.

Excuse me while I rant a bit:

<rant>

We have a new entry in the terminology haze that surrounds ubiquitous computing, Palpable Computing. Hooray! Another word for roughly the same thing, but with a twist that could only have looked good on an EU grant application:

Palpable denotes that systems are capable of being noticed and mentally apprehended. Palpable systems support people in understanding what is going on at the level they choose. Palpable systems support control and choice by people.

Their claim is that they're inverting "ambient computing," which is supposedly invisible, with a vision of computing that's more, well, tangible. They position a set of ideas that claim to show how this approach complements "ambient computing," which I find difficult to see, since there's no really developed set of ideas about what "ambient computing" is (maybe inside all the Disappearing Computer project paperwork there is, but certainly not in common use or practice outside the community of people who were given grant money by that project). Moreover, I don't see how the terms they're using as complements relate to the things they're claiming to complement:

AMBIENT COMPUTING                complemented with   PALPABLE COMPUTING
invisibility                                         visibility
scalability                                          understandability
construction                                         de-construction
heterogeneity                                        coherence
change                                               stability
sense-making and negotiation                         user control and deference
(from here)

Maybe I haven't read enough about it, but it seems to me--at first blush--like a syntactic land grab and a linguistic distinction created to justify continued funding more than an attempt to clarify concepts and move the field forward. It's kind of a shame, and I certainly don't see how it's going to satisfy their project goals, a number of which, at least, seem to be jumping to try and create technology before they've finished making a philosophical argument:

  • an open architecture for palpable computing
  • a conceptual framework to understand the particulars of palpable technologies and their use.
  • design and implementation of a toolbox for the construction of palpable applications
  • development of a range of prototypes of palpable applications
  • gaining a firm understanding of a range of practices into which palpable technologies may be introduced.

Further, as Liz points out, it may represent a rethinking, a retrenching, after an initially overly reductionist reading of Weiser and Norman. That reading may have led to the idea of "ambient intelligence" representing literal disappearance, rather than a philosophy for distributed information processing that meets people's needs and desires (which are sometimes to have things in the background and other times not). "Ambient intelligence" may now have proven to be too ambient, and thus needs to be complemented with this new project, which may be equally reductionist.

</rant>

That rant over, congratulations on the funding and all the best luck to you in your new project, folks.

Following up on my earlier post and attempting to include the terms that Adam and Bruce introduced, I wrote a sentence using all of the appropriate ubicomp buzzwords, as an attempt to create a narrative that ties together the fields, at least semantically. Here it is, in all its gory detail:

To create a world of ambient, ubiquitous intelligence, we will use physical, pervasive, often portable, information processing objects, which we will refer to individually as spimes and appliances, and collectively as everyware.

Liz is right of course when she tells me that it's a questionable exercise, when there is no master narrative and all of these terms emerge as products of different cultures, and Anne is right that creating rigid boundaries around terminology can be exclusionary of ideas and people, but I think it's valuable when trying to wrap my brain around the stuff. Or at least it'll make for magnetic poetry fodder.

Yesterday I was explaining what I do to a friend and started getting caught up in the usual tangle of terminology, so I came up with a structure for the different terms related to the fragmentation of information processing into everyday objects. As I see it, the different terms--pervasive computing, ubiquitous computing, ambient intelligence and physical computing--come from different historical contexts rooted in geography: PARC coined "ubiquitous computing," so it's big on the West Coast; IBM likes "pervasive computing," so they get the East; Philips was responsible for "ambient intelligence," so that's what it's called in Europe. In reality, it's just a blind-men-and-the-elephant problem: they're all describing the same idea, but alliances and territoriality create clusters of terminology. So here's how I described it to my friend:

Term                  Interrogative   Note
Ubiquitous computing  How             Embedded information processing and network communication will change the world by continuously providing services and support.
Physical computing    What            This will require them to be embedded in physical objects.
Pervasive computing   Where           The embedding will need to be everywhere if they're to provide the support continuously.
Ambient intelligence  Why             And the goal of the project is to create an environment that supports our goals through distributed reasoning.

"Who" is, of course, left as a a big question, but that's why there are so many anthropologists involved now, I suspect.

The definitions aren't totally separate, but it's an interesting exercise to see the focus of the groups who fly a particular flag. I still think it's all the same elephant and that maybe it needs yet another term. There's great value in creating a good term that encapsulates a set of ideas, but it has to accurately capture the essence of an idea as it is perceived by others in order to take off. That means it needs to be externally focused, not about the process. I don't feel any of these terms is sufficiently strong in that department, though I'm going to use "ubiquitous computing" and "ubicomp" for now, since I'm from the West Coast and Weiser deserves mad props for having seen it first.

[1-31-06 update: Anne has written a typically thoughtful and insightful commentary to this note, to which I've replied. Thank you, Anne!]

[2-5-06 update: Doh! Peter tells me that if I had been paying attention, I would have noticed that my alma mater, Wired, is also on the disambiguation tip. More general than my take, but still. Peter generously said "must be in the air."]
