Surrounded By Devices, We Inhabit A World of Increasing User-Centricity

User-centric computing is a theme we can expect to hear articulated in many ways next week at the Consumer Electronics Show (CES) in Las Vegas.

The simple view of the shift from device-centric to user-centric computing goes like this: when all we had was one device — a PC, first to do our work and later to connect to the Internet — we adapted to the device.  We learned how to wrestle it into more or less obeying our will.  We became skilled at the arcane keystrokes of DOS commands and Lotus 1-2-3 in order to do productive work.  We went to the machine.

Now, the machine is starting to come to us.  As soon as you have more than one device in your life, they must necessarily point to you.  Cloud services increasingly coordinate devices for us so that our “state” — the exact condition of all our stuff at any one moment — migrates seamlessly from one device to another.

A good example would be reading an eBook.  If you stop reading on a certain page on your laptop, you should be able to open your eReader and be on the same page.  Ditto for your phone.  This idea that “state” follows you around puts you at the center of your own universe.  The devices are all around you.
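The eBook example can be sketched in a few lines of code: a shared store stands in for the cloud, holding each user's reading position, and every device simply reads from and writes to it.  All of the class and method names below are illustrative inventions, not any vendor's actual API, and the "cloud" here is just an in-memory dictionary rather than a networked service.

```python
class CloudState:
    """Stands in for a cloud sync service, keyed by (user, book)."""
    def __init__(self):
        self._store = {}

    def save(self, user, book, page):
        self._store[(user, book)] = page

    def load(self, user, book):
        # A book never opened before starts at page 1.
        return self._store.get((user, book), 1)


class Device:
    """Any endpoint — laptop, eReader, phone — sharing the same cloud state."""
    def __init__(self, name, cloud):
        self.name = name
        self.cloud = cloud

    def stop_reading(self, user, book, page):
        # Leaving off on this device updates the shared state.
        self.cloud.save(user, book, page)

    def open_book(self, user, book):
        # Opening on any device resumes from the shared state.
        return self.cloud.load(user, book)


cloud = CloudState()
laptop = Device("laptop", cloud)
ereader = Device("ereader", cloud)

laptop.stop_reading("alice", "Moby-Dick", 142)   # stop on the laptop...
print(ereader.open_book("alice", "Moby-Dick"))   # ...resume on the eReader: 142
```

Because the position lives in the shared store rather than on any one machine, the user, not the device, is the unit around which the data is organized.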

The magic glue that keeps all this together is the Cloud, in this case acting as your alter ego, a digital representation of you and your stuff: all those preferences, settings, privileges, programs, subscriptions, documents, photos, videos, music, and backup services.  You are in the middle, surrounded by your stuff.

At CES, we will see how the pantheon of devices is growing, how its definition is expanding to include cars (Audi of America will be there as well as Ford Motor Company, Garmin International, Hyundai Motor Company, Johnson Controls, Subaru of America, Toyota Motor, and literally hundreds of less-well-known names), home appliances (Haier America, Whirlpool, Westinghouse, Philips Home Control, Panasonic, Qualcomm, and many others), useful everyday services (such as the National Weather Service and Rand McNally), and connected entertainment (e.g., Cambridge Audio, Panasonic, Altec Lansing, Samsung, and hundreds more).

Standards groups like HomeGrid Forum and HomePlug Powerline Alliance will also be at CES promoting ways to connect all these things.  And new possibilities for interacting with your devices will be shown by companies like Movea.  A whole section of the show will be devoted to digital health and fitness products, which will soon tie your exercise, diet, and weight data to the rest of your stuff.

Long-time attendee Advanced Micro Devices (AMD) will be advancing its own view of user-centric computing, a vision laid out this week by Mark Papermaster, the company’s chief technology officer.  Papermaster sees us at what he calls “another inflection point” in the development of technology and society.  “We’re well into mobile computing,” he said.  “Now, you think of information as an entitlement.”

Papermaster calls the soup of pervasive computing devices and connections in which we swim “Surround Computing,” and a key transition, of which we are just at the beginning, is the move to natural interfaces.  He cites Apple’s Siri and Google Voice as examples of not just voice input but semantic interpretation — the derivation of meaning from spoken words.  Another example of this new-style input is Microsoft’s Kinect, which interprets gestures.  In many mobile applications, the finger has replaced mouse and key or button entry.

On the output side, things are also moving apace toward a world of truly immersive experiences.  At CES, AMD will be showing its SurRoundHouse, a hyperbolic demonstration of just how immersive computing output can be.  The demo is actually a house, which will be situated outside the Las Vegas Convention Center.  Viewers will enter what looks like a “farm cabin” with 10 windows, each of which is actually a 55” LED TV.  These TVs are tied together by AMD Eyefinity display technology, which allows them to be run from a single PC with three AMD FirePro graphics cards.

Also at work will be AMD’s Discrete Digital Multipoint Audio, which can render 3D positional audio, in this case via 32 speakers, including four subwoofers.  It is the audio that leads the viewer to integrate the separate images in the windows into a single coherent experience.  The audio fills the gaps between the separate displays and allows the viewer to figure out what is happening as a complex story line unfolds.  Without positional audio, the story is much more difficult to understand.

Total cost of this rig: $35,000, still a bit out of reach for the average consumer.  But the tour de force makes use of technologies at work in commercially available gaming systems today.

As we enter into this new, more-immersive world of user-centric, surround computing, most of us will be unaware of what’s going on “under the hood,” but billions of compute engines will be furiously calculating away to make all of this happen.

Some of these engines will be on the device (or “endpoints,” as I like to call them), some will be in the cloud, and others will reside in sensors and actuators, which will take in the world around us and act upon it — at our will or according to artificial intelligence algorithms that figure out the right thing to do faster than we can.  Self-driving cars are a natural extension of this development.

Wouldn’t you rather watch a movie, chat with friends via video link, read a book, or snooze while your car takes you where you want to go in the fastest, safest possible manner?  It would be like having a chauffeur who is always polite, doesn’t take days off, needs no food or sleep, and never quits.

Disclosure: Endpoint has a consulting relationship with Advanced Micro Devices.

© 2013 Endpoint Technologies Associates, Inc.  All rights reserved.

Twitter: RogerKay