
Inside the Intel User Experience Lab

At the User Experience Lab in Santa Clara, Intel techs use a variety of creative methods for testing touch-screen interaction, audio quality, viewing angles, and more.

September 13, 2013
User Experience Lab Tactile Robot

SAN FRANCISCO—When most of us think of computer hardware, we think of only the finished product: how the audio sounds, whether the display has good viewing angles, if its touch screen works the way you expect when you swipe it with your finger. It's easy to take these mundane things for granted, but if you're a technology company you don't have that luxury—if anything doesn't work exactly the way your customers anticipate, your business could suffer. Getting this stuff right is serious business.

Intel proved that during a tour of the company's User Experience Lab (UEL) while I was in the area for the annual Intel Developer Forum (IDF). Located on Intel's campus in Santa Clara, the UEL is filled with techs who use a variety of methods to better understand, in the words of robotics engineer Eddie Raleigh, "where the user is coming from." They do this primarily by focusing on the senses of sight, touch, and hearing.

It's with sight and touch that Raleigh began, explaining up front that the tests they run on qualities ranging from smoothness (how a device's screen updates as you're using it) to form factor (is the screen the correct size and weight for its intended use?) can sometimes yield surprising results. For example, you would assume that under most (if not all) situations, a higher frame rate on a display would result in a smoother viewing experience. But that's not always true, says Raleigh: An average frame rate number can be deceptive if dropped frames are involved, or if no frames are dropped but the screen is "redrawn in an area that was inconsistent with the user expectations."
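
A little arithmetic makes that point concrete. The sketch below is a hypothetical illustration (not Intel's actual methodology): two captures report essentially the same average frame rate, but one of them hides a handful of long stalls that a user would perceive as stutter.

```python
# Hypothetical illustration: two captures with the same average frame rate
# can feel very different if one hides dropped frames (long frame times).

def summarize(frame_times_ms):
    """Return average FPS, the worst frame time, and the count of visible hitches."""
    avg_frame_time = sum(frame_times_ms) / len(frame_times_ms)
    avg_fps = 1000.0 / avg_frame_time
    # A frame that takes more than ~2x the 60 Hz budget (16.7 ms) reads as a visible hitch.
    hitches = [t for t in frame_times_ms if t > 33.3]
    return avg_fps, max(frame_times_ms), len(hitches)

# Capture A: perfectly even pacing at 60 fps.
steady = [16.7] * 60

# Capture B: mostly very fast frames plus a few 100 ms stalls; the average still looks like 60 fps.
stuttery = [9.1] * 55 + [100.0] * 5

for name, capture in [("steady", steady), ("stuttery", stuttery)]:
    fps, worst, hitches = summarize(capture)
    print(f"{name}: avg {fps:.1f} fps, worst frame {worst:.1f} ms, visible hitches: {hitches}")
```

Both captures average roughly 60 frames per second, but only one of them would feel smooth under your finger.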

Once these tests are completed, a team of "human factors engineers" conducts additional scientific studies (with participants pre-screened based on their experience with devices) to obtain even more specific feedback. That feedback populates a database that helps Raleigh and his colleagues create a perceptual model, one that tells them which parameters matter for a given usage and how those parameters correlate with the experience of that usage. Intel uses this information to improve both its own devices and those of its OEM partners.

Raleigh then demonstrated one of the methods by which they can extract the parameters that help the UEL better understand user experience: a robot. Shaped something like a human arm, complete with wrist and fingers, and enclosed within a safety cage, it's connected to a camera that follows a series of configurable tokens on a screen to direct the arm's motion. The arm is capable of replicating human movement with considerable precision, which makes it ideal for assessing touch devices, and once Raleigh had powered it on, the arm executed a series of familiar gestures (swipes, pinches, and zooms) on an actual tablet.

Sound and Displays

The robot, Raleigh explained, lets them conduct tests in a "repeated and controlled" way, with the ability to simulate faster, slower, or more erratic movements if desired. A second camera (2K resolution, capturing video at 300 frames per second) records the arm's actions so that the UEL folks can see exactly what the user does. The testing is completely agnostic with regard to operating system and form factor, too: "If it's a touch screen, and it fits within this cage, we can get it testing right away." Footage from the camera is then analyzed, compared with other results, and fed into a spreadsheet that helps techs better understand what went right in testing, what went wrong, and what needs to be improved to better satisfy the user.
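
The article doesn't detail how that footage is analyzed, but one thing a 300-frame-per-second capture makes straightforward is turning "how many frames passed between the robot's touch and the screen's response" into milliseconds. A rough, hypothetical sketch, with frame numbers invented for illustration:

```python
# Hypothetical sketch: with a 300 fps capture, each frame is ~3.33 ms,
# so counting frames between two events gives latency without extra instrumentation.

CAPTURE_FPS = 300
MS_PER_FRAME = 1000.0 / CAPTURE_FPS  # ~3.33 ms

def latency_ms(touch_frame, response_frame):
    """Milliseconds between the frame where the finger (or robot) lands
    and the frame where the screen visibly reacts."""
    return (response_frame - touch_frame) * MS_PER_FRAME

# Invented example: touch lands at frame 1200, the first screen update shows at frame 1224.
print(f"Touch-to-response latency: {latency_ms(1200, 1224):.1f} ms")  # ~80 ms

# Repeating a gesture many times (the robot's strength) lets you report the spread, not one number.
runs = [(1200, 1224), (2200, 2221), (3200, 3230)]
samples = [latency_ms(t, r) for t, r in runs]
print(f"min {min(samples):.1f} ms, max {max(samples):.1f} ms, mean {sum(samples) / len(samples):.1f} ms")
```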

Sound was next, as our tour group piled into the UEL's hemi-anechoic chamber, the place where in-depth testing is conducted on speech recognition, audio, and voice quality. ("We didn't go full anechoic," our guide told us, "because it's frankly not necessary for the user experience we're trying to do.") Along the far wall of the acoustic foam–lined room was a head-shaped device resembling the H.E.A.D. Acoustics system PCMag.com long used for audio testing. The device (informally known around the UEL as "Andy Hats") can measure pressure on the ear and distance from the mouth, necessary attributes for understanding the many ways humans interact with audio devices.

A flick of a switch and Andy began "speaking," explaining that it's capable of using a number of different voice types (gender, nationality, etc.) to better simulate what might be encountered in the real world. Also integrated into the system is a background noise simulator, which makes it possible to replicate the atmosphere in a forest, on an elementary school playground, at a train station, and many other places.  With the door to the chamber closed, it's easy for the human ear to isolate and discern more sounds than is possible even in a fairly quiet environment, so one could immediately see (or, rather, hear) its value in audio testing.
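
The article doesn't describe how that simulator is built, but the core idea of layering background noise behind a test voice at a controlled level can be sketched as mixing two signals at a chosen signal-to-noise ratio. Everything below (a tone standing in for speech, white noise standing in for a train station) is a made-up illustration, not Intel's system.

```python
# Hypothetical sketch: mix a "voice" signal with background noise at a chosen SNR,
# the way a background-noise simulator might stage a train station or playground.
import math
import random

def rms(signal):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def mix_at_snr(voice, noise, snr_db):
    """Scale the noise so the voice sits snr_db above it, then sum the two."""
    gain = rms(voice) / (rms(noise) * 10 ** (snr_db / 20.0))
    return [v + gain * n for v, n in zip(voice, noise)]

# Stand-ins for real recordings: one second of a 440 Hz tone and of white noise.
sample_rate = 8000
voice = [math.sin(2 * math.pi * 440 * i / sample_rate) for i in range(sample_rate)]
noise = [random.uniform(-1.0, 1.0) for _ in range(sample_rate)]

noisy = mix_at_snr(voice, noise, snr_db=10.0)  # "speech" held 10 dB above the background
print(f"voice RMS {rms(voice):.3f}, mixed RMS {rms(noisy):.3f}")
```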

Andy Hats

We hopped back to the sense of sight for our next discussion, with Michael Carroll, who explained how the UEL conducts display testing. A couple of tools come into play here. The first is a Minolta CS-1000 spectroradiometer, which measures brightness, contrast, color gamut, color accuracy, and so on. The second is a Radiant Zemax imaging sphere that contains a hemispherical mirror. A device is placed flush against an opening in the box, and light (either white or colored) is introduced, which in turn lets the UEL techs see what the screen looks like from every angle at the same time (with software taking into account all the necessary geometrical corrections for the curved mirror). A temperature-controlled camera makes it possible to experiment with long exposures.
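
For a sense of the numbers an instrument like that spectroradiometer produces, here's a hypothetical sketch that turns invented full-white and full-black luminance readings (in cd/m², or nits) into the brightness and contrast-ratio figures displays are usually judged by:

```python
# Hypothetical sketch: brightness and contrast ratio from luminance readings
# of the kind a spectroradiometer reports (cd/m^2, i.e. nits). Values are invented.

def contrast_ratio(white_nits, black_nits):
    """Contrast ratio is simply full-white luminance divided by full-black luminance."""
    return white_nits / black_nits

# Head-on readings for an imaginary tablet panel.
white = 350.0   # full-white screen
black = 0.35    # full-black screen
print(f"Brightness: {white:.0f} nits, contrast: {contrast_ratio(white, black):.0f}:1")  # 1000:1

# Off-axis readings of the same panel: brightness falls and blacks wash out,
# which is the kind of falloff the imaging sphere shows for every angle at once.
white_60deg = 140.0
black_60deg = 0.70
print(f"At 60 degrees: {white_60deg:.0f} nits, "
      f"contrast {contrast_ratio(white_60deg, black_60deg):.0f}:1")  # 200:1
```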

Carroll showed us an example of the data that results from using the sphere, and you didn't need to be a trained scientist to easily discern the width of the viewing angles, how different kinds of light registered, or how the quality might appear to the user. A consistent pattern meant a consistent picture; a "blobbier" one meant you weren't seeing images as well as you could be. A display's brightness and contrast could also be determined from the chart.

Poor Display Results

That marked the end of my tour of the User Experience Lab. Though I would have liked the opportunity to see more of the tests in action, this brief taste provided a fascinating glimpse into how one company works to improve aspects of its products that most of us don't think about. That few of us think about them could be the strongest testament there is to how well the techs here, and at other labs like it elsewhere, succeed at what they do.

For a look at some of the other things we saw at IDF, check out our photo blog.


About Matthew Murray

Managing Editor, Hardware

Matthew Murray got his humble start leading a technology-sensitive life in elementary school, where he struggled to satisfy his ravenous hunger for computers, computer games, and writing book reports in Integer BASIC. He earned his B.A. in Dramatic Writing at Western Washington University, where he also minored in Web design and German. He has been building computers for himself and others for more than 20 years, and he spent several years working in IT and helpdesk capacities before escaping into the far more exciting world of journalism. Currently the managing editor of Hardware for PCMag, Matthew has held a number of other positions at Ziff Davis, including lead analyst of components and DIY on the Hardware team, senior editor on both the Consumer Electronics and Software teams, the managing editor of ExtremeTech.com, and, most recently, the managing editor of Digital Editions and the monthly PC Magazine Digital Edition publication. Before joining Ziff Davis, Matthew served as senior editor at Computer Shopper, where he covered desktops, software, components, and system building; as senior editor at Stage Directions, a monthly technical theater trade publication; and as associate editor at TheaterMania.com, where he contributed to and helped edit The TheaterMania Guide to Musical Theater Cast Recordings. Other books he has edited include Jill Duffy's Get Organized: How to Clean Up Your Messy Digital Life for Ziff Davis and Kevin T. Rush's novel The Lance and the Veil. In his copious free time, Matthew is also the chief New York theater critic for TalkinBroadway.com, one of the best-known and most popular websites covering the New York theater scene, and is a member of the Theatre World Awards board for honoring outstanding stage debuts.
