
UX Design Newsletter – January / February 2014

In This Issue


Author: Steve Fleming, CUA, CXA
Steve Fleming is a UX Strategist at Human Factors International and an HFI writer.


Message from Eric Schaffer, Ph.D., CUA, CXA, CPE
Commenting on this article, Dr. Schaffer offers practical advice.

Driving Me Crazy: Research on Vehicle Operations


Driving a car is so common. You sit down, adjust the seat, adjust the mirrors, and push the power button to start it. You look for the dashboard indicators that show the car has started because the engine is silent. Then you tap the stick over to drive and off you go. When you arrive at your destination, you push the park button, push the power button, and you are done.

Of course, this explanation only makes sense if you drive a Toyota Prius, so it isn’t common to everyone. Most of us don’t have a “B” on our gear shifter, either.

When we are always working with digital technology, it is easy to forget about the design of the more physical products and interfaces that we experience every day.

New technology in our cars

When we think about how people interact with their cars, change tends to be slow. Separate accelerator and brake pedals are not an ideal solution, but they will likely be in cars for the foreseeable future. However, a number of current studies are looking at opportunities to automate the experience of operating cars.

One study by Ulrich et al. (2013) explored letting the driver use gestures, rather than discrete button presses, to interact with the increasingly complex systems within a car. For example, the user could switch to the radio mode with left-hand controls and use a gesture with the right hand to increase or decrease the volume or select pre-set stations.

Schematic drawing of the bimanual control prototype from Ulrich et al. (2013)

The possible benefits of such a control are immense – it reduces the visual distraction of taking eyes off the road and the manual distraction of taking hands off the wheel, while supporting a larger range of functions. This would be much better than simply "adding more buttons" to the steering wheel!

Although participants didn't tend to agree on which gesture should map to each task, they did quickly learn the gestures they were taught and reported high levels of satisfaction. It helped that the team leveraged familiar gestures from the growing number of gesture-based interfaces outside the car.
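The bimanual division of labor described above – left hand selects the mode, right hand issues a gesture interpreted relative to that mode – can be sketched as a simple mapping. This is purely illustrative; the mode names, gestures, and actions are assumptions, not details from the Ulrich et al. prototype.

```python
# Hypothetical sketch of a bimanual control scheme: the left hand picks a
# system mode, and a right-hand gesture is interpreted in that mode's context.
# All mode, gesture, and action names below are illustrative assumptions.

MODE_ACTIONS = {
    "radio": {
        "swipe_up": "volume_up",
        "swipe_down": "volume_down",
        "tap": "next_preset",
    },
    "climate": {
        "swipe_up": "temperature_up",
        "swipe_down": "temperature_down",
    },
}


def interpret(mode: str, gesture: str) -> str:
    """Map a right-hand gesture to an action in the current left-hand mode.

    Unrecognized modes or gestures are ignored rather than triggering
    anything, since spurious actions would be dangerous while driving.
    """
    return MODE_ACTIONS.get(mode, {}).get(gesture, "ignored")
```

The key design point is that one small gesture vocabulary is reused across modes, so the driver learns a few movements instead of one control per function.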

In another study, Eisert et al. (2013) used a simulator to examine "tactile route guidance," in which pulses in the seat helped guide drivers along a route through a city. The speed of the pulses signaled when a turn was coming up, and their location signaled the turn's direction (left or right). The team varied pulse rates and locations and found that drivers didn't remember routes any better with any of the approaches, but they did prefer pulses that sped up as they got closer to the turn. Additional location cues were not preferred, even though the team thought drivers might find the extra information useful. They hypothesize that although multimodal approaches are good, there is a risk of overwhelming a driver's limited "attentional capacity."
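The guidance scheme just described – pulse faster as the turn approaches, pulse on the side of the turn – can be sketched in a few lines. The specific distances, intervals, and names here are illustrative assumptions, not parameters from the Eisert et al. study.

```python
# Rough sketch of tactile route guidance: the interval between seat pulses
# shrinks as the turn approaches (faster pulsing), and the pulse location
# encodes turn direction. All numeric ranges and names are assumptions.

def pulse_interval(distance_m: float,
                   min_interval_s: float = 0.2,
                   max_interval_s: float = 2.0,
                   guidance_range_m: float = 500.0) -> float:
    """Seconds between pulses: shortest at the turn, longest at range limit."""
    fraction = min(max(distance_m / guidance_range_m, 0.0), 1.0)
    return min_interval_s + fraction * (max_interval_s - min_interval_s)


def pulse_side(turn_direction: str) -> str:
    """Pulse the seat bolster on the same side as the upcoming turn."""
    return "left_bolster" if turn_direction == "left" else "right_bolster"
```

A linear ramp like this matches the drivers' stated preference for increasing pulse speed near the turn; a real system would tune the ranges empirically.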

The technology we bring to our cars

Both these studies show there are ways to potentially address the limits of driver attention. We are all familiar with these limits in the context of distractions while driving – texting or talking on the phone.

Gaspar et al. (2013) conducted research to see whether giving a driver's phone conversation partner a view of the driving scene would mitigate some of the distraction. Prior research has shown that conversations with passengers in the car are less distracting than conversations on mobile phones (Drews et al., 2008; Charlton, 2009).

The team used a driving simulator to study the effects of providing the remote conversation partner with a video feed of the driver and the front view out of the vehicle. They found that drivers were involved in significantly fewer collisions when conversing over this video phone (or with an in-vehicle passenger) than when having a standard conversation over the phone. Their hypothesis is that the video phone provides the same kind of context an in-vehicle passenger has, so that both individuals can modulate their conversation based on road events and maintain some shared situational awareness. Maybe backseat drivers aren't so bad!

But maybe we should just take the driver out of the equation

With Google’s self-driving cars, maybe we don’t have to worry about being distracted … if we trust the system.

In a non-driving environment, McBride et al. (2013) assessed trust in and acceptance of automated decision-making in a nursing home setting. Novice and experienced nurses working with an automated system rated their level of trust in it, and the team also measured how often each nurse accepted the system's decisions. Interestingly, novice nurses claimed to trust the system less but accepted its decisions more. The researchers hypothesize that the expert nurses had more confidence in their professional abilities and so were willing to override the system's decisions, even though they worked with and trusted technology systems every day on the job. It would be interesting to see the level of trust in a self-driving car … and how that correlates with acceptance. After all, we are all above-average drivers – clearly better than all the other drivers out there – just ask us!

Whenever we talk to people about automated systems, the common comment is “great, as long as I can take over manual control.” But would we be any good at that?

Gold et al. (2013) studied drivers in a simulator taking over from an automated car in a situation that required avoiding an accident by changing lanes. When there was less time to make a decision and take over (5 seconds), drivers resorted to abrupt braking and immediate lane changes, with minimal checks to see whether the other lane was occupied. With more time to respond (7 seconds), drivers did think more about their response, but the response wasn't necessarily safer. The team concluded that, given enough warning, a driver can take over the driving task and perform acceptably, but can still create a hazard (braking, swerving) rather than avoid one. As one might expect, drivers who were not relying on automation performed better. The challenge moving forward is determining what additional cues, such as auditory cues, might help a driver more quickly regain control of an automated system in order to prevent an accident.


We live and work in very physical environments and sometimes fail to appreciate all the considerations and decisions that go into designing the things we take for granted.

Applying our understanding of human-machine interaction to the physical space is a great challenge, but an exciting one filled with opportunities.

Hats off to the designers working in a space where they can't just change a web page design and roll out a new one the next day, but instead must design a product that might not go into production for a year or more and may truly put health and safety on the line.

Related Information


Charlton, S.G. (2009). Driving while conversing: Cell phones that distract and passengers who react. Accident Analysis and Prevention, 41(1), 160.

Drews, F.A., Pasupathi, M., & Strayer, D.L. (2008). Passenger and cell phone conversations in simulated driving. Journal of Experimental Psychology: Applied, 14(4), 392.

Eisert, J., Garcia, A., Payne, J., & Baldwin, C.L. (2013). Tactile route guidance performance and preference. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 57, 1504.

Gaspar, J.G., Street, W.M., Windsor, M.B., Carbonari, R., Kaczmarski, H., Kramer, A.F., & Mathewson, K.E. (2013). Providing conversation partners views of the driving scene mitigates cell phone-related distraction. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 57, 1209.

Gold, C., Dambock, D., Lorenz, L., & Bengler, K. (2013). "Take over!" How long does it take to get the driver back into the loop? Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 57, 1938.

McBride, M., Carter, L., & Ntuen, C. (2013). Human-machine trust, bias and automated decision aid acceptance. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 57, 349.

Ulrich, T.A., Spielman, Z., Holmberg, J., Hoover, C., Sanders, N., Gohil, K., & Werner, S. (2013). Playing charades with your car – The potential of free-form and contact-based gestural interfaces for human vehicle interaction. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 57, 1643.


Message from the CEO, Dr. Eric Schaffer — The Pragmatic Ergonomist


Everyone can see that technology is integrating deeply into our lives. This is an unprecedented level of change, and it places enormous adaptive demands on everyone. I don't think we could hope to manage the shift without our growing cadre of UX professionals, because it cannot proceed quickly if we just insert technology and hope some of it works.

We need ecosystem-driven innovation and design. Then it can go fast. And my greatest hope is that this new wave of technology will not be impaired by the horrid designs of the past. We see the QWERTY keyboard (practically designed to be slow). We see blue hyperlink text, which is hard to read (small-field tritanopia). With a solid UX process, we can hope not to leave behind this sort of legacy: poor designs that have become too established to change.

