
Automotive User eXperiences Research

A major component of the U.S. Department of Transportation’s (DOT) mission is to enable safe and efficient mobility for vulnerable road users, including pedestrians. However, evidence indicates that college students have the highest rate of pedestrian accidents. Due to the widespread use of personal listening devices (PLDs), vulnerable road users subject themselves to reduced levels of achievable situation awareness, resulting in risky street crossings. The ability to be aware of one’s environment is critical during task performance; the desire to be self-entertained should not interfere with or reduce one’s ability to be situationally aware. The current research investigates the effects of acoustic situation awareness and the use of PLDs on pedestrian safety by allowing pedestrians to make “safe” vs. “unsafe” street crossings within a simulated virtual environment. The outcomes of this research will (1) provide information about on-campus vehicle and pedestrian behaviors, (2) provide evidence about the effects of reduced acoustic situation awareness due to the use of personal listening devices, and (3) provide evidence for the utilization of vehicle-to-pedestrian alert systems. This project is conducted in collaboration with Dr. Rafael Patrick (ISE) and is supported by the Center for Advanced Transportation Mobility and ICAT.

Pedestrian Safety

Automakers have announced that they will produce fully automated vehicles in the near future. However, it is hard to know when fully automated vehicles will become part of our daily lives. What will infotainment look like in fully automated vehicles? Will we still have a steering wheel and pedals? To sketch a blueprint of futuristic infotainment systems and user experiences in fully automated vehicles, we investigate user interface design trends in industry and research in academia. We also explore user needs from young drivers and domain experts. Using use cases and scenarios, we will suggest new design directions for futuristic infotainment and user experience in fully automated vehicles. This project is supported by our industry partner.

infotainment

The objective of this series of studies is to investigate the potential of various types of in-vehicle intelligent agents (IVIAs) in the context of automated vehicles equipped with diverse levels of driving automation systems. The primary role of the IVIAs proposed in this research is to provide driving-related information that enhances drivers’ situation awareness of their surroundings and promotes their understanding of the driving automation system. In this way, IVIAs can contribute to forming and calibrating appropriate trust levels. Our ultimate goal is to develop specific guidelines for designing IVIAs. Thus, various characteristics of IVIAs have been explored, such as speech characteristics, embodiment, information presentation, empathetic capability, personality, attitude, and gesture. Some of the projects have been supported by the UPS Doctoral Fellowship and the Northrop Grumman Undergraduate Research Experience Award.

Milo and NAO robots are used as embodied intelligent agents.

Nervtech

In addition to traditional collision warning sounds and voice prompts for personal navigation devices, we are devising more dynamic in-vehicle sonic interactions in automated vehicles. For example, we design real-time sonification based on driving performance data (i.e., driving as instrument playing). To this end, we map driving performance data (speed, lane deviation, torque, steering wheel angle, pedal pressure, crash events, etc.) to musical parameters. In addition, we identify situations in which sound can play a critical role in providing a better user experience in automated vehicles (e.g., safety, trust, usability, situation awareness, or novel presence). This project is supported by our industry partner.
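As a rough illustration of this idea, the following Python sketch maps a single frame of driving data to pitch, loudness, and stereo pan. The variable ranges and mappings are hypothetical assumptions chosen for illustration, not the mapping actually used in the project.

```python
# Illustrative sketch: map driving performance data to musical parameters.
# The ranges and mappings below are hypothetical, not the project's actual design.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    value = max(in_lo, min(in_hi, value))
    ratio = (value - in_lo) / (in_hi - in_lo)
    return out_lo + ratio * (out_hi - out_lo)

def sonify(sample):
    """Turn one frame of driving data into musical parameters."""
    return {
        # Faster driving -> higher pitch (MIDI note 48-84)
        "pitch": round(scale(sample["speed_kph"], 0, 120, 48, 84)),
        # Larger lane deviation -> louder sound (MIDI velocity 40-127)
        "velocity": round(scale(abs(sample["lane_dev_m"]), 0.0, 1.5, 40, 127)),
        # Steering wheel angle -> stereo pan (-1 left, +1 right)
        "pan": scale(sample["steer_deg"], -90, 90, -1.0, 1.0),
    }

if __name__ == "__main__":
    frame = {"speed_kph": 85.0, "lane_dev_m": 0.6, "steer_deg": -15.0}
    print(sonify(frame))  # e.g. {'pitch': 74, 'velocity': 75, 'pan': -0.17}
```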

Driving Simulator

Vehicle automation is becoming more widespread. As automation increases, new opportunities and challenges have emerged. Among these new design problems, we aim to identify new opportunities and directions for auditory interactions in highly automated vehicles to provide a better driver experience and to secure road safety. Specifically, we are designing and evaluating multimodal displays for the hand-over/take-over procedure. In this project, we collaborate with Kookmin University and Stanford University. This project is supported by the Korea Automobile Testing and Research Institute.

Automated Vehicle

Investigating driving behavior using a driving simulator is widely accepted in the research community. Recently, railroad researchers have also started conducting rail-crossing research using driving simulators. While using a simulator has a number of benefits, the validation of simulated research still remains to be addressed. To this end, we are conducting research that compares simulated driving behavior with naturalistic driving behavior data. This project is supported by the Federal Railroad Administration under the US DOT.

FRA

One potential approach to reducing grade crossing accidents is to better understand the effects of warning systems at grade crossings. To this end, we investigate drivers' behavior patterns (e.g., eye-tracking data and driving performance) with different types of warnings as their car approaches a grade crossing. In particular, we design and test in-vehicle auditory alerts to turn passive crossings into active crossings. We also plan to examine the effects of in-vehicle distractors (phone calls, radio, etc.) on drivers' warning perception and behavior change. Based on these empirical data, we will improve our warning system design and develop standardized design guidelines. This project has been supported by Michigan DOT, US DOT (Department of Transportation), and FRA (Federal Railroad Administration).
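As a simple illustration, the sketch below stages an in-vehicle auditory alert based on the estimated time to reach the crossing. The thresholds and alert stages are hypothetical assumptions, not values tested or validated in this project.

```python
# Minimal sketch of a distance-based in-vehicle alert for a passive grade crossing.
# Thresholds and alert stages are illustrative assumptions, not tested values.

def crossing_alert(distance_m, speed_mps):
    """Return an alert stage from the distance to the crossing and current speed."""
    if speed_mps <= 0:
        return "none"
    time_to_crossing = distance_m / speed_mps  # seconds until reaching the crossing
    if time_to_crossing < 3:
        return "urgent tone + voice prompt"
    if time_to_crossing < 7:
        return "warning tone"
    if time_to_crossing < 12:
        return "advisory chime"
    return "none"

if __name__ == "__main__":
    for d in (300, 150, 60, 20):
        print(d, "m ->", crossing_alert(d, speed_mps=20.0))
```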

eye tracking

The goal of this project is to increase driver safety by taking drivers’ emotions and affect into account, in addition to cognition. To this end, we have implemented a dynamic real-time affect recognition and regulation system. For the system to accurately detect a driver’s essential emotional state, we have also identified a driving-specific emotion taxonomy. Using driving simulators, we have demonstrated that specific emotional states (e.g., anger, fear, happiness, sadness, and boredom) have different impacts on driving performance, risk perception, situation awareness, and perceived workload. For affective state detection, we have used eye-tracking, facial expression recognition, respiration, heart rate (ECG), brain activity (fNIRS), grip strength detection, and smartphone sensors. For the regulation part, we have been testing various music pieces (e.g., emotional music, self-selected music), sonification (e.g., real-time sonification based on affect data), and speech-based systems (e.g., emotion regulation prompts vs. situation awareness prompts). Part of this project is supported by the Michigan Tech Transportation Institute and the Korea Automobile Testing and Research Institute.
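The following minimal sketch illustrates the general detect-then-regulate loop described above, assuming a toy rule-based classifier. The feature names, thresholds, and interventions are hypothetical placeholders rather than the system actually implemented in this project.

```python
# Illustrative sketch of a detect-then-regulate loop for driver affect.
# Feature names, thresholds, and interventions are hypothetical assumptions.

def classify_affect(features):
    """Very rough rule-based classifier over physiological/behavioral features."""
    if features["heart_rate"] > 100 and features["grip_strength"] > 0.8:
        return "anger"
    if features["heart_rate"] > 100:
        return "fear"
    if features["blink_rate"] < 0.1 and features["heart_rate"] < 60:
        return "boredom"
    return "neutral"

def select_intervention(state):
    """Pick a regulation strategy for the detected state."""
    return {
        "anger": "play calming self-selected music",
        "fear": "speech prompt emphasizing situation awareness",
        "boredom": "upbeat music or engaging sonification",
    }.get(state, "no intervention")

if __name__ == "__main__":
    frame = {"heart_rate": 112, "grip_strength": 0.9, "blink_rate": 0.3}
    state = classify_affect(frame)
    print(state, "->", select_intervention(state))
```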

facial detection

In-vehicle touchscreen displays offer many benefits, but they can also distract drivers. We are exploring the potential of gesture control systems to support or replace potentially dangerous touchscreen interactions. We do this by replacing information that is usually acquired visually with auditory displays that are both functional and beautiful. In collaboration with our industry partner, our goal is to create an intuitive, usable interface that improves driver safety and enhances the driver experience. This project was supported by Hyundai Motor Company.
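As a minimal illustration of eyes-free gesture interaction, the sketch below maps recognized gestures to actions and confirming sounds. The gesture names and earcons are hypothetical assumptions, not the interface developed with our industry partner.

```python
# Minimal sketch of mapping mid-air gestures to actions with auditory feedback,
# so the driver can keep eyes on the road. Gesture names and earcons are assumptions.

GESTURE_ACTIONS = {
    "swipe_left":  ("previous_track", "low-to-high two-note earcon"),
    "swipe_right": ("next_track",     "high-to-low two-note earcon"),
    "rotate_cw":   ("volume_up",      "rising pitch sweep"),
    "rotate_ccw":  ("volume_down",    "falling pitch sweep"),
}

def handle_gesture(gesture):
    """Resolve a recognized gesture to an action plus a confirming sound."""
    action, earcon = GESTURE_ACTIONS.get(gesture, ("none", "error buzz"))
    print(f"action={action}, play='{earcon}'")
    return action

if __name__ == "__main__":
    handle_gesture("swipe_right")  # next_track, confirmed by a two-note earcon
    handle_gesture("wave")         # unrecognized -> error buzz
```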

In Vehicle Gesture

The Auditory Spatial Stroop experiment investigates whether the location or the meaning of a stimulus more strongly influences performance when the two conflict. For example, the word “LEFT” or “RIGHT” is presented in a position that is congruent or incongruent with its meaning. This paradigm applies readily to the complex driving environment: the navigation device tells you to turn right while, at the same time, the collision avoidance system warns you that a hazard is approaching from the right. How should we respond to this conflicting situation? To explore this problem space further, we conduct Auditory Spatial Stroop research using the OpenDS Lane Change Test to investigate how driving behavior varies under different multimodal cue combinations (visual, verbal, and non-verbal; temporally, spatially, and semantically congruent or incongruent).
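The sketch below generates a balanced set of congruent and incongruent trials for a word-by-side Auditory Spatial Stroop design. The trial structure is a simplified assumption for illustration and is not the OpenDS Lane Change Test configuration itself.

```python
# Illustrative sketch: generate Auditory Spatial Stroop trials in which the word's
# meaning ("LEFT"/"RIGHT") and its presentation side are congruent or incongruent.
# The trial structure is an assumption for illustration, not the OpenDS setup.

import itertools
import random

def make_trials(n_per_cell=2, seed=0):
    """Build a shuffled list of trials balanced across word x side cells."""
    random.seed(seed)
    trials = []
    for word, side in itertools.product(["LEFT", "RIGHT"], ["left", "right"]):
        congruent = (word.lower() == side)
        for _ in range(n_per_cell):
            trials.append({"word": word, "side": side,
                           "condition": "congruent" if congruent else "incongruent"})
    random.shuffle(trials)
    return trials

if __name__ == "__main__":
    for t in make_trials(n_per_cell=1):
        print(t)
```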

Lane Change Test

The goal of this project is to understand driver emotion from a comprehensive perspective and to help emotional drivers mitigate the effects of emotions on driving performance. Our empirical research has shown that happy music and self-selected music can help angry drivers drive better. However, self-selected "sad" music might degrade driving performance. This research has also shown the potential of using fNIRS and ECG to monitor drivers' affective states.

Driving with fNIRS

There have been many studies investigating appropriate auditory displays for takeover requests. However, most of them were conducted through empirical human-subject research. In the present study, we established computational models using the Queuing Network Model Human Processor (QN-MHP) framework to predict a driver’s reaction time to auditory displays for takeover requests. The reaction times for different sound types were modeled based on the results of subjective questionnaires, the acoustical characteristics of the sounds, and empirical results from previous findings. The current models will be expanded to span more factors and different contexts. This study will contribute to driving research and auditory display design by allowing us to simulate and predict driver behavior under varying parameters. This project is conducted in collaboration with Dr. Yiqi Zhang at Penn State University. * The QN-MHP framework represents the human cognitive system as a queuing network, based on several similarities to brain activities. QN-MHP consists of three subnetworks: perceptual, cognitive, and motor.
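As a toy illustration in the spirit of QN-MHP, the sketch below treats total reaction time as the sum of latencies accumulated across the perceptual, cognitive, and motor subnetworks. The server times and the salience adjustment are made-up placeholders, not calibrated model parameters.

```python
# Toy sketch in the spirit of QN-MHP: total reaction time as the sum of latencies
# accumulated through perceptual, cognitive, and motor subnetworks.
# All server times below are made-up placeholders, not calibrated parameters.

SUBNETWORKS = {
    "perceptual": [0.050, 0.040],         # e.g., auditory encoding servers (s)
    "cognitive":  [0.070, 0.090, 0.060],  # e.g., comprehension / decision servers (s)
    "motor":      [0.100, 0.120],         # e.g., movement preparation / execution (s)
}

def predict_reaction_time(sound_salience=1.0):
    """Sum server latencies; a more salient sound shortens perceptual processing."""
    perceptual = sum(SUBNETWORKS["perceptual"]) / max(sound_salience, 1e-6)
    cognitive = sum(SUBNETWORKS["cognitive"])
    motor = sum(SUBNETWORKS["motor"])
    return perceptual + cognitive + motor

if __name__ == "__main__":
    for salience in (0.8, 1.0, 1.5):
        print(f"salience={salience}: RT = {predict_reaction_time(salience):.3f} s")
```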

QNMHP

Sponsored by the Northrop Grumman Undergraduate Research Award, this project utilizes assistive robots in a driving simulator to study the effects of the reliability and transparency of in-vehicle agents on trust, driving performance, and user preference. The objective of this ongoing research is to determine the level of transparency in an AI agent that best optimizes driver situation awareness, increases trust in automation, and secures safe driving behavior.

Two humanoid robots and the driving simulator