Instrument Cluster Design for Level-3 Automation
Human-Machine Interaction
Thesis Topic
How can humans develop and maintain trust in a Level 3 autonomous vehicle?
The Basics
Adapted the design process for future technology, employing varied research methods. Prioritized user involvement, even for technology not yet on the market. Tested diverse instrument cluster designs through immersive sessions. Explored the psychology of human trust in technology through in-depth research.
Details
12 Months | Individual Thesis
The Process

During the initial stages of this research, it became evident that few Level 3 vehicles were available on the market. Since the study had to be grounded in future innovations, existing design methodologies would not readily apply. Working closely with my professor, I formulated the initial plan for our approach.

 

Phase 1: Preliminary Research

- Comprehensive literature review

- Market analysis of autonomous cars available today

 

Phase 2: Conceptualization

- Establishing the information architecture for the new design proposal

- Exploring various design concepts

- Finalizing and developing the design concept

 

Phase 3: User Testing

- Conducting initial design feedback sessions

- Developing a user testing plan

- Recruiting participants

- Conducting usability tests

- Analyzing collected data and presenting findings and discussion

Engaging Users in the Process

In the first phase, after conducting a literature review and analyzing the current market, I gained insights into how information was organized and presented within the instrument clusters of Level 2 and Level 3 autonomous cars.

 

My primary objective at the end of this phase was to envision what an enhanced takeover request (TOR) experience in the instrument cluster should entail. The literature and market analysis alone could not answer this, so I decided to involve potential autonomous vehicle users and seek their perceptions of this emerging technology.

 

To engage users effectively, I designed an activity that deconstructed the various pieces of information expected on the instrument cluster, drawing from previous research on car interfaces and Level 3 autonomous vehicles. Five participants were asked to arrange these elements on a blank instrument cluster image based on their preferences. By overlaying these placements from different participants, patterns emerged, revealing where most users preferred to see specific information.
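To make the overlay step concrete, here is a minimal sketch of how participant placements could be aggregated into a simple preference heatmap; the grid size, element names, and coordinates are illustrative assumptions, not the actual study data.

```python
import numpy as np

# Illustrative placements: for each participant, the (x, y) grid cell where
# they placed a given information element on the blank instrument cluster image.
# Element names and coordinates are hypothetical examples.
placements = {
    "TOR alert": [(6, 2), (6, 2), (5, 2), (6, 3), (6, 2)],
    "lane view": [(6, 5), (6, 5), (6, 6), (5, 5), (6, 5)],
    "map":       [(10, 5), (9, 5), (10, 5), (10, 4), (10, 5)],
}

GRID_W, GRID_H = 12, 8  # assumed 12x8 grid laid over the cluster image

def placement_heatmap(points, width=GRID_W, height=GRID_H):
    """Count how many participants placed an element in each grid cell."""
    heat = np.zeros((height, width), dtype=int)
    for x, y in points:
        heat[y, x] += 1
    return heat

for element, points in placements.items():
    heat = placement_heatmap(points)
    y, x = np.unravel_index(heat.argmax(), heat.shape)
    print(f"{element}: most preferred cell = ({x}, {y}), agreement = {heat[y, x]}/5")
```

Overlaying the five participants' heatmaps in this way surfaces the consensus regions that informed the new information architecture.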

During this design activity, participants also shared their thought processes aloud, offering valuable insights into their expectations for TOR presentation.

Key Findings from Design Activity

1. TOR alerts should be prominently indicated using a distinct red color.

2. Users want a clear view of the current lane and find map information valuable.

3. They prefer to see obstacles around the vehicle within the lane view.

4. Displaying the current and expected actions of the vehicle in autonomous mode is highly desirable.

Information architecture of cars currently on the market

New information architecture after the design activity, accounting for the takeover request (TOR)

Ideation and three distinct designs at the end of Phase 2: Conceptualization
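To show how such an information architecture might be represented during prototyping, the sketch below encodes the cluster content as a simple data model; the field names, groupings, and example values are illustrative assumptions drawn from the key findings above, not the exact schema used in the thesis.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class DriveMode(Enum):
    MANUAL = "manual"
    AUTONOMOUS = "autonomous"

class TORUrgency(Enum):
    NONE = "none"
    LOW = "low"    # e.g., known construction zone ahead
    HIGH = "high"  # e.g., obstacle in the driving lane

@dataclass
class TakeoverRequest:
    urgency: TORUrgency
    seconds_to_respond: Optional[int] = None  # budgeted response time
    reason: str = ""

@dataclass
class ClusterState:
    """Hypothetical content model for a Level 3 instrument cluster."""
    mode: DriveMode
    speed_kmh: float
    lane_view_obstacles: list[str] = field(default_factory=list)  # obstacles shown in the lane view
    current_action: str = ""   # what the vehicle is doing now
    expected_action: str = ""  # what it will do next
    show_map: bool = True
    tor: TakeoverRequest = field(default_factory=lambda: TakeoverRequest(TORUrgency.NONE))

# Example: high-urgency takeover request while in autonomous mode
state = ClusterState(
    mode=DriveMode.AUTONOMOUS,
    speed_kmh=95.0,
    lane_view_obstacles=["parked car ahead"],
    current_action="maintaining lane",
    expected_action="handing control to driver",
    tor=TakeoverRequest(TORUrgency.HIGH, seconds_to_respond=6, reason="parked car obstructing lane"),
)
```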

Immersive User Testing

The insights from Phase 1 led to three distinct designs for a Level 3 autonomous vehicle (AV) instrument cluster, each presenting the same information, including the TOR, but with a different layout. After receiving internal feedback and refining these designs, the next step was user testing, a unique challenge in the process.

 

To evaluate the designs effectively, users needed to be immersed in a dynamic environment that offered more context than static designs could provide. I used video footage of driving scenarios captured from a car simulator in the Sonification Lab at Georgia Tech, then overlaid my designs onto this footage so that the instrument cluster responded in real time to the simulated road conditions. The result was a set of immersive videos that closely mimicked real-world scenarios, making it easy for users to assess the instrument cluster's responses.
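The overlay itself can be produced with standard video tooling; as one possible scripted approach (an assumption, not necessarily the method used in the thesis), the sketch below alpha-blends pre-rendered instrument cluster frames onto the simulator footage with OpenCV. File names, frame paths, and the cluster position are hypothetical.

```python
import cv2
import numpy as np

# Assumed inputs: simulator footage and a matching sequence of pre-rendered
# cluster frames (PNGs with an alpha channel) that already follow the scenario timeline.
SIM_VIDEO = "simulator_footage.mp4"                 # hypothetical file name
CLUSTER_FRAME_PATTERN = "cluster/frame_{:05d}.png"  # hypothetical path
OUT_VIDEO = "immersive_scenario.mp4"
CLUSTER_POS = (50, 400)  # assumed top-left corner (x, y) of the cluster overlay

cap = cv2.VideoCapture(SIM_VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(OUT_VIDEO, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    overlay = cv2.imread(CLUSTER_FRAME_PATTERN.format(frame_idx), cv2.IMREAD_UNCHANGED)
    if overlay is not None and overlay.shape[2] == 4:
        x, y = CLUSTER_POS
        h, w = overlay.shape[:2]
        alpha = overlay[:, :, 3:4].astype(float) / 255.0
        roi = frame[y:y + h, x:x + w].astype(float)
        # Alpha-blend the cluster frame onto the simulator footage.
        frame[y:y + h, x:x + w] = (alpha * overlay[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)
    writer.write(frame)
    frame_idx += 1

cap.release()
writer.release()
```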

I selected three testing scenarios based on my desk research:

1. Baseline: Unrestricted driving with no takeover required

2. Low Urgency: A known construction site ahead, allowing a 20-second response time

3. High Urgency: A parked car obstructing the driving lane, requiring a swift 6-second response

 

Each of the three instrument cluster designs from the ideation phase was tailored to respond to each of these three scenarios, yielding a total of nine immersive videos.

Unpacking Trust in Human-Machine Interaction

Exploring trust is like navigating a labyrinth of emotions and cognition, making it a fascinating subject of research. I was particularly captivated by the intricacies of trust development in humans, especially the unique connection we form with technology. This fascination led me to immerse myself in a world of literature.

 

1. Trust in the system refers to the extent to which drivers are aware of the system's limitations and are able to adapt their usage to accommodate these limitations while still reaping the intended benefits. [7]

 

2. A second paper introduced models of trust, highlighting trust's role in shaping the relationship between trust evaluation, intent formation, and reliance on the automation and its display. [35]

 

3. Another model distinguishes dispositional trust (personal traits), situational trust, and learned trust. The same research finds that system performance, appearance, ease of use, communication style, transparency and feedback, and level of control all influence trust and should inform design. [41]

 

In the context of driving, trust emerged as a critical factor in shaping attitudes toward autonomous vehicles [32]. While grasping the concept of trust was important, evaluating it posed its own set of challenges. Current methods rely on subjective rating scales and continuous measurements. To measure trust in dynamic, context-based automation, I opted for situational trust measurement using the Situational Trust Scale for Automated Driving (STS-AD). STS-AD provides a more in-depth view of how experimental changes affect individual elements of trust, as opposed to a single generalized trust level, and it can be administered repeatedly throughout the study, capturing how trust evolves.
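As a rough illustration of how such repeated situational ratings could be summarized, the sketch below averages per-scenario Likert responses into a single situational trust score per design and scenario; the item labels and numbers are hypothetical and do not reproduce the actual STS-AD items or study data.

```python
from statistics import mean

# Hypothetical 7-point Likert responses (1 = strongly disagree, 7 = strongly agree)
# collected after each scenario video; item labels are illustrative only.
responses = {
    ("Design A", "baseline"):     {"trusted_system": 6, "felt_safe": 6, "would_rely": 5},
    ("Design A", "low_urgency"):  {"trusted_system": 5, "felt_safe": 5, "would_rely": 5},
    ("Design A", "high_urgency"): {"trusted_system": 4, "felt_safe": 3, "would_rely": 4},
    ("Design B", "high_urgency"): {"trusted_system": 5, "felt_safe": 5, "would_rely": 5},
}

def situational_trust_score(items: dict[str, int]) -> float:
    """Average the item ratings into a single situational trust score."""
    return mean(items.values())

for (design, scenario), items in responses.items():
    score = situational_trust_score(items)
    print(f"{design} / {scenario}: situational trust = {score:.2f} / 7")
```

Comparing these per-scenario scores across designs is what allows trust to be tracked as the urgency of the takeover request changes, rather than as one overall rating at the end of the study.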

In conclusion, designing and researching a Level 3 autonomous vehicle system meant overcoming the challenge of limited market availability, which prompted adapted research approaches for enhancing the takeover request (TOR) experience. Engaging users yielded critical insights on TOR presentation, which shaped three instrument cluster designs tested through immersive simulations. Understanding and measuring trust emerged as a pivotal element, addressed through the STS-AD scale and its context-driven, repeated assessment of trust in dynamic automation.

My thesis provides a comprehensive exploration of this topic. It includes an extensive review of the background literature, an in-depth examination of the methods and processes used to create the designs, prototypes, and videos described here, and a detailed account of the data collection process, analysis, and the insights derived from this research.

[7] Khastgir, Siddartha, Stewart Birrell, Gunwant Dhadyalla, and Paul Jennings. "Calibrating trust through knowledge: Introducing the concept of informed safety for automation in vehicles." Transportation Research Part C: Emerging Technologies 96 (2018): 290-303.

[32] Zhang, Tingru, Da Tao, Xingda Qu, Xiaoyan Zhang, Rui Lin, and Wei Zhang. "The roles of initial trust and perceived risk in public's acceptance of automated vehicles." Transportation Research Part C: Emerging Technologies 98 (2019): 207-220.

[35] Lee, John D., and Katrina A. See. "Trust in automation: Designing for appropriate reliance." Human Factors 46, no. 1 (2004): 50-80.

[41] Hoff, Kevin Anthony, and Masooda Bashir. "Trust in automation: Integrating empirical evidence on factors that influence trust." Human Factors 57, no. 3 (2015): 407-434.
