CMI – Interactive Multimodal Cockpit
Evaluation of the relevance of Human-Machine Interfaces to several sensory modalities
The widespread introduction of touch screens has removed the haptic feedback associated with physical buttons and increased the demand on visual resources. The value of using several sensory modalities is to avoid saturating any single one, most often vision: each sensory modality draws on its own attentional reservoir. The challenge is to define the best combinations of human-machine interfaces (HMI) and to test them in different use cases.
Launched in 2018 for a period of three years, the CMI project focuses on the cockpit of the autonomous vehicle. It aims to define the best possible combinations of HMI solutions that engage different human senses (sight, hearing, touch) in order to reduce the driver's cognitive load and make interaction more intuitive.
- Develop and evaluate the relevance of HMI applications using multiple sensory modalities in context;
- Establish sound guidelines for HMI design;
- Explore the potential of a virtual assistant.
|Systems engineering and software design|
Theses supervised within the framework of the project
Thesis #1: Reconfigurable virtual assistant for the interactive multimodal cockpit of an autonomous car (Université de Bordeaux).
Thesis #2: System architecture and trade-off analysis of intelligent glazing for the interactive multimodal cockpit of an autonomous car.