The CMI project focuses on the cockpit of the autonomous vehicle. It will define the best possible combinations of Human-Machine Interaction (HMI) solutions that engage different human senses (sight, hearing, touch) to reduce the driver's visual and cognitive load and make interactions more intuitive.
Focusing on the development, evaluation and orchestration of human-machine interaction (HMI) solutions involving several sensory modalities, the Interactive Multimodal Cockpit (CMI) project aims to reduce the driver's attentional load within the vehicle's perception-action loops. The visual, auditory and haptic modalities will be the main focus of study; the olfactory modality may be considered on occasion.
This research project aims to address new issues and to overcome the scientific barriers associated with the growing number of on-board digital functions and technologies, with a particular focus on the impacts of vehicle automation (levels 1 to 4 of the SAE classification). Transitions between the different levels of automation must also be made effective through the multimodality of interactions, without losing sight of safety, especially for transitions toward more manual driving.