
A multi-modal data fusion and deep learning model for evaluating the driver take-over readiness


Hear from:

Mahdi Rezaei
Assistant Professor of Computer Science
Leader of the Computer Vision Group at ITS
University of Leeds

Released on July 04, 2023

Despite all the technological advances toward Level 3 (L3) automated driving, a driver must still be available at all times to resume control of an automated vehicle in response to a critical take-over request. In such vehicles, an in-cabin smart system, or enabler, should monitor the driver and ensure the driver is ready to safely resume driving control. However, two fundamental and challenging questions remain to be answered in this domain:

How can a driver monitoring system (DMS) accurately understand and interpret the driver's level of readiness or attentiveness using in-cabin sensors?

Can current DMS solutions (eye gaze, head pose) or steering-wheel sensing technology provide sufficient information about the actual state of the driver and occupants?

Accurately understanding and measuring driver readiness is not a trivial task. To achieve a seamless transition between automated driving mode and human driving, we need to look further and develop the next generation of DMS enablers fit for the purpose. Many recent studies confirm that a green light to resume control cannot be issued solely on the basis of eye gaze or steering-wheel sensing. In this session, we discuss a broader view of the requirements for assessing driver readiness, using both vision sensors and human-factors criteria. We also propose a new multi-modal feature fusion and deep learning solution to address one of the current technological challenges in this area.
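The session abstract does not detail the architecture, but a minimal sketch of what such a multi-modal fusion model could look like is shown below, assuming per-frame in-cabin vision features (e.g. eye gaze and head pose) fused with a vector of human-factors signals. All names, dimensions, and design choices here are illustrative assumptions, not taken from the talk:

```python
import torch
import torch.nn as nn

class ReadinessFusionNet(nn.Module):
    """Illustrative multi-modal fusion sketch (not the speaker's actual model).

    Fuses a temporal sequence of in-cabin vision features (e.g. eye gaze,
    head pose per frame) with a static human-factors vector, and regresses
    a driver take-over readiness score in [0, 1].
    """

    def __init__(self, vision_dim=64, factors_dim=8, hidden_dim=128):
        super().__init__()
        # Temporal encoder for the per-frame vision features.
        self.vision_encoder = nn.GRU(vision_dim, hidden_dim, batch_first=True)
        # Small MLP for the human-factors vector.
        self.factors_encoder = nn.Sequential(
            nn.Linear(factors_dim, hidden_dim), nn.ReLU()
        )
        # Late fusion: concatenate both embeddings, then regress readiness.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),  # readiness score in [0, 1]
        )

    def forward(self, vision_seq, factors):
        # vision_seq: (batch, time, vision_dim); factors: (batch, factors_dim)
        _, h_n = self.vision_encoder(vision_seq)  # h_n: (1, batch, hidden_dim)
        fused = torch.cat([h_n.squeeze(0), self.factors_encoder(factors)], dim=-1)
        return self.head(fused)

# Smoke test with random inputs: a batch of 4 clips, 30 frames each.
model = ReadinessFusionNet()
readiness = model(torch.randn(4, 30, 64), torch.randn(4, 8))
print(readiness.shape)  # torch.Size([4, 1])
```

The late-fusion design (encode each modality separately, then concatenate) is only one option; the talk's broader point is that no single signal, such as eye gaze or steering-wheel input alone, is sufficient for a readiness decision.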
