15th – 17th March 2023 | Phoenix, USA
InCabin Phoenix Agenda On-Demand
Please note: Session times and locations are subject to change
Building an immersive car and enhancing the user experience – the road ahead
The presentation will address present and future demands of the automotive in-cabin consumer experience. It will cover in-cabin monitoring – driver and occupant monitoring – explore related functions such as cognitive distraction, stress-free routing, and personalized experiences, and discuss supporting technologies such as artificial intelligence / machine learning and sensors such as cameras and radar.

Dr. Peter Amthor,
Chief ADAS/AD Technology and Innovation Expert,
Harman

David Mitropoulos-Rundus,
Senior Engineer,
Hyundai American Technical Center
Joint Q&A session with Peter Amthor and Dave Mitropoulos-Rundus

Dr. Peter Amthor,
Chief ADAS/AD Technology and Innovation Expert,
Harman

David Mitropoulos-Rundus,
Senior Engineer,
Hyundai American Technical Center

ADAS and AV sensor suite: emerging trends and developments
This presentation will cover TechInsights’ detailed analysis of, and forecasts for, the ADAS and Autonomous Vehicle (AV) sensor suite. It will highlight the challenges facing the industry, which must introduce low-cost safety solutions that comply with NCAP requirements for mass-market vehicles while also developing L2+ systems, expanding autonomous convenience functions on the path to autonomous vehicles, and reducing the cost of the overall ADAS / AD sensor suite.

Mark Fitzgerald,
Director, Autonomous Vehicle Service,
TechInsights Inc.

What’s next for in-cabin? Future outlooks for technology, industry & regulation
How is the industry changing, and how can we prepare for it? Are further industry collaborations the way forward?

Junko Yoshida,
Co-Founder & Editor-in-chief,
The Ojo-Yoshida Report
(Moderator)

Nandita Mangal,
Platform Owner - HMI Vehicle Experience,
Aptiv
(Panellist)

Allen Lin,
Technical Specialist – Driver Monitoring, Night Vision, and Interior Camera Systems,
General Motors
(Panellist)

Caroline Chung,
Engineering Manager,
Veoneer
(Panellist)

Mitigating unsafe driving situations using interior monitoring systems
Interior monitoring can enhance safety, comfort, and convenience for all vehicle occupants. Today these systems can detect driver distraction, signs of drowsiness, and even whether a child has been left behind in the vehicle, and can alert the driver to these critical situations. By using some of the same base signals that underpin the functions above, we may be able to effectively reduce other unsafe driving behaviors, such as driving under the influence of alcohol or driving while subject to sudden illness. These interior monitoring systems will be critical for determining whether a driver is able to perform the dynamic driving task.

Fabiano Ruaro,
Product Manager for Interior Monitoring Systems,
Bosch

Supporting the DMS on existing ECU with extendable AI accelerator module
With the rapid development of cutting-edge DMS technology and its deep learning components, the requirements for computing power and memory size are ever increasing. Most existing ECUs cannot keep pace with this demand because their fixed hardware design is difficult to update, which limits the potential performance of the DMS. By adopting an extendable AI accelerator instead, we can overcome these limitations and unlock the full potential of the DMS. We will showcase how deep learning models need only be transmitted to our extendable AI accelerator module through a USB or PCI-E interface to obtain the inference output, without major changes to your current software design, greatly simplifying system verification and qualification.

Darren Chen,
Head of Automotive Technology Initiative,
LITE-ON Technology Corp.

Body Height, Weight, and Gender Estimation of Vehicle Occupants
To minimize occupant injury during a crash, an intelligent, adaptive occupant restraint system can use information about body height, weight, and gender that is captured by an interior camera typically used for monitoring tasks. The proposed method relies on facial recognition and deep learning to predict the height, build, and gender of each occupant inside the vehicle, and then calculates the body weight of each person from these features to improve their safety individually.

Patrick Laufer,
Development engineer,
IAV Vehicle Safety

InCabin Monocular 3D Sensing: from Current Challenges to Future Opportunities
This session covers the latest techniques, challenges, and opportunities in implementing InCabin monocular 3D sensing using 2D image sensors. Enabling a new category of use cases for enhanced safety and optimized comfort, this session will further cover how efficient inference on AI chips can generate new types of depth-aware data and new monetization models inside this third living space.

Modar Alaoui,
Founder and CEO,
Eyeris

Reliable in-cabin awareness for occupant safety via multi-sensor fusion
Current 2D IR DMS systems and upcoming next-generation 2D RGB/IR OMS systems (≈2026) are expected to rely fully on 2D information. This is a good first step for OEMs: integrating a camera inside the car to monitor head and eye state and alert the driver in case of drowsiness or lack of attention.
Sony believes that in the future, advanced occupant state, posture, and context awareness will be needed to achieve a safer in-cabin for all occupants. Therefore, several sensors need to be fused, not only to monitor the driver but also to understand the activity and behavioral context of the driver and all passengers. With this understanding, active safety systems step up in performance and passive safety systems, such as active restraint systems, can be optimally supported.
In this talk, Sony describes the necessity of sensor fusion, supports this position with research, and shows ways to enable safe in-cabins.

Jan-Martin Juptner,
Business Development Manager – Automotive,
Sony Depthsensing Solutions

Making use-for-good of in-cabin data
Today’s embedded sensor technology and location APIs make the car the perfect incubator for a new standard in infotainment. Trip Lab is developing responsive in-cabin experiences powered by Empathetic AI. Our mission: to create digital health interventions that help drivers feel better when they step out of the car than when they got in. This presentation will showcase real use cases for improved wellbeing in the car, driven by the ability to sense, understand, and support any state of driving.

Alex Wipperfürth,
President,
Triplab

In-cabin applications: towards an intelligent automotive interior
In-cabin applications are becoming more complex, offering more connectivity, monitoring, and personalization. Driver and occupant monitoring are the leading applications due to their inclusion in future regulations. Several technical developments have been observed in 2D and 3D vision for driver monitoring, while for occupant monitoring, radar technology can provide another solution for Tier-1s. But in-cabin applications are not limited to monitoring. Interior lighting is growing rapidly from functional lighting to more personalized lighting and could even be used for communication and safety applications. Air quality monitoring is still in its infancy, but PM2.5 sensors based on the light-scattering principle are starting to be implemented by some OEMs, such as Volvo. In-cabin applications require more sensors and more computing, so the car architecture will be impacted: moving from a distributed architecture to a more centralized one with domain controllers will give OEMs the opportunity to monetize software services.

Pierrick Boulay,
Senior Analyst - Lighting and ADAS systems,
Yole Group

A versatile approach to the cybersecurity of in-cabin image sensors
Connectivity is increasingly being integrated and expanded into newer vehicle models to improve efficiency and safety for people and property. This high level of connectivity poses increasing risks of external attacks on vehicle systems, which automotive manufacturers must protect against. Established cybersecurity methods are being brought to automotive architectures to provide increased security. New-generation in-cabin sensors will need to support cybersecurity features that establish a secure channel between the sensor and host through mutual authentication and provide video stream integrity. However, the level of protection required is not clearly defined, and this lack of a standard demands flexibility in the image sensor's cybersecurity features, which we will discuss.

Charles Kingston,
Senior Imaging Application Development Engineer,
STMicroelectronics

How do lighting technologies and characteristics cross-over with in-cabin safety, comfort and applications?

Philippe Aumont,
General Editor,
DVN Interior
(Moderator)

Federico Pardo-Saguier,
Innovation & Business Development Director,
Valeo

Ryan Winczewski,
Technical Project Lead & Specialist Lighting,
Varroc

Hugo Piccin,
Technology Leader,
Faurecia

Emerging Dynamic Vision Sensor Technology Enhances Driving Safety and Privacy in Smart Cockpit
Driver monitoring systems (DMS) are crucial to driver safety. Traditional DMS use RGB-IR sensors and AI models to detect driver states in real time. However, these systems are sensitive to image illumination, contrast, and dynamic range, and the use of in-cabin cameras can also raise privacy and security concerns. In this session, we present an event-based DMS with dynamic vision sensors (DVS) to overcome the limitations above. We will discuss the properties of DVS and showcase how we integrate DVS into an edge device along with event-based AI models we developed, achieving a robust DVS DMS with strong privacy.

Kai-Chun Wang,
Machine Learning Engineer,
iCatch Technologies

Health and impairment detection through cognitive states monitoring – development and validation challenges
Driver impairment is a growing concern for the automotive industry. This presentation will cover how to develop solutions that detect driver health and impairment through physiological and cognitive state monitoring (e.g., cognitive load, stress). We will discuss the challenges of developing and validating such solutions, covering which data, which sensors, and which ground truth to use. We will also describe relevant in-cabin use cases for such solutions, focusing on safety, well-being, and user experience.

Clémentine François,
Biomedical Engineering Manager,
Tobii AB

The State of Intoxication Research: The Opportunities and Challenges
For as long as cars have existed, drunk driving has been a complex problem in need of comprehensive solutions. Different strategies for mitigating intoxicated driving have been deployed for over a century, yet none of them have proven to be very effective. Critical research is being conducted by government bodies, the automotive industry, technology companies and academia to determine better approaches.
Driven by technical enhancements in Artificial Intelligence and global regulatory efforts, Driver Monitoring Systems are fast becoming a leading human-centered automotive safety system. Initially focused on detecting distracted and drowsy driving, these systems may offer keys to detecting and mitigating driver impairment due to intoxication.
In this presentation we will explore the state of alcohol intoxication research and opportunities to leverage evolving driver monitoring system (DMS) technology to enhance road safety – all drawing from Smart Eye’s existing research collaborations.

Detlef Wilke,
Vice President of Automotive,
Smart Eye

Radar based vital signs monitoring: how it works and why you want to use it
Radar can measure heart rate via exquisite measurement of skin vibration during each heartbeat, and respiration rate by measuring the movement of the chest cavity with each breath. Measuring these vital signs offers a wealth of information beyond what can be gleaned visually with conventional camera-based driver monitoring systems. For example, one may appear visually calm while experiencing great stress, but changes in heart rate and respiration rate reveal what even the most stoic individual is experiencing. Additionally, in-cabin radar can offer child presence detection, occupancy detection, and other features at little to no additional cost. In this talk we will cover the basics of how radar-based VSM works, what features it enables, how it can enhance automotive safety and convenience, and where it beats cameras for in-cabin monitoring.

Harvey Weinberg,
Director Sensor Technologies,
Microtech Ventures

From drowsiness monitoring to sleep prediction: a scalable architecture
A family of IPs named PREDICTS, able to precisely predict a subject’s sleep onset in real time through the analysis of a few selected physiological parameters, has been developed over the years to run on both contact (wearable) and contactless (radar) systems.
A detailed validation phase was run in incremental steps with a significant number of subjects, in overnight tests and on the dynamic vehicle simulator at AVL (Graz).
Since PREDICTS is “sensor agnostic”, a scalable and flexible architecture has been defined to successfully address both aftermarket and OEM applications.
A pilot application combining a camera-based system and a wearable device on a fleet of heavy-duty trucks is in progress, in collaboration with Garmin, which provides the sensing technology. It is among the first validations in the transport/logistics field.
In this talk, we will present the key concepts and preliminary results.

Riccardo Groppo,
CEO & Founder,
Sleep Advice Technologies

Optics still matter in the Future!
The InCabin revolution will expand from the initial focus on DMS to many vision-based applications, and the presentation will take a hands-on approach and discuss future InCabin vision applications through the lens of an actual lens manufacturer. The optical stack is often seen as a commodity nowadays, even though the lens is a crucial driver, differentiator, and innovator for performance. Applying practical optimization and tradeoff strategies on the lens level supports the development of a differentiated camera product that can satisfy regulations and automotive requirements on the one hand and balance costs on the other.

Ingo Foldvari,
Director of Business Development,
Sunex Inc.

Vision based AI for drowsiness detection – a high risk approach to a solved problem
Vision-based AI is a dangerous approach for drowsiness detection. This presentation covers several risks and pitfalls automotive suppliers will encounter, stemming from misconceptions and incorrect assumptions about the nature of drowsiness. We present an alternative approach based on a physiological biomarker for drowsiness derived from eyelid movements. This methodology can easily be incorporated into all vision-based driver monitoring systems today. In addition, we will discuss other neurological biomarkers that can be derived from eyelid movement, such as the onset of Alzheimer’s disease, epilepsy, and neurotoxicity.

Dr. Trefor Morgan,
GM R&D,
Optalert


Per Fogelström,
Principal system architect automotive,
Tobii AB
Drowsiness Prediction Based on an iToF Sensor for iCM Applications
Nowadays most drowsiness detection systems are based on visible drowsiness manifestations (e.g., yawning, closing eyelids) and therefore have a very limited time window in which to act. We present here a system based on breathing that detects drowsiness several minutes before any visible manifestation appears. Breathing is captured through a 3D iToF sensor in the vehicle interior.
The sensor measures the movement of the driver’s chest along the Z-axis, corresponding to the breathing signal. The Thoracic Effort Drowsiness Detection (TEDD) algorithm analyzes this signal to predict, several minutes in advance, whether the driver will become too impaired to drive due to drowsiness.
The proposed DMS shows promising results for breathing signal extraction: compared to a chest band, we obtain a Pearson’s Correlation Coefficient (PCC) of 0.98. For drowsiness prediction, we obtain exactly the same results with both systems.

Brenda Meza García,
Innovation Leader,
Nextium by IDNEO

Optical Design Technology for In-Cabin Imaging System
Today’s automotive technology is advancing at an unprecedented rate. User experience and satisfaction play a critical role in automotive system development, especially regarding in-cabin system requirements. As a result, vehicle imaging systems are moving away from narrow-field-of-view driver monitoring toward a wider field of view (FoV) that captures the entire cabin (occupants and back seat). This means combining the mandatory Driver Monitoring System (DMS) with more modern user experiences, such as occupant identification and video surveillance, in one single sensor that reduces cost and impact on interior design.
In this session we will discuss how an advanced freeform lens with smart pixel management maximises resolution on the driver while capturing the entire cabin in varying lighting conditions. We will show how advanced image processing algorithms, including dewarping, deliver the best pixels for either human or computer vision to maximize application efficiency.

Patrice Roulet-Fontani,
VP Technology & Co-Founder,
Immervision


Adrian Capata,
Senior Vice President, Engineering,
DTS/Xperi

Greg Plageman,
Television Writer/Producer,
DTS/Xperi Guest
What does the future hold for in-cabin, where can technology take us? Imagining the immersive user experience

Junko Yoshida,
Co-Founder & Editor-in-chief,
The Ojo-Yoshida Report
(Moderator)

Colin Barnden,
Principal Analyst – Automotive ADAS and In-Cabin Monitoring,
Semicast
(Panellist)

Prateek Kathpal,
CTO,
Cerence
(Panellist)

Detlef Wilke,
Vice President of Automotive,
Smart Eye
(Panellist)
