20th – 22nd June 2023 | Autoworld, Brussels
InCabin Brussels Agenda
Please note: Session times and locations are subject to change
Check-In for full pass holders only
- Tuesday 20th June
- 9:30am CET
Tutorial 1: Time of Flight Sensing – from working principles to the latest innovations
(For full pass holders only)
- Tuesday 20th June
- 10am CET
- Minerva Room

Albert Theuwissen,
Founder,
Harvest Imaging
Lunch for full pass holders only
- Tuesday 20th June
- 1pm CET
- Exhibition Hall
Tutorial 2: Interplay of Human Factors and Safety for ADAS & Automated Driving
(For full pass holders only)
- Tuesday 20th June
- 2pm CET
- Minerva Room

Siddartha Khastgir,
Head of Verification and Validation, Intelligent Vehicles,
University of Warwick

Dr. Peter Burns,
Chief, Human Factors and Crash Avoidance Research,
Transport Canada
Check-In / Exhibition opens
- Tuesday 20th June
- 4:30pm CET
Roundtable 1: Revolutionizing Driver Safety: Steering Wheel Sensing and Real-Time Personalized Assistance with CardioID's Invisible ECG Technology
- Tuesday 20th June
- 5:30pm CET
- Minerva Room
Roundtable discussions are free-to-attend sessions included with all passes.

André Lourenço, PhD,
CEO – Head of R&D&I,
Cardio ID
Roundtable 2: Use Cases and Opportunity for In-Cabin Sensing with Time-of-Flight Technology
- Tuesday 20th June
- 5:30pm CET
- Minerva Room
Time-of-flight technology is now in the spotlight for several in-cabin sensing applications, either competing with or complementing other technologies.
What are the key use cases, and what unmet needs must be addressed to unlock the potential of time-of-flight technology?

Gualtiero Bagnuoli,
Marketing manager - Optical Sensors,
Melexis
Roundtable 3: Synthetic Human Data
- Tuesday 20th June
- 5:30pm CET
- Minerva Room

Representative,
Seeing Machines

Representative,
Devant
Welcome Reception
- Tuesday 20th June
- 5:30pm CET
- Exhibition Hall

Check-In / Exhibition opens with welcome coffee, sponsored by AutoSens PLUS
- Wednesday 21st June
- 8:00am CET
Opening remarks from the chair
- Wednesday 21st June
- 9am CET
- Mezzanine Stage

Hayley Sarson,
Operations Director,
Sense Media Group
Euro NCAP's Outlook for Occupant Status Monitoring
- Wednesday 21st June
- 9:10am CET
- Mezzanine Stage
In this presentation, Euro NCAP will provide a status update and the next milestones for its Occupant Status Monitoring protocol for 2026.

Adriano Palao,
Technical Manager ADAS & AD,
Euro NCAP
Detected! Now What? Human Factors in the Design and Evaluation of Effective Interventions
- Wednesday 21st June
- 9:40am CET
- Mezzanine Stage

Dr. Peter Burns,
Chief, Human Factors and Crash Avoidance Research,
Transport Canada
The Augmented Cabin: A third space that takes you places
- Wednesday 21st June
- 10:10am CET
- Mezzanine Stage

Adrian Capătă,
SVP, In Cabin Sensing,
DTS / Xperi
Networking refreshment break, sponsored by Optalert
- Wednesday 21st June
- 10:40am CET
- Exhibition Hall

Press Briefing
- Wednesday 21st June
- 11:00am CET
- Classic Lounge
Driver visual attention and readiness in L2/L3 vehicles
- Wednesday 21st June
- 11:30am CET
- Mezzanine Stage
Driving simulator studies have shown that when automated driving at SAE Level 2 is engaged, drivers spend more time looking away from the forward roadway, and less at safety-critical locations such as the rear-view and side mirrors, compared to manual driving. A similar pattern of driver attention allocation is now observed on the real road with L2 automated vehicles. This reduced engagement in the driving task is known to take the human "out of the control loop", leading to a slower resumption of control after a takeover request. Responses to critical events are also slower than in manual driving. As the level of automation in vehicles increases, and the driver is allowed to engage in other (non-driving related) activities, the challenge of keeping them suitably "ready" to take back control from the vehicle becomes even greater.

This talk will provide an overview of a number of studies conducted at the University of Leeds, which have used camera-based driver monitoring systems (DMS) to understand how drivers' visual attention is distributed during different stages of L2 and L3 automated driving, and how this affects resumption of control after a takeover request. The use of such DMS for confirming readiness, before the driver is allowed to take control, is also outlined. Results are discussed in relation to the use of different HMIs for bringing drivers' attention back to the road efficiently, and what must be considered in future developments of in-vehicle sensors to improve driver response and safety during higher levels of automated driving.
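As a concrete, hypothetical illustration of the kind of DMS analysis used in such studies, the sketch below aggregates gaze samples into an attention distribution; the sampling format, area-of-interest labels and data are invented for illustration and are not the Leeds implementation.

```python
# Hypothetical sketch: summarising how a driver's gaze is distributed across
# areas of interest (AOIs) from DMS output. Field names and AOI labels are
# illustrative assumptions, not a real DMS schema.
from collections import Counter

# Each sample: (timestamp_s, aoi_label) emitted by a camera-based DMS at ~60 Hz.
samples = [
    (0.000, "forward_road"), (0.017, "forward_road"), (0.033, "centre_stack"),
    (0.050, "rear_view_mirror"), (0.067, "forward_road"), (0.083, "left_mirror"),
]

def gaze_distribution(samples):
    """Return the fraction of samples spent on each area of interest."""
    counts = Counter(aoi for _, aoi in samples)
    total = sum(counts.values())
    return {aoi: n / total for aoi, n in counts.items()}

def percent_off_road(samples, on_road={"forward_road"}):
    """Proportion of gaze samples away from the forward roadway."""
    off = sum(1 for _, aoi in samples if aoi not in on_road)
    return off / len(samples)

print(gaze_distribution(samples))
print(f"off-road: {percent_off_road(samples):.0%}")
```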

Professor Natasha Merat,
Chair in Human Factors of Transport Systems, Leader, Human Factors and Safety Group,
Institute for Transport Studies, University of Leeds
Data privacy and transparency within driver monitoring – How do we ensure drivers trust their DMS data is safe?
- Wednesday 21st June
- 12:00pm CET
- Mezzanine Stage

Björn Meyer,
Head of Automotive Semiconductor Marketing,
Sony Europe B.V.

Philippe Dreuw,
Chief Product Manager Interior Monitoring Systems,
Robert Bosch GmbH

Gunnar Trioli,
Vice President of Engineering,
Tobii

Moderator:
Professor Natasha Merat,
Chair in Human Factors of Transport Systems, Leader, Human Factors and Safety Group,
Institute for Transport Studies, University of Leeds
Networking lunch break, sponsored by STMicroelectronics
- Wednesday 21st June
- 12:50pm CET
- Exhibition Hall

Driver engagement & take-over readiness: Current understandings and future needs
- Wednesday 21st June
- 2:15pm CET
- Mezzanine Stage
Driver engagement is a topical concept that underpins safety conversations around take-over with assisted and automated driving systems. Today's assisted driving systems, both available and in development, require that the driver maintains responsibility for all aspects of vehicle operation at all times. In practice, many automakers have had a choice between adopting either camera-based monitoring systems or hands-on-wheel sensors in their vehicle designs, but this is changing as Euro NCAP, the European Commission and other bodies recognise the value that camera-based sensing brings to the safety case.

As the capability of automated systems increases, the roles of driver and vehicle will evolve: both will need to operate and interact seamlessly as co-pilots of the vehicle system. Understanding the state of the driver is a critical enabler for safe takeovers, particularly as more advanced systems will have increased scope for choosing when they do or don't transfer control. How is a safe and acceptable driving experience going to be achieved? Existing research will be presented that makes the case for camera-based driver monitoring, focusing on how technology that monitors a driver's ocular movements is more effective at ensuring safety than other approaches. Ultimately, the presentation will argue that camera-based driver monitoring underpins the pinnacle of interior sensing, promising full integration with Advanced Driver Assistance Systems (ADAS) and enabling appropriate interventions that have the potential to dramatically reduce road accidents.

Mike Lenne,
Chief Science & Innovation Officer,
Seeing Machines
Validation testing for a type of Driver Monitoring System with regards to (EU) Regulation 2021/1341
- Wednesday 21st June
- 2:15pm CET
- Minerva Room
Road accidents are responsible for more than 1.25 million fatalities every year (WHO), with more than 90% of cases caused by human error. The implementation of Driver State Monitoring Systems (DMS) can significantly reduce driver errors caused by distraction and drowsiness. The EU General Safety Regulation (GSR) Phase I mandates that all new passenger and commercial vehicles in the EU must have Driver Drowsiness and Attention Warning (DDAW) functionality from 2024, and Phase II requires Advanced Driver Distraction Warning (ADDW) as a mandatory feature from 2026. Although a growing number of DMS are installed in both commercial and passenger vehicles, type-approval with regard to Regulation (EU) 2021/1341 (DDAW) (hereafter "the Regulation") is still new. Recently TÜV SÜD worked with ArcSoft on a validation test for the ArcSoft Tahoe In-cabin Monitoring System (hereafter "Tahoe" or "the DDAW system"). This presentation will describe how the validation test was conducted, along with some interesting results and observations.

Founded in 1994, ArcSoft is a leading algorithm and software solutions provider in the computer vision industry, with applications in automotive and other fields. ArcSoft Tahoe is a camera-based driver monitoring system composed of an automotive-grade, high-performance AI processor, a high-definition camera, and full-featured DMS application software. It is a standalone DMS solution that can be installed at various locations in the cockpit. Tahoe provides both DDAW and driver distraction warning functionality, supports the standard CAN communication interface, and can output DMS results in real time over the CAN connection.

TÜV SÜD Czech was responsible for all activities of the validation testing. The Regulation provides a general description of the validation testing in Annex I, part 2; however, the test methodology had to be defined in detail and applied. It was based on these essential principles:
- Meet the requirements of the Regulation
- Follow a safety strategy to minimize the risk of accidents
- Simple installation of the test tool chain with minimal modification of the test vehicle
- Modular setup of the test tool chain for uniform application regardless of the technology of the system under test and the test vehicle
- Effective execution of the test within the dedicated time slot
- Traceable data recorded during the test for further analysis
- Possibility to evaluate the test results immediately

The tests were carried out at a proving ground as part of the safety strategy, using human participants in accordance with the Regulation, where the self-assessment rating provided by the test driver meets the criteria of the Regulation, Annex I, part 2. The tests were carried out independently of environmental conditions, since the DDAW system is less affected by light conditions. Drowsiness was measured using the KSS in accordance with Annex I, part 2, chapter 5.1, and the test results were evaluated in accordance with chapter 7. The acceptance criteria were applied in accordance with points 8.1 a) and b), with the correction stated in point 8.1 c), as some test runs were longer than 15 minutes. The validation testing confirmed that the camera-based DDAW system is able to monitor driver drowsiness as required by the Regulation, Annex I, part 2, point 1.1, and that the DDAW system meets the requirements set out in the Regulation, Annex I, part 2, points 2 to 8.
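For intuition only, here is a much-simplified sketch of the kind of check a DDAW validation implies: comparing warning times against the driver's KSS self-assessments. The threshold, data layout and pass/fail logic are assumptions; the Regulation's actual acceptance criteria are more detailed than this.

```python
# Illustrative sketch (not the TÜV SÜD tool chain): checking DDAW warnings
# against self-assessed KSS ratings, in the spirit of Annex I, part 2 of
# Regulation (EU) 2021/1341. Threshold and field names are assumptions.
def evaluate_run(kss_ratings, warning_times, drowsy_kss=8):
    """kss_ratings: list of (time_s, kss 1..9) self-assessments during a run.
    warning_times: times (s) at which the DDAW system issued a warning.
    Returns whether a warning was issued by the time the driver first
    reported a KSS at or above the drowsiness threshold."""
    first_drowsy = next((t for t, k in kss_ratings if k >= drowsy_kss), None)
    if first_drowsy is None:
        # Driver never became drowsy: any warning counts as a false positive.
        return {"drowsy": False, "false_positive": bool(warning_times)}
    warned_in_time = any(t <= first_drowsy for t in warning_times)
    return {"drowsy": True, "true_positive": warned_in_time}

run = evaluate_run(
    kss_ratings=[(0, 4), (300, 6), (600, 8), (900, 9)],
    warning_times=[550.0],
)
print(run)  # {'drowsy': True, 'true_positive': True}
```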

Dr. Feng Chen,
Vice President of Automotive Vision Group,
ArcSoft

Karel Jansky,
Functional safety specialist,
TÜV SÜD
In-Cabin Emotion Sensing: What? Why? When? How?
- Wednesday 21st June
- 2:40pm CET
- Mezzanine Stage
- The various use cases for emotion recognition in a vehicle
- The differences between various emotion modalities
- The key technical challenges of developing and deploying visual emotion recognition in a vehicle
- The near-term in-vehicle applications of visual emotion recognition
- Demo/video examples

Dr. Mohammad Mavadati,
Lead Machine Learning Scientist,
Smart Eye
Quantifying near infrared camera performance for increased DMS/OMS performance
- Wednesday 21st June
- 2:40pm CET
- Minerva Room
In-cabin monitoring systems use cameras for fundamental detection of driver and occupant activity. Within the camera, the image sensor is a key component for converting incoming light into a signal to be used by downstream machine vision algorithms. In-cabin cameras use image sensors optimized for near infrared (NIR) light, as this segment of the electromagnetic spectrum can illuminate the driver and occupants without interfering with the primary driving task. Unlike traditional cameras, which are evaluated with tools designed for the visible spectrum, quantifying a camera's NIR performance requires image quality tools optimized for NIR. This talk will describe such tools, which can be used to measure image quality factors including sharpness, noise, distortion, flare, and tonal properties such as dynamic range. We will also describe a novel approach to measuring information capacity and related KPIs (key performance indicators). These measurements can then be used to optimize camera systems to increase the performance of driver and occupant monitoring systems (DMS and OMS).
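As a rough, illustrative sketch (not the Imatest toolset), the following computes simplified versions of two of the factors named above, patch SNR and dynamic range, from synthetic grey-patch data; the formulas and thresholds are assumptions for clarity.

```python
# Rough sketch of two image quality factors, computed from uniform patches of
# a test-chart capture under NIR illumination. Dedicated tools measure these
# against standards; this is a simplification for intuition only.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for cropped grey patches from a chart image (12-bit sensor data).
patches = [np.clip(rng.normal(mean, mean * 0.02 + 2.0, (64, 64)), 0, 4095)
           for mean in (40, 160, 640, 2560)]

def patch_snr_db(patch):
    """Signal-to-noise ratio of a uniform patch, in dB."""
    return 20 * np.log10(patch.mean() / patch.std())

def dynamic_range_db(patches, min_snr_db=0.0):
    """Ratio of the brightest usable patch to the darkest one whose SNR
    still clears a threshold; a simplified dynamic-range estimate."""
    usable = [p.mean() for p in patches if patch_snr_db(p) > min_snr_db]
    return 20 * np.log10(max(usable) / min(usable))

for p in patches:
    print(f"mean {p.mean():7.1f}  SNR {patch_snr_db(p):5.1f} dB")
print(f"dynamic range ≈ {dynamic_range_db(patches):.1f} dB")
```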

Jonathan Phillips,
VP of Imaging Science,
Imatest
Advanced perception technologies enabling adaptive restraint control
- Wednesday 21st June
- 3:05pm CET
- Mezzanine Stage
Driven by the wish for more safety in unavoidable crash events, this presentation will show how intelligent restraint systems can be controlled via occupant perception technologies to increase occupant safety across a wider range of possible crash scenarios, including future test scenarios proposed in the Euro NCAP roadmap 2030.
On the one hand, it will highlight how existing and next-generation restraint actuators can be enhanced by taking advantage of interior and exterior sensor information to reach their full potential. On the other hand, the presentation focuses on camera-based occupant classification systems as a key technology for enabling intelligent restraint systems.
It will outline why and how intelligent restraint systems will become reality within the next few years by taking a closer look at:
- Latest concepts for adaptive restraint actuators
- The requirement framework for occupant monitoring in adaptive restraint systems
- Perception technologies in focus for adaptive restraint systems
- Latest results on body perception based on interior cameras
- An outlook on adaptive restraint control

Philipp Russ,
CEO,
SIMI Reality Motion Systems GmbH

Tillman Herwig,
Product Owner,
SIMI (Part of the ZF Group)
Vehicle Occupant Heart Rate and Respiration Rate Estimation Based on an RGB-NIR Camera
- Wednesday 21st June
- 3:05pm CET
- Minerva Room
Modern vehicles are now fitted with interior cameras that can sense both visible and NIR modalities. Given that an interior camera is already installed for driver monitoring purposes to prevent accidents, there is an opportunity to use it for monitoring the vital signs of occupants. This presentation will cover the latest research in estimating heart rate and respiration rate with a single RGB-NIR camera using deep learning and an optical flow approach. We chose the face as the region of interest for estimating heart rate because it is typically uncovered, allowing the camera to capture subtle variations in color and brightness caused by changes in blood volume due to arterial pulsations. To estimate respiration rate, we used an optical flow algorithm that recognizes chest movements within the video frames and maps them to a respiratory frequency. We tested our approach not only in a laboratory environment but also inside a vehicle with different test subjects, and the presentation will include a statistical evaluation of the proposed method.

A driver's vital signs can predict or indicate critical events, such as sudden heart attacks, strokes, or fatigue, at an early stage. This has the potential to enable a controlled stop of the vehicle before such an event occurs. Another potential use case for in-cabin vital sign estimation is telemedicine, where the vehicle becomes a mobile medical space: a remote doctor can access the interior camera and the patient's vital parameters to diagnose an illness and adjust medication accordingly. Based on our findings, this presentation will discuss whether a single RGB-NIR camera is reliable enough to estimate heart rate and respiration rate inside a vehicle, and identify the potential problem cases associated with this approach.
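A conceptual sketch of the two estimators described follows, assuming a face-ROI intensity trace and a chest-motion trace have already been extracted from the video; the frame rate, frequency bands and signals are illustrative assumptions, not the presenters' pipeline.

```python
# Conceptual sketch: heart rate from a face-ROI intensity trace, respiration
# from a chest-motion trace, each read off as a dominant in-band frequency.
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 30.0  # assumed camera frame rate (Hz)

def dominant_frequency(trace, fs, lo_hz, hi_hz):
    """Band-pass filter a 1-D trace and return the dominant frequency (Hz)."""
    b, a = butter(3, [lo_hz / (fs / 2), hi_hz / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trace - trace.mean())
    freqs, power = welch(filtered, fs=fs, nperseg=min(512, len(filtered)))
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return freqs[band][np.argmax(power[band])]

# Synthetic stand-ins: 72 bpm pulse in the face ROI, 15 breaths/min chest motion.
t = np.arange(0, 30, 1 / FS)
face_roi_mean = 0.02 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(len(t)) * 0.01
chest_flow_y = 0.50 * np.sin(2 * np.pi * 0.25 * t) + np.random.randn(len(t)) * 0.05

hr_bpm = dominant_frequency(face_roi_mean, FS, 0.7, 3.0) * 60   # 42-180 bpm band
rr_bpm = dominant_frequency(chest_flow_y, FS, 0.1, 0.7) * 60    # 6-42 breaths/min
print(f"heart rate ≈ {hr_bpm:.0f} bpm, respiration ≈ {rr_bpm:.0f} breaths/min")
```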

Patrick Laufer,
Development Engineer,
IAV
Networking refreshment break in the exhibition, sponsored by Devant
- Wednesday 21st June
- 3:30pm CET
- Exhibition Hall

3D Time-of-Flight and 60 GHz Radar – how two complementary technologies enable differentiating features, meet regulations and make the car safer
- Wednesday 21st June
- 4:15pm CET
- Mezzanine Stage
In-cabin monitoring systems are becoming more and more important for fulfilling regulations and NCAP rating requirements such as driver monitoring (DMS) and child presence detection (CPD). Additionally, these systems offer huge potential to add comfort, innovation and new services while enhancing passive safety. Learn more about the tremendous progress of 3D Time-of-Flight and 60 GHz Radar as complementary technologies addressing these use cases and acting as key enablers for differentiating features. See the latest examples of how 3D data can fulfill the Euro NCAP 2030 vision towards smart airbags, and why 3D depth data can enrich your driver monitoring system with secure 3D face ID. Get insights into the latest achievements in 60 GHz Radar, a highly optimized and cost-efficient seat occupant detection solution including robust CPD and intrusion alert.
Key takeaways:
- There is no single sensing technology that matches all requirements from regulation and the market
- 60 GHz Radar is a strong match for child presence detection and occupant detection, including intrusion alert functionality
- Secure 3D face authentication enables the highest level of differentiation and the era of truly seamless connectivity
- Depth data is key if camera systems are to enable smart airbags

Martin Lass,
Senior Product Marketing Manager,
Infineon Technologies AG
Designing modern software architecture for smarter mobility
- Wednesday 21st June
- 4:15pm CET
- Minerva Room
With the breakneck pace of innovation in machine learning, AI, sensors and IoT, drivers and passengers have much higher expectations, and OEMs and service providers face massive opportunities and enormous challenges in this brave new future. Customers will evaluate everything with heightened expectations as their digital experiences integrate more AI, and will come to expect, or at least judge, the quality of their in-cabin experience against that background. As software and service architects in the automotive industry with a focus on the in-cabin experience, we have to consider the journey of our customers and cultivate a 360-degree view that integrates data both as a driving force for better products and experiences and as a product in its own right with a lifecycle of its own. There are enormous challenges around storage, classification, privacy and robustness, and, last but not least, around how we apply what we have learned in SaaS and software engineering to a future that mixes software, data, SaaS and autonomous systems, and how we do this in a way that delights our customers without compromising on safety.

Mohamed Sayed,
CEO,
Heuro Labs
Leveraging 3D Information for Automotive In-Cabin Analysis: Technologies, Use Cases and Challenges
- Wednesday 21st June
- 4:40pm CET
- Mezzanine Stage
3D cameras and innovative algorithms have opened up new avenues for analysing the in-cabin behaviour of vehicle occupants. In this talk, we will discuss the different technologies for deriving 3D occupant information in an automotive in-cabin setting, including 3D cameras such as ToF and structured light, as well as 2D cameras paired with novel algorithms that estimate 3D positions. We will explore the benefits of using 3D occupant information for in-cabin analysis and how it enables more accurate tracking of occupants and their behaviour. We will showcase the use cases that benefit from 3D information, specifically: 1. optimized airbag deployment, and 2. user experience and intuitive interaction. Furthermore, we will discuss the challenges of implementing 3D occupant information in an automotive in-cabin setting and how to overcome them, and touch upon the potential for future research and development in this area, such as combining 3D information with other sensor modalities to achieve even greater accuracy and reliability. Overall, this talk will provide valuable insights into the benefits and challenges of using 3D information for automotive in-cabin analysis and how it can be leveraged to improve safety, comfort, and the overall driving experience for occupants.
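As a minimal example of deriving 3D occupant information from a depth camera, the sketch below back-projects a depth map to 3D points using pinhole intrinsics; the intrinsic values and the keypoint are made up for illustration.

```python
# Minimal sketch: back-projecting per-pixel depth (e.g. from a ToF camera)
# to 3D points in the camera frame via the pinhole model. All numbers here
# are invented for illustration.
import numpy as np

def backproject(depth_m, fx, fy, cx, cy):
    """Convert an HxW depth map (metres) to an HxWx3 array of 3D points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.stack([x, y, depth_m], axis=-1)

depth = np.full((480, 640), 1.2)          # flat scene 1.2 m away
points = backproject(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0)

# e.g. a 2D keypoint from a body-pose network can be lifted to 3D:
head_uv = (300, 150)
print("head position (m):", points[head_uv[1], head_uv[0]])
```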

Michael Hoedlmoser,
CTO,
emotion3D
Growth of the number of sensors in cars as growing possibilities to improve user experience
- Wednesday 21st June
- 4:40pm CET
- Minerva Room
Today's cars are equipped with dozens of regulatory sensors, and manufacturers use them to get individual sets of features working. Thanks to the centralization of vehicle architectures and zonal architectures, it is increasingly common to find a symbiosis between vehicle systems, where they benefit from each other and improve each other's performance. This environment makes it possible to parametrize and digitalize the car's movement, describing both the interior and the exterior surroundings of the car, including all passengers.
This presentation will explain:
- How the integration with the vehicle will look and what the options are for OEMs
- A few use cases for games that are likely to be the first ones developed
- How an Uber-like business model can be developed in the automotive in-cabin games space

Piotr Mroz,
Technical Manager,
Varroc
VCSEL Technologies for In-Cabin Sensing Systems
- Wednesday 21st June
- 5:05pm CET
- Mezzanine Stage
There is more and more demand in the automotive market to implement in-cabin sensing systems, such as DMS to detect driver status for road safety and OMS for passenger safety, supported by new regulations globally. Furthermore, more applications need in-cabin sensing, such as face recognition for personalizing settings, making payments or starting/stopping the engine or EV, and gesture recognition for both functions and fun. With this market trend, the technology is also evolving. In this session, I will share the benefits of using VCSELs in DMS/OMS/ICMC and the Lumentum VCSEL technology that can serve those systems in the in-cabin market.

Jenny Kim,
Senior Product Line Manager,
Lumentum
Facing Volatile HMI Requirements with a Modular Software Architecture – Opportunities and Challenges
- Wednesday 21st June
- 5:05pm CET
- Minerva Room
When choosing consumer goods, it is important for today's customers to be able to express their individuality. This social trend has also reached the automotive industry, which is why users can choose from a long catalogue of additional equipment when buying a new vehicle, or even tailor it entirely to their individual preferences. However, the defined range of functions cannot be changed afterwards, unlike a smartphone, which can be constantly adapted to current demands thanks to regular software updates and an almost infinite app store. To meet these customer needs, a highly adaptable HMI system, which users can modify at any time to match changing demands, is beneficial. In addition, new, temporary services and functions should be provided to the customer as an after-purchase offering, allowing for continuous improvement of the user experience. If OEMs pursue the technology strategy of an on-demand customizable HMI system, they can take advantage of new revenue and monetization opportunities and also reduce their product variety by standardizing hardware as much as possible and shaping product variety through after-purchase features.

The realization of such an on-demand customizable HMI system requires over-the-air updates. An enabler for this is the software-defined vehicle, which entails an abstraction of the software from the hardware level. To facilitate the implementation of new software packages, a modular software architecture is favorable, in which individual modules can be replaced or extended. An abstraction of software and hardware not only benefits software updatability; hardware can also be adapted to technological changes for new vehicle models. For example, new sensors for in-cabin sensing can be implemented without adapting the complete software stack, only by extending the corresponding software package.

An approach to such a modular structure is the service-oriented architecture (SoA), in which software is divided into small, self-contained modules (services) that can be updated individually. SoAs are considered one of the key elements for more flexibility, integration of external services and on-demand functions. Each software function is split into a separate service that can be integrated at runtime. This allows services to be modified after completion of the design process, including function updates, replacements and reuse. Beyond improved updateability, an SoA also offers benefits for robustness: if a hardware element fails or resources are insufficient, functions can be moved flexibly to other hardware elements, so greater robustness and reliability can also be achieved. However, the implementation of an SoA also raises new challenges, for example with regard to existing security measures, because dynamic communication is more complex to secure as it changes at runtime.

In this talk, I will discuss the opportunities and challenges OEMs and suppliers face when implementing a modular software architecture to enable easy updateability of vehicles. In addition to technical possibilities and challenges, I will discuss future software architecture changes to meet dynamically changing requirements for HMI systems.
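A toy sketch of the service-oriented idea follows: services resolved by name and replaceable at runtime, so one module can be updated independently of the rest. The registry design and service names are invented for illustration, not a production SoA middleware.

```python
# Toy sketch of a service-oriented structure: self-contained services
# registered by name and swappable at runtime, standing in for an
# over-the-air update of a single module. Names are illustrative.
from typing import Callable, Dict

class ServiceRegistry:
    """Resolves service names to implementations; replacing an entry stands
    in for updating one module without touching the rest of the stack."""
    def __init__(self):
        self._services: Dict[str, Callable] = {}

    def register(self, name: str, impl: Callable) -> None:
        self._services[name] = impl   # replaces any previous version

    def call(self, name: str, *args, **kwargs):
        return self._services[name](*args, **kwargs)

registry = ServiceRegistry()
registry.register("cabin.occupancy", lambda frame: {"seats_occupied": 2})

# Consumers resolve by name, so this "update" needs no change elsewhere:
registry.register("cabin.occupancy",
                  lambda frame: {"seats_occupied": 2, "child_seat": True})
print(registry.call("cabin.occupancy", frame=None))
```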

Laura Fautz,
Consultant,
fka GmbH
The InCabin and DTS/Xperi Networking Party
- Wednesday 21st June
- 5:30pm CET
- Exhibition Hall

Check-In / Exhibition opens
- Thursday 22nd June
- 8:30am CET
A multi-modal data fusion and deep learning model for evaluating the driver take-over readiness
- Thursday 22nd June
- 9:15am CET
- Mezzanine Stage
This presentation introduces a multi-modal deep learning model that fuses multiple input sources: the driver's head pose, hand activity, and upper body pose, captured via multiple in-cabin camera sensors, together with human-factor feeds, to understand the driver's readiness in response to a take-over request.
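A speculative sketch of what such a late-fusion model could look like follows, with one encoder per modality and concatenated embeddings feeding a readiness score; the dimensions, modality names and architecture are assumptions, not the presented model.

```python
# Speculative late-fusion sketch: per-modality encoders whose embeddings are
# concatenated and mapped to a take-over readiness score. Dimensions and
# inputs are invented for clarity.
import torch
import torch.nn as nn

class TakeoverReadinessNet(nn.Module):
    def __init__(self, dims={"head_pose": 6, "hand_activity": 16,
                             "body_pose": 34, "human_factors": 8}, emb=32):
        super().__init__()
        # One small encoder per modality.
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, emb), nn.ReLU())
            for name, d in dims.items()
        })
        self.head = nn.Sequential(
            nn.Linear(emb * len(dims), 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, inputs):
        # Fuse by concatenating the per-modality embeddings (fixed key order).
        fused = torch.cat(
            [self.encoders[k](v) for k, v in sorted(inputs.items())], dim=-1)
        return torch.sigmoid(self.head(fused))  # readiness in [0, 1]

model = TakeoverReadinessNet()
batch = {"head_pose": torch.randn(4, 6), "hand_activity": torch.randn(4, 16),
         "body_pose": torch.randn(4, 34), "human_factors": torch.randn(4, 8)}
print(model(batch).shape)  # torch.Size([4, 1])
```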

Dr. Mahdi Rezaei,
Assistant Professor of Computer Science,
Institute for Transport Studies, University of Leeds
Multi-task learning with Transformers for In-Cabin Monitoring
- Thursday 22nd June
- 9:40am CET
- Mezzanine Stage
In-cabin monitoring in an automotive mobility environment is becoming more and more important due to increasing safety regulations and complicated Human-Machine Interface (HMI) requirements. However, because embedded systems for HMI are usually limited in computational power, algorithms need to be faster and duplicated work should be removed. In addition, the recent trend of integrating with existing systems, such as head unit integration, requires algorithms to be smaller and lighter. In this work, we propose a multi-task learning technique that is suitable for various cabin scenarios. Our model has only one backbone, shared to learn multiple tasks using a transformer decoder. As a result, features such as object detection, key point detection and behavior detection can operate with lower power and computation than the existing approach, in which each algorithm has its own backbone. In addition, the size of the model has been reduced so that it can be easily used for model updates, and the decoder part can be updated separately. Finally, since our network is designed only with operations commonly provided by existing commercial System on Chip (SoC) hardware, the model can be ported without difficulty and can show high efficiency.
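To make the shared-backbone idea concrete, here is an illustrative sketch (not LG's network): one backbone feeds a transformer decoder whose output serves several per-task heads that can be updated independently; all sizes and task names are arbitrary.

```python
# Illustrative multi-task sketch: a single CNN backbone, a transformer
# decoder with learned queries, and lightweight per-task heads.
import torch
import torch.nn as nn

class MultiTaskInCabinNet(nn.Module):
    def __init__(self, d_model=128, n_queries=8,
                 tasks={"object": 5, "keypoint": 34, "behavior": 10}):
        super().__init__()
        self.backbone = nn.Sequential(          # shared feature extractor
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU())
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.queries = nn.Embedding(n_queries, d_model)  # learned task queries
        self.heads = nn.ModuleDict({                     # per-task heads
            t: nn.Linear(d_model, n_out) for t, n_out in tasks.items()})

    def forward(self, x):
        feats = self.backbone(x)                          # B x C x H x W
        memory = feats.flatten(2).transpose(1, 2)         # B x HW x C
        q = self.queries.weight.unsqueeze(0).expand(x.size(0), -1, -1)
        decoded = self.decoder(q, memory)                 # B x n_queries x C
        pooled = decoded.mean(dim=1)
        return {t: head(pooled) for t, head in self.heads.items()}

out = MultiTaskInCabinNet()(torch.randn(2, 1, 64, 64))
print({t: o.shape for t, o in out.items()})
```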

Jungyong Lee,
Vision AI Specialist,
LG Electronics
Revolutionizing Automotive Safety: mmWave Radar – the Future of In-Cabin Sensing and Driver/Passenger Monitoring
- Thursday 22nd June
- 9:40am CET
- Minerva Room
The journey to safe and autonomous vehicles requires an understanding of driver and passenger conditions. Join Pontosense's Co-founder and CEO, Alex Qi, to learn about:
- Why autonomous vehicles require insights into occupant conditions and emotions. Electrification and autonomous driving are cool, but OEMs and Tier 1s are all trying to solve the challenges of in-cabin sensing. How do you tell the car to stop or slow down because you're sick or having a medical emergency if there is no driver?
- Why mm-wave wireless sensing was impossible until now, and how this level of data gives insights into driver and passenger conditions and emotions that have never existed before
- How specific algorithmic and hardware breakthroughs have unlocked the future of in-cabin care, enabling occupant classification, localization, and biometrics wirelessly
- Why mm-wave RF sensors are the only way to reach Euro NCAP adherence for Child Occupant Protection
- What the cost savings are in comparison to other methods trying to solve the same problem
- Where the OEMs are going from here, and how they will use the data and insights mm-wave RF sensors can provide

Georgia Deacon,
Project Engineering Manager,
Pontosense
Agile deep learning development – how often can you iterate on a DL model design?
- Thursday 22nd June
- 10:05am CET
- Mezzanine Stage
The automotive industry comes with a history of long development cycles. There is a large difference between the time from sourcing and SoD to SoP and the product development cycles in other industries. As an example, the smartphone industry has just stretched its commitment to four years of SW updates for phones, and that is for SoCs that are brand new when the phone is launched. In a young and rapidly moving field such as machine learning, where more than 100 new papers are published every day, there is little room for legacy "solutions". Models and architectures get outdated fast, and new breakthroughs can be found every 6 months. When ML applications were introduced in cars, these two worlds collided. For AI engineers to be able to build state-of-the-art products, they cannot be stuck with a toolchain that was decided when the project was sourced, or a model architecture that takes months to update and deploy on the target HW. To stay at the forefront of development, you will need to be agile, open to new architectures and methods, and follow the development of tools and compilers. Data may be the raw material that makes up DL applications, but that material needs to be shaped and handled in the right way to produce a competitive product.

Finding an optimal model architecture for a particular task, such as segmentation or object classification, requires that multiple parameters are considered. Besides the more obvious requirements on performance and the size of the input, there are other parameters to consider, such as the amount of data available for training and what is supported by the toolchain you are using. But most important is the HW you are going to deploy the model on. There is a huge difference in how well different processors and accelerators handle certain operations. As state-of-the-art models increase in size, so does the cost of training them. Finding the correct architecture in the traditional way requires a lot of trial and error. Typically, a DL engineer uses a known model architecture as a starting point and then tries to modify it to contain only operators that are supported by the hardware. The model also needs to be modified to fit the available compute on the SoC. To know the true outcome of each experiment with the architecture, the model must be trained and then evaluated. Training the complex models used today takes time and is expensive, so there is a lot to gain by replacing the trial-and-error approach. To get a more agile workflow:
- Reduce the deployment effort for your model: training and deployment loop < 24h
- Approximate and simulate accuracy and latency: experiment without running the full loop
- Update the HW toolchain throughout the project: adding support for new operators
- Minimise the effort required to update the model architecture: it should be possible to adjust the architecture biweekly
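One of the points above, checking a candidate architecture against the operators a target SoC toolchain supports before paying for training, can be sketched as follows; the supported-operator set is a made-up example, not any real toolchain's list.

```python
# Sketch: flag layers a (hypothetical) target SoC toolchain cannot compile,
# before any training budget is spent. The supported set is invented.
import torch.nn as nn

SOC_SUPPORTED_OPS = {nn.Conv2d, nn.ReLU, nn.BatchNorm2d, nn.Linear, nn.MaxPool2d}

def unsupported_layers(model: nn.Module):
    """Return layer names whose types the target toolchain cannot compile."""
    return [name for name, m in model.named_modules()
            if not list(m.children())              # leaf modules only
            and type(m) not in SOC_SUPPORTED_OPS]

candidate = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.GELU(),
    nn.Flatten(), nn.LazyLinear(10))

print(unsupported_layers(candidate))  # flags GELU, Flatten, LazyLinear
```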

Peter Kristiansen,
Head of Business Development,
Embedl
UWB radar for in-cabin sensing
- Thursday 22nd June
- 10:05am CET
- Minerva Room
Compared to narrow-band radio technologies such as Bluetooth, ultra-wideband (UWB) radio can provide centimetre and even sub-centimetre ranging accuracy, as it uses nanosecond pulses to obtain precise time-of-flight information. Moreover, by properly designing the sequence of nanosecond pulses as specified in the latest IEEE 802.15.4z standard, UWB radio can provide secure distance bounding. Leveraging this unique nature of UWB, the CCC (Car Connectivity Consortium) has further introduced UWB into the automotive market. Nowadays, UWB radio modules (aka anchors) are already deployed in premium cars to provide secure keyless entry solutions from different OEMs. The next emerging use case for UWB is to enable radar functionality by essentially reusing the UWB radio design; the IEEE 802.15.4ab task group, the successor standard to 802.15.4z, specifies the UWB radar functionality.

IMEC has more than 15 years' track record in ultra-low-power wireless IC and system design, and its UWB transceiver design achieves ten times lower power consumption than other existing products in the market. This presentation will reveal IMEC's R&D output on UWB radar for in-cabin sensing. Specifically, we will report the latest results of IMEC's UWB radar hardware and algorithmic designs for breathing detection and gesture recognition. By reusing the in-car UWB anchors as radars, we will demonstrate that robust breathing detection can achieve the child presence detection (CPD) requirements defined by Euro NCAP. In addition, we will show hand movement classification results using the UWB radar sensor and machine-learning-based processing. Such developments will shed light on how UWB radars can be used to enable other in-cabin sensing applications. Finally, we will share the future R&D roadmap for UWB radar, including next-gen HW and algorithmic design.
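As a simplified illustration of radar-based breathing detection (not IMEC's design), the sketch below picks the strongest range bin and reads the breathing rate from the slow-time phase; the frame rate, band edges and synthetic data are all assumptions.

```python
# Simplified sketch: breathing-rate extraction from UWB radar frames by
# finding the dominant slow-time frequency in the breathing band.
import numpy as np

FRAME_RATE = 20.0  # assumed radar frames per second (slow time)

def breathing_rate_bpm(frames, lo_hz=0.1, hi_hz=0.7):
    """frames: (n_frames, n_range_bins) complex UWB channel impulse responses.
    Returns the estimated breathing rate in breaths per minute."""
    # Target bin = strongest reflection (assume it contains the occupant).
    bin_idx = int(np.argmax(np.mean(np.abs(frames), axis=0)))
    # Phase over slow time tracks millimetre-scale chest displacement.
    phase = np.unwrap(np.angle(frames[:, bin_idx]))
    spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
    freqs = np.fft.rfftfreq(len(phase), d=1 / FRAME_RATE)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return float(freqs[band][np.argmax(spectrum[band])] * 60)

# Synthetic occupant breathing at 18 breaths/min (0.3 Hz) in range bin 12.
t = np.arange(0, 60, 1 / FRAME_RATE)
frames = (np.random.randn(len(t), 64) * 0.05).astype(complex)
frames[:, 12] += np.exp(1j * 0.8 * np.sin(2 * np.pi * 0.3 * t))
print(f"estimated: {breathing_rate_bpm(frames):.1f} breaths/min")
```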

Peng Zhang,
Program Manager,
IMEC Netherlands
Networking refreshment break in the exhibition, sponsored by Infineon
- Thursday 22nd June
- 10:30am CET
- Exhibition Hall

A vision for in-cabin health monitoring: putting the science back into data science
- Thursday 22nd June
- 11:20am CET
- Mezzanine Stage
In the future, occupant monitoring systems will provide low-cost health screening to hundreds of millions of people. Vehicles will screen for physiological and neurological conditions, and provide health practitioners with a wealth of data to aid proactive diagnosis. In the coming years the automotive industry will make health screening ubiquitous.
The automotive industry needs to be careful to employ the correct approach to bring this vision to life. Two broad approaches commonly used to build a feature into occupant monitoring systems will be presented: minimum viable product versus scientifically valid product. While the former approach is best suited to comfort features, the latter is required for health monitoring.
If the goal is to measure features such as health conditions, cognitive impairment, or attentiveness, the starting point should be to consult with domain experts and understand the problem space. Data scientists will then have clear guidance on the current state of the science and how best to translate that into an accurate, meaningful, and scientifically valid data model. If the industry adopts the minimum viable product approach and skips this critical step, it will hit an upper limit of accuracy, resulting in misclassifications. This compromises the integrity of all later steps because of an erroneous foundation.
This talk advocates the scientifically valid product approach, with practical examples of partnering with institutes and domain experts to conduct clinical studies.

Simon Block,
Chief Technology Officer,
Optalert
Case Study: Real-Sim Data Fusion
- Thursday 22nd June
- 11:20am CET
- Minerva Room
Real-world data is crucial for in-cabin sensing because it provides information about the actual conditions and situations that a sensor may encounter, whilst simulated data allows for controlled and repeatable testing in a variety of scenarios that may be difficult to replicate in real-world testing. Combining real-world in-cabin data and simulated scenarios can be done through a process called data fusion. Once the real-world in-cabin data has been merged with the simulated scenarios, the dataset can be analyzed to gain insights and develop solutions, providing a more comprehensive understanding of vehicle performance and driver behavior.
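One plausible, simplified reading of that fusion step is pooling real and simulated samples into a single domain-tagged dataset, as sketched below; the structure, mixing ratio and field names are illustrative assumptions, not the presented method.

```python
# Sketch: pooling real and simulated samples into one training set, tagged
# by domain so performance can later be broken out per domain.
import random

real = [{"frame": f"real_{i:04d}.png", "label": "child_present"} for i in range(100)]
sim = [{"frame": f"sim_{i:04d}.png", "label": "child_present"} for i in range(400)]

def fuse(real, sim, sim_fraction=0.5, seed=0):
    """Mix real and simulated samples at a chosen ratio, keeping domain tags."""
    for s in real:
        s["domain"] = "real"
    for s in sim:
        s["domain"] = "sim"
    n_sim = int(len(real) * sim_fraction / (1 - sim_fraction))
    rng = random.Random(seed)
    dataset = real + rng.sample(sim, min(n_sim, len(sim)))
    rng.shuffle(dataset)
    return dataset

dataset = fuse(real, sim)
print(len(dataset), dataset[0])  # 200 mixed samples, each tagged by domain
```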

Jukka Korpi,
Senior manager,
Appen