Innovation
What are the latest advancements in ADAS and AD technology, and who is driving change? Hear from OEMs, Tier 1s, and research institutes for a holistic industry overview.
Safety
Safety continues to be central to our discussions as we work towards a safer future for drivers, passengers, and pedestrians, including a focus on data privacy, protection, and cybersecurity.
Regulation
Ensuring vehicles comply with safety and privacy regulations continues to challenge the industry. AutoSens provides a platform to help shape these conversations.
Key Topics
Thermal Imaging
Seeing the Unseen
Thermal imaging is not just for sci-fi anymore. It’s an indispensable tool for enhancing safety, especially in adverse conditions like low visibility or extreme weather. We will showcase the latest in thermal imaging technology, discuss its applications, and explore how it’s being integrated into the automotive ecosystem.
AI for AVs
Paving The Way For Autonomous Vehicles
With a whole track dedicated to AI, we will delve into the latest breakthroughs and their application in the automotive world: from machine learning to neural networks, we’ll explore how AI is transforming the ADAS and AV industry.
Sensor Fusion
The Power Of Integration
Sensor fusion is the art of harmonising data from diverse sensor technologies, and there is much debate around the trade-offs of the various approaches to achieving it, including latency, cost, and performance. AutoSens USA will unravel the intricacies of sensor fusion and its role in building comprehensive, real-time perception for vehicles.
Infrastructure, V2X, and Connectivity
Creating A Seamless Ecosystem
The modern vehicle is more than just a mode of transportation; it’s a hub of connectivity. V2X (Vehicle-to-Everything) communication and advanced infrastructure are pivotal for enabling smart transportation systems. We’ll examine the key players, standards, and innovations, hearing from key stakeholders across this multi-disciplinary challenge.
EE Architectures
The Backbone Of Sensing Technology
Electronic and electrical (EE) architectures form the foundation of automotive sensing systems. Understanding and optimising these architectures is vital for the development of advanced safety features and autonomous driving. Dive deep into the complexities and innovations surrounding EE architectures in the automotive world.
Event-Based Sensors
Rethinking Sensing Technology
Traditional sensors have limitations, but event-based sensors are changing the game. These sensors operate on the principle of event-driven data transmission, drastically improving efficiency and response time. Join us to discover how these sensors are redefining automotive perception.
Also on the agenda...
- Market forces & Horizon Scanning
- Camera Technology
- Computer Vision
- Developments in LiDAR
- Windscreen Technology
- Data Simulation & Validation
- Deep Learning & Machine Learning
- Whole of Life Design – Environmental Responsibility/Materials
- Optics and Image Quality
- Park Assist and Low Speed Maneuvering
- Developments in RADAR
- Regulatory Updates
- Safety
- Standards & Evaluation
Shaping Tomorrow's Automotive Sensing Landscape, One Discussion at a Time
In the heart of Detroit, where the automotive industry has been a driving force for innovation for over a century, the newly co-located AutoSens and InCabin USA 2024 conferences are set to take center stage from May 21st to May 23rd at the iconic Huntington Place. As the world’s premier automotive sensing technology event, we are excited to announce six of the pivotal themes that will steer the conversations on the AutoSens agenda and shape the future of automotive sensing technology.


Technical Tutorials
Only available to Full Pass holders, our tutorials are the perfect opportunity to dive deep into core industry topics, interact with industry leaders, collaborate with like-minded engineers, and elevate your expertise by tackling tough questions and broadening your skill set.
Previous topics have included camera technology, noise factor analysis, the SOTIF challenge, and the Spatial Recall Index. Watch this space for 2024’s topics, to be announced soon.
Mastering the SOTIF Challenge

Jann-Eve Stavesand
Head of Consulting
dSpace
The focus of the Tutorial is on new, promising methods in the automotive industry that can be used during the development and validation of complex E/E systems to solve the key challenge of automated and autonomous systems.
Addressing the Safety of the Intended Functionality (SOTIF) challenge in practice involves implementing specific measures and techniques to ensure the safe operation of complex systems. SOTIF focuses on the hazards that arise due to a system’s intended functionality rather than traditional safety concerns related to malfunctions or failures.
Established methods for analysis, design, and testing, such as FMEAs, FTAs, V-models, prototype vehicles, and hardware-in-the-loop tests, must be sensibly supplemented with new approaches in automotive engineering to ensure the safety of these highly complex systems. Today, testing is already scenario-based, and simulation models are increasingly finding their way into series-production assurance. However, the experience of the last few years shows that many of the new methods have not yet been integrated into series development in a meaningful way. This is mainly because today’s standards and regulations increasingly contain abstract requirements but offer little guidance on concrete implementation.
The Tutorial will briefly discuss the interdependencies between ISO 26262 and ISO 21448 and, above all, link them to type-approval-relevant topics and regulations: bringing driving functions safely to market has become a complex process in which procedures applied in this context for the first time must increasingly be taken into account. The Tutorial will present the holistic picture while also examining individual aspects in detail.
A brief overview of the content:
- New approaches for system analysis/risk analysis (also applicable to AD applications)
- Seamless link with testing concepts and V&V strategies
- Quantitative assessment of safety
Noise Factor Analysis for Automotive Perception Sensors

Prof Valentina Donzella
Professor, Intelligent Vehicles Sensors,
University of Warwick
One of the main challenges in achieving safe and reliable assisted and automated driving (AAD) functions is how to design them so that they cope with the unavoidable measurement uncertainty and degradation of perception sensor data in dynamic, ever-changing, and noisy driving scenarios. A remarkable amount of research and development effort in industry and academia is still focused on optimisation of the sensor suite, trying to bring hardware and software innovation to mitigate sensor weaknesses and the emergence of unexpected corner cases.
The challenge becomes even more impactful considering that all the tests needed to support safety cases cannot be carried out in the real world; instead, a proper and balanced mixture of accurate simulations, x-in-the-loop (X-i-L), and real-world testing needs to be designed and evaluated to support timely development of AAD. In this context, sensor models have a pivotal role, as they need to be able to mimic real sensors’ outputs with a fit-for-purpose level of fidelity.
To ensure known sensor strengths and weaknesses are evaluated, it is critical to test at all levels (from pure simulation to the real world), with the ability to include noise factors and their effects on the sensors’ outputs.
The aims of this tutorial are twofold:
- Raise awareness of structured tools and techniques that can support a thorough noise factor analysis of automotive perception sensors. To this end, the tutorial will present WMG’s framework for carrying out a breakdown analysis of the noise factors affecting AAD perception sensors. Applications of the framework to camera, LiDAR, and RADAR will be presented, in combination with the developed noise models (see the illustrative sketch after this list). Moreover, the tutorial will give an overview of the state of the art in perception sensor modelling, followed by some evaluation techniques.
- Enable a wider discussion in the sensor community on noise factors, in particular challenging the ‘status quo’ to understand what is missing, which noise factors have the highest impact, and what can be mitigated, and to provide future directions for research and development.
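As a purely illustrative sketch (not taken from the tutorial materials), injecting a single noise factor into camera data and quantifying its effect could look like the following in Python, assuming a toy fog model and a simple RMS-contrast measure:

import numpy as np

def apply_fog_noise(image, attenuation=0.6, airlight=200.0, sigma=5.0):
    """Toy noise factor: pull pixel values towards a bright 'airlight'
    level (a crude fog approximation) and add Gaussian sensor noise."""
    img = image.astype(np.float64)
    foggy = attenuation * img + (1.0 - attenuation) * airlight
    foggy += np.random.normal(0.0, sigma, size=img.shape)
    return np.clip(foggy, 0, 255).astype(np.uint8)

def contrast_drop(clean, noisy):
    """Fraction of RMS contrast lost after noise injection."""
    return 1.0 - noisy.astype(np.float64).std() / clean.astype(np.float64).std()

In a real noise-factor framework the injected effects, their parameter ranges, and the downstream metrics (e.g. detection performance rather than raw contrast) would be defined systematically per sensor modality.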
Automotive cameras: Typical Properties that need Characterisation and Calibration

Uwe Artmann
CTO
Image Engineering
Automotive cameras are a safety-critical component in the sensor stack of a car. Therefore, you want to make sure that each and every camera installed in a car works as intended, that the user is satisfied, and that the machine vision algorithms can work as expected.
In this tutorial we look at camera properties that, in most systems, vary enough to warrant a closer look during the validation phase, and for some even at an end-of-line test station. We discuss items that are typically characterised and result in a pass/fail decision, as well as properties that are part of a calibration process, to make sure that the signal from the camera, and thus the input into a machine vision system, is well defined and shows only small variation. These include spatial frequency response, noise, relative illumination, colour processing, geometric calibration, and more.
We also review existing international standards and best-practice procedures so that all attendees gain a good understanding of the metrics used and the caveats connected to them.
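As an illustrative sketch only (not the tutorial’s actual procedure), an end-of-line style relative-illumination check on a flat-field capture might look like this in Python, with an arbitrary threshold standing in for a real pass/fail specification:

import numpy as np

def relative_illumination_check(flatfield, min_ratio=0.4, patch=32):
    """Compare mean corner brightness of a flat-field capture to the
    image centre; `min_ratio` is an illustrative limit, not a standard."""
    h, w = flatfield.shape[:2]
    img = flatfield.astype(np.float64)
    center = img[h // 2 - patch:h // 2 + patch,
                 w // 2 - patch:w // 2 + patch].mean()
    corners = [
        img[:patch, :patch].mean(), img[:patch, -patch:].mean(),
        img[-patch:, :patch].mean(), img[-patch:, -patch:].mean(),
    ]
    ratio = min(corners) / center
    return ratio >= min_ratio, ratio

Characterisations such as spatial frequency response or geometric calibration follow the same pattern of measuring against a defined target, but with considerably more involved metrics.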
Spatial Recall Index — Where is the Performance?

Prof. Dr. Alexander Braun
Professor of Physics
Dusseldorf University of Applied Sciences
Typical AI performance metrics like mAP or precision/recall values and curves are aggregated over a whole dataset. As we are interested in the influence of optical quality on the performance of AI algorithms, such as object detection or instance segmentation, we need to go one step further and spatially resolve this performance: any optical system will exhibit varying optical quality over the field of view, i.e. in the corners a typical automotive camera will be less sharp than in the middle of the image. But is traffic sign detection performance influenced by whether the traffic sign is in the middle of the image or at the edge? What about pedestrian detection? These are the questions we try to answer, and to do so we have developed a novel metric called the Spatial Recall Index (SRI) and, more recently, the Generalized Spatial Recall Index (GSRI). Our metric delivers a heat map of the performance of a given AI algorithm at the resolution of the images in the original dataset. Now we can actually see where in the image a certain performance occurs, and relate it to the optical performance and to the distribution of objects (and object sizes).
In this tutorial we will go in depth and explain the maths behind this novel metric, and demonstrate application use cases. Ample room for discussion will give each participant the chance to make sure that all the details are clear.
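For readers unfamiliar with the idea, a coarse-grid approximation of a spatially resolved recall map might look like the sketch below. This illustrates the concept only; it is not the SRI/GSRI definition (which works at image resolution), and matching detections to ground truth is assumed to have happened elsewhere:

import numpy as np

def spatial_recall_map(gt_boxes_per_image, detected_flags_per_image,
                       image_shape, grid=(32, 32)):
    """Per grid cell: detected ground-truth objects / all ground-truth
    objects whose centre falls in that cell. Boxes are (x1, y1, x2, y2)."""
    h, w = image_shape
    gh, gw = grid
    hits = np.zeros(grid)
    totals = np.zeros(grid)
    for boxes, flags in zip(gt_boxes_per_image, detected_flags_per_image):
        for (x1, y1, x2, y2), detected in zip(boxes, flags):
            cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
            gx = min(int(cx / w * gw), gw - 1)
            gy = min(int(cy / h * gh), gh - 1)
            totals[gy, gx] += 1
            hits[gy, gx] += bool(detected)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(totals > 0, hits / totals, np.nan)  # recall heat map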
Industry Trends
- The 'chiplets' trend: impacts for hardware design, data processing, and system architecture.
- Low-speed automation: the latest sensor innovations for parking applications.
- Supply chain dynamics: how you will be impacted by the changing ways companies are working together.
- Robust and redundant sensing: how to make your sensing system bullet-proof.
Featured Presentations
Future-proof connectivity for imaging sensors

Christoph Gollab
Network Architect
BMW
Imaging sensors play an ever-larger role in modern cars. This presentation will outline the most important questions to ask when defining and selecting the communication technology for systems using imaging sensors. It considers short-term tasks as well as long-term trends for which the foundations are being laid now.
Experimental assessment of thermal cameras in automatic emergency braking applications
Do Deep Neural Networks dream of Bayer data?

Pak Hung Chan
Project Engineer, WMG
University of Warwick
This talk presents our investigation into using Bayer data instead of 3-channel processed RGB images with deep neural networks (DNNs). The amount of data captured by HDR, high-resolution automotive cameras is tremendous. However, is the data format currently used optimal for the intended use case? Camera data can be processed with traditional computer vision methods or DNN algorithms. Often, these algorithms consume 3-colour-channel data (Red-Green-Blue), generated from the raw/Bayer data captured by the image sensor. By working with Bayer data, it is possible to reduce the amount of transmitted data, the power required, and processing time. However, there is currently very little work on optimising DNNs for single-channel inputs, and there are no complete or fully annotated public Bayer datasets for assisted and automated driving functions. Interestingly, preliminary investigation demonstrates that current DNNs can be used with raw data without significant performance degradation, and can possibly be further optimised and tuned to achieve even better results. Additionally, different methods to convert (with minimal modifications) existing datasets into different Bayer formats are discussed. In this way, big curated datasets can be re-used for creating Bayer-based deep learning models.
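As a hedged illustration of one possible dataset conversion (assuming an RGGB pattern and simple subsampling; the conversion methods discussed in the talk may differ), an RGB image can be turned into a single-channel Bayer mosaic as follows:

import numpy as np

def rgb_to_bayer_rggb(rgb):
    """Approximate an RGGB Bayer mosaic from an H x W x 3 RGB image by
    keeping, at each pixel, only the channel the colour filter array
    would have sampled there."""
    h, w, _ = rgb.shape
    bayer = np.zeros((h, w), dtype=rgb.dtype)
    bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return bayer  # single-channel input for a Bayer-based DNN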
How latest sensor and computing developments are enabling automated driving

Pierrick Boulay
Senior Analyst – Lighting and ADAS Systems
YOLE Intelligence
This presentation will give insights into the latest developments in camera, radar, and LiDAR sensors, together with computing developments, and how they are enabling new eyes-off applications. The presentation will give an overview of how sensors, computing, and E/E architecture are linked for automated driving. We will also present our forecast for the sensor and computing market.
Almost Lossless Compression of Noisy Images

Prof. Dr. Bernd Jähne
Vice President EMVA
Chair EMVA 1288
Senior professor IWR Heidelberg University
The Road to Autonomous Cars: Technology, Ecosystem, and Market Review

Michele Richichi
Principal Analyst – Europe
Supply Chain & Technology
Autonomy (ADAS & Autonomous Driving) S&P Global Mobility
Researching autonomy at model-level detail and diving into the data, we see a few clear and insightful trends emerging:
- Even as Level 0 ADAS becomes increasingly standard, not only due to regulation (e.g. the GSR in the EU) but also because competition in the industry is becoming much fiercer, L2 and L2+ automation will surpass 50% market share by the end of the decade.
- Over that time, the average value of autonomy content per vehicle (revenues in terms of sensor hardware and application software) will triple!
- L2+ and L3 automation will remain optional due to the high costs, but will help OEMs recoup the cost of standardizing ADAS.
- L4 autonomy hype has subsided, but it remains a long-term target: it features a lot of content per vehicle (very attractive for many suppliers), serves as a key demonstrator of the most advanced technology in the market, and remains a high-interest project for many in the industry.
- We will showcase how not only L2-and-above vehicles but also EV platforms feature a higher density of features and autonomy sensor content: cameras, radar, as well as LiDAR and domain controllers.
- The dynamic lidar market is shifting in favour of Greater China, at least early on, while global Tier 1 suppliers continue to dominate market share in high-volume camera and radar.
Why Automotive Image Sensors Need to be Cyber-Secure

Ludovic Rota
Product Marketing Manager
onsemi
Image sensors act as the eyes of the vehicle and are used in several ADAS functions, such as lane departure warning, pedestrian detection, and triggering emergency braking. They also provide input to the fusion system for decision-making to assess the car’s environment and can even be used to monitor driver behavior. In the future, they will assist with identifying and authenticating the car’s users and monitor their vital signs to enable the onboard computer to assume control if the driver becomes incapacitated. It is therefore imperative that image sensors remain functional, especially in the most extreme situations an automobile can encounter.
Automate The Last Stretch Of Daily Driving

Dong Chen
GM
Zongmu Technology Germany GmbH
"Lighting Up the Dark: Next-Gen Thermal Imaging Optics for Affordable Nighttime Pedestrian Detection in Cars"

Bendix De Meulemeester
Director Marketing & Business Development
UMICORE
LYNRED
This presentation explores the influence of optical design on the cost, complexity, and data fusion abilities of thermal imaging cameras used in AI-driven Pedestrian Automatic Emergency Braking (PAEB) systems. Emphasizing the importance of thermal imaging for road safety, particularly for detecting vulnerable road users (VRUs) in different lighting conditions, the study presents a framework to examine the interplay between field of view, f-number, sensor resolution, and pixel pitch, and their collective effect on thermal image quality, optical assembly complexity, and cost for fusion with other sensors. The findings suggest that by optimizing these parameters, it’s possible to enhance VRU detection, manage optical assembly complexity, and significantly decrease system costs. The paper argues that thermal imaging cameras for PAEB fusion systems can be produced much more cost-effectively than current thermal imaging-based night vision systems for cars.
Enhancing AI Perception: A New Strategy for Data Collection in AI Perception Development

Florens Gressner
CEO
Neurocat
ADAS and AD functionalities are built around AI-based perception components. To be safely deployed, these must reliably perform across the Operational Design Domain (ODD).
However, challenges remain in collecting the right data for AI training and testing. Data collection and enrichment processes should concentrate on scenarios representing weak spots of AI perception models. This implies a need for a feedback loop from AI development to data collection. This talk will discuss solutions to these challenges, derived from joint projects with OEMs and insights from developing the AI safety validation tool aidkit, which facilitates ODD-based performance analyses of perception models with the help of diverse data augmentation techniques.
A novel method for testing lidar sensors

Asish Jain
Solution Planner
Keysight
Lidar is often cited as imperative to achieving the dream of autonomous driving. Lidar provides unique advantages such as high-resolution point clouds, 3D mapping, and effectiveness in low light. To test lidar in the design verification and manufacturing stages, physical target boards of various sizes and surface reflectivities are placed at different distances, from a few meters to a few hundred meters. Evidently, this type of test setup requires significant floor space and is cost-intensive to scale for high-volume manufacturing tests.
In this paper, a novel test methodology based on electronic target simulation for testing lidar range, reflectivity, and probability of target detection will be introduced. The paper aims to provide a deeper technical understanding of the principle of operation of this new lidar-target-simulation-based test method and to elaborate on its key benefits for lidar makers. The method can be used to test both mechanically rotating and solid-state time-of-flight lidars. Since the target is generated electronically, the target distance and reflectivity can be swept within a defined range. This capability, coupled with innovative point-cloud analysis, provides additional insight into lidar performance and can be used by researchers to further improve lidar design. Another important condition to which lidars are subjected is interference: this novel method enables users to add an external light source, or even another lidar sensor, to the wanted reflected signal seen by the lidar under test. The test setup is automated with the help of a cobot on which the lidar under test is mounted; the cobot also ensures precise opto-mechanical alignment and movement of the lidar. The paper will explain how a lidar-target-simulation-based test method can help lidar makers mitigate part of their cost and volume-production challenges.
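To illustrate the kind of sweep an electronic target simulator enables (a sketch only; `detect_trial` is a hypothetical placeholder for driving the simulator and reading back the lidar under test, not the vendor's actual interface):

import numpy as np

def detection_probability(detect_trial, distances_m, reflectivities, n_trials=100):
    """Estimate probability of detection over a grid of simulated target
    distances and reflectivities by repeating trials at each grid point."""
    prob = np.zeros((len(distances_m), len(reflectivities)))
    for i, d in enumerate(distances_m):
        for j, r in enumerate(reflectivities):
            hits = sum(detect_trial(d, r) for _ in range(n_trials))
            prob[i, j] = hits / n_trials
    return prob  # rows: distance, columns: reflectivity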
How a proven lidar SPAD based on reliable silicon means a faster time to market and a “tried and tested”, cost-optimised solution for the lidar space
Alexis Vanderbiest
Sr Team Leader
Sony
A Single-Chip, Low-Power Autonomous Driving Implementation

Pier Paolo Porta
Director of automotive marketing
Ambarella
Fully autonomous vehicles require a strong sensing system in order to capture 360-degree environmental information. Moreover, multiple sensor types are needed for redundancy and to enhance the information quality.
These multiple, heterogeneous data streams must then be merged, in order to obtain a single, more precise representation of the environment. The challenge is that this sensor fusion process is generally a resource-intensive task.
The computational burden presented by sensor fusion is worsened by the real-time planning capabilities that are mandatory for an autonomous vehicle. Indeed, the required computational resources are typically achieved through the use of high-power computing systems, such as GPU- or CPU-based architectures. However, these brute-force processing architectures generate substantial amounts of heat that can only be dissipated through liquid cooling, which opens another wide set of challenges, including higher costs and larger form factors. Additionally, traditional computing systems have high power consumption, which is particularly important for electric vehicles where this consumption results in larger batteries and weight that reduce range while further increasing costs.
This presentation will examine an alternative implementation of an autonomous driving system, based on a low-power, single-chip architecture. Videos will be shown from recent road tests of this vehicle, featuring single-chip L4 driving, which uses data acquired from 18 cameras and nine radars.
Evaluating Geometric Distortion in Surround View Camera Systems: Implications for VRU Safety

Benoit Anctil
Senior Crash Avoidance Research Engineer
Transport Canada
Surround view camera systems provide drivers with a bird’s eye view of their surroundings, aiding in low-speed maneuvers and enhancing safety. However, these systems have limitations, such as blind spots and geometric distortions, which can hinder the driver’s ability to recognize vulnerable road users (VRUs). In this presentation, we discuss our research on evaluating the image quality of surround view cameras and its impact on VRU safety. Our methodology involved exploring different approaches to characterize performance, including field of view diagrams, object size measurements, and qualitative assessments of image quality. Our findings indicate that while distortion at ground level is limited, significant deformations are observed at different elevations, posing challenges for VRU recognition. We emphasize the importance of including image quality measurements in the safety assessment of surround view cameras, in addition to blind zone definition and object size requirements. Improvements in digital processing algorithms and camera placement are necessary to reduce distortion and enhance VRU safety.
Context Adaptation for Automotive Sensor Fusion

Jan Aelterman
Professor
imec – Ghent University
Sensor fusion is key to environment perception in challenging conditions like snow, hail, nighttime, and lens flares. Nowadays, individual sensor processing relies heavily on machine learning, requiring algorithm developers to create or simulate large amounts of challenging training samples. Unfortunately, this leads to a combinatorially increasing need for training data to cover all combinations of challenging conditions (e.g. snow + hail + night + lens flare). We propose “context-adaptive” fusion as a solution: a probabilistic approach wherein an “interpretation layer” translates the output statistic of an existing sensor algorithm into one that is tuned to a particular challenging context like fog, snow, hail, nighttime, lens flares, or even distance. The advantage is that this approach allows adaptation to the challenging context without requiring modification of existing sensor algorithms, using only a very small number of training samples. It is a natural fit for sensor fusion architectures where edge AI is provided with a low-data-rate input of said contexts through what is called “cooperative” sensor fusion. This talk will thus demonstrate how a “cooperative” fusion architecture outperforms a standard sensor fusion pipeline in terms of detection accuracy and tracking performance by adapting to different contexts through the proposed “interpretation layers”.
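A minimal sketch of the interpretation-layer idea, assuming a detector that outputs a confidence score and a handful of labelled samples per context; the class name and the binning scheme are illustrative, not the authors’ implementation:

import numpy as np

class InterpretationLayer:
    """Recalibrate a detector's confidence score for a given context
    (e.g. fog, snow, night) using a small per-context calibration set."""

    def __init__(self):
        self.calibration = {}  # context -> (score bin centres, hit rates)

    def fit(self, context, raw_scores, is_true_positive):
        # Sort scores and compute the empirical precision in a few coarse bins.
        order = np.argsort(raw_scores)
        scores = np.asarray(raw_scores, dtype=float)[order]
        hits = np.asarray(is_true_positive, dtype=float)[order]
        bins = np.array_split(np.arange(len(scores)), min(5, len(scores)))
        centres = np.array([scores[b].mean() for b in bins])
        rates = np.array([hits[b].mean() for b in bins])
        self.calibration[context] = (centres, rates)

    def translate(self, context, raw_score):
        # Map the raw score to a context-tuned probability by interpolation.
        centres, rates = self.calibration[context]
        return float(np.interp(raw_score, centres, rates))

The appeal of such a layer is that the underlying detector stays untouched; only the thin mapping from its output statistic to a context-specific probability is learned, which needs far fewer samples than retraining on every combination of conditions.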
Network-on-Chip Design for the Future of ADAS AI/ML Semiconductor Devices

Frank Schirrmeister
VP Solutions & Business Development
Arteris
The presentation will highlight the challenges and requirements in designing advanced driver-assistance systems (ADAS) for the future, emphasizing the need for expertise in systems, software, and specialized hardware algorithms to keep up with market pressures. In the context of sensing, a combination of various sensors such as cameras, LiDAR, radar, and thermal sensors is crucial for accurate classification and achieving higher levels of autonomy. Integrating all these sensors creates unprecedented data volumes that must be efficiently transported within and between semiconductor devices. We will discuss the elevated design challenges posed by artificial intelligence (AI) and machine learning (ML) applications in ADAS, including design complexity, power and thermal management, memory management, and integration and verification challenges. We will also outline the importance of efficient memory hierarchy, integration of multiple components, and exhaustive testing for ensuring the reliability and performance of AI/ML system-on-chips (SoCs). Finally, we will discuss the central role of network-on-chips (NoCs) in enabling the evolution of automotive electronic architectures, focusing on safety, security, and semiconductor development productivity as per ISO 26262 standards.
Performance Considerations for Automotive High Dynamic Range Image Sensors

Tomas Geurts
Sr Director R&D
OMNIVISION
The presentation focuses on the evolving landscape of image sensor technology for automotive applications, driven by the need for High Dynamic Range (HDR) and LED Flicker Mitigation (LFM) while maintaining smaller pixel sizes for enhanced spatial resolution. A notable solution that has emerged is the adoption of Lateral Overflow Integrating Capacitor (LOFIC) pixels by major automotive image sensor suppliers. The implementation of LOFIC-based sensors requires careful consideration of design trade-offs to optimize Image Quality (IQ). Key challenges include maintaining Signal-to-Noise Ratio (SNR) at elevated temperatures and addressing the increasing demand for higher IQ with shrinking pixel dimensions. The presentation discusses the trade-offs between different pixel approaches, such as the split-diode pixel and the single-PD LOFIC, examining their impact on low-light performance, SNR, and color reproduction. By discussing these trade-offs, the presentation aims to show that the optimal solution shifts towards single-PD LOFIC as pixel sizes shrink, and provides further insights into the advancements and considerations in automotive image sensor technology.
Computer Vision: What KPI for Camera performance evaluation?

Laurent Chanas
Image Quality Director
DXOMark
With the development of autonomous driving and driving assistance, cameras are becoming more and more significant in the automotive industry. Whereas the evaluation of image quality for consumer cameras, such as smartphone cameras, has been well defined for years, the definition of good image quality for an ADAS camera, fully dedicated to computer vision, is still under debate. It seems obvious that the KPIs for camera evaluation need to be redefined in the light of this new scope. In 2022, DXOMARK launched an ambitious research program (in partnership with a French institute specialised in detection algorithms) to evaluate various image quality evaluation metrics. The aim of the research program is to correlate the results of three different metrics, Contrast Transfer Accuracy (CTA), Contrast Signal to Noise Ratio (CSNR), and Frequency of Correct Resolution (FCR), with the success of a license plate reading algorithm based on a neural network. In this talk, we introduce the protocol that we have designed for this purpose and the preliminary conclusions we have drawn about the three metrics.
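Conceptually, correlating a quality metric with the success of the reading algorithm can be as simple as the sketch below (illustrative only; the actual protocol and metric definitions are the subject of the talk):

import numpy as np

def metric_vs_algorithm_correlation(metric_values, ocr_success_rates):
    """Pearson correlation between an image-quality metric (e.g. CTA or
    CSNR measured per test condition) and the licence-plate-reading
    success rate on the same conditions; values near +1 suggest the
    metric is a useful KPI for this computer-vision task."""
    m = np.asarray(metric_values, dtype=float)
    s = np.asarray(ocr_success_rates, dtype=float)
    return float(np.corrcoef(m, s)[0, 1])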
Technical Semiconductor Management in times of BANI (Brittle, Anxious, Nonlinear, Incomprehensible)

Jonas Edele
Technical Semiconductor Management
BMW Group
The nonlinear event of the pandemic resulted in a supply shortage of semiconductors for the car industry, because production capacities of semiconductor fabs all over the world shifted towards consumer electronics. As a consequence, car manufacturers faced severe challenges, up to and including the standstill of production lines, due to what were, at the time, brittle supply chains. It was thus necessary for car manufacturers to change their ordering behavior for electronic components. This presentation will give the audience insights into semiconductor management for the in-vehicle networking category at BMW and will detail the options and rationale behind harmonizing and standardizing the switch configuration.
From Vision to Action: Elevating Safety with Super-Resolution Imaging Radars and AI-Enabled Perception

Dane Mitrev
Senior ML Engineer
Provizio
In this presentation, attendees will gain valuable insights into the critical role of imaging radars in achieving safe autonomy and next-generation Advanced Driver Assistance Systems (ADAS). Provizio’s innovative approach focuses on leveraging super-resolution imaging radars and Artificial Intelligence (AI) throughout the radar processing chain to unlock scalable, all-weather perception capabilities. Attendees will discover how, by incorporating AI algorithms, imaging data can be transformed into actionable insights, revolutionizing safety in autonomous systems. The session will showcase point cloud neural network (NN) architectures for super-resolution and denoising, as well as NN architectures for radar-only detection, classification, and freespace estimation. By compounding the advantages of super-resolution radars and AI integration at various stages of the chain, attendees will understand how scene understanding is greatly enhanced, leading to ubiquitous perception on the edge.
Software-Defined Radar Sensors for Autonomy Customers

Dr Ralph Mende
CEO
Smart Micro
Autonomy customers are defining various use cases. Software-defined radar sensors can adapt to work in many applications, changing measurement and signal processing performance according to the use case.
Radar sensors are the “last sensors standing” under adverse weather conditions and as such are a must for any L2+ and higher system.
When data fusion of multiple radar sensors is applied, the radar sensors must have certain features to achieve the best fusion performance; the same holds when the radar data is to be combined with that from other sensors (e.g. camera, ultrasonic). For optimum sensor fusion, the radar system must provide specific features and interfaces.
This session will look at features for multi-radar sensor fusion vs multi-sensor data fusion, and at software-defined operational modes and application-specific operational modes for parking, low-speed maneuvers, and high-speed driving.
Enhanced Pedestrian Safety with 3D Thermal Ranging using AI for ADAS/AV Applications

Chuck Gershman
CEO
OWL
The current de facto Advanced Driver Assistance System (ADAS) sensor suite typically comprises mutually dependent visible-light cameras and radar, but when one of these sensors becomes ineffective, so too does the entire sensor suite. This scenario happens often, especially when it comes to pedestrians, cyclists, and animals at night or in inclement weather. Studies by the Insurance Institute for Highway Safety (IIHS) have shown that pedestrian-protection systems now installed on automobiles fail to work at night, when more than 76% of the annual 700,000 fatalities occur. Consequently, the U.S. National Highway Traffic Safety Administration (NHTSA) has joined Euro NCAP in mandating pedestrian safety regulations through Pedestrian Automatic Emergency Braking (PAEB) for all new vehicles. The NHTSA mandate now requires a vehicle to be able to avoid hitting pedestrians even in total darkness. We will discuss how thermal imaging dramatically improves pedestrian safety to reduce accidents and save lives, using HD thermal imaging and innovative AI/ML-based computer vision algorithms. Operating in the thermal IR spectrum (8,000 to 14,000 nm), these algorithms exploit angular, temporal, and intensity data to produce ultra-dense 3D point clouds (up to 150x that of LiDAR) along with highly refined object classification and fusion.
Virtualization of optical qualification for windshield camera zones

Stephane Baldo
CTO
SynergX
Windshields are not neutral optical elements. They can have an important negative impact on the performance of ADAS and AD systems. Different makers of these systems specify various quality requirements and validate different quality metrics for the needs of their specific systems. These metrics include optical power, MTF, wavefront analysis, and so on. The plurality of these tests would impose an unbearable burden on windshield makers, in terms of investment and of impact on production throughput, if conventional, time-consuming measuring techniques were used. We have developed a new scanning technology that mathematically models, in high definition and high precision, the inner and outer surfaces of the camera zone. Using this representation, we can virtualize any optical test by advanced ray-tracing rendering:
- Optical power at any tilt and yaw angle
- Multi-position MTF measurement
- Shack-Hartmann full-surface sensor
- Full-surface mapping of double image
- Maximum stereoscopic deviations
This all-purpose representation can also be integrated by ADAS and AD system makers into their calibration process, their QC pipeline, and their training database for system robustness improvements.
Energy-aware Neural Architecture Search using accurate Virtual Prototypes of AI inference accelerators

Nicolai Behmann
Technical Solution Architect
Siemens EDA
In this paper, we propose an extension to Neural Architecture Search (NAS) incorporating accurate inference energy prediction. An exemplary dedicated AI accelerator is emulated in a digital twin and power is measured for a dataset of different Neural Networks. These measurements can then be used for energy-aware NAS. In the presentation, the methodology and virtual platform will be presented, before the accuracy of the extended NAS is evaluated.
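A minimal sketch of how energy awareness can enter a search loop, assuming hypothetical `evaluate_accuracy` and `predict_energy_mj` interfaces (the latter standing in for an energy predictor fitted to the emulated accelerator’s power measurements) rather than the paper’s actual tooling:

import random

def energy_aware_nas(candidates, evaluate_accuracy, predict_energy_mj,
                     energy_weight=0.2, n_samples=50):
    """Random-search NAS with a scalarized accuracy/energy objective."""
    best_arch, best_score = None, float("-inf")
    for _ in range(n_samples):
        arch = random.choice(candidates)
        acc = evaluate_accuracy(arch)         # e.g. validation accuracy in [0, 1]
        energy = predict_energy_mj(arch)      # predicted inference energy (mJ)
        score = acc - energy_weight * energy  # trade accuracy against energy
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch

More sophisticated NAS strategies (evolutionary or gradient-based) can incorporate the same predicted-energy term in their objective.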
Windscreen optical quality

Professor Dr. Alexander Braun
University of Applied Sciences, Dusseldorf
Windscreens are in every car, and a standard position for modern ADAS cameras is behind the glass. The optical quality of the windscreen thus influences the image quality of the camera system, and hence the performance of the computer vision algorithms. As these algorithms are often based on machine learning, it is hard to link their performance to the optical quality of the windscreen. There is an established and standardised measurement, “refractive power”, that quantifies the optical quality and is used on every windscreen in the world. In a recent development, the Modulation Transfer Function (MTF, known from camera ‘sharpness’ characterisation) is being explored as an alternative to refractive power. In this session we look at the usability of refractive power and MTF, and we explore novel ways to characterise windscreen quality. We take a fundamental look at how to optically model the windscreen so that it can be used in numerical simulations. A panel discussion closes this session.
Improving camera optics for windshields: introducing novel interlayer and measurement technology

Uwe Keller
Kuraray
Dr Olaf Thiele
LaVision
Automotive windshields currently use black ceramic frit for sunlight shielding and cosmetic reasons along the edge of the windshield as well as in the ADAS camera area. Ceramic frit, however, is known to cause geometrical glass distortion, sometimes referred to as “burn lines” or “lensing”, effects that can drastically influence the image quality of the ADAS cameras placed behind the windshield. A special black-printed PVB is presented here as a new solution to minimize impacts on the optical quality of the windshield. A one-to-one comparison of camera optics between a standard windshield with ceramic frit and a prototype windshield with the black-printed PVB is provided using a high-spatial-resolution diopter measurement technique. The diopter maps with sub-millimeter resolution give detailed insight into local optical influences. For evaluating the impact on ADAS camera image quality, the spatial frequency response (SFR) of the multi-layer windshield is the relevant parameter. A new, multifunctional measurement system provides, in addition to diopter values, precise analysis of local SFR, completing the quantification of the optical performance of windshields.
PANEL SESSION
Observing the World: Defining a New Reference Observer for Non-Biological Entities
All photometric and road (artefacts, light, signalling, …) measurements for design, performance, and standards compliance are currently based on the CIE reference observer for photometry, which is representative of human visual performance. As we move towards L3 and L4, the urgency to develop a new reference observer representative of AD/ADAS grows. Join the newly founded research consortium BELLORAMA to discuss the next steps and challenges in observing static infrastructure, alongside experts and stakeholders with standardization and measurement expertise.

Professor Dr. Alexander Braun
University of Applied Sciences, Dusseldorf

Robert Dingess
President
Mercer Strategic Alliance Inc

Dr Patrick Denny
University of Limerick
PANEL SESSION
Could the Chiplet Save Moore’s Law?
Chiplets have the potential to contribute to the evolution of semiconductor technology and help address some of the challenges associated with Moore’s Law. This technology presents a new paradigm in the semiconductor industry, allowing developers to enhance power efficiency, improve yield, and optimise performance thanks to its modular nature. However, challenges remain: how does the industry integrate connectivity? What standards and norms should be applied for true centralised processing to be achieved? And how do we minimise latency and power consumption?
Join this panel to discuss the opportunities and challenges chiplet technology brings to the automotive industry.
*Session titles and timings subject to change.
Schedule of Events
May 21st
- 10am - 5pm
Tutorials
- 10:30am - 4:30pm
Exhibitor Setup
- 5:40pm - 7pm
Welcome Drinks Reception
- 5:30pm - 6:30pm
Roundtable discussions
May 22nd
- 8:45am - 6pm
Conference & Exhibition
- 6:15pm - 8pm
Drinks Reception
May 23rd
- 9am - 3:30pm
Conference & Exhibition
- 3:30pm - 5pm
Exhibitor Tear down