19th – 21st September 2023 | Autoworld, Brussels
AutoSens Brussels Agenda
Please note: Session times and locations are subject to change
Check-in for full pass holders
- Tuesday 19th September
- 9:30am CET
- 10:00am CET
- Tuesday 19th September
- Minerva Room
Tutorial 1: Mastering the SOTIF Challenge
(For full pass holders only)
Addressing the Safety of the Intended Functionality (SOTIF) challenge in practice involves implementing specific measures and techniques to ensure the safe operation of complex systems. SOTIF focuses on the hazards that arise due to a system’s intended functionality rather than traditional safety concerns related to malfunctions or failures. Here are some steps to address the SOTIF challenge:
Established methods for analysis, design and testing such as FMEAs, FTAs, V-models, prototype vehicles and hardware-in-the-loop tests must be sensibly supplemented with new approaches in automotive engineering to ensure the safety of these highly complex systems. Today, testing is already scenario-based and simulation models are increasingly finding their way into series assurance. However, the experience of recent years shows that many of the new methods have not yet been integrated into series development in a meaningful way, mainly because today’s standards and regulations increasingly contain abstract requirements but offer no guidance for concrete implementation.
The Workshop will again briefly discuss the interdependencies of ISO 26262 and ISO 21448 and, above all, link these with type-approval-relevant topics and regulations: it has become apparent that safely placing driving functions on the market is now a complex process in which procedures applied in this context for the first time must increasingly be taken into account. The Workshop takes a holistic view while also examining individual aspects in detail.
The following is a brief overview of the content:
1. New approaches for system analysis/risk analysis (also applicable to AD applications)
2. Seamless link with testing concepts and V&V strategies
3. Quantitative assessment of safety
The focus of the Workshop is on new, promising methods in the automotive industry that can be used during the development and validation of complex E/E systems to solve the key challenge of automated and autonomous systems.

dSPACE
- 10:00am CET
- Tuesday 19th September
- Mahy Room
Tutorial 2: Noise Factor Analysis for Automotive Perception Sensors
(For full pass holders only)
One of the main challenges in achieving safe and reliable assisted and automated driving (AAD) functions is designing them to cope with the unavoidable measurement uncertainty and degradation of perception sensor data in dynamic, ever-changing, and noisy driving scenarios. A remarkable amount of research and development effort in industry and academia is still focused on optimising the sensor suite, bringing hardware and software innovation to mitigate sensor weaknesses and the emergence of unexpected corner cases. The challenge is even greater considering that not all the tests needed to support safety cases can be carried out in the real world: a proper and balanced mixture of accurate simulation, x-in-the-loop (X-i-L), and real-world testing needs to be designed and evaluated to support timely development of AAD. In this context, sensor models have a pivotal role, as they need to mimic real sensors’ outputs with a fit-for-purpose level of fidelity. To ensure known sensor strengths and weaknesses are evaluated, it is critical to test at all levels (from pure simulation to real world), with the possibility to include noise factors and their effects on the sensors’ outputs.
The aims of this tutorial and workshop are twofold:
1. Raise awareness of structured tools and techniques that can support a thorough noise factor analysis of automotive perception sensors. To this end, the tutorial will present WMG’s framework for breakdown analysis of noise factors affecting AAD perception sensors. Applications of the framework to camera, LiDAR, and RADAR will be presented, in combination with the noise models developed. The tutorial will also give an overview of the state of the art in perception sensor modelling, followed by some evaluation techniques.
2. Enable a wider discussion in the sensor community on noise factors, in particular challenging the ‘status quo’ to understand what is missing, which noise factors have the greatest impact, and what can be mitigated, and to provide future directions for research and development.

University of Warwick
- 1:00pm CET
- Tuesday 19th September
- Exhibition Hall
Lunch for Full Pass Holders
- 2:00pm CET
- Tuesday 19th September
- Minerva Room
Tutorial 3: Automotive Cameras: Typical Properties that need Characterisation and Calibration
(For full pass holders only)
Automotive cameras are a safety-critical component in the sensor stack of a car. You therefore want to make sure that each and every camera installed in a car works as intended, that the user is satisfied, and that the machine vision algorithms can work as expected.
In this tutorial we look at camera properties that, in most systems, vary enough to warrant a closer look in the validation phase and, for some, even in an end-of-line test station. We discuss items that are typically characterized and result in a pass/fail decision, as well as properties that are part of a calibration process, to make sure that the signal from the camera and the input into a machine vision system are well defined and show only small variation. These include the spatial frequency response, noise, relative illumination, color processing, geometric calibration and more.
We also look at existing international standards and best-practice procedures so that all attendees have a good understanding of the metrics used and the caveats connected to them.

Image Engineering
- 2:00pm CET
- Tuesday 19th September
- Mahy Room
Tutorial 4: Spatial Recall Index — Where is the Performance?
(For full pass holders only)
Typical AI performance metrics like mAP or precision/recall values and curves are aggregated over a whole dataset. As we are interested in the influence of optical quality on the performance of AI algorithms, like object detection or instance segmentation, we need to go one step further and spatially resolve this performance: any optical system will exhibit varying optical quality over the field of view, i.e. a typical automotive camera will be less sharp in the corner than in the middle of the image. But is traffic sign detection performance influenced by whether the traffic sign is in the middle of the image or at the edge? What about pedestrian detection? To answer these questions we have developed a novel metric called the Spatial Recall Index (SRI) and, more recently, the Generalized Spatial Recall Index (GSRI). Our metric delivers a heat map of the performance of a given AI algorithm at the resolution of the images of the original dataset. Now we can actually see where in the image a certain performance occurs and relate it to the optical performance and to the distribution of objects (and object sizes).
In this tutorial we will go in depth and explain the maths behind this novel metric, and demonstrate application use cases. Plenty of room for discussion will give each participant the chance to make sure that all the details are clear.
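The tutorial abstract does not spell out the SRI/GSRI formulas, so the following is only a rough Python sketch of the underlying idea of spatially resolving recall: per-pixel counts of detected versus missed ground-truth objects, aggregated over a dataset. The function name, box format and the way the detected flags are obtained (e.g. an IoU-based matching rule) are illustrative assumptions, not the presenters’ implementation.

```python
import numpy as np

def spatial_recall_heatmap(gt_boxes_per_image, detected_flags_per_image,
                           image_shape=(1080, 1920)):
    """Per-pixel recall: for every ground-truth box, mark the pixels it covers as
    detected (TP) or missed (FN), then return TP / (TP + FN) for each pixel."""
    tp = np.zeros(image_shape, dtype=np.float64)
    total = np.zeros(image_shape, dtype=np.float64)
    for boxes, flags in zip(gt_boxes_per_image, detected_flags_per_image):
        for (x0, y0, x1, y1), was_detected in zip(boxes, flags):
            total[y0:y1, x0:x1] += 1.0
            if was_detected:
                tp[y0:y1, x0:x1] += 1.0
    # NaN where no ground-truth object ever covered the pixel
    return np.where(total > 0, tp / np.maximum(total, 1.0), np.nan)

# Toy usage: two frames, one ground-truth box each; the corner object is missed,
# so the heat map is 1.0 over the central region and 0.0 over the corner region.
gt = [[(100, 100, 300, 260)], [(1600, 800, 1900, 1060)]]
hits = [[True], [False]]
heatmap = spatial_recall_heatmap(gt, hits)
```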

University of Applied Sciences Dusseldorf
Check-in for basic pass holders / Exhibition opens
- Tuesday 19th September
- 4:30pm CET
- 5:15pm CET
- Tuesday 19th September
- Minerva Room
Roundtable Discussion: Understanding the value of materials to address challenges in mass production and reliable performance of ADAS sensors and controls

Rodrigo Aguilar,
Business Development Manager ADAS components,
Henkel
- 5:15pm CET
- Tuesday 19th September
- Minerva Room
Roundtable Discussion: Saving Lives with AEB – The Rationale and Impact Behind the NHTSA Mandate

Chris Posch,
Engineering Director, Automotive,
Teledyne FLIR

John Eggert,
Head of Global Business Development, Automotive,
Teledyne FLIR
- 5:15pm CET
- Tuesday 19th September
- Minerva Room
Roundtable Discussion: Beyond Imaging Radar – current performance gaps and how to address them

Vinayak Nagpal,
Founder and CEO,
Zendar

Sunil Thomas,
Chief Business Officer,
Zendar
- 5:15pm CET
- Tuesday 19th September
- Minerva Room
Roundtable Discussion: The future of the automotive camera; will a higher resolution sensor combined with a good MTF lens answer the needs of the market?

Patrice Roulet Fontani,
Vice President, Technology and Co-Founder,
IMMERVISION

Oliver Tyson,
Manager, Product and Technology Offer,
IMMERVISION
- 5:30pm CET
- Tuesday 19th September
Welcome Reception in Exhibition Hall – Sponsored by NXP

Check-in / Exhibition Opens
- Wednesday 20th September
- 8:15am CET
- 8:50am CET
- Wednesday 20th September
- Mezzanine Stage
Opening remarks

Sense Media Group
- 9:00am CET
- Wednesday 20th September
- Mezzanine Stage
AutoSens Interviews ChatGPT
We’ve all been playing with ChatGPT these last months, as the boom in AI for natural language processing takes over the headlines. What does ChatGPT think about ADAS sensors and the future of autonomous vehicles? Sara and Rob will conduct a genuine interview with our guest AI speaker, shedding new light on pertinent topics, with a few curve balls and some good old robot humour.

Sense Media Group
- 9:15am CET
- Wednesday 20th September
- Mezzanine Stage
Future-Proof Connectivity for Imaging Sensors
Imaging sensors play an ever-larger role in modern cars. This presentation will outline the most important questions to ask when defining and selecting the communication technology for systems using imaging sensors. It considers short-term tasks as well as long-term trends for which the foundation is being laid now.

BMW
- 9:45am CET
- Wednesday 20th September
- Mezzanine Stage
A General Camera Model: The EMVA 1288 Standard Release 4.0 and Lossless Image Compression
Until Release 3.1, the globally used EMVA 1288 standard was limited to describing the performance parameters of linear cameras without further processing, based on a system-theoretic approach. With the new Release 4.0, in effect since June 2021, this approach was extended to characterize nonlinear, HDR and multimodal cameras with the rich toolset of the standard.
The new general system model with an arbitrary characteristic curve also makes it possible to extend the concepts for lossless image compression based on noise equalization from linear cameras to any type of camera.
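As background to the linear-camera case that the abstract says is being generalized, the sketch below shows noise equalization via a variance-stabilizing transform derived from a photon-transfer model of the kind used in EMVA 1288. The gain, dark-noise and offset values are placeholder assumptions, and this is not the compression scheme presented in the talk; it only illustrates why equalizing the noise lets a linear signal be re-quantized to fewer bits without losing information above the noise.

```python
import numpy as np

# Hypothetical photon-transfer parameters for a linear camera, in the spirit of
# the EMVA 1288 model: variance(y) = sigma0**2 + K * (y - y_dark)  [DN^2]
K = 0.25        # overall system gain in DN/e- (assumed)
sigma0 = 2.0    # dark/read noise in DN (assumed)
y_dark = 64.0   # dark offset in DN (assumed)

def noise_equalize(y):
    """Variance-stabilizing transform: after this mapping the noise standard
    deviation is roughly 1 DN at every signal level, so the values can be
    re-quantized with far fewer levels without discarding information that
    sits above the noise floor."""
    return (2.0 / K) * np.sqrt(K * np.clip(y - y_dark, 0.0, None) + sigma0**2)

def equalized_bit_depth(y_max=4095):
    """Bits needed to cover the noise-equalized output range of a 12-bit input."""
    levels = noise_equalize(y_max) - noise_equalize(0.0)
    return int(np.ceil(np.log2(levels)))

print(equalized_bit_depth())  # a 12-bit linear signal fits into ~8 equalized bits
```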

IWR Heidelberg University
- 10:15am CET
- Wednesday 20th September
- Mezzanine Stage
Image Sensors for Safer Roads
Achieving a higher level of safety in autonomous driving requires massive investment in advanced sensing capabilities and system solutions. HDR image sensors are at the heart of automotive cameras and are undergoing transformation to better handle numerous corner cases. This presentation will focus on some of the greatest challenges to safety within image sensing, including LED flickering, VRU detection, object avoidance and motion blur, as well as examining the way pixel architectures may impact distance detection capabilities of a stereo camera.

onsemi
- 10:45am CET
- Wednesday 20th September
- Exhibition Hall
Networking coffee break sponsored by Murata

- 11:00am CET
- Wednesday 20th September
- Classic Lounge
Press Briefing
- 11:30am CET
- Wednesday 20th September
- Mezzanine Stage
Computer Vision: What KPI for Camera performance evaluation?
With the development of autonomous driving and driving assistance, cameras are more and more significant in the automotive industry. Whereas the evaluation of image quality for consumer cameras such as smartphone cameras has been well defined for years, the definition of good image quality for an ADAS camera, fully dedicated to computer vision, is still under debate. It seems obvious that the KPIs for camera evaluation need to be redefined in the light of this new scope. In 2022, DXOMARK launched an ambitious research program (in partnership with a French institute specialized in detection algorithms) to evaluate various image quality evaluation metrics. The aim of the research program is to correlate the results of three different metrics – Contrast Transfer Accuracy (CTA), Contrast Signal to Noise Ratio (CSNR) and Frequency of Correct Resolution (FRC) – with the success of a license plate reading algorithm based on a neural network. In this talk, we will introduce the protocol we have designed for this purpose and the preliminary conclusions we have drawn about the three metrics.

DXOMARK
- 11:30am CET
- Wednesday 20th September
- Minerva Room
Do Deep Neural Networks dream of Bayer data?
This talk presents our investigation into using Bayer data instead of 3-channel processed RGB images with deep neural networks (DNNs). The amount of data captured by an HDR, high-resolution automotive camera is tremendous. However, is the data format currently used optimal for the intended use case? Camera data can be processed with traditional computer vision methods or DNN algorithms. Often, these algorithms consume 3-colour-channel data (Red-Green-Blue) generated from the raw/Bayer data captured by the image sensor. By working with Bayer data, it is possible to reduce the amount of transmitted data, the power needed, and the processing time. However, there is currently very little work on optimising DNNs for single-channel inputs, and there are no complete or fully annotated public Bayer datasets for assisted and automated driving functions. Interestingly, a preliminary investigation demonstrates that current DNNs can be used with raw data without significant performance degradation, and can possibly be further optimised and tuned to achieve even better results. Additionally, different methods to convert (with minimal modifications) existing datasets into different Bayer formats are discussed; in this way, big curated datasets can be re-used for creating Bayer-based deep learning models.
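As an illustration of the kind of dataset conversion mentioned above, the minimal sketch below re-mosaics an RGB image into a single-channel RGGB Bayer pattern. The function name and the choice of pattern are assumptions, and it deliberately ignores the ISP stages (demosaicing, white balance, gamma) that a faithful conversion would have to account for.

```python
import numpy as np

def rgb_to_bayer_rggb(rgb):
    """Re-mosaic an HxWx3 RGB image into a single-channel RGGB Bayer pattern.
    This covers only the sampling step; a faithful dataset conversion would also
    need to approximate inverting ISP stages, which is beyond this sketch."""
    h, w, _ = rgb.shape
    bayer = np.empty((h, w), dtype=rgb.dtype)
    bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R on even rows, even columns
    bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G on even rows, odd columns
    bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G on odd rows, even columns
    bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B on odd rows, odd columns
    return bayer[..., None]                 # keep a channel axis for DNN input

# Toy usage on a random 8-bit image
img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
mosaic = rgb_to_bayer_rggb(img)             # shape (480, 640, 1)
```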

University of Warwick
- 11:30am CET
- Wednesday 20th September
- Mahy Room
Redefining Radar – Camera Sensor Fusion: A Leap Towards Autonomous Driving Without LiDAR
In an ever-evolving autonomous driving landscape, the need for efficient, reliable and cost-effective sensor fusion strategies is paramount. At Perciv AI, we have been developing a novel approach to this challenge, leveraging the power of next-generation 4D radar and a monocular RGB camera to generate ‘pseudo-LiDAR’ 3D point clouds. Cameras and radars have been used for driver assistance for decades. Our solution pushes the fusion of these sensors to the next level: the way we combine these two sensors will challenge the performance of LiDAR sensors at an 80% lower price. This ground-breaking technique presents a potential path to a future where high-end LiDARs are needed only for research and evaluation purposes rather than in consumer vehicles. Our fusion paradigm is modular by design and can be trained without reliance on expensive manual annotations. The outcome is a point cloud that mirrors the density of a LiDAR-generated one, but also seamlessly integrates semantic information from the camera and velocity data from the radar. The variety and depth of this data offer wide scope for potential applications in the field of autonomous driving; for example, it can be effectively utilized for multiple downstream tasks such as object detection, free road estimation, or SLAM.

Perciv AI
- 11:55am CET
- Wednesday 20th September
- Mezzanine Stage
Performance Considerations for Automotive High Dynamic Range Image Sensors
The presentation focuses on the evolving landscape of image sensor technology for automotive applications, driven by the need for High Dynamic Range (HDR) and LED Flicker Mitigation (LFM) while maintaining smaller pixel sizes for enhanced spatial resolution. A notable solution that has emerged is the adoption of Lateral Overflow Integrating Capacitor (LOFIC) pixels by major automotive image sensor suppliers. The implementation of LOFIC-based sensors requires careful consideration of design trade-offs to optimize Image Quality (IQ). Key challenges include maintaining Signal-to-Noise Ratio (SNR) at elevated temperatures and addressing the increasing demand for higher IQ with shrinking pixel dimensions. The presentation discusses the trade-offs between different pixel approaches, such as the split-diode pixel and the single-PD LOFIC, examining their impact on low-light performance, SNR, and color reproduction. By discussing these trade-offs, the presentation aims to show that the optimal solution shifts towards the single-PD LOFIC as pixel sizes shrink, and to provide further insight into the advancements and considerations in automotive image sensor technology.

OMNIVISION
- 11:55am CET
- Wednesday 20th September
- Minerva Room
From the road to the datacenter: fast and smart data recording to speed up ADAS/AD development
Developing and testing autonomous driving (AD) systems requires the analysis and storage of more data than ever before. Next-generation automotive enterprises are fully data-driven. They are focused on unlocking and activating the intrinsic value of the data in their development cycle using the combined practices of continuous integration (CI) and continuous delivery (CD). In this process the data can be created and accessed at the core or on the edge. Modern data loggers capture huge amounts of scene and scenario data with smart recording technology under the control of a cloud-based AV fleet management solution.

IBM
- 11:55am CET
- Wednesday 20th September
- Mahy Room
Context Adaptation for Automotive Sensor Fusion
Sensor fusion is key to environment perception in challenging conditions like snow, hail, nighttime, and lens flares. Nowadays individual sensor processing relies heavily on machine learning, requiring algorithm developers to create or simulate large amounts of challenging training samples. Unfortunately, this leads to a combinatorially increasing need for training data to cover all combinations of challenging conditions (e.g. snow + hail + night + lens flare). We propose “context-adaptive” fusion as a solution: a probabilistic approach wherein an “interpretation layer” translates the output statistic of an existing sensor algorithm into one that is tuned to a particular challenging context like fog, snow, hail, nighttime, lens flares, or even distance. The advantage is that this approach adapts to the challenging context without requiring modification of existing sensor algorithms, using only a very small number of training samples. It is a natural fit for sensor fusion architectures where edge AI is provided with a low-data-rate input of said contexts through what is called “cooperative” sensor fusion. This talk will demonstrate how a “cooperative” fusion architecture outperforms a standard sensor fusion pipeline in terms of detection accuracy and tracking performance by adapting to different contexts through the proposed “interpretation layers”.
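The abstract does not describe how the “interpretation layer” is implemented; the sketch below shows one plausible minimal form, assuming a per-context probabilistic recalibration (a Platt-style logistic fit) of an existing detector’s confidence score trained from a handful of labelled samples. The class and method names, and the use of scikit-learn, are illustrative assumptions rather than the authors’ approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class InterpretationLayer:
    """Per-context recalibration of an existing detector's confidence score.
    The base detector is left untouched; a small logistic model per context
    (e.g. 'fog', 'night') maps its raw score to a probability tuned for that
    context, trained from only a handful of labelled samples."""

    def __init__(self):
        self.models = {}

    def fit(self, context, raw_scores, is_true_detection):
        model = LogisticRegression()
        model.fit(np.asarray(raw_scores).reshape(-1, 1),
                  np.asarray(is_true_detection))
        self.models[context] = model

    def calibrated(self, context, raw_score):
        model = self.models.get(context)
        if model is None:          # unknown context: pass the score through
            return raw_score
        return float(model.predict_proba([[raw_score]])[0, 1])

# Toy usage: in fog, high raw scores are less trustworthy than in clear weather.
layer = InterpretationLayer()
layer.fit("fog",
          raw_scores=[0.9, 0.8, 0.85, 0.4, 0.3, 0.95],
          is_true_detection=[1, 0, 1, 0, 0, 1])
print(layer.calibrated("fog", 0.85))
```

The property mirrored here is the one the abstract emphasises: the existing sensor algorithm is never retrained; only the small per-context layer is fitted.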

Gent University
- 12:20pm CET
- Wednesday 20th September
- Mezzanine Stage
Evaluating Geometric Distortion in Surround View Camera Systems: Implications for VRU Safety
Surround view camera systems provide drivers with a bird’s eye view of their surroundings, aiding in low-speed maneuvers and enhancing safety. However, these systems have limitations, such as blind spots and geometric distortions, which can hinder the driver’s ability to recognize vulnerable road users (VRUs). In this presentation, we discuss our research on evaluating the image quality of surround view cameras and its impact on VRU safety. Our methodology involved exploring different approaches to characterize performance, including field of view diagrams, object size measurements, and qualitative assessments of image quality. Our findings indicate that while distortion at ground level is limited, significant deformations are observed at different elevations, posing challenges for VRU recognition. We emphasize the importance of including image quality measurements in the safety assessment of surround view cameras, in addition to blind zone definition and object size requirements. Improvements in digital processing algorithms and camera placement are necessary to reduce distortion and enhance VRU safety.

Transport Canada
- 12:20pm CET
- Wednesday 20th September
- Minerva Room
Advancing vehicle perception development using high-fidelity synthetic training data created using physically modelled sensors
The accuracy of simulation technologies is currently a limiting factor in the development of perception systems for autonomous vehicles. The industry is still heavily dependent on collecting and processing real-world physical data and this process is expensive, time-consuming and doesn’t cover the variety and quantity of edge cases required to provide confidence in the system’s performance.
rFpro’s new ray-traced simulation rendering technology delivers high-fidelity engineering-grade synthetic training data. It has been developed in partnership with leading sensor and perception manufacturers, such as Sony, to integrate and improve the accuracy of virtual camera models as well as validate and correlate the results.
rFpro will showcase how this technology is helping to accelerate sensor development and advance vehicle perception systems, delivering high-quality training data.

rFpro
- 12:20pm CET
- Wednesday 20th September
- Mahy Room
From ABS to Autonomous Driving
Automotive technology has evolved rapidly, from fossil-fuel engines to electric vehicles and from ABS to autonomous driving. This presentation will give an overview of automotive history, highlighting many breakthroughs achieved in recent years.
Based on observed facts and current trends, a forecast of future growth in different segments of the automotive market will be discussed. Finally, some new advanced technologies will be shown as examples and evidence supporting the prediction.

Panasonic Automotive Systems Europe GmbH
- 12:40pm CET
- Wednesday 20th September
- Mezzanine Stage
MEMS Actuator Application in Automotive Cameras
With the development of ADAS in the automotive industry, installing multiple cameras on a vehicle has become essential. However, continuous vibration during driving, bumps that cause vertical displacement of the camera modules, and unfavorable lighting conditions result in blurry video images captured by the cameras.
The innovative application of MEMS technology in automotive cameras provides a 5-axis Optical Image Stabilization (OIS) solution to improve clarity and achieve stable and precise images. A MEMS actuator has a higher resonant frequency than regular OIS solutions, so it performs well under vehicle operating conditions.
The MEMS actuator provides a new robust platform for the automotive industry, and this sensor-based technology is more than just OIS. It moves 3 times faster and 10 times more precisely, detecting vibrations of 0.4 micrometers. It consumes up to 50 times less power and functions well in environments up to 115°C. It facilitates high image resolution in low light and video without blur. The MEMS actuator also has a long lifetime, having passed cycling tests of more than 1,600 million cycles with no image degradation.

MEMSDrive
- 12:40pm CET
- Wednesday 20th September
- Minerva Room
From Vision to Action: Elevating Safety with Super-Resolution Imaging Radars and AI-Enabled Perception
In this presentation, attendees will gain valuable insights into the critical role of imaging radars in achieving safe autonomy and next-generation Advanced Driver Assistance Systems (ADAS). Provizio’s innovative approach focuses on leveraging super-resolution imaging radars and Artificial Intelligence (AI) throughout the radar processing chain to unlock scalable, all-weather perception capabilities. By incorporating AI algorithms, attendees will discover how imaging data can be transformed into actionable insights, revolutionizing safety in autonomous systems. The session will showcase point cloud neural network (NN) architectures for super-resolution and denoising, as well as NN architectures for radar-only detection, classification, and freespace estimation. By compounding the advantages of super-resolution radars and AI integration at various stages of the chain, attendees will understand how scene understanding is greatly enhanced, leading to ubiquitous perception on the edge.

Provizio ai
- 12:40pm CET
- Wednesday 20th September
- Mahy Room
Sensor Fusion, is this the next step for AI?
Until recently, the majority of sensor-based AI processing used vision and speech inputs. Recently, we have begun to see radar, LiDAR, event-based image sensors, and other types of sensors used in new AI applications. And, increasingly, system developers are incorporating multiple, heterogeneous sensors in their designs and utilizing sensor fusion techniques to enable more robust machine perception. In this presentation, we’ll explore some of the heterogeneous sensor combinations and sensor fusion approaches that are gaining adoption in applications such as driver assistance and mobile robots. These sensor fusion techniques are using a mix of AI techniques and traditional DSP processing. We’ll also show how the Cadence Tensilica ConnX DSP, Vision DSP, and AI Accelerator IP families and their associated software tools and libraries support sensor fusion applications with high performance, efficiency, and ease of development.

Cadence
- 1:00pm CET
- Wednesday 20th September
- Exhibition Hall
Networking Lunch Sponsored by OMNIVISION

- 2:15pm CET
- Wednesday 20th September
- Mezzanine Stage
Analysis of weather effects on sensor performance for improving image quality
Considering sensor perception interference due to adverse weather conditions is decisive for improving the quality of ADAS functions and indispensable for the road release of AD functions and vehicles.
Even more than in the past, the “right mix” of test and validation approaches is essential for reliable and safe sensor and perception approval.
An inverse test strategy, from real-world data to indoor testing and virtual simulation, is required to ensure a realistic cross-validation of the different test and validation approaches (e.g. indoor vs. outdoor, static vs. dynamic SUT).

AVL
- 2:15pm CET
- Wednesday 20th September
- Minerva Room
Empowering the Future of Autonomous Driving: Scalable and Safety-Compliant Architectures
In a dynamic era of automotive evolution, LGE stands as a pioneer in driving innovation. This session unveils two pivotal strategies that define our commitment. Discover how we’re engineering architectures that seamlessly scale, catering to the evolving demands of autonomous driving, from GSR/NCAP to L3/L4 levels. Moreover, we’ll delve into the crucial realm of identifying and mitigating safety risks, ensuring a secure and reliable foundation for autonomous driving platforms.

LG Electronics
- 2:15pm CET
- Wednesday 20th September
- Mahy Room
Why Automotive Image Sensors Need to be Cyber-Secure
Image sensors act as the eyes of the vehicle and are used in several ADAS functions, such as lane departure warnings, pedestrian detection, and to trigger emergency braking. They also provide input to the fusion system for decision-making to assess the car’s environment and can even be used to monitor driver behavior. In the future, they will assist with identifying and authenticating the car’s users and monitor their vital signs to enable the onboard computer to assume control if the driver becomes incapacitated. It is therefore imperative that image sensors remain functional, especially in the most extreme situations an automobile can encounter.

onsemi
