Brussels Agenda

9:00am | Exhibition opens, followed by opening remarks at 9:30am

Wednesday 15th September | 9:45am CET

Location: Mezzanine

Keynote – Overcoming challenges of moving from a decentralized to a centralized ADAS system

  • What is the sensor fusion setup in a decentralized ADAS system?
  • Overview of the fusion and functions in such a system
  • How can the current smart-sensor setup in a decentralized system be reused as a remote sensor in a centralized setup?
  • Reuse of the optical path and computer vision from the smart to the remote setup

Hear from:

Raj Vazirani
Director of Radar, Camera and Global Electronics Engineering ADAS and AD
ZF Group

Wednesday 15th September | 10:15am CET

Location: Mezzanine

Keynote – An update and overview of M&A and financing activity in the automotive sensor space

Hear from:

Rudy Burger
Managing Partner
Woodside Capital Partners

Wednesday 15th September | 10:45am CET

Location: Mezzanine

Panel Discussion – Managing supply chain disruptions and the impact these have on design decisions for ADAS and autonomous driving systems

Hear from:

Raj Vazirani
Director of Radar, Camera and Global Electronics Engineering ADAS and AD
ZF Group

Moderated by Junko Yoshida

11:30am | Networking Coffee Break and Press Briefing

Wednesday 15th September | 12:15pm CET

Location: Mezzanine

New proposal for lens flare evaluation

  • Evaluation of the contribution of both lens and sensor to the flare effect
  • Flare-effect evaluation for any position of a point light source (in and out of the field of view)
  • A comprehensive set of metrics and a tabletop test bench for flare-effect evaluation

Hear from:

Hoang-Phi Nguyen
Product Owner
DXOMARK

Wednesday 15th September | 12:15pm CET

Location: Minerva

From glass to computer vision output: end-to-end optimization of camera systems

  • Tuning the camera hardware and the computer vision separately is not enough to reach optimal performance of a camera system for automated driving.
  • Computer vision performance, as the main target of a camera system for automated driving, needs to be considered in each system design decision.
  • End-to-end simulation of a camera system makes it possible to evaluate design choices objectively, based on their effect on computer vision performance (a minimal sketch of such a loop follows below).
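
As a rough sketch of what such an end-to-end evaluation loop can look like (all function arguments here are hypothetical stand-ins, not from the talk):

    # Hypothetical end-to-end loop: each camera design is scored by its
    # effect on downstream computer vision performance, not by isolated
    # image quality metrics.
    def evaluate_design(simulate_optics, simulate_sensor, detector,
                        design, images, ground_truth, metric):
        scores = []
        for img, gt in zip(images, ground_truth):
            raw = simulate_optics(img, design["lens"])       # PSF, flare, distortion
            frame = simulate_sensor(raw, design["sensor"])   # noise, HDR, ISP
            scores.append(metric(detector(frame), gt))       # e.g. per-image mAP
        return sum(scores) / len(scores)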

Hear from:

Dr. Damien Schroeder
Project Manager Camera Systems
BMW Group

Wednesday 15th September | 12:40pm CET

Location: Mezzanine

The use of incremental SNR (iSNR) as a criterion for the SOTIF of an HDR front camera sensor once integrated in the vehicle

For machine vision applications, if the iSNR of a front camera sensor alone can be calculated over the full dynamic range of luminance, it becomes possible for carmakers and integrators to predict whether the SOTIF specification will be met once the sensor is integrated in the vehicle. The talk will discuss a proposal for defining the SOTIF specification of a camera sensor for machine vision, a method for passing digitally from a camera sensor alone to a camera sensor integrated in the vehicle, and finally the constraints that the OECF of an HDR camera sensor must meet to enable the use of the iSNR criterion in the automotive industry.
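
To make the criterion concrete, here is a minimal sketch, assuming a generic definition of iSNR (the digital-output change produced by a small contrast step, divided by the temporal noise at that luminance); the curves, threshold, and windshield model below are illustrative placeholders, not the speaker's method:

    import numpy as np

    def isnr(luminance, oecf_dn, noise_dn, contrast=0.1):
        """Incremental SNR over the luminance axis.

        luminance: scene luminance samples (cd/m^2), ascending
        oecf_dn:   mean digital output at each luminance (the OECF)
        noise_dn:  temporal noise (std dev, in DN) at each luminance
        contrast:  relative contrast step the sensor must resolve
        """
        slope = np.gradient(oecf_dn, luminance)        # dDN/dL
        return slope * luminance * contrast / noise_dn

    # Toy check of a hypothetical SOTIF-style requirement (iSNR >= 3)
    # for the sensor alone and once "integrated" in the vehicle, modeled
    # crudely here as 70% windshield transmission of scene luminance.
    L = np.logspace(-2, 5, 200)                  # 1e-2 .. 1e5 cd/m^2
    dn_alone = 4000 * L / (L + 50)               # toy HDR OECF
    dn_veh = 4000 * (0.7 * L) / (0.7 * L + 50)   # OECF seen through glass
    for name, dn in [("alone", dn_alone), ("in vehicle", dn_veh)]:
        noise = np.sqrt(0.5 * dn + 4.0)          # toy photon + read noise
        ok = isnr(L, dn, noise) >= 3
        print(f"{name}: iSNR >= 3 on {ok.mean():.0%} of the range")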

Hear from:

Christophe Lavergne
Specialist Image Sensor and Processing
Renault

Wednesday 15th September | 12:40pm CET

Location: Minerva

Robust Perception under Adverse Weather and Lighting Conditions

  • Existing visual perception methods scale badly to adverse weather and lighting conditions
  • Weather phenomenon simulation and image translation can generate effective training data for adverse conditions
  • Other domain adaptation techniques such as domain flow and self-training can also increase the robustness of perception methods
  • New benchmarks with real-world data, such as our ACDC dataset, are sorely needed for method training and evaluation
  • Other robust sensors, such as radar and microphones, should be leveraged

Hear from:

Dengxin Dai
Group Leader, Vision for Autonomous Systems Group
MPI for Informatics

Wednesday 15th September | 1:05pm CET

Location: Mezzanine

Sensor technology and safety features to address the challenging needs for reliable and robust sensing/viewing systems

The importance of camera-based sensing systems is increasing: for "Level 2+" applications, the camera-based sensing system is already the de facto system configuration.
Improving image sensor characteristics and functionality is strongly required, as they influence the total performance of the sensing system.
This presentation covers the key characteristics of the image sensors, and discusses the state of the art in functional safety and cybersecurity requirements for achieving a reliable and robust sensing/viewing system.

Hear from:

Yuichi Motohashi
Automotive Image Sensor Applications Engineer
Sony

Wednesday 15th September | 1:05pm CET

Location: Minerva

Using an Artificial Intelligence Layer to Transform High-Resolution Radar Point Clouds into Insights for Autonomous Driving Applications

This talk covers how an AI layer for post-processing the radar's data enables many advanced real-time features, and why 4D imaging radar technology is the perfect counterpart to the camera: it offers redundancy not by doing "more of the same" but by relying on a different technology, so that the strengths and weaknesses of radar and camera complement each other. Achieving true safety for any Level 2 application, for hands-free driving, and for full autonomy, which is, after all, the ultimate goal, must therefore rely on the fusion of both sensors.

Hear from:

Matan Nurick
Director of Product Management
Arbe

1:30pm | Networking Lunch Break in the Exhibition

Wednesday 15th September | 2:10pm CET

Location: Mezzanine

Discussion: How can we achieve better Automatic Emergency Braking?

Over 6,000 pedestrians were killed in the US in 2020, up 40% from 2010. AAA testing shows that current AEB systems are ineffective at night, when 75% of pedestrian deaths occur. How can this figure be reduced? Join Teledyne FLIR to explore the challenges facing AEB systems and how they can be overcome. Key topics for discussion include achieving the right fusion of sensors, the use of AI in the AEB system, and testing standards.

Hear from:

John Eggert
Director of Automotive Business Development
Teledyne FLIR

Wednesday 15th September | 2:45pm CET

Location: Mezzanine

Beyond the Visible: Shortwave Infrared Imaging and Ranging in all visibility conditions

  • Overview of the current technologies used for detecting surroundings in autonomous systems, and why they fall short of the capabilities necessary for autonomous vehicles.
  • Introduction of a new SWIR-based sensor modality ("SEDAR") that provides HD imaging and ranging information in all conditions: how it works, its main benefits, and why it is the future.
  • Experimental evidence of SEDAR's superiority over sensors at other wavelengths, including recordings in difficult conditions such as nighttime, fog, glare, and dust, as well as depth-map field results.

Hear from:

Ziv Livne
CBO
TriEye

Wednesday 15th September | 2:45pm CET

Location: Minerva

How to get the best from an RGB-IR sensor: the future of in-cabin applications

RGB-IR sensors are becoming more and more popular, as the ability to generate both RGB and IR content from a single sensor is a key enabler for in-cabin applications.
Handling RGB-IR data effectively is crucial: image quality in the RGB domain is one of the most important KPIs, while a full-resolution IR image is key to supporting computer vision analysis of the scene.
This presentation introduces an effective architecture for managing RGB-IR content. It addresses the challenges of bright-light scenarios for color rendering as well as low-light scenes, and describes multiple modes of RGB and IR content reconstruction. A set of videos showing the two reconstructed streams will also be presented to give the audience an overview of the possible use cases.

Hear from:

Udit Budhia
Director of Marketing
Ambarella

Wednesday 15th September | 3:10pm CET

Location: Mezzanine

Automotive 2.1 µm High Dynamic Range Image Sensors

This work describes a first-generation 8.3-megapixel (MP), 2.1 µm dual conversion gain (DCG) pixel image sensor developed and released to the market. The sensor offers high dynamic range (HDR) up to 140 dB and cinematographic image quality. Non-Bayer color filter arrays significantly improve low-light performance for front and surround Advanced Driver Assistance System (ADAS) cameras. This enables the transition from Level 2 to Level 3 autonomous driving (AD) and fulfills challenging Euro NCAP requirements.
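
For context on the headline figure (a standard conversion, not specific to this sensor), dynamic range in decibels relates the largest and smallest resolvable luminance as

    \mathrm{DR_{dB}} = 20 \log_{10}(L_{\max} / L_{\min}), \qquad 140\,\mathrm{dB} \Rightarrow L_{\max}/L_{\min} = 10^{140/20} = 10^{7},

i.e. about seven orders of magnitude, enough to cover a dark tunnel interior and direct sunlight within one scene.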

Hear from:

Sergey Velichko
Sr. Manager, ASD Technology and Product Strategy
ON Semiconductor

Wednesday 15th September | 3:10pm CET

Location: Minerva

IR illumination for Next-Gen In-Cabin Sensing Systems

In-cabin sensing systems have been emerging at an unprecedented pace due to upcoming regulations and safety standards, accompanied by ongoing efforts of sensor and illuminator suppliers to address the demands of these new systems. This presentation covers different illumination solutions, including IRED and VCSEL technologies, and their current challenges.

Hear from:

Firat Sarialtun
Segment Manager
ams OSRAM

Wednesday 15th September | 3:45pm CET

Location: Mezzanine

Thermal requirements for high temperature vs. extreme summer

In the automotive industry, design validation and product validation tests are performed under thermal chamber conditions. During these tests, only the ECU is tested, without considering surrounding parts, solar radiation, or real-life convection. Thermal chamber conditions differ from extreme real-life conditions: in an extreme real car environment, surrounding parts such as brackets and protection covers reduce forced convection significantly.
This presentation analyzes the difference between these conditions and explains why one does not substitute for the other. It is essential to analyze both situations and to draw the right conclusions for each, being aware of the limitations.

Hear from:

Cristina Dragan
Thermal Analyst Expert
Continental

Wednesday 15th September | 3:45pm CET

Location: Minerva

High Dynamic Range Backside Illuminated Voltage Mode Global Shutter CIS for in Cabin Monitoring

Although global shutter operation is required to minimize motion artifacts in in-cabin monitoring, it forces large changes in the CIS architecture. Most global shutter CMOS image sensors on the market today have larger pixels and lower dynamic range than rolling shutter image sensors, which adversely impacts their size/cost and their performance under different lighting conditions. In this paper we describe the architecture and operation of backside-illuminated voltage-mode global shutter pixels, and how their dynamic range can be extended using either multiple integration times or LOFIC techniques. We also describe how these pixels can be scaled, enabling smaller, more cost-effective camera solutions, and present results from recent backside-illuminated voltage-mode global shutter CIS.

Hear from:

Boyd Fowler
CTO
OmniVision Technologies

Wednesday 15th September | 4:20pm CET

Location: Mezzanine

Panel Discussion – Dealing with adverse weather

The panel will cover the requirements for perception, testing, and algorithmic robustness needed to deal accurately with adverse weather situations.

Hear from:

Cristina Dragan
Thermal Analyst Expert
Continental

Dengxin Dai
Group Leader, Vision for Autonomous Systems Group
MPI for Informatics

Dr. Tom Jellicoe
Head of Autonomous Technology
TTP

Wednesday 15th September | 4:20pm CET

Location: Minerva

Panel Discussion – How can sensing change the future of the in-cabin experience?

The industry is shifting its focus from exterior to interior, from driver to occupants, from basic monitoring to advanced monitoring, and from safety as the main feature to safety as an implicit part of the user experience. The panel will discuss in-cabin sensing advancements and how sensing can change the future of the in-cabin experience, as well as the enabling technologies and infrastructure that can take us there, including various sensing methods and sensor fusion.

Hear from:

Petronel Bigoi
CTO
Xperi

Dr. Jordi Vila-Planas
Innovation Supervisor, Biometrics & In Cabin Sensing
Nextium

5:00pm | Networking Coffee Break, sponsored by OmniVision Technologies

Wednesday 15th September | 5:45pm CET

Location: Mezzanine

Keynote – ADAS Sensor Performance

Wednesday 15th September | 6:15pm CET

Location: Mezzanine

Panel Discussion – Impacts of Sensor Degradation on ADAS System Performance

Hear from:

Gerhard Steininger, Senior Manager, Deloitte

Representatives from obsurver and dSPACE

7:00pm | Networking time in exhibition | Event closes for day at 7:30pm

9:00am | Exhibition opens, followed by opening remarks at 9:30am

Thursday 16th September | 9:40am CET

Location: Mezzanine

Chip-scale LiDAR for affordability and manufacturability

This presentation introduces a chip-scale solid-state LiDAR technology promising the cost and manufacturability advantages inherited from silicon technology. The challenge of light-source integration has been overcome by III/V-on-silicon technology, which has only recently emerged in the silicon industry. With the III/V-on-silicon chip at its core, initial LiDAR module performance, performance scalability, and application status are presented for the first time. A cost-volume analysis and ecosystem implications are also discussed.

Hear from:

Dr. Dongjae Shin
Principal Researcher
Samsung Advanced Institute of Technology

Thursday 16th September | 9:40am CET

Location: Minerva

Spatial Recall Index for the performance of machine-learning algorithms for automotive applications

We simulate a realistic objective lens based on a Cooke triplet that exhibits typical optical aberrations such as astigmatism and chromatic aberration, all varying over the field. We use a special pixel-based convolution to degrade a subset of images from the BDD100k dataset and quantify the resulting changes in the performance of the pre-trained Hybrid Task Cascade (HTC) and Mask R-CNN algorithms. We present the Spatial Recall Index (SRI), which resolves where in the image these changes occur on a pixel-by-pixel basis. Our examples demonstrate the spatial dependence of performance on optical quality over the field, highlighting the need to take the spatial dimension into account when training ML-based algorithms, especially with a view to autonomous driving applications.
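
As a rough illustration of the idea (a hypothetical sketch, not the authors' definition of the SRI): a spatially resolved recall map can be built by accumulating, per pixel, how often a ground-truth instance covers that pixel and how often that instance was actually detected:

    import numpy as np

    def spatial_recall(gt_masks_per_image, recalled_per_image, shape):
        """gt_masks_per_image: per image, a list of boolean instance masks
        recalled_per_image:    per image, one bool per mask (detected?)
        shape:                 (H, W) of the images
        """
        gt = np.zeros(shape)   # how often a GT instance covers each pixel
        tp = np.zeros(shape)   # how often that instance was recalled
        for masks, hits in zip(gt_masks_per_image, recalled_per_image):
            for mask, hit in zip(masks, hits):
                gt += mask
                if hit:
                    tp += mask
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(gt > 0, tp / gt, np.nan)

Comparing such a map for original and optically degraded images then shows where in the field of view the aberrations cost recall.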

Hear from:

Prof. Dr. Alexander Braun
Professor of Physics
University of Applied Science, Duesseldorf

Thursday 16th September | 10:05am CET

Location: Mezzanine

Scalable lidar technology for automotive and transportation applications

This presentation focuses on key considerations for high-performance, mass-market lidar solutions. It explains the key success factors that enable solution scalability, such as performance, reliability, cost, and ease of integration, and a technology path optimized to strike the right balance between those factors. It features key use cases of automotive lidars that enable safe autonomy for ADAS and AV applications, and discusses how lidars, when coupled with perception software, can provide intelligent perception to transform transportation infrastructure by enabling next-generation applications such as smart intersections, traffic analytics, and electronic free-flow tolling.

Hear from:

Dr. Jun Pei
CEO
Cepton Technologies

Thursday 16th September | 10:05am CET

Location: Minerva

Self-supervised Learning for Autonomous Driving

Self-supervised learning enables a vectorized mapping of unlabeled datasets. When dealing with large visual datasets, self-supervised techniques remove data points that are biased or redundant and would otherwise damage the AI. In autonomous driving, as companies gather petabytes of visual data, self-supervised learning enables them to identify the most relevant data points, increasing deployment speed while decreasing costs.
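
As a loose illustration of the selection step (assuming a generic self-supervised encoder; this is not Lightly's product or API), redundant frames can be dropped by comparing embeddings:

    import numpy as np

    def select_diverse(embeddings, threshold=0.95):
        """Greedily keep samples whose embedding is not a near-duplicate
        (cosine similarity >= threshold) of an already kept one.
        embeddings: (N, D) array, e.g. from a self-supervised encoder.
        """
        emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        kept = []
        for i in range(len(emb)):
            if all(float(emb[i] @ emb[j]) < threshold for j in kept):
                kept.append(i)
        return kept   # indices of the diverse subset worth keeping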

Hear from:

Igor Susmelj
CTO & Co-Founder
Lightly

Thursday 16th September | 10:30am CET

Location: Mezzanine

LiDAR Presentation from Opsys

Thursday 16th September | 10:30am CET

Location: Minerva

Dynamic Ground Truth – a core enabler for data-driven ADAS/AD development and validation

The presentation gives an overview of a “real-world data validation toolchain for ADAS/AD vehicle testing and validation” and describes its main aspects:
  • A high-precision sensor system with 360° FoV, giving an independent picture of the environment around the ego vehicle, plus an adequate data logger for recording the data streams from both the reference system and the system under test (the vehicle's sensor system).
  • A data management system that organizes and supports data ingestion, (meta)data management, and statistical data analysis in the data center / IT backbone.
  • A perception algorithm that automatically detects and classifies objects and determines their position relative to the ego vehicle with high accuracy.

Hear from:

Dr. Armin Engstle
Main Department Manager Dynamic Ground Truth System
AVL Software & Functions

10:55am | Networking Coffee Break

Thursday 16th September | 11:45am CET

Location: Mezzanine

A novel scoring methodology and tool for assessing LiDAR performance

This presentation introduces a tool that summarizes the most crucial characteristics of a LiDAR system and provides common ground for comparing each solution's pros and cons by drawing a scoring envelope over eight major parameters, representing the system's performance, its suitability for automotive applications, and its business advantages.
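
As a loose illustration of what such a scoring envelope could look like (a hypothetical sketch; the parameter list, normalization, and figure of merit are invented placeholders, not the speaker's methodology):

    # Hypothetical LiDAR scoring envelope: normalize eight parameters to
    # [0, 1] and use the radar-chart polygon area as a single score.
    import numpy as np

    PARAMS = ["range", "resolution", "fov", "frame_rate",
              "cost", "reliability", "integration", "maturity"]

    def envelope_score(raw, lo, hi):
        """raw, lo, hi: arrays of length 8 (value, worst, best)."""
        s = np.clip((np.asarray(raw, float) - lo) / (hi - lo), 0.0, 1.0)
        ang = 2.0 * np.pi / len(s)
        # area of the polygon spanned by consecutive spokes
        return 0.5 * np.sin(ang) * float(np.sum(s * np.roll(s, -1)))

Two systems can then be compared both by their overall envelope area and by where the envelopes differ parameter by parameter.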

Hear from:

Dima Sosnovsky
Principal System Architect
Huawei

Thursday 16th September | 11:45am CET

Location: Minerva

Smart data pipeline – separate the wheat from the chaff for raw sensor data

On the one hand, the data recorded during vehicle test drives has to be collected by complex vehicle setups, which have to be managed properly. Measurement equipment and test drives have to be permanently accessible, monitored, and updated in the field.
On the other hand, every bit collected has to be ingested into the data centers, which requires high-bandwidth connections at the ingest stations, and data from test drives typically arrives at the data lake in raw form.
The smart data pipeline offers a smart recording architecture in which relevant data is pre-selected while recording in the vehicle. Selecting the right data for AI training and validation saves cloud storage and speeds up the sensor development process, especially the acquisition-to-simulation time of the data.

Hear from:

Adrian Bertl
Team Lead Product Marketing
b-plus

Thursday 16th September | 12:10pm CET

Location: Mezzanine

Innovative applications of DLP MEMS devices for automotive depth-sensing use cases

Together with Audi, we show a concept of how DMD-based headlights, in conjunction with the front camera, can enable depth-generation use cases based on structured light (SL) algorithms. We give an overview of existing SL algorithms, the challenges of using them in automotive environments due to ego motion, and possible solutions to address these challenges.
Changing gears, we discuss possible applications of DMD devices in lidar receivers for ambient-noise reduction. We then introduce a revolutionary pre-production MEMS device that works on the principle of Phase Light Modulation (PLM), discuss the high-level architecture of the PLM device and the corresponding programming model based on Fourier imaging, and conclude with possible architectures highlighting the suitability of the PLM device for lidar transmitters and its advantages over existing technologies.

Hear from:

Shashank Dabral
Lead Systems Architect
Texas Instruments

Thursday 16th September | 12:10pm CET

Location: Minerva

Distributed and Federated Data will Drive Autonomous Vehicles to Open Standards

In this talk, Codeplay will begin by exploring the scope and complexity of these distributed and federated data streams, and how open-standard, industry-adopted software such as SYCL and OpenCL is already enabling all types of computer architectures in other markets. The talk will outline the requirements for processing the data in the car, at the edge, in the cloud, in the data centers, and back again.

Hear from:

Andrew Richards
CEO
Codeplay Software

Thursday 16th September | 12:35pm CET

Location: Mezzanine

Panel Discussion on challenges in validating LiDAR simulations

Hear from:

Mike Phillips
Team Lead
Siemens Digital Industries Software

Dr. Valentina Donzella
Associate Professor
University of Warwick

Thursday 16th September | 12:35pm CET

Location: Minerva

Panel Discussion: The Road to Embedded Camera API Standardization

Panellists from Khronos, EMVA, and other members of the Exploratory Group will discuss how a consistent set of interoperability standards and guidelines for embedded cameras and sensors can help solve the problems impeding growth in advanced sensor deployment. They will also share insights into the innovative Exploratory Group process that is bringing the industry together to generate consensus on catalyzing effective standardization initiatives.

Hear from:

Uwe Artmann
CTO
Image Engineering

Chris Yates
President
EMVA

1:20pm | Networking Lunch Break in the Exhibition

Thursday 16th September | 2:00pm CET

Location: Mezzanine

Discussion led by AEye on LiDAR

Thursday 16th September | 2:35pm CET

Location: Mezzanine

Keynote – How to…reinvent automotive engineering using platform technologies

  • Use 135 years of experience in automotive engineering to create a sustainable, omniscient platform by connecting engineers, sensors, and data
  • Enable automotive engineers to become data-driven and transform huge amounts of data into shareable data products
  • Showcase examples and AI research results from acoustic engineering (interior noise analysis, NVH models, and the overarching data journey)

Hear from:

Frank Schweickhardt
Head of Sound & Isolation, Thermodynamics & Airflow, R&D
Daimler

Jessica Gasper
Acoustics Engineer
Daimler

Oliver Hupfeld
CEO
Inno-Tec

Thursday 16th September | 3:20pm CET

Location: Mezzanine

Keynote TBC

Thursday 16th September | 3:55pm CET

Location: Mezzanine

Panel Discussion – Novel sensor modalities and how they could shape the future of ADAS and AVs

Hear from:

Moderated by Junko Yoshida

4:30pm | Closing remarks and close of AutoSens in Brussels 2021