AutoSens in Brussels 2021 | Watch On-Demand

Please note that you must be signed in and be a ticket holder in order to watch the content below. Buy On-Demand Access to gain immediate access to all the content from both Brussels and our latest AutoSensONLINE conference.


ON-DEMAND | FREE TO WATCH

Post-event analysis, key takeaways and insights from the latest conference in Brussels. 

Following the conference theme, “transitioning to a software-driven, integrated automotive supply chain”, the Talking Points online session brings together speakers from AutoSens in Brussels with selected members of the onsite audience. They recap the most important topics, panels and Q&A discussions, and draw insights from the wealth of conversation that happens onsite to give you a roundup of the key takeaways from the conference agenda.

Sign in or register for a free account to access the recording.

Hear from:

Christophe Lavergne
Image Sensor and Processing Expert
Renault

Roee Elizov
Chief Architect, Sensor Technologies
Harman

Rudy Burger
Managing Partner
Woodside Capital Partners

Junko Yoshida
Editor in Chief
The Ojo-Yoshida Report

Benni May
Co-Founder
Obsurver

Enguerrand Prioux
ADAS/AD Product Line Manager
Siemens Digital Industries Software

Prof. Dr. Alexander Braun
Professor of Physics
University of Applied Sciences, Duesseldorf

  • What is the sensor fusion setup in a de-centralized ADAS system?
  • Overview of the fusion and functions in such a system
  • How can the current smart-sensor setup in a de-centralized system be re-used as a remote sensor in a centralized setup?
  • Re-use of the optical path and computer vision from the smart-sensor to the remote-sensor setup

Hear from:

Raj Vazirani
Director of Radar, Camera and Global Electronics Engineering ADAS and AD
ZF Group

  • Evaluation of the contribution of both lens and sensor to the flare effect
  • Flare-effect evaluation for any position of a point light source (in and out of the field of view)
  • A comprehensive set of metrics and a tabletop test bench for flare-effect evaluation

Hear from:

Hoang-Phi Nguyen
Product Owner
DXOMARK

  • Tuning the camera hardware and the computer vision separately is not enough to reach optimal performance of a camera system for automated driving.
  • Since computer vision is the main target of a camera system for automated driving, its performance needs to be considered in every system design decision.
  • End-to-end simulation of a camera system makes it possible to evaluate design choices objectively by their effect on computer vision performance (a toy sketch of such an evaluation loop follows below).
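
To make the last point concrete, here is a purely illustrative toy evaluation loop in Python. The crude camera model, the trivial stand-in “detector” and all parameter values are made up for this sketch and do not reflect the speaker's actual toolchain; the point is only the structure, in which a design choice is scored by downstream detection rate rather than by image quality alone.

```python
"""Toy end-to-end evaluation loop: camera parameters are scored by downstream
detection accuracy on synthetic scenes, not by image quality in isolation."""
import numpy as np

rng = np.random.default_rng(0)

def make_scene(size=64):
    """Synthetic scene: dark background with one bright 8x8 'object'."""
    scene = np.zeros((size, size))
    x, y = rng.integers(8, size - 16, size=2)
    scene[y:y + 8, x:x + 8] = 1.0
    return scene, (x, y)

def simulate_camera(scene, exposure, noise_scale):
    """Crude camera model: exposure scaling, additive noise, clipping."""
    noisy = scene * exposure + rng.normal(0.0, noise_scale, scene.shape)
    return np.clip(noisy, 0.0, 1.0)

def detect(image, truth_xy, threshold=0.5):
    """Stand-in 'perception': bright pixels must mostly lie on the object."""
    mask = image > threshold
    if not mask.any():
        return False
    x, y = truth_xy
    on_object = mask[y:y + 8, x:x + 8].sum()
    return on_object / mask.sum() > 0.5

def evaluate(exposure, noise_scale, n_scenes=200):
    """Detection rate of one camera design over many synthetic scenes."""
    hits = 0
    for _ in range(n_scenes):
        scene, xy = make_scene()
        hits += detect(simulate_camera(scene, exposure, noise_scale), xy)
    return hits / n_scenes

# Compare two hypothetical camera designs by their downstream performance.
for params in [(0.9, 0.05), (0.6, 0.3)]:
    print(params, evaluate(*params))
```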

Hear from:

Dr. Damien Schroeder
Project Manager Camera Systems
BMW Group

For machine vision applications, once the iSNR of the front camera sensor alone can be calculated over the full dynamic range of luminance, carmakers and integrators can predict whether the SOTIF specification will still be met once the sensor is integrated in the vehicle. The talk will discuss a proposal for defining the SOTIF specification of a camera sensor for machine vision, a method to translate digitally from the camera sensor alone to the camera sensor integrated in the vehicle, and finally the constraints that the OECF of an HDR camera sensor must meet to enable the use of the iSNR criterion in the automotive industry.
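
As a rough illustration of the idea (not the speaker's method), the sketch below evaluates a simple incremental-SNR criterion over the full luminance range from a placeholder OECF and noise model and checks it against a hypothetical SOTIF-derived threshold. The OECF shape, noise figures, contrast and threshold are all assumptions made for this sketch.

```python
"""Illustrative sketch: check a simple incremental SNR (signal increment for a
given scene contrast divided by temporal noise) over an HDR luminance range."""
import numpy as np

luminance = np.logspace(-2, 5, 500)            # cd/m^2, full HDR range

def oecf(L):
    """Placeholder OECF: smooth log-like response mapped to 24-bit output."""
    return 2**24 * np.log1p(L) / np.log1p(1e5)

def noise_dn(L):
    """Placeholder temporal noise in DN: read noise + shot-noise-like term."""
    return 50.0 + 0.02 * np.sqrt(oecf(L))

contrast = 0.3                                  # hypothetical object contrast
dn = oecf(luminance)
slope = np.gradient(dn, luminance)              # dDN/dL from the OECF
isnr = slope * contrast * luminance / noise_dn(luminance)

threshold = 10.0                                # hypothetical SOTIF-derived floor
ok = isnr >= threshold
print(f"iSNR above {threshold} over {100 * ok.mean():.1f}% of the luminance range")
```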

Hear from:

Christophe Lavergne
Specialist Image Sensor and Processing
Renault

This talk covers how an AI layer for post-processing the radar's data enables many advanced real-time features, and why 4D imaging radar technology is the perfect counterpart to the camera: it offers redundancy not by doing "more of the same" but by relying on a different technology, so that the strengths and weaknesses of radar and camera complement each other. Achieving true safety for any Level 2 application, hands-free driving, and ultimately full autonomy therefore requires the fusion of both sensors.

Hear from:

Matan Nurick
Director of Product Management
Arbe

The importance of camera-based sensing systems is increasing; for "Level 2+" applications, a camera-based sensing system is already the de-facto configuration.
Improvements to image sensor characteristics and functionality are strongly required, as they influence the total performance of the sensing system.
In this presentation, the key characteristics of the image sensors will be presented, and the state of the art in functional safety and cybersecurity requirements for a reliable and robust sensing/viewing system will be discussed.

Hear from:

Yuichi Motohashi
Automotive Image Sensor Applications Engineer
Sony

  • Overview of the current technologies used for detecting surroundings in autonomous systems, and why they fall short of the capabilities necessary for autonomous vehicles.
  • Introduction of a new SWIR-based sensor modality (“SEDAR”) that provides HD imaging and ranging information in all conditions: how it works, its main benefits, and why it is the future.
  • Experimental evidence of SEDAR’s superiority over sensors at other wavelengths, including recordings in difficult conditions such as nighttime, fog, glare and dust, as well as depth-map results from the field.

Hear from:

Ziv Livne
CBO
TriEye

RGBIr sensors are becoming increasingly popular, as the possibility of generating both RGB and IR content from a single sensor is a key enabler for many applications.
Being able to handle RGBIr data effectively is crucial: image quality in the RGB domain is one of the most important KPIs, while a full-resolution IR image is key to supporting the computer vision analysis of the scene.
In this presentation an effective architecture for managing RGBIr content will be presented. The challenges of bright-light scenarios for color rendering as well as low-light scenes will be addressed, and multiple modes of RGB and IR content reconstruction will be described. A set of videos showing the two reconstructed streams will also be presented to give the audience an overview of the possible use cases.
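
As a loose illustration of one way the two streams could be separated (an assumption for this sketch, not the architecture presented in the talk), the code below splits a simple 2x2 RGBIr mosaic into an RGB image and a full-resolution IR image by nearest-neighbour upsampling of each plane; the CFA layout and the IR-leak factor are placeholders.

```python
"""Minimal RGBIr separation sketch: extract RGB and full-resolution IR planes
from a 2x2 mosaic and subtract a fixed IR fraction from the colour channels."""
import numpy as np

def split_rgbir(raw, ir_leak=0.2):
    """raw: HxW mosaic with assumed pattern [[R, G], [IR, B]]."""
    h, w = raw.shape
    offsets = {"R": (0, 0), "G": (0, 1), "IR": (1, 0), "B": (1, 1)}
    planes = {}
    for name, (dy, dx) in offsets.items():
        sub = raw[dy::2, dx::2]
        # Nearest-neighbour upsampling back to full resolution.
        planes[name] = np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)[:h, :w]
    ir = planes["IR"]
    # Remove an assumed fixed fraction of IR contamination from the colours.
    rgb = np.stack([np.clip(planes[c] - ir_leak * ir, 0, None)
                    for c in ("R", "G", "B")], axis=-1)
    return rgb, ir

raw = np.random.default_rng(1).random((8, 8))
rgb, ir = split_rgbir(raw)
print(rgb.shape, ir.shape)   # (8, 8, 3) (8, 8)
```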

Hear from:

Pier Paolo Porta
Marketing Senior Manager
Ambarella

Vehicles need to operate safely in all conditions. While current perception approaches have enabled ADAS and Autonomous Vehicles to make progress in that regard, it is crucial for “all conditions” to include the most difficult scenarios such as darkness and poor weather, i.e. rain, snow, and fog. Unfortunately, current testing guidelines from NHTSA and EuroNCAP evaluate vehicle safety in nominal good conditions only. Numerous reports show a lack of robustness in these harsh scenarios. For example, a recent AAA report testing late-model vehicles concluded that Automated Emergency Braking (AEB) consistently fails in darkness. Algolux will review the challenges of robust computer vision, describe advanced machine learning approaches to improve computer vision accuracy under all scenarios and show direct comparisons and benchmarks against open and commercial perception solutions.

Hear from:

Matthias Schulze
VP Europe & Asia
Algolux

In-cabin sensing systems have been emerging at an unprecedented pace due to upcoming regulations and safety standards, accompanied by ongoing efforts from sensor and illuminator suppliers to address the demands of these new systems. This presentation covers different illumination solutions, including IRED and VCSEL technologies, and their current challenges.

Hear from:

Firat Sarialtun
Segment Manager
ams OSRAM

In the automotive industry, design validation and product validation tests are performed in thermal chamber conditions. During these tests, only the ECU is tested, without considering surrounding parts, solar radiation, and real-life convection conditions. Thermal chamber conditions differ from extreme real-life conditions: in an extreme real car environment, surrounding parts such as brackets and protection covers reduce forced convection significantly.
This presentation analyzes the difference between these conditions and explains why they are not substitutes for each other. It is essential to analyze both situations and to draw the right conclusions for each, being aware of the limitations.

Hear from:

Cristina Dragan
Thermal Analyst Expert
Continental

Although global shutter operation is required to minimize motion artifacts in in-cabin monitoring, it forces large changes in the CIS architecture. Most global shutter CMOS image sensors available on the market today have larger pixels and lower dynamic range than rolling shutter image sensors, which adversely impacts their size/cost and their performance under different lighting conditions. In this paper we describe the architecture and operation of backside-illuminated voltage-mode global shutter pixels, and how their dynamic range can be extended using either multiple integration times or LOFIC techniques. We also describe how these pixels can be scaled, enabling smaller, more cost-effective camera solutions, and present results from recent backside-illuminated voltage-mode global shutter CIS.
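
To illustrate the multiple-integration-time idea in its simplest form (an assumption for illustration, not the design described in the talk), the sketch below merges a long and a short exposure into one linear signal; the bit depth, saturation level and exposure ratio are placeholders.

```python
"""Sketch of dynamic-range extension by merging two integration times:
saturated long-exposure pixels are replaced by the short exposure scaled
by the exposure ratio, yielding roughly log2(ratio) extra stops."""
import numpy as np

def merge_exposures(long_dn, short_dn, ratio=16.0, sat_level=4000):
    """long_dn, short_dn: raw frames in DN; ratio = T_long / T_short."""
    long_dn = long_dn.astype(float)
    short_dn = short_dn.astype(float)
    return np.where(long_dn < sat_level, long_dn, short_dn * ratio)

rng = np.random.default_rng(2)
scene = rng.uniform(0, 60000, (4, 4))            # 'true' linear signal
long_dn = np.clip(scene, 0, 4095)                # 12-bit capture, saturates
short_dn = np.clip(scene / 16.0, 0, 4095)        # darker, rarely saturates
print(merge_exposures(long_dn, short_dn).round()) # close to the true signal
```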

Hear from:

Boyd Fowler
CTO
OmniVision Technologies

  • Converging application requirements set the lidar requirements
  • Integration positions require application and/or operational benefits
  • Architecture and lidar performance must fit the application and the integration
  • Innovative partnership setups for modular or total solutions

Hear from:

Filip Geuens
CEO
XenomatiX

Frederic Chave
Director Product Management
Marelli

We simulate a realistic objective lens based on a Cooke triplet that exhibits typical optical aberrations such as astigmatism and chromatic aberration, all varying over the field. We use a special pixel-based convolution to degrade a subset of images from the BDD100k dataset and quantify the changes in the performance of the pre-trained Hybrid Task Cascade (HTC) and Mask R-CNN algorithms. We present the SRI, which spatially resolves, on a pixel-by-pixel basis, where in the image these changes occur. Our examples demonstrate how performance depends spatially on the optical quality over the field, highlighting the need to take the spatial dimension into account when training ML-based algorithms, especially with a view to autonomous driving applications.
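
A loose sketch of the general approach is shown below (an assumption for illustration, not the exact degradation model from the talk): a field-dependent optical degradation is approximated by blending a sharp frame with a strongly blurred one using a radial weight, after which per-pixel changes in detection performance would be accumulated over a dataset.

```python
"""Approximate a field-dependent blur: sharp at the image centre, soft towards
the corners, mimicking aberrations that grow with field height."""
import numpy as np
from scipy.ndimage import gaussian_filter

def field_dependent_blur(image, max_sigma=3.0):
    """image: HxW grayscale array. Blur weight grows with field height."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial distance from the optical axis (0 centre, 1 corners).
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    blurred = gaussian_filter(image, sigma=max_sigma)
    return (1 - r) * image + r * blurred

img = np.random.default_rng(3).random((120, 160))
degraded = field_dependent_blur(img)
# Next step (not shown): run the pre-trained detector on original vs. degraded
# frames and accumulate, per pixel, where true positives are lost.
```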

Hear from:

Prof. Dr. Alexander Braun
Professor of Physics
University of Applied Sciences, Duesseldorf

This presentation focuses on key considerations for high performance, mass-market lidar solutions. It will explain the key success factors that enable solution scalability, such as performance, reliability, cost and ease of integration, and a technology path optimized to strike the right balance between those factors. It will feature some of the key use cases of automotive lidars that enable safe autonomy for ADAS and AV applications. It will also talk about how lidars, when coupled with perception software, can provide intelligent perception to transform transportation infrastructure by enabling next-generation applications such as smart intersections, traffic analytics and electronic free flow tolling.

Hear from:

Dr. Winston Fu
CFO
Cepton Technologies

Self-supervised learning enables a vectorized mapping of unlabeled datasets. When dealing with large visual datasets, self-supervised techniques remove data points that are biased or redundant and would otherwise harm the AI. In autonomous driving, as companies gather petabytes of visual data, self-supervised learning enables them to identify the most relevant data points, increasing deployment speed while decreasing costs.
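
The sketch below illustrates the general idea of embedding-based redundancy filtering (an assumption about the approach, not the speaker's implementation): samples whose embeddings are nearly identical to one already kept are dropped, so the retained subset covers the dataset more evenly.

```python
"""Toy redundancy filter: keep a sample only if its embedding is not too
similar (cosine similarity) to any embedding already kept."""
import numpy as np

def select_diverse(embeddings, max_similarity=0.95):
    """Greedy pass over embedding vectors; returns indices of kept samples."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept = []
    for i, vec in enumerate(normed):
        if all(vec @ normed[j] < max_similarity for j in kept):
            kept.append(i)
    return kept

rng = np.random.default_rng(4)
base = rng.normal(size=(20, 128))
# Append near-duplicates of every sample to simulate redundant frames.
dataset = np.vstack([base, base + 0.01 * rng.normal(size=base.shape)])
print(len(select_diverse(dataset)), "of", len(dataset), "samples kept")
```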

Hear from:

Igor Susmelj
CTO & Co-Founder
Lightly

In this talk we will discuss how to address ADAS application requirements via a modular LiDAR system design. We will look at specific ADAS applications and critical use cases and discuss how LiDAR systems can be tailored to meet performance requirements at minimum cost. As concrete tailoring examples, we will discuss the L3 Highway Pilot and L4 autonomous truck cases.

Hear from:

Alexandr Leuta
Business Development Manager
Opsys

The presentation gives an overview of a “real-world data validation toolchain for ADAS/AD vehicle testing and validation” and describes its main aspects:
• A high-precision sensor system with 360° FoV providing an independent picture of the environment around the ego vehicle, plus an adequate data logger for recording the data streams from both the reference system and the system under test (the vehicle's sensor system).
• A data management system that organizes and supports data ingestion, (meta-)data management and statistical data analysis in the data center / IT backbone.
• A perception algorithm that automatically detects and classifies objects and determines their positions relative to the ego vehicle with high accuracy (a minimal coordinate-transform sketch follows below).
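
For the last point, here is a minimal, hypothetical coordinate-transform sketch (not the toolchain itself): object positions measured in the reference sensor's frame are expressed in the ego-vehicle frame, given the sensor's mounting pose as a 2D rigid transform.

```python
"""Transform detected object positions from a sensor frame into the ego frame."""
import numpy as np

def sensor_to_ego(points_xy, mount_x, mount_y, mount_yaw_rad):
    """points_xy: Nx2 positions in the sensor frame -> Nx2 in the ego frame."""
    c, s = np.cos(mount_yaw_rad), np.sin(mount_yaw_rad)
    rotation = np.array([[c, -s], [s, c]])
    return points_xy @ rotation.T + np.array([mount_x, mount_y])

detections = np.array([[10.0, 0.0], [25.0, -2.0]])   # metres, sensor frame
print(sensor_to_ego(detections, mount_x=1.5, mount_y=0.0, mount_yaw_rad=0.0))
```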

Hear from:

Dr. Armin Engstle
Main Department Manager Dynamic Ground Truth System
AVL Software & Functions

Together with the Audi Group, we show a concept for how DMD-based headlights, in conjunction with the front camera, can enable depth-generation use cases based on Structured Light (SL) algorithms. We provide an overview of existing SL algorithms, the challenges of using them in automotive environments due to ego motion, and possible solutions to address these challenges.
Changing gears, we discuss possible applications of DMD devices in lidar for ambient-noise reduction in the receiver path. We then introduce a revolutionary pre-production MEMS device that works on the principle of Phase Light Modulation (PLM), discuss the high-level architecture of the PLM device and the corresponding programming model based on Fourier imaging, and conclude with possible architectures highlighting the suitability of the PLM device for lidar transmitters and its advantages over existing technologies.
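
As a textbook-level illustration of the structured-light principle involved (not the algorithm from the talk), the sketch below recovers depth from the disparity between a projected pattern column and the camera column where it is observed, under a rectified pinhole model with a hypothetical headlight-to-camera baseline.

```python
"""Structured-light triangulation sketch: depth z = baseline * focal / disparity
for matched projector/camera columns in a rectified setup."""
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Depth in metres; infinite where the disparity is zero or negative."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return np.where(disparity_px > 0,
                    baseline_m * focal_px / disparity_px,
                    np.inf)

# Hypothetical 0.8 m headlight-to-camera baseline and 1200 px focal length.
print(depth_from_disparity([40.0, 20.0, 8.0], baseline_m=0.8, focal_px=1200.0))
# -> depths of about 24 m, 48 m and 120 m
```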

Hear from:

Shashank Dabral
Lead Systems Architect
Texas Instruments

Ridon Arifi
PhD Student
KIT collaborating with Audi

On the one hand, the data recorded during vehicle test drives has to be collected by complex vehicle setups, which have to be managed properly: measurement equipment and test drives have to be permanently accessible, monitored and updated in the field.
On the other hand, every bit collected has to be ingested into the data centers, which requires high-bandwidth connections between ingest stations; at the same time, the data from test drives typically arrives at the data lake in raw form.
The smart data pipeline offers a smart recording architecture in which relevant data is already pre-selected while recording in the vehicle. Selecting the right data for AI training and validation saves cloud storage and speeds up the development process with sensors, especially the time from data acquisition to simulation.
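
A toy sketch of the pre-selection idea is shown below (an assumption about the general concept, not the product described in the talk): incoming frames are scored in the recorder and only those judged relevant, here frames where the on-board perception was unsure, are kept.

```python
"""Toy in-vehicle pre-selection: record only frames likely to be useful for
AI training, e.g. frames with ambiguous detector confidence."""
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float
    detector_confidence: float   # lowest confidence among detections in frame

def is_relevant(frame, low=0.3, high=0.7):
    """Keep ambiguous frames: confident or empty frames add little training value."""
    return low <= frame.detector_confidence <= high

frames = [Frame(0.0, 0.95), Frame(0.1, 0.55), Frame(0.2, 0.10), Frame(0.3, 0.65)]
recorded = [f for f in frames if is_relevant(f)]
print(f"kept {len(recorded)} of {len(frames)} frames")
```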

Hear from:

Adrian Bertl
Team Lead Product Marketing
b-plus

In this talk, Codeplay will begin exploring the scope and complexity of these distributed and federated data streams, and how open-standard, industry-adopted software such as SYCL and OpenCL is already enabling all types of computer architectures in other markets. The talk will outline the requirements for processing the data in the car, on the edge, in the cloud, in the data centers and back again.

Hear from:

Andrew Richards
CEO
Codeplay Software

  • Use 135 years of experience in automotive engineering to create a sustainable, omniscient platform by connecting engineers, sensors and data
  • Enable automotive engineers to become data-driven and transform huge amounts of data into shareable data products
  • Showcase examples and AI research results from acoustic engineering (interior noise analysis, NVH models and the overarching data journey)

Hear from:

Frank Schweickhardt
Head of Sound & Isolation, Thermodynamics & Airflow, R&D
Mercedes-Benz

Jessica Gasper
Acoustics Engineer
Mercedes-Benz

Oliver Hupfeld
CEO
Inno-Tec