12-14 SEPTEMBER 2022 | AUTOWORLD, BRUSSELS

AutoSens Brussels | On-Demand

Automated vehicles (AVs) are complex systems operating in a very complex environment: the assessment of their safety performance cannot be limited to approval before market introduction but should continue throughout their lifetime. Operational experience feedback is also key to accident prevention, enabling the identification of appropriate remedial actions and sustaining a process of continual improvement. Beyond that, sharing lessons learned plays a central role in improving AV safety at all levels, across geographical and organisational borders. The presentation will give an overview of EU legislative provisions for in-service monitoring and reporting of AVs, also comparing them with approaches adopted by other regulators on the same matter.

Hear from:


M. Cristina Galassi
Project Leader – Safety of Connected and Automated Vehicles
European Commission DG JRC


This work focuses on the latest automotive high dynamic range (HDR) image sensors with LED flicker mitigation (LFM) and on the characteristics that impact sensing for ADAS and AV systems. A comparative study of the latest 2.1 µm and 3 µm automotive imaging sensors highlights their object sensing capabilities. Through use cases covering typical scenarios across automotive temperatures and lighting conditions, we showcase the best imaging solutions. We highlight that image sharpness and resolution, in combination with total signal-to-noise ratios in low light and in transitions, are important considerations for high-safety autonomous designs, especially in corner cases. Additionally, we study the limitations of basic HDR pixel architectures in relation to detecting colors and features in the automotive space.
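
As a rough illustration of why total signal-to-noise ratio matters in low light, the sketch below applies a simple shot-noise-plus-read-noise model; the electron counts and read noise are assumptions for illustration, not onsemi measurements.

```python
import numpy as np

# Hypothetical electron counts and read noise, used only to illustrate how
# pixel size affects low-light SNR; these are not onsemi measurements.
def snr_db(signal_e, read_noise_e):
    """SNR in dB for a given mean signal (electrons) and read noise (electrons RMS)."""
    noise = np.sqrt(signal_e + read_noise_e ** 2)  # shot noise combined with read noise
    return 20 * np.log10(signal_e / noise)

for label, electrons in [("2.1 um pixel, low light", 40), ("3 um pixel, low light", 80)]:
    print(f"{label}: {snr_db(electrons, read_noise_e=2.0):.1f} dB")
```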

Hear from:

Sergey Velichko
Sr. Manager, ASD Technology and Product Strategy
onsemi


Implementation of camera systems in automotive applications continues to increase in number and to expand in functionality. From exterior surround-view camera systems with multiple cameras and thermal imaging systems for night vision to in-cabin cameras for driver and passenger monitoring, improving image quality remains an important element of achieving optimal performance and functionality. Automobiles operate in countless types of illumination and environmental conditions, which makes numerous fundamental assessment protocols necessary when measuring and optimizing image quality for aspects such as ADAS and autonomous vehicle performance. This talk will provide insights into image quality methodologies and considerations for benchmarking visible and IR camera systems relevant to the automotive industry. Measurements of image quality factors including dynamic range, stray light, and sharpness will be described with regard to the differences between image quality improvements for machine vision and for human vision interpretation.
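
For reference, dynamic range is quoted in either decibels or photographic stops; the short sketch below converts between the two under the standard definition (the 140 dB and 24-stop values are just example figures).

```python
import math

# Dynamic range in dB vs photographic stops; 1 stop = a factor of 2 in
# luminance = 20*log10(2) ≈ 6.02 dB. The 140 dB / 24-stop values are examples.
DB_PER_STOP = 20 * math.log10(2)

def db_to_stops(dr_db):
    return dr_db / DB_PER_STOP

def stops_to_db(stops):
    return stops * DB_PER_STOP

print(f"140 dB ≈ {db_to_stops(140):.1f} stops")   # ≈ 23.3 stops
print(f"24 stops ≈ {stops_to_db(24):.0f} dB")     # ≈ 144 dB
```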

Hear from:

Jonathan Phillips
VP of Imaging Science
Imatest LLC


The sensor technology behind future automated driving functions and advanced driver assistance systems must work safely in all weather conditions. Currently, certification tests are usually performed only in good weather, not, for example, in rain or fog. To be able to test weather effects independently of real outdoor conditions, AVL is currently establishing an indoor sensor center for the verification and validation of sensors for driver assistance systems.

Hear from:

Dr.-Ing. Armin Engstle
Main Department Manager
AVL Software and Functions GmbH


Today’s edge-processed imaging radar systems lack the compute capabilities to enable high-performance radar machine learning, as well as the memory speed to transfer the data and maximize the features of the latest radar transceivers. These systems also consume too much power, leading to thermal challenges. The solution is to use a central domain controller for radar data processing. This presentation addresses the benefits of combining centrally processed radar data with AI virtual aperture imaging software, which enables significantly better angular resolution and point-cloud density with standard and cost-effective radar semiconductor components. First, the limitations of edge-processed radar will be addressed, followed by how these challenges can be resolved via centrally processed radar. Next, data transport from the radar module to the central processor will be discussed. Then, virtual aperture imaging software technology will be introduced and its benefits discussed, including dynamic modulation of radar transmit signals and hardware/software sparsity.
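
A back-of-the-envelope sketch of why a larger effective aperture improves angular resolution, using the rule of thumb that azimuth resolution scales as wavelength over aperture; the array geometry and the virtual-aperture gain are assumptions, not Ambarella figures.

```python
import math

# Rule of thumb: azimuth resolution ≈ wavelength / aperture (in radians).
# The array geometry and the 8x virtual-aperture gain below are assumptions.
c = 3e8
f = 77e9                     # 77 GHz automotive radar
lam = c / f                  # wavelength ≈ 3.9 mm

def az_resolution_deg(aperture_m):
    return math.degrees(lam / aperture_m)

physical = 12 * lam / 2      # e.g. 12 receive elements at half-wavelength spacing
virtual = 8 * physical       # hypothetical 8x effective-aperture improvement
print(f"physical aperture: {az_resolution_deg(physical):.1f} deg")
print(f"virtual aperture:  {az_resolution_deg(virtual):.2f} deg")
```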

Hear from:

Paul Dentel

Paul Dentel
Sr. Technical Product Manager, Radar Systems
Ambarella, Inc.


Road fatalities, at 1.35 million deaths each year, are the eighth leading cause of death worldwide, and 23% of the victims are pedestrians. In the US and Europe, 75% of accidents happen in poor weather and lighting conditions.
Safer vehicles and active safety are a promising way to reduce this toll, with features such as Active Emergency Braking (AEB). Currently, AEB relies mainly on RGB cameras and radar, which suffer from limitations in exactly the challenging situations that account for the majority of accidents.
Thermal imaging is a complementary technology to RGB that can extend AEB use cases. This talk will explore thermal imaging physics and optics considerations for AEB use cases. A dynamic simulation model built by Lynred, coupled with the Johnson criteria, will provide range and performance estimates. The conclusion will explore different configurations to improve system performance.
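
A minimal sketch of a Johnson-criteria style range estimate, assuming a hypothetical pixel pitch, focal length and target size; it is not Lynred's simulation model, only an illustration of how such estimates are formed.

```python
# Johnson-criteria style range estimate; pixel pitch, focal length and the
# pedestrian critical dimension are hypothetical, not Lynred's model parameters.
def johnson_range_m(target_size_m, pixel_pitch_m, focal_length_m, cycles_required):
    ifov = pixel_pitch_m / focal_length_m   # radians subtended by one pixel
    cycle_ifov = 2 * ifov                   # one resolvable cycle spans two pixels
    return target_size_m / (cycles_required * cycle_ifov)

pedestrian = 0.75   # critical dimension in metres (conventions vary)
pitch = 12e-6       # 12 um microbolometer pixel
focal = 13e-3       # 13 mm lens
for task, cycles in [("detection", 1.0), ("recognition", 4.0), ("identification", 8.0)]:
    print(f"{task}: ~{johnson_range_m(pedestrian, pitch, focal, cycles):.0f} m")
```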

Hear from:


Quentin Noir
Product Manager
Lynred


Recently, OEMs have been looking to extend front cameras that were initially focused on sensing applications, such as ADAS, to viewing applications, such as augmented reality (AR) cameras. Such an application brings unique challenges. For instance, how do we construct an image processing pipeline that serves both applications, delivering maximum capability for machine vision and stunning viewing images simultaneously? Where should the ISP block be placed? In this presentation, the key challenges of a system that realizes both sensing and viewing applications from a single automotive camera will be discussed.
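
A minimal sketch of the one-sensor, two-consumers idea, assuming a linear HDR frame that feeds a near-untouched sensing branch and a tone-mapped, gamma-encoded viewing branch; the processing steps are placeholders, not Sony's pipeline.

```python
import numpy as np

# Placeholder processing steps, not Sony's ISP: the same linear frame feeds a
# sensing branch (kept linear for the perception network) and a viewing branch
# (tone-mapped and gamma-encoded for human/AR display).
def viewing_branch(linear):
    tone_mapped = linear / (linear + 0.25)               # simple global tone curve
    return np.clip(tone_mapped, 0.0, 1.0) ** (1 / 2.2)   # display gamma

def sensing_branch(linear):
    return linear                                         # leave data linear for machine vision

frame = np.random.rand(4, 4).astype(np.float32)           # stand-in for a demosaiced HDR frame
print("viewing sample:", float(viewing_branch(frame)[0, 0]))
print("sensing sample:", float(sensing_branch(frame)[0, 0]))
```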

Hear from:


Yuichi Motohashi
Product Manager, Automotive CMOS Image Sensors
Sony Semiconductor Solutions


Computer vision has transitioned from traditional image processing to machine learning (ML) based solutions. Zendar believes the emerging radar architecture, with satellite front-ends and central processing, provides an opportunity to bring machine learning to radar signal processing. The traditional radar processing pipeline uses a variant of a threshold detector to extract a point cloud from the radar data cube. The majority of threshold detectors have a limited field of view, which reduces them to local peak detectors. By using spatial and temporal information combined with a multi-scale field of view, an ML detector can achieve a higher true positive rate with a significantly lower false positive rate. It also enables the ML detector to detect and remove ghost targets, which is not possible with current threshold detectors. End-to-end training of such an ML-based approach utilizes the semantic and discriminative features encoded in the satellite radar architecture data. Zendar will present its advancements from the past three years of research to bring machine learning to automotive radar.
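
To illustrate why a conventional threshold detector behaves like a local peak detector, the sketch below implements a textbook 1-D cell-averaging CFAR; the window sizes and scaling factor are arbitrary assumptions, and this is not Zendar's pipeline.

```python
import numpy as np

# Textbook 1-D cell-averaging CFAR (not Zendar's pipeline). Each cell is compared
# against the average power of a small training window around it, so the detector
# only ever "sees" a local neighbourhood - hence the local-peak-detector behaviour.
def ca_cfar(power, guard=2, train=8, scale=4.0):
    n = len(power)
    detections = []
    for i in range(n):
        left = power[max(0, i - guard - train): max(0, i - guard)]
        right = power[min(n, i + guard + 1): min(n, i + guard + 1 + train)]
        window = np.concatenate([left, right])
        if window.size and power[i] > scale * window.mean():
            detections.append(i)
    return detections

rng = np.random.default_rng(0)
power = rng.exponential(1.0, 256)           # noise floor
power[[50, 51, 200]] += [30.0, 25.0, 40.0]  # injected targets
print("detected cells:", ca_cfar(power))
```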

Hear from:


Reza Mostajabi Ph.D.
Head of Machine Learning
Zendar Inc.


Deep learning and computer vision have shown immense strength in real-world applications. This presentation will focus on using thermal imaging technology to design an intelligent forward sensing system that is effective in all weather and environmental conditions. This work is carried out under the Heliaus project, funded by the European Union's Horizon 2020 research and innovation programme and by France, Germany, Ireland, and Italy. The system works by deploying thermally tuned deep learning networks on GPU and single-board edge-GPU computing platforms for onboard automotive sensor suite testing. State-of-the-art object detection models are trained and fine-tuned on a large-scale novel C3I thermal dataset comprising more than 35K distinct thermal frames collected from 640×480 uncooled thermal cameras, along with four different large-scale publicly available thermal datasets. The trained variant of the YOLO object detector is further optimized using a state-of-the-art neural inference accelerator (TensorRT) to boost the frame rate and cut the overall inference time. The optimized network engine increases the frame rate by 3.5 times when testing on low-power edge devices, achieving 11 fps on the Nvidia Jetson Nano and 60 fps on the Nvidia Xavier NX edge-GPU computing boards.
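
A quick sanity check of the quoted figures, assuming the 3.5x speed-up applies uniformly to both boards (the abstract only states the optimized frame rates).

```python
# Assumption: the 3.5x TensorRT speed-up applies uniformly to both boards; the
# abstract only quotes the optimized frame rates.
optimized_fps = {"Jetson Nano": 11, "Xavier NX": 60}
speedup = 3.5
for board, fps in optimized_fps.items():
    print(f"{board}: ~{fps / speedup:.1f} fps before optimization -> {fps} fps after")
```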

Hear from:

Waseem Shariff
R&D Associate Engineer
Xperi Corporation


As more sensor-based safety critical systems are added to vehicles, the benefits of a standards-based approach for sensor connectivity also increase – and even more so if functional safety requirements are already integrated into that solution.
This presentation will demonstrate how the MIPI Automotive SerDes Solutions (MASS) framework provides a standardized sensor-to-ECU solution for autonomous driving and ADAS systems with functional safety built into its core. The presentation will describe the MASS framework with specific focus on the functional safety features that have been embedded “throughout the stack,” starting from the Camera Service Extensions (MIPI CSE) layer to the baseline MIPI CSI-2 image sensor protocol, to the MIPI A-PHY SerDes physical layer. Also highlighted will be upcoming features for CSE and A-PHY, and a discussion of how this solution will enable developers to embed functional safety natively at the ‘edge’, within the sensor and ECU components themselves.

Hear from:


Ariel Lasry
Vice Chair
MIPI A-PHY Working Group
Director, Technical Standards
Qualcomm


One of the biggest challenges associated with sensing in assisted and automated driving is the amount of data produced by the environmental perception sensor suite, particularly for higher levels of autonomy requiring numerous sensors and different sensor technologies. Sensor data need to be transmitted to the processing units with very low latency and without hindering the performance of perception algorithms, e.g. object detection, classification, segmentation, prediction, etc. However, the amount of data produced by a possible SAE J3016 level 4 sensor suite can add up to 40 Gb/s, which cannot be supported by traditional vehicle communication networks. There is therefore a need to consider robust techniques to compress and reduce the data that each sensor transmits to the processing units. The most commonly used video compression standards have been optimised for human vision, but for assisted and automated driving functions the consumer of the data will likely be a perception algorithm based on well-established deep neural networks. This work demonstrates how lossy compression of video camera data can be used in combination with deep-neural-network-based object detection.
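
To see how a sensor suite reaches tens of Gb/s, the sketch below estimates raw bandwidth for a hypothetical camera set; the camera count, resolution, bit depth and frame rate are assumptions, not figures from the presentation.

```python
# Hypothetical camera suite used only to show the order of magnitude; the 40 Gb/s
# figure in the abstract covers the full sensor suite, not just cameras.
cameras = 8
width, height = 3840, 2160   # 8 MP per camera
bits_per_pixel = 12
fps = 30

per_camera_gbps = width * height * bits_per_pixel * fps / 1e9
total_gbps = cameras * per_camera_gbps
print(f"per camera: {per_camera_gbps:.1f} Gb/s")
print(f"suite of {cameras} cameras: {total_gbps:.1f} Gb/s (before lidar/radar streams)")
```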

Hear from:


Dr. Valentina Donzella
Associate Professor, Head of Intelligent Vehicles - Sensors Group
WMG, University of Warwick


In this talk we present a new approach to feature fusion between RGB and LWIR thermal images for the task of semantic segmentation for driving perception. We propose a double DeepLab architecture with specialized encoder-decoders for the thermal and color modalities and a shared decoder for final segmentation. We combine two strategies for feature fusion: confidence weighting and correlation weighting. We report state-of-the-art results on the thermal-color semantic segmentation task.
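
One plausible form of the confidence-weighting step, sketched for illustration; the actual architecture, weighting functions and tensor shapes used by the authors are not specified here.

```python
import torch

# Hypothetical shapes and weighting function; the authors' exact fusion modules
# are not reproduced here.
def confidence_weighted_fusion(f_rgb, f_thermal, conf_rgb, conf_thermal):
    # conf_* are per-pixel confidence logits (B, 1, H, W); a softmax across the
    # two modalities yields weights that sum to 1 at every spatial location.
    weights = torch.softmax(torch.cat([conf_rgb, conf_thermal], dim=1), dim=1)
    return weights[:, :1] * f_rgb + weights[:, 1:] * f_thermal

f_rgb = torch.randn(2, 64, 32, 32)        # RGB encoder features
f_thermal = torch.randn(2, 64, 32, 32)    # LWIR encoder features
conf_rgb = torch.randn(2, 1, 32, 32)
conf_thermal = torch.randn(2, 1, 32, 32)
print(confidence_weighted_fusion(f_rgb, f_thermal, conf_rgb, conf_thermal).shape)
```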

Hear from:


Oriel Frigo
AI Engineer
AnotherBrain


This presentation will discuss trends in image sensor resolution and frame-rate increases for automotive cameras. Combined with the need to capture multiple images per frame for HDR purposes, this has been driving a drastic increase in data transfer bandwidth. We will discuss how current interface technologies can cope with this higher bandwidth requirement and how new standards such as MIPI A-PHY and ASA can enhance video link architecture and functionality. We will present how OMNIVISION has been preparing to serve the needs of the market for the coming years.

Hear from:


Mario Heid
Vice President, Europe
OMNIVISION


Autonomous driving (AD) systems need redundancy, and ADAS needs more reliable smart functions, which leads to an essential need to enable fully autonomous applications with low-cost sensors. This also seems to be the only way to scale up AD in mass production. In this talk, Panasonic's novel solutions for computer vision, sensor fusion, and path planning will be presented; these are the core components in our level 3 and 4 autonomous parking systems using low-cost sensors such as cameras and ultrasonics.

Hear from:

Duong-Van Nguyen
Head of ADAS
Panasonic 

