Automated vehicles (AVs) are complex systems operating in a very complex environment: the assessment of their safety performance cannot be limited to approval before market introduction but should continue throughout their lifetime. Operational experience feedback is also key to accident prevention, enabling the identification of appropriate remedial actions and sustaining a process of continual improvement. Further to that, sharing lessons learned plays a central role in improving AV safety at all levels, beyond geographical or organizational borders. The presentation will give an overview of EU legislative provisions for AV in-service monitoring and reporting, comparing them with approaches adopted by other regulators on the same matter.
M. Cristina Galassi
Project Leader – Safety of Connected and Automated Vehicles
European Commission DG JRC
This work focuses on the latest automotive high dynamic range (HDR) image sensors with LED flicker mitigation (LFM) and on their characteristics affecting sensing for ADAS and AV systems. A comparative study of the latest 2.1 µm and 3 µm automotive imaging sensors highlights their object-sensing capabilities. Through use cases covering typical scenarios across automotive temperatures and lighting conditions, we showcase the best imaging solutions. We highlight that image sharpness and resolution, in combination with total signal-to-noise ratios in low light and in transitions, are important considerations for high-safety autonomous designs, especially in corner cases. Additionally, we study the limitations of basic HDR pixel architectures in detecting colors and features in the automotive space.
Sr. Manager, ASD Technology and Product Strategy
Implementation of camera systems in automotive applications continues to increase in number and expand in functionality. From exterior surround-view camera systems with multiple cameras and thermal imaging systems for night vision to in-cabin cameras for driver and passenger monitoring, improving image quality remains an important element of achieving optimal performance and functionality. Automobiles operate in countless types of illumination and environmental conditions, which necessitates considering numerous fundamental assessment protocols when measuring and optimizing image quality for aspects such as ADAS and autonomous vehicle performance. This talk will provide insights into image quality methodologies and considerations for benchmarking visible and IR camera systems relevant to the automotive industry. Measurements of image quality factors including dynamic range, stray light, and sharpness will be described, highlighting the differences between image quality improvements for machine vision purposes and for human vision interpretation.
VP of Imaging Science
The sensor technology of future automated driving functions and advanced driver assistance systems must work safely in all weather conditions. Currently, certification tests are usually only performed in good weather conditions, but not in rain and fog, for example. In order to be able to test weather effects independently of real outdoor conditions, AVL is currently establishing an indoor sensor center for the verification and validation of sensors for driver assistance systems.
Dr.-Ing. Armin Engstle
Main Department Manager
AVL Software and Functions GmbH
Today’s edge-processed imaging radar systems lack the compute capabilities to enable high-performance radar machine learning, as well as the memory speed to transfer the data and maximize the features of the latest radar transceivers. These systems also consume too much power, leading to thermal challenges. The solution is to use a central domain controller for radar data processing. This presentation addresses the benefits of combining centrally processed radar data with AI virtual aperture imaging software, which enables significantly better angular resolution and point-cloud density with standard and cost-effective radar semiconductor components. First, the limitations of edge-processed radar will be addressed, followed by how these challenges can be resolved via centrally processed radar. Next, data transport from the radar module to the central processor will be discussed. Then, virtual aperture imaging software technology will be introduced and its benefits discussed, including dynamic modulation of radar transmit signals and hardware/software sparsity.
Sr. Technical Product Manager, Radar Systems
Road fatalities, which claim 1.35 million lives each year, are the eighth leading cause of death worldwide; 23% of those killed are pedestrians. In the US and Europe, 75% of accidents happen in poor weather and lighting conditions.
Safer vehicles and active safety are a promising way to reduce this toll, with features such as Active Emergency Braking (AEB). Currently, AEB relies mainly on RGB cameras and radar, which suffer limitations in exactly the challenging situations that represent the majority of accidents.
Thermal imaging is a complementary technology to RGB that extends AEB use cases. This talk will explore thermal imaging physics and optics considerations for AEB use cases. A dynamic simulation model developed by Lynred, coupled with the Johnson criteria, will provide range and performance estimates. The conclusion will explore different configurations for improving system performance.
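The Johnson-criteria range estimation mentioned above can be illustrated with a minimal pinhole-geometry sketch. All sensor parameters below (pixel pitch, focal length, target size) are illustrative assumptions, not Lynred's actual simulation model:

```python
# Minimal sketch of Johnson-criteria range estimation for a thermal camera.
# Illustrative parameters only -- not Lynred's dynamic simulation model.

def johnson_range(target_size_m, focal_length_m, pixel_pitch_m, cycles):
    """Range (m) at which the target spans the required number of Johnson
    cycles; 1 cycle corresponds to ~2 pixels across the critical dimension."""
    pixels_required = 2.0 * cycles
    return target_size_m * focal_length_m / (pixels_required * pixel_pitch_m)

# Assumed uncooled thermal camera: 12 um pitch, 19 mm lens;
# pedestrian critical dimension ~0.75 m (torso width).
PITCH, FOCAL, TARGET = 12e-6, 19e-3, 0.75

# Classical Johnson cycle counts: 1.0 detection, 4.0 recognition,
# 6.4 identification.
for task, cycles in (("detection", 1.0), ("recognition", 4.0),
                     ("identification", 6.4)):
    print(f"{task:>14}: {johnson_range(TARGET, FOCAL, PITCH, cycles):6.1f} m")
```

Real performance estimates would also account for NETD, atmospheric transmission, and target contrast; this sketch covers only the geometric part of the criteria.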
Recently, OEMs have wanted to extend front cameras initially focused on sensing applications, such as ADAS, to viewing applications, such as augmented reality (AR) cameras. Such applications pose unique challenges. For instance, how should an image processing pipeline be constructed to deliver maximum capability for machine vision and stunning viewing images simultaneously? Where should the ISP block be placed? This presentation will discuss these key challenges of a system that realizes both sensing and viewing applications from a single automotive camera.
Product Manager, Automotive CMOS Image Sensors
Sony Semiconductor Solutions
Computer vision has transitioned from traditional image processing to machine learning (ML) based solutions. Zendar believes the emerging radar architecture, with satellite front-ends and central processing, provides an opportunity to bring machine learning to radar signal processing. The traditional radar processing pipeline uses a variant of a threshold detector to extract a point cloud from the radar data cube. The majority of threshold detectors have a limited field of view, which reduces them to local peak detectors. By using spatial and temporal information combined with a multi-scale field of view, an ML detector can achieve a higher true-positive rate with a significantly lower false-positive rate. It also enables the ML detector to detect and remove ghost targets, which is not possible with current threshold detectors. End-to-end training of such an ML-based approach utilizes semantic and discriminative features encoded in the satellite radar architecture data. Zendar will present its advancements from the past three years of research to bring machine learning to automotive radar.
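The "local peak detector" limitation described above can be made concrete with a toy cell-averaging CFAR on a 1-D range profile. The window sizes and threshold scale below are illustrative assumptions, not Zendar's pipeline:

```python
import random

def ca_cfar(profile, guard=2, train=8, scale=4.0):
    """Toy 1-D cell-averaging CFAR: a cell is flagged when it exceeds
    scale x the mean of the nearby training cells (guard cells skipped).
    Because the window is purely local, the detector behaves like a
    local peak detector -- the limitation noted in the abstract."""
    n = len(profile)
    detections = []
    for i in range(n):
        cells = [profile[j]
                 for j in range(max(0, i - guard - train),
                                min(n, i + guard + train + 1))
                 if abs(j - i) > guard]
        noise = sum(cells) / len(cells)
        if profile[i] > scale * noise:
            detections.append(i)
    return detections

# Flat noise floor with two injected targets
rng = random.Random(0)
profile = [rng.expovariate(1.0) for _ in range(200)]
profile[50] += 40.0   # strong target
profile[120] += 15.0  # weaker target
print(ca_cfar(profile))  # indices of cells passing the local threshold
```

An ML detector of the kind described would replace this fixed local window with learned spatial, temporal, and multi-scale context, which is what allows it to suppress ghost targets the local threshold cannot distinguish.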
Reza Mostajabi Ph.D.
Head of Machine Learning
Deep learning and computer vision have shown immense strength in real-world applications. This presentation will focus on using thermal imaging technology to design an intelligent forward sensing system that is effective in all weather and environmental conditions. This work is carried out under the Heliaus project, funded by the European Union's Horizon 2020 research and innovation programme and by France, Germany, Ireland, and Italy. The system works by deploying thermally tuned deep learning networks on GPU and single-board edge-GPU computing platforms for onboard automotive sensor suite testing. State-of-the-art object detection models are trained and fine-tuned on a large-scale novel C3I thermal dataset comprising more than 35K distinct thermal frames collected from 640×480 uncooled thermal cameras, along with 4 different large-scale publicly available thermal datasets. The trained variant of the YOLO object detector is further optimized using a state-of-the-art neural inference accelerator (TensorRT) to boost the frame rate and cut the overall inference time. The optimized network engine increases the frame rate by 3.5 times when tested on low-power edge devices, achieving 11 fps on the Nvidia Jetson Nano and 60 fps on the Nvidia Xavier NX GPU edge computing boards.
R&D Associate Engineer
As more sensor-based safety critical systems are added to vehicles, the benefits of a standards-based approach for sensor connectivity also increase – and even more so if functional safety requirements are already integrated into that solution.
This presentation will demonstrate how the MIPI Automotive SerDes Solutions (MASS℠) framework provides a standardized sensor-to-ECU solution for autonomous driving and ADAS systems with functional safety built into its core. The presentation will describe the MASS framework with specific focus on the functional safety features that have been embedded “throughout the stack,” starting from the Camera Service Extensions (MIPI CSE℠) layer, to the baseline MIPI CSI-2® image sensor protocol, to the MIPI A-PHY℠ SerDes physical layer. Also highlighted will be upcoming features for CSE and A-PHY, and a discussion of how this solution will enable developers to embed functional safety natively at the ‘edge’: within the sensor and ECU components themselves.
MIPI A-PHY Working Group
Director, Technical Standards
One of the biggest challenges associated with sensing in assisted and automated driving is the amount of data produced by the environmental perception sensor suite, especially at higher levels of autonomy requiring numerous sensors and different sensor technologies. Sensor data need to be transmitted to the processing units with very low latency and without hindering the performance of perception algorithms, i.e. object detection, classification, segmentation, prediction, etc. However, the amount of data produced by a possible SAE J3016 Level 4 sensor suite can add up to 40 Gb/s, which cannot be supported by traditional vehicle communication networks. There is therefore a need for robust techniques to compress and reduce the data that each sensor transmits to the processing units. The most commonly used video compression standards have been optimised for human vision, but in assisted and automated driving functions the consumer of the data will likely be a perception algorithm based on well-established deep neural networks. This work demonstrates how lossy compression of video camera data can be used in combination with deep neural network based object detection.
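A back-of-the-envelope sketch shows where a figure on the order of 40 Gb/s comes from. The camera count, resolution, and exposure settings below are illustrative assumptions, not the actual sensor suite studied in this work:

```python
def camera_rate_gbps(width, height, fps, bits_per_pixel, exposures=1):
    """Raw (uncompressed) data rate of one camera in Gb/s (1 Gb = 1e9 bits)."""
    return width * height * fps * bits_per_pixel * exposures / 1e9

# Assumed L4-style suite: six 8-MP cameras, 30 fps, 12-bit RAW,
# two exposures per frame for HDR (illustrative values).
per_camera = camera_rate_gbps(3840, 2160, 30, 12, exposures=2)
suite = 6 * per_camera
print(f"per camera: {per_camera:.2f} Gb/s")  # ~5.97 Gb/s
print(f"camera suite: {suite:.1f} Gb/s")     # ~35.8 Gb/s; radar and lidar
                                             # push the total toward 40 Gb/s

# Lossy 20:1 compression brings the camera load within automotive
# network budgets, provided detection performance is preserved.
print(f"after 20:1 compression: {suite / 20:.2f} Gb/s")
```

The open question the work addresses is precisely how aggressive this lossy compression can be before deep neural network object detection starts to degrade.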
Dr. Valentina Donzella
Associate Professor, Head of Intelligent vehicles - sensors group
WMG, University of Warwick
This presentation will discuss trends in image sensor resolution and increases in frame rate for automotive cameras. Combined with the need to capture multiple images per frame for HDR purposes, this has been driving a drastic increase in data transfer bandwidth. We will discuss how current interface technologies can cope with this higher bandwidth requirement and how new standards such as MIPI A-PHY and ASA can enhance video link architecture and functionality. We will present how OMNIVISION has been preparing to serve the needs of the market for the coming years.
Vice President, Europe
Autonomous driving (AD) systems need redundancy, and ADAS needs more reliable smart functions, leading to an essential need to enable fully autonomous applications with low-cost sensors. This also seems to be the only way to scale up AD in mass production. In this talk, Panasonic's novel solutions for computer vision, sensor fusion, and path planning will be presented; these are the core components of our Level 3 and 4 autonomous parking systems using low-cost sensors such as cameras and ultrasonic sensors.
Head of ADAS