12-14 SEPTEMBER 2022 | AUTOWORLD, BRUSSELS

AutoSens Brussels | On-Demand

Hear from:

Nitsan Bouksdorf
COO
RadSee


Today’s edge-processed imaging radar systems lack the compute capability to enable high-performance radar machine learning, as well as the memory bandwidth to transfer the data and exploit the features of the latest radar transceivers. These systems also consume too much power, leading to thermal challenges. The solution is to use a central domain controller for radar data processing. This presentation addresses the benefits of combining centrally processed radar data with AI virtual aperture imaging software, which enables significantly better angular resolution and point-cloud density with standard, cost-effective radar semiconductor components. First, the limitations of edge-processed radar will be addressed, followed by how these challenges can be resolved via centrally processed radar. Next, data transport from the radar module to the central processor will be discussed. Then, virtual aperture imaging software technology will be introduced and its benefits discussed, including dynamic modulation of radar transmit signals and hardware/software sparsity.
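
To make the bandwidth argument concrete, the sketch below compares the raw radar data-cube rate against an edge-produced point cloud. All parameter values are illustrative assumptions, not figures from the talk:

```python
# Back-of-envelope data-rate comparison: raw ADC samples shipped to a
# central domain controller vs. an edge-processed point cloud.
# All figures are illustrative assumptions, not vendor specifications.

ADC_BITS = 16            # bits per I or Q component (assumed)
RX_CHANNELS = 16         # receive channels (assumed)
SAMPLES_PER_CHIRP = 512  # range samples per chirp (assumed)
CHIRPS_PER_FRAME = 256   # Doppler dimension (assumed)
FRAMES_PER_SEC = 20      # frame rate (assumed)

# Raw complex data cube: I and Q per sample.
raw_bps = (2 * ADC_BITS * RX_CHANNELS * SAMPLES_PER_CHIRP
           * CHIRPS_PER_FRAME * FRAMES_PER_SEC)
print(f"raw data cube: {raw_bps / 1e9:.2f} Gb/s")    # ~1.34 Gb/s

# Edge-processed point cloud: say 1,000 detections/frame at 128 bits each
# (range, Doppler, azimuth, elevation, SNR fields, rounded up).
cloud_bps = 1000 * 128 * FRAMES_PER_SEC
print(f"point cloud:   {cloud_bps / 1e6:.2f} Mb/s")  # ~2.56 Mb/s
```

Even with these modest assumptions, raw transport runs roughly three orders of magnitude above the point-cloud rate, which is the gap the central-processing architecture and its data links must close.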

Hear from:

Paul Dentel
Sr. Technical Product Manager, Radar Systems
Ambarella, Inc.


Road fatalities account for 1.35 million deaths each year, making them the eighth leading cause of death worldwide; 23% of the victims are pedestrians. In the US and Europe, 75% of these accidents happen in poor weather and lighting conditions.
Safer vehicles and active safety are a promising way to reduce this toll, with features like Automatic Emergency Braking (AEB). Today, AEB relies mainly on RGB cameras and radar, which suffer from limitations in exactly the challenging situations that account for the majority of accidents.
Thermal imaging is a complementary technology to RGB that extends AEB use cases. This talk will explore thermal imaging physics and optical considerations for AEB use cases. A dynamic simulation model built by Lynred, coupled with the Johnson criteria, will provide range and performance estimates. The conclusion will explore different configurations to improve system performance.
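
As a rough illustration of the Johnson-criteria range estimation the talk mentions, the sketch below computes detection, recognition, and identification ranges for an assumed LWIR camera. All parameter values are illustrative assumptions, not Lynred specifications:

```python
# Minimal Johnson-criteria range estimate for a thermal camera, ignoring
# atmospherics and sensor noise. All values are illustrative assumptions,
# not Lynred specifications.

def johnson_range_m(target_size_m, focal_length_m, pixel_pitch_m, cycles):
    """Max range at which 'cycles' line pairs span the target's
    critical dimension (1 cycle = 2 pixels)."""
    return (target_size_m * focal_length_m) / (2 * pixel_pitch_m * cycles)

FOCAL = 0.019    # 19 mm lens (assumed)
PITCH = 12e-6    # 12 um pixel pitch, typical LWIR microbolometer (assumed)
TARGET = 0.75    # pedestrian critical dimension in metres (assumed)

# Classic Johnson cycle counts for 50% task probability.
for task, n in [("detection", 1.0), ("recognition", 4.0), ("identification", 6.4)]:
    print(f"{task:>14}: {johnson_range_m(TARGET, FOCAL, PITCH, n):5.0f} m")
```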

Hear from:

Quentin Noir
Product Manager
Lynred


Computer vision has transitioned from traditional image processing to Machine Learning (ML) based solutions. Zendar believes the emerging radar architecture with satellite front-ends and central processing provides an opportunity to bring machine learning to radar signal processing. The traditional radar processing pipeline uses a variant of a threshold detector to extract a point cloud from the radar data cube. The majority of threshold detectors have a limited field of view, which reduces them to local peak detectors. By using spatial and temporal information combined with a multi-scale field of view, an ML detector can achieve a higher true positive rate with a significantly lower false positive rate. It also enables the ML detector to detect and remove ghost targets, which is not possible with current threshold detectors. End-to-end training of such an ML-based approach utilizes the semantic and discriminative features encoded in the satellite radar architecture data. Zendar will present its advancements from the past three years of research on bringing machine learning to automotive radar.
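
For context, the sketch below shows a minimal 1-D cell-averaging CFAR detector, the kind of threshold detector the abstract refers to: because its noise estimate comes only from a small local window, it acts as a local peak detector and cannot distinguish a ghost return from a real target. The scenario and parameters are invented for illustration:

```python
# A minimal 1-D cell-averaging CFAR detector. The threshold for each cell
# is built only from nearby training cells, so the detector is effectively
# a local peak detector with no wider spatial or temporal context.
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Boolean detections for a 1-D power profile (one line of the cube)."""
    half = guard + train
    hits = np.zeros(len(power), dtype=bool)
    for i in range(half, len(power) - half):
        training = np.r_[power[i - half:i - guard],
                         power[i + guard + 1:i + half + 1]]
        hits[i] = power[i] > scale * training.mean()  # local threshold only
    return hits

rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 256)  # noise floor
profile[100] += 30.0                 # a real target
profile[103] += 20.0                 # a multipath "ghost" looks the same locally
print(np.flatnonzero(ca_cfar(profile)))  # both returns (and any false alarms) fire
```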

Hear from:

Reza Mostajabi Ph.D.
Head of Machine Learning
Zendar Inc.


Sensor fusion is key to robust environment perception. Unfortunately, the throughput requirements of “data-level” fusion are prohibitive, a problem exacerbated by higher-fidelity sensors: e.g. 16+ bit HDR vs 10-bit video. Practical architectures instead rely on “late” fusion: each sensor processes its data into low-throughput semantic data before fusion. This limits the potential accuracy improvement. Instead, Imec/Ghent University proposes “cooperative” fusion, introduced at AutoSens 2019 by Prof. Philips. This retains the simplicity of late fusion but increases robustness and accuracy: sensors improve their decisions using well-chosen feedback from other sensors. Unfortunately, increased-fidelity sensors like HDR cameras cause increased memory and computational requirements. We demonstrate that this problem can be avoided using content- and picture-quality-preserving HDR-to-SDR conversion. This talk further covers real-world benefits of cooperative fusion, using radar/lidar/HDR-camera traffic data acquired in Belgium. These benefits are realized on many key performance indicators: vulnerable road user (VRU) detection/tracking accuracy and stability, and processing-induced latency (track initialization delay).
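
As a toy illustration of the HDR-to-SDR idea (not Imec/Ghent University’s content-preserving algorithm), the sketch below compresses an assumed 16-bit linear HDR frame into 8-bit codes with a simple global tone-mapping operator, halving downstream memory traffic:

```python
# A toy global HDR-to-SDR tone mapping (Reinhard-style), compressing a
# 16-bit linear HDR frame into 8-bit codes. This only illustrates the
# bandwidth/memory saving; the talk's conversion is content- and
# picture-quality-preserving, which this simple operator is not.
import numpy as np

def hdr_to_sdr(hdr, out_bits=8):
    """Map linear HDR luminance to SDR display codes."""
    l = hdr / hdr.max()                          # normalise to [0, 1]
    tone = 2.0 * l / (1.0 + l)                   # Reinhard global operator
    tone = np.clip(tone, 0.0, 1.0) ** (1 / 2.2)  # display gamma
    return np.round(tone * (2 ** out_bits - 1)).astype(np.uint16)

hdr_frame = np.random.default_rng(1).uniform(0, 2**16 - 1, size=(4, 4))
print(hdr_to_sdr(hdr_frame))  # 8-bit codes: half the traffic of 16-bit HDR
```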

Hear from:

Prof. Jan Aelterman
IPI Research Group
Ghent University-Imec


Advanced Driver Assistance Systems (ADAS) based on cameras and radars need to be connected safely and securely to the ADAS processors with serial-links that minimize the cost, weight, and complexity of vehicle cable harnesses.
Gigabit Multimedia Serial Link (GMSL) is widely used in vehicles for video interconnects. Three generations of GMSL, with data rates up to 12 Gb/s over a single cable, are in volume production today. GMSL’s high-speed serial link solution enables small, low-power camera modules and provides functional safety, allowing more of the camera module’s size, power, and thermal budget to be allocated to the image sensor and Image Signal Processor (ISP).
We present key considerations in the design of automotive camera interconnect architectures as we look toward a future of sensor fusion and central processing, including a typical vehicle ADAS architecture, a discussion of automotive cables and connectors, and the role of high-speed serial links in these systems.
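
A back-of-envelope link-budget check shows how such camera streams map onto a 12 Gb/s GMSL link. Resolution, bit depth, frame rate, and overhead below are assumed example values, not figures from the talk:

```python
# Back-of-envelope check of how camera streams map onto a 12 Gb/s GMSL
# link. Resolution, bit depth, frame rate, and overhead are assumed
# example values, not figures from the talk.

WIDTH, HEIGHT = 3840, 2160  # 8 MP sensor (assumed)
BITS_PER_PIXEL = 16         # HDR raw output (assumed)
FPS = 30
LINK_GBPS = 12.0            # GMSL data rate from the abstract
PAYLOAD_FRACTION = 0.8      # usable fraction after protocol overhead (assumed)

camera_gbps = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS / 1e9
budget_gbps = LINK_GBPS * PAYLOAD_FRACTION
print(f"camera payload: {camera_gbps:.2f} Gb/s")             # ~3.98 Gb/s
print(f"link budget:    {budget_gbps:.2f} Gb/s")             # 9.60 Gb/s
print(f"cameras/link:   {int(budget_gbps // camera_gbps)}")  # 2
```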

Hear from:

Kevin Witt
Fellow
Analog Devices


Advanced perception systems for ADAS, automated driving, and autonomous vehicles require robust and accurate computer vision and distance estimation in all driving scenarios. This is especially challenging under low-light and poor weather conditions, and in highway scenarios where vehicle speeds can top 100 kph / 62 mph and significant distances are required to either maneuver or stop. Many of today’s lidar, radar, and camera-based systems have limitations in robustness, resolution, and accuracy, and may also be quite costly. Algolux will discuss these perception challenges and present details of proven, best-in-industry AI architectures that massively improve computer vision robustness in harsh conditions and deliver unprecedented accuracy in dense depth perception.

Hear from:

Matthias Schulze
VP Europe and Asia
Algolux Inc


Validation of the perception systems of ADAS/AD ECUs is a complex task. Apart from real-world tests and fully software-based simulation techniques (SiL), hardware-in-the-loop (HiL) tests are indispensable for holistic and consistent test coverage.
Supplying a domain controller with data from multiple camera, radar, and LIDAR sensors is a completely different task than supplying a single high-resolution sensor. Interfaces therefore have to fulfil a range of requirements, from rapid response during prototyping, when only an ECU PCB prototype exists, to the final validation of a system.
A crucial requirement is the temporal correlation of data streams and side-band signals during replay over GMSL, FPD-Link, and I²C for the emulation of the perception engine.
This calls for a hardware-in-the-loop architecture that scales from an integrated interface setup for single sensors up to hybrid sensor emulation and data-center technologies for large-scale domain controllers.
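
As a minimal sketch of the temporal-correlation requirement, a replay scheduler must merge recorded sensor streams and emit them in global-timestamp order. Stream names and timings below are invented for illustration:

```python
# A minimal sketch of temporally correlated replay: recorded frames from
# several streams are merged and emitted in global-timestamp order, the
# property the HiL interfaces must preserve on the physical GMSL /
# FPD-Link / I2C side. Stream names and timing are invented.
import heapq

streams = {
    "camera_front": [(0.000, "frame0"), (0.033, "frame1"), (0.066, "frame2")],
    "radar_corner": [(0.010, "scan0"), (0.060, "scan1")],
    "lidar_roof":   [(0.005, "sweep0"), (0.105, "sweep1")],
}

# Merge per-stream (already time-sorted) recordings into one schedule.
events = heapq.merge(*(((t, name, data) for t, data in s)
                       for name, s in streams.items()))
for t, name, data in events:
    print(f"t={t:.3f}s -> {name}: {data}")  # emit in wall-clock order
```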

Hear from:

Adrian Bertl
Team Leader Product Marketing
b-plus technologies GmbH


Cascade, or imaging, radar is growing to be the widely preferred front radar solution for L4 and L5 applications. Handling multi-chip cascade radar brings its own challenges, ranging from hardware design to data size and algorithms. In this presentation, we concentrate on the special algorithms that need to be designed for any production project, including BPM/DDMA configuration and processing, and interference detection and mitigation. An imaging radar with a high-density point cloud also enables functionality previously reserved for camera or lidar, for example self-localization and high-accuracy classification. Multipath also remains a problem and has to be handled differently. In this presentation, we explain these topics and share our experience in addressing these issues effectively.
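
To illustrate the DDMA processing the talk covers, the toy sketch below gives each transmitter a distinct per-chirp phase ramp so that, after the Doppler FFT, the transmitters’ returns land in separable Doppler bins. All values are illustrative assumptions:

```python
# A toy DDMA illustration: each transmitter applies a distinct per-chirp
# phase ramp, so after the Doppler FFT its return appears at a known
# Doppler offset and the TX channels can be demultiplexed.
import numpy as np

N_CHIRPS, N_TX = 64, 2
m = np.arange(N_CHIRPS)
true_doppler_bin = 5                  # one target, Doppler bin 5

rx = np.zeros(N_CHIRPS, dtype=complex)
for tx in range(N_TX):
    ramp = np.exp(2j * np.pi * (tx / N_TX) * m)  # DDMA phase ramp for this TX
    echo = np.exp(2j * np.pi * true_doppler_bin * m / N_CHIRPS)
    rx += ramp * echo                 # both TX superposed at the receiver

spectrum = np.abs(np.fft.fft(rx))
print(np.sort(np.argsort(spectrum)[-N_TX:]))  # [ 5 37]: TX0 at 5, TX1 at 5+32
```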

Hear from:

Santhana Raj
Sr. Technical Architect
PathPartner Technology


Few technology trends are generating as much excitement as the promise of autonomous driving. As the world moves closer to fully autonomous vehicles, cars are continually increasing in their ability to independently navigate the roadway, follow rules, and avoid objects to keep their occupants safe. While every advanced driver assistance system (ADAS) is different, they all depend on accurate sensors, including radar, to gather information from the external environment and transmit it to artificial intelligence (AI) and machine learning (ML) algorithms that create a correct response in milliseconds.

While every part of the ADAS is important, the sensors placed on various surfaces of the car are mission critical. Radar-based sensors mounted on the vehicle must be able to clearly “see” a variety of objects, including humans, animals, other cars, road signage, traffic lights, and lane markings, under a range of lighting and weather conditions, then trigger an appropriate response, such as steering or braking. If a sensor fails to accurately interpret external signals, the car’s response will be wrong, placing human lives at risk.

As with many other product development tasks today, engineering simulation provides the answer. Ansys AVxcelerate ensures that radar-based sensors can be verified quickly, in a risk-free and low-cost virtual environment. It enables product development teams to reproduce the complex physical world, including challenging edge cases, and ensure that their radar-based sensor designs perceive that world precisely across a range of terrain, lighting, and weather conditions.

In this presentation, we will present the simulation of a high-resolution MIMO radar system in realistic driving scenarios. A full physics-based radar corner-case scene is modeled to obtain high-fidelity range-Doppler maps. Further, we will demonstrate and investigate the effect of radar returns from construction metal plates on false target identification. Ansys AVxcelerate introduces a new paradigm for sensor development by leveraging NVIDIA GPUs and new algorithms to accelerate simulation by orders of magnitude without compromising accuracy. It also connects to the driving simulator of your choice to ensure the safety and accuracy of your radar sensors, so you can shift your development and testing strategies from the physical to the virtual world.
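
As a toy counterpart to such physics-based simulation, the sketch below builds a range-Doppler map from two ideal point scatterers, a moving vehicle and a strong static metal-plate return, using assumed parameters far simpler than AVxcelerate’s models:

```python
# A toy range-Doppler map with two ideal point scatterers, a moving vehicle
# and a strong static metal plate, hinting at what a physics-based simulator
# produces at far higher fidelity. All parameters are illustrative.
import numpy as np

N_SAMPLES, N_CHIRPS = 256, 64
n = np.arange(N_SAMPLES)[None, :]   # fast time (range axis)
k = np.arange(N_CHIRPS)[:, None]    # slow time (Doppler axis)

def scatterer(range_bin, doppler_bin, amp=1.0):
    """Ideal beat signal of one point target, in normalised bin units."""
    return amp * np.exp(2j * np.pi * (range_bin * n / N_SAMPLES
                                      + doppler_bin * k / N_CHIRPS))

cube = scatterer(40, 10)            # moving vehicle
cube += scatterer(80, 0, amp=5.0)   # static construction plate, strong RCS
rd_map = np.abs(np.fft.fft2(cube))  # Doppler FFT x range FFT

peak = tuple(int(i) for i in np.unravel_index(rd_map.argmax(), rd_map.shape))
print(f"strongest return (doppler, range) = {peak}")  # (0, 80): the plate
```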


Hear from:

Lionel Bennes
Senior Product Manager
Ansys  


Preventable human error causes more than 90% of road accidents, leading to more than a million deaths and tens of millions of injuries worldwide every year. At Provizio, we believe the focus should shift back to understanding the underlying causes of car crashes and, crucially, what sensors and processing capabilities we need to predict and prevent them successfully. With this goal in mind, we designed and built a platform with high-resolution, long-range sensors (including the world’s first 6K 4D radar) coupled with artificial intelligence on the edge, which solves borderline cases by augmenting, and at the same time learning from, human drivers to pave a path to safe, sustainable, and ubiquitous autonomy.

Hear from:

Letizia Mariotti
Senior Computer Vision Engineer
Provizio 


For the first time, radar performance has reached the level of a primary sensor, giving the vehicle the potential to operate better than a human driver in any environmental condition. This creates true safety and instills confidence and trust among consumers, a major factor in mass-market adoption. To lead this market and truly realize Vision Zero, automakers should integrate this sensor into their vehicles and into their perception strategy. In this session we’ll discuss perception radar functionality and processing that provide design flexibility, so vehicles can always be equipped with the most innovative perception technologies. By equipping vehicles early on with scalable perception radar systems, even while much remains to be developed on the perception algorithm side, automakers gain the possibility of offering a wide array of additional functionality down the line through software updates.

Hear from:

Gonen Barkan
Chief Radar Officer
Arbe

