The benefits of AVs become meaningful only when the technology is deployed at scale. As Cruise moves from R&D into early commercialization, its approach to system architecture has evolved to provide a more capable system at a cost point that enables rapid scaling. We will discuss this progression, some of the enabling technologies and paradigms, and what we anticipate for the future.

Hear from:

Shane McGuire
Principal Systems Engineer, Systems Architecture
Cruise


Exterior automotive imaging applications are evolving quickly; to meet customer requirements, image sensor manufacturers are being forced to develop new technologies. Many of these new technologies are necessary for both human and machine vision applications. Exterior cameras are used for rear view, surround view, e-mirror, digital video recording, ADAS and AD applications. In this paper we will discuss the requirements and challenges associated with developing these new technologies. Unfortunately, these requirements often conflict, forcing image sensor manufacturers to make tradeoffs based on cost, size and time to market. Specifically, we will discuss high dynamic range image capture, LED flicker mitigation, low light sensitivity, high temperature operation and cybersecurity.
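As a quick point of reference for the high dynamic range requirement mentioned above (all numbers are illustrative assumptions, not figures from the paper), a sensor's single-exposure dynamic range follows from its full-well capacity and read noise, which is why multi-exposure or split-pixel schemes are typically used to reach the 120 dB or more demanded by road scenes.

```python
import math

# Illustrative single-exposure pixel figures (assumptions, not from the paper)
full_well_e = 10_000     # full-well capacity, electrons
read_noise_e = 2.0       # read noise, electrons RMS

single_exposure_dr_db = 20 * math.log10(full_well_e / read_noise_e)
print(f"single-exposure dynamic range ~ {single_exposure_dr_db:.0f} dB")   # ~74 dB

# Bracketing N exposures with an exposure ratio R between adjacent captures
# extends the usable range by roughly 20*log10(R**(N-1)).
ratio, exposures = 16, 3
extension_db = 20 * math.log10(ratio ** (exposures - 1))
print(f"3-exposure HDR adds roughly {extension_db:.0f} dB")                # ~48 dB
```

Multi-exposure capture of this kind is also what makes LED flicker mitigation difficult, since a pulsed LED source may be off during the short exposures.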

Hear from:

Boyd Fowler
CTO
OMNIVISION


Hear from industry analysts, observers and those working directly in the automotive sector as they explore the future of the supply chain and whether there will be consolidation in specific areas (e.g. SoCs, AD software suppliers, LiDAR suppliers, etc.).

Hear from:

Rudy Burger
Managing Partner
Woodside Capital Partners

Juergen Hoellisch
Founder
Hoellisch Consulting GmbH

Abhay Rai
Senior Vice President
indie Semiconductor

Chris Van Dan Elzen
EVP, Radar Product Area
Veoneer


Liang Downey
Digital Advisor, Energy, Mobility and Sustainability Customer Transformation
Microsoft Industry Solutions
Chair
IEEE USA Women in Engineering

LiDAR remains one of the most critical sensors enabling autonomous driving. Yet while most agree on the criticality of this sensor, confusion remains regarding what performance is needed to address different use cases and enable different levels of autonomy.

Warren Smith, who helped develop the perception teams at Uber ATG and Aurora Innovation, will discuss LiDAR requirements from the point of view of a perception engineer. What key data is needed from the sensor, and how is that data used by perception to address difficult edge cases? How does this boil down to LiDAR specifications, and how can LiDAR manufacturers use this information to enable L4-L5 autonomous vehicles?

Hear from:

Warren Smith
Director of Perception
Insight LiDAR


This talk compares the characteristics of digital code modulation (DCM) radar to traditional analog-modulated radars used today, such as Frequency Modulated Continuous Wave (FMCW) radars. The speaker will explain how these radar systems operate, including the transmission, reception, and the associated signal processing employed to determine the distance, velocity, and angle of objects in the environment. Comparing the two systems builds familiarity with digital radar and a better appreciation of its potential advantages. The speaker will introduce two benchmarks of merit: 1) High Contrast Resolution (HCR), which is critical to resolving small objects next to large objects (e.g., a child in front of a truck), and 2) Interference Susceptibility Factor (ISF), which characterizes a radar’s resilience to self-interference and cross-interference. These benchmarks are essential to understanding the value of radar in use cases that are crucial to achieving increased safety for vehicle automation and autonomy.
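For readers less familiar with the FMCW baseline being compared against, the minimal sketch below shows the classic processing idea in miniature: after mixing the received chirp with the transmitted one, each target appears as a beat frequency proportional to its range, which an FFT recovers. All parameter and target values are illustrative assumptions, not figures from the talk.

```python
import numpy as np

# Illustrative FMCW parameters (assumptions for this sketch, not from the talk)
c = 3e8            # speed of light, m/s
B = 300e6          # chirp bandwidth, Hz  -> range resolution c/(2B) = 0.5 m
T = 40e-6          # chirp duration, s
S = B / T          # chirp slope, Hz/s
fs = 20e6          # ADC sample rate, Hz
N = int(fs * T)    # samples per chirp

R_true = 37.5                  # hypothetical target range, m
f_beat = 2 * R_true * S / c    # beat frequency produced by de-chirping

t = np.arange(N) / fs
beat = np.exp(2j * np.pi * f_beat * t)      # idealized, noise-free beat signal

spectrum = np.abs(np.fft.fft(beat))
k = int(np.argmax(spectrum[: N // 2]))      # strongest beat-frequency bin
R_est = c * (k * fs / N) / (2 * S)
print(f"estimated range: {R_est:.2f} m (true {R_true} m)")
```

A DCM radar instead transmits a digitally coded waveform and recovers range by correlating against the known code, which changes how resolution and interference behave and is where benchmarks such as HCR and ISF help compare the two approaches.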

Hear from:

Dr. Arunesh Roy
Senior Director Advanced Applications and Perception
Uhnder


It has become widely accepted that LiDAR sensors will be an indispensable part of a sensor suite that will enable vehicular autonomy in the future. However, sensor costs remain very high and prevent the ubiquitous adoption of LiDAR sensors.

Bringing knowledge and expertise in Cost-Engineering and Design for Manufacturing from the HDD space into the LiDAR space can accelerate the large-scale deployment of LiDAR sensors.
In this talk, some of the key manufacturing technologies will be highlighted.

Hear from:

Dr Zoran Jandric
Engineering Director
Seagate Technology


Regardless of radar type, simulation of the sensor is essential to reach the goals set for ADAS and especially Levels 4-5 of autonomy. This talk discusses some aspects of the required simulation and how to implement them correctly.

Discussion points for radar simulation include:

  1. World Material Property measurement including Angle of Incidence
  2. Advanced Ray Tracing
  3. Micro Doppler, Ghost Targets and Doppler Ambiguity
  4. Radar placement effects (bumper, grill, etc.)

Finally, a cutting-edge hardware-in-the-loop setup for radar is also presented.
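As a concrete illustration of the Doppler ambiguity listed under discussion point 3, the short sketch below computes the maximum unambiguous radial velocity of a chirp-sequence radar and shows how a faster target folds back into the measurable interval. The carrier frequency and chirp interval are assumed values chosen only for illustration.

```python
import numpy as np

# Assumed 77 GHz chirp-sequence parameters (illustrative, not from the talk)
c = 3e8
lam = c / 77e9               # wavelength, ~3.9 mm
T_chirp = 50e-6              # chirp repetition interval, s

v_max = lam / (4 * T_chirp)  # maximum unambiguous radial velocity, ~19.5 m/s

def apparent_velocity(v_true):
    """Fold a true radial velocity into the unambiguous interval [-v_max, +v_max)."""
    span = 2 * v_max
    return ((v_true + v_max) % span) - v_max

print(f"v_max = {v_max:.1f} m/s")
print(f"a target at 30.0 m/s is reported as {apparent_velocity(30.0):.1f} m/s")
```

A simulation that reproduces this folding, and the ghost targets it can create, lets perception software be exercised against such effects before hardware is available.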

Hear from:

Tony Gioutsos
Director Portfolio Development Autonomous Americas
Siemens


Steering, ranging and detection are the core elements that simultaneously operate a LIDAR system. At Baraja, our LiDAR combines our patented Spectrum-Scan steering technology and unique ranging technique, Random Modulation Continuous Wave (RMCW) paired with homodyne detection, to enable a high-performance Doppler LiDAR without any of the known issues found in other LiDAR designs. This novel combination of core technologies allows for no-compromise, unprecedented LiDAR performance, reliability and integrability that will enable a fully-autonomous future without the costly trade-offs of legacy technologies.

Hear from:

Federico Collarte
Founder and CEO
Baraja


Honda and, more recently, Mercedes-Benz have made history by rolling out the first Level 3 vehicles on open roads. These achievements have been made possible notably thanks to one technology: LiDAR.
To bring these features to scale, LiDAR technology is undergoing two concurrent transitions that will:
– bring reliability and productization up to automotive industry standards
– deliver uncompromising performance compared to the pre-LiDAR status quo.

Hear from:

Clement Nouvel
LiDAR Technical Director
Valeo


4D imaging radar has become a technology of choice for in-cabin safety and ADAS, favored for its high-resolution imaging, versatile field of view configurations and precise target data. But high cost, substantial hardware and extreme complexity have restricted deployment to premium models. In this thought-provoking session, we will discuss a crucial turning point for 4D imaging radar, which made it affordable and accessible to all vehicle models, supporting dozens of applications. The “Democratization of 4D Imaging Radar” is a presentation about making high-end safety available for all vehicle models.

Hear from:

Dan Viza
Head of US Business Development
Vayyar


To make the technology available for volume model vehicles, the measurement capability and reliability of LiDAR must be ensured in cost-effective production at large quantities.

A core task in mass production is the assembly of optical, mechanical, and electronic components. The precise alignment of emitting and receiving electronics with projection or imaging objective lenses plays a decisive role here. Tolerances in all components of the sensors prevent the assembly of an optomechanical system by a straightforward mounting process. Alignment requires an automated process with inline feedback on sensor performance to ensure that the required optomechanical parameters are of high quality for each device and within tight tolerances for the entire production.

The paper describes the alignment procedures TRIOPTICS has recently developed for various types of LiDAR systems used in the automotive industry to ensure repeatable and reproducible quality under production requirements.

Hear from:

Dirk Seebaum
Business Unit Manager
TRIOPTICS


For the longest time, radar applications deployed DSPs featuring fixed point arithmetic, as floating point operations were considered to be inferior in terms of performance, power efficiency and area (PPA), which is critical for any embedded system.

Yet there has always been a desire to move to floating point arithmetic, as it allows for the larger dynamic range required by the latest radar systems to achieve the necessary signal-to-noise ratio (SNR). This presentation will cover a detailed floating point / fixed point tradeoff analysis, featuring radar use cases.
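As a rough back-of-the-envelope for that dynamic-range argument, the numbers below are generic properties of the number formats themselves rather than results from the presentation: a 16-bit fixed-point sample spans roughly 90 dB, while single-precision floating point offers a far wider representable range with relative precision set by its 24-bit mantissa.

```python
import numpy as np

int16_dr = 20 * np.log10(2 ** 15)                  # ~90 dB for 16-bit fixed point
fp32 = np.finfo(np.float32)
fp32_span = 20 * np.log10(fp32.max / fp32.tiny)    # ~1529 dB representable span
fp32_prec = 20 * np.log10(2 ** 24)                 # ~144 dB relative precision (mantissa)

print(f"int16 fixed point : {int16_dr:6.1f} dB dynamic range")
print(f"float32 span      : {fp32_span:6.1f} dB representable range")
print(f"float32 precision : {fp32_prec:6.1f} dB set by the 24-bit mantissa")
```

In a fixed-point pipeline the designer must budget that 90 dB across every stage with explicit scaling; floating point trades that bookkeeping for per-sample exponents, which is what the PPA comparison in the talk weighs against.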

It will also discuss the growing interest in AI enhanced radar algorithms, and how these can be enabled using a vector DSP, either standalone or combined with a tightly coupled AI accelerator. Specific focus will be given to the programming flow featuring support for TensorFlow, Caffe or ONNX.

Hear from:

Markus Willems
Senior Product Manager
Synopsys


A discussion that will consider whether stringent OEM requirements can be met and whether it is possible to achieve functional safety and automotive-grade reliability whilst preserving modern vehicle design.

Hear from:

Kevin Vander Putten
Director
Cepton

Amit Mehta
Head of Innovation
North American Lighting

Juergen Hoellisch
Founder
Hoellisch Consulting GmbH

Paula Jones
President
ibeo Automotive USA


Automotive radar has been around for decades, but over the past few years there has been a flurry of activity in the new uses of radar in the car – from new applications to exotic antennas. This talk will introduce the audience to some new radar-based applications in vehicle localization, in-cabin health monitoring, and occupancy detection as well as cover notable new approaches to classic automotive radar. We will discuss how they work, why they are useful and, in some cases, why it took so long for them to appear.

Hear from:

Harvey Weinberg
Director of Sensor Technologies
Microtech Ventures


There are many ways to evaluate camera image quality using standardized equipment and metrics. However, after the results are tabulated, how do you assess which camera is most suitable for your specific application?

In this presentation, DXOMARK will introduce an example of an evaluation benchmark protocol for automotive camera image quality.

Hear from:

Pierre-Yves Maitre
Senior Image Quality Engineer
DXOMark


The largest cost in developing artificial intelligence-based automated driving solutions is collecting and labelling data for training and validation, regardless of autonomy level. Furthermore, data quality and diversity are also critical to enable truly robust and intelligent systems.

The use of synthetic and augmented data coupled with automatically annotated real-world data will be a game-changer for developing, testing and updating the next generation of Automated Driving software solutions.

This talk will discuss state-of-the-art data generation and labelling methods, introduce an integrated, cost-efficient, data-driven pipeline, and show how different hardware platforms are used at different stages, from training to in-vehicle integration.

Hear from:

Dr. Peter Kovacs
Production SVP
aiMotive


By developing an end-to-end optical simulation pipeline including AI, we are able to determine the impact of optical parameters on learning-based approaches.

We will show how to use this method to jointly determine post-processing image rectification and optical characteristics for optimized ADAS and autonomous driving applications.

We will demonstrate that we can ease most of the image rectification processes by directly obtaining an optimized image with a camera designed according to such optical characteristics.

Hear from:

Patrice Roulet
Co-Founder
Immervision


Recently, non-RGB image sensors have been gaining traction in automotive applications. One driver is the demand for smaller pixel sizes while maintaining low-light SNR. We carried out a pros/cons study of the popular color filter arrays such as RCCB, RYYCy, RCCG and RGB, including an analysis of the so-called yellow/red traffic signal differentiation issue. The other driver is the demand to use one camera for both machine vision and human vision purposes, especially in driver monitoring systems. RGB-Ir is under study for this application.

In this presentation, we will present those color filter options and discuss which are useful for which applications.

Hear from:

Dr. Eiichi Funatsu
VP of Technology
OMNIVISION


The commercialisation of Autonomous Systems including autonomous cars will require rigorous methods to certify artificial intelligence and make it safe. However, no solutions or standards exist today to guide the OEM and Tier 1 companies in that challenge, which is why CS Group has invested two years of research to develop a process – based on avionics certification – that aims to make the embedded artificial intelligence functionally safe.

Hear from:

Amine Smires
Director Autonomous Systems
CS Group


Image sensors remain the leading sensing solution for automotive ADAS applications. While driver monitoring in the NIR spectrum has already gained a lot of traction in the recent past, new applications such as occupant sensing become possible when combining NIR and visible video in the same sensor. Application challenges such as high temperature operation, high contrast scenes, and low light operation are greatly affected by pixel parameters.

In this presentation we shine the spotlight on the key sensor specifications needed to deliver the expected end-user experience. In some cases, the right tradeoff between MTF and QE, or NIR QE versus color accuracy, must be found. Other requirements like color HDR, low dark current, SNR at high temperatures and the effects of PLS must not be compromised. The importance and details of these parameters are often misunderstood and not connected back to the overall application.

Hear from:

Charles Kingston
Senior Imaging Application Development Engineer
ST Microelectronics


How can we allow for additional time and greater accuracy in understanding objects’ motion so that ADAS can take effective action? Existing perception methods have several shortcomings; these limitations cause frequent false positives and false negatives and often leave the motion planning system with too little reaction time and information to handle safety-critical situations. As a result, usability of and trust in current ADAS systems have been low.

In this presentation, we will discuss how a motion-first approach to perception provides the time advantage that can enable advanced ADAS features.

Hear from:

Joel Pazhayampallil
Founder and CEO
BlueSpace.ai


Deep learning advancements have shown significant promise in monitoring driver habits and actions. A robust driver monitoring and alert system is an essential component of Euro NCAP requirements, and the most widely adopted systems today are RGB camera-based. RGB cameras show great promise in modelling driver behaviour, but they struggle with illumination changes, occlusions, and anti-spoofing.

As AI technology advances, so does sensor technology, with indirect Time of Flight (iToF) cameras being a prominent example. An iToF sensor can provide both 2D amplitude images and distance images, giving it the advantage of being resilient to ambient lighting while providing additional depth information.

MulticoreWare will present our findings from using neural networks on Melexis iToF sensors and demonstrate their efficiency for modules such as Face Detection and Face Recognition that enable applications like Driver Authentication, Drowsiness Detection, etc. Furthermore, we will illustrate iToF cameras’ ability to detect spoofing (print attacks) by leveraging the distance images.
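For context on how an iToF pixel produces the amplitude and distance images mentioned above, here is a toy version of the commonly used four-phase demodulation. The modulation frequency, tap ordering and scaling are assumptions for illustration and do not describe Melexis' actual processing.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def itof_depth(a0, a1, a2, a3, f_mod=100e6):
    """Toy four-phase iToF demodulation: a0..a3 are per-pixel correlation
    samples taken at 0/90/180/270 degrees of the modulation signal."""
    phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2 * np.pi)   # phase shift of the return
    amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)              # active-IR amplitude image
    depth = C * phase / (4 * np.pi * f_mod)                   # unambiguous up to c/(2*f_mod)
    return depth, amplitude
```

The amplitude image behaves like an active-illumination IR picture, useful for face detection and recognition, while the depth image adds the geometric cue that makes a flat printed photo easy to reject in anti-spoofing.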

Hear from:

Vish Rajalingam
Lead AI Solutions Architect
MultiCoreWare


Advanced driver-assistance systems (ADAS) require low-latency and high-accuracy inference with an additional constraint of low-power performance that can only be achieved with custom designed hardware technologies. We present one such technology that distinguishes itself from traditional machine learning accelerators by utilizing an event-based processing architecture, low-bit computation, and an on-chip learning algorithm. In this talk we explain how our event-based, neuromorphic architecture enables efficient inference for person detection, face identification, keyword spotting, and LIDAR-based object detection applications that are critical for ADAS deployments.

Hear from:

Kristofer Carlson
Manager of Applied Research
Brainchip


There is now a mandate for driver monitoring hardware. How quickly will this tech be integrated? What price will consumers pay? What are the performance benchmarks?

Hear from:

Senthil Seetharaman
AV Advanced Engineering
MOBIS Technical Center of North America

Paul George
Director, Product Management, Computer Vision Technologies and Edge AI, Automotive
Xperi

Allen Lin
Technical Specialist – Driver Monitoring, Night Vision, and Interior Camera Systems
GM Global Technical Center

Moderated by Mark Fitzgerald
Director, Autonomous Vehicle Service
Strategy Analytics


Despite great advancements in sensor technology, different sensor types have their own strengths and weaknesses. Sensor fusion creates a synergistic environment where the strengths of one sensor compensate for the weaknesses of the others.

As the automotive industry moves towards increasing levels of ADAS and eventually full autonomy, sensor fusion will play an essential role. We will discuss the limitations of current sensor technologies and vehicle architectures before exploring how to turn sensor fusion into a reality.

Hear from:

Daniel Shwartzberg
Director of Automotive System Solutions
Valens


Different parts of automotive sub-systems have differing sensing and on-device AI requirements, and those requirements evolve over time. We will show, through specific examples, how each sensing application has evolved to rely on AI algorithms. In addition, diverse algorithms are often employed within an application for pre-processing, inference, and post-processing.

We will also highlight key trends in DNN topologies and make the case that, due to rapid progress in neural network research and varying processing requirements, a combination of hardwired and programmable solutions is essential.

Hear from:

Pulin Desai
Vision & AI Product Marketing & Management
Cadence


This work focuses on the latest automotive high dynamic range (HDR) image sensors with LED flicker mitigation (LFM) and their advanced features for sensing and viewing applications. Both 3 um and 2.1 um pixel architectures were developed to address flickering LED automotive sources and the motion artifacts associated with fast-moving objects. Advanced features of these sensors include dual output, 24-26-bit advanced HDR pipelines, different types of binning, and multiple context switching.

We highlight how these features enable better sensing systems as well as visually pleasing video. Special attention is given to processing non-Bayer color filter arrays into high-fidelity color images, enhancing the use of cameras for simultaneous sensing and viewing.

All the latest advances in pixel technology and advanced features enable lower-cost, faster time-to-market surround viewing and sensing solutions without compromising performance.

Hear from:

Sergey Velichko
Senior Manager
onsemi


Cameras and sensors are becoming more and more prevalent in the automotive industry. As humans and advanced driver-assistance systems rely more heavily on vision systems, an effective cleaning solution is needed. With numerous patents in the field, Texas Instruments has developed ultrasonic lens cleaning technology to address rain, ice, mud, and more.

Hear from:

Dr. Dave Magee
Distinguished Member of Technical Staff
Texas Instruments


An event camera, or neuromorphic camera, is an imaging sensor that responds to local changes in brightness. Event cameras do not capture images using a shutter as conventional (frame) cameras do. Instead, each pixel inside an event camera operates independently and asynchronously, reporting changes in brightness as they occur and staying silent otherwise.
Potential applications include object recognition, autonomous vehicles, and robotics. The US military is considering infrared and other event cameras because of their lower power consumption and reduced heat generation.
Automotive applications are emerging: event cameras could be used for interior and/or exterior applications, whether related to the development of autonomous vehicles or to driver monitoring. Partnerships between technology providers and automotive players have already started.
This presentation will show the difference between a conventional CMOS image sensor architecture and an event-based CIS. It will also highlight the different applications where event cameras can be used.
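To make the contrast with frame cameras concrete, the following sketch emulates the usual event-pixel model on top of an ordinary frame sequence: a pixel emits an event whenever its log-brightness has changed by more than a contrast threshold since its last event. This is a simplified emulation for illustration, not any vendor's actual sensor pipeline.

```python
import numpy as np

def events_from_frames(frames, timestamps, threshold=0.2):
    """Emit (t, x, y, polarity) events whenever a pixel's log-intensity changes
    by more than `threshold` since the last event at that pixel. A real event
    sensor does this asynchronously and independently in every pixel."""
    log_ref = np.log(frames[0].astype(np.float64) + 1e-6)  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log(frame.astype(np.float64) + 1e-6)
        delta = log_now - log_ref
        ys, xs = np.where(np.abs(delta) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(x), int(y), 1 if delta[y, x] > 0 else -1))
            log_ref[y, x] = log_now[y, x]   # reset the reference after firing
    return events
```

Static parts of the scene generate no events at all, which is where the power and bandwidth advantages mentioned above come from.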

Hear from:

Pierrick Boulay
Senior Analyst, Photonics and Sensing Division
Yole Developpement


In this talk, the self-cleaning performance of a UV-durable hydrophobic (UVH) coating on four types of sensors for autonomous vehicles, including vision camera, IR camera, LiDAR, and radar, under four different weathering conditions will be presented. This includes the evaluation results of UVH coatings using a lab testbed under stationary weathering conditions of rain, mud, fog, and bugs, and a mobile testbed under outdoor weathering, such as snow. The presentation will describe key metrics for self-cleaning coatings, and the evaluation results of the UVH coatings as prepared and after 500 to 3000 hours of Weather-O-Meter (WOM) testing using the ASTM D7869 method. Current results point to a significant benefit of using UVH coatings to improve the signal reading of sensors under inclement weather.

Hear from:

Peter Votruba-Drzal
Director, Global Research and Product Development, Automotive, Industrial, and Mobility
PPG Industries


With the emergence of single-photon sensitive image arrays, new capabilities for autonomous vehicles can be realized, such as improved object detection and tracking in adverse environments and perception beyond the line of sight. This talk will provide an overview of this emerging class of image sensors, their complementary processing algorithms, and the possibilities they unlock for ADAS and autonomous sensing as a whole. Capabilities enabled by this new technology and processing approach include improved performance in simultaneous High Dynamic Range (HDR), fast-motion, and low-light scenarios, as well as vision beyond the line of sight, including detection of oncoming vehicles at intersections and of obstructed pedestrians in parking lots, highways, dense urban environments, and neighborhoods.

Hear from:

Kristen Vilcans
COO
Ubicept


Near Infrared light is utilized in an increasing number of ADAS sensing applications from LiDAR based environmental mapping to LED based cabin monitoring systems. Use of NIR & visible filters can dramatically reduce ambient noise, enhance sensor performance, prevent driver distraction, and improve cabin aesthetics.

The presentation addresses how light absorbing materials can be formulated to address specific application challenges in a variety of supply forms including on-chip, within sensor lenses or covers, or via application of formulated or printed film.

Hear from:

Don Tibbitt
Technical Marketing Director
Epolin


In this presentation, IIHS will provide an overview of the program and its objectives.

The evaluations centre on the types of safeguards these systems ought to implement to help drivers fulfil their roles and understand their responsibilities when supported by partial automation. The safeguard categories addressed in the program include driver monitoring, attention reminders, emergency escalation, cooperative steering, and responsible application of automation functionality. The program also addresses whether these L2 systems permit unsafe scenarios, such as operating with crash avoidance features disabled or with the driver’s seat belt unbuckled.

Hear from:

Dr. Alexandra Mueller
Research Scientist
IIHS


Automotive OEMs expect that sensors used in safety-critical applications are designed to meet the requirements of the ISO 26262 Functional Safety standard. In addition, sensors used for autonomous driving use cases need to consider the ISO 21448 Safety of the Intended Functionality (SOTIF) standard in their design. This talk will compare the two standards and discuss NVIDIA’s approach to addressing both. Also to be discussed are the expectations on deliverables from sensor suppliers to show compliance with the standards.

Hear from:

Mark Costin
Distinguished Functional Safety Engineer
NVIDIA


How can the industry move closer to true safety and autonomy by harnessing investment in software innovation? What is needed to overcome architecture, supply chain and organizational challenges?

Hear from:

Juergen Hoellisch
Founder
Hoellisch Consulting GmbH

Mark Fitzgerald
Director, Autonomous Vehicle Service
Strategy Analytics

Partha Goswami
Senior Manager of Technology Trends and Insights
General Motors

Soshun Arai
VP of Strategy
TIER IV

