AutoSensONLINE 2021 | Watch On-Demand

Please note that you must be signed in and be a ticket holder in order to watch the content below. Buy an “On-Demand Pass” to gain immediate access to all this event’s content.


LG Electronics introduced an ADAS front camera that reaches beyond the functional scope of conventional mono cameras.

Based on the close collaboration between Mercedes-Benz and LG Electronics, this camera has led to new innovations in the Mercedes-Benz "Solo System".

This presentation covers the technical and other challenges that led to new innovations, and the key success factors for the development.

  • Motivation for collaboration between Mercedes-Benz and LG Electronics
  • Technical and other challenges leading to new innovations
  • Key success factors (perception across a wide FoV, sensor fusion, system design, validation, etc.) for an optimal yet comprehensive ADAS system
  • Other behind-the-scenes stories (cultural, geographical, etc.)

Hear from:

Dr. Youngkyung Park
Vision AI Unit Leader
LGE

Dr. Benjamin Marx
Project Leader Multi-Purpose Camera
Mercedes-Benz

What does this mean when we talk about ICE and BEV cars? This session provides a focus on exterior lighting.

Hear from:

Paul-Henri Matha
Exterior Lighting, Technical Leader
Volvo Cars

Alongside the rapid progress in image sensor-based autonomous driving capabilities and associated sensor technologies, human viewing applications are an increasingly common and important differentiator for consumers. This presentation will examine the challenges we face when the same image sensors are used for both purposes simultaneously, and how we can avoid sacrificing image quality for flexibility in our hardware choices and software algorithms.

Hear from:

Matthew Hellewell
Principal Image Quality Engineer
Arm

  • Existing visual perception methods scale badly to adverse weather and lighting conditions
  • Weather phenomenon simulation and image translation can generate effective training data for adverse conditions (a minimal sketch follows this list)
  • Other domain adaptation techniques, such as domain flow and self-training, can also increase the robustness of perception methods
  • New benchmarks with real-world data, such as our ACDC dataset, are sorely needed for method training and evaluation
  • Other robust sensors, such as radar and microphones, should be leveraged
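
As a minimal sketch of the weather-simulation idea in the second bullet: applying the standard atmospheric scattering (Koschmieder) fog model to a clear-weather frame, given a per-pixel depth map, yields plausible synthetic fog for training. The function, parameter values, and arrays below are illustrative assumptions, not the method presented in the talk.

    import numpy as np

    def add_synthetic_fog(image, depth, beta=0.05, airlight=0.9):
        """Standard atmospheric scattering model:
        I_fog = I * t + A * (1 - t), with transmission t = exp(-beta * depth).

        image:    HxWx3 float array in [0, 1] (clear-weather frame)
        depth:    HxW float array of per-pixel distances in metres
        beta:     scattering coefficient (larger = denser fog) -- illustrative value
        airlight: atmospheric light intensity -- illustrative value
        """
        t = np.exp(-beta * depth)[..., np.newaxis]  # per-pixel transmission
        return image * t + airlight * (1.0 - t)

    # Example: distant pixels wash out first, as real fog behaves.
    rng = np.random.default_rng(0)
    clear = rng.random((4, 4, 3))                  # stand-in for a camera frame
    depth = np.linspace(5, 100, 16).reshape(4, 4)  # stand-in for a depth map
    foggy = add_synthetic_fog(clear, depth)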

Hear from:

Dengxin Dai
Group Leader, Vision for Autonomous Systems Group
MPI for Informatics

Autonomous driving will be the future of transportation, and cameras play a vital role on the path to full autonomy. Artificial intelligence approaches are the stepping stone to achieving that goal. It is crucial that the data these approaches rely on is supplied with the highest information content, which means the best image quality. Optical sensors must therefore be manufactured within tight tolerances to meet these standards.

This presentation describes the challenges in manufacturing ADAS cameras. Practical examples illustrate the need to actively control the chain of tolerances in the manufacturing process. In addition, it presents the level of manufacturing precision required by current camera sensor technology.

Hear from:

Sebastian Frisch
Development Engineer
TRIOPTICS

This presentation introduces a tool that summarizes the most crucial characteristics and provides a common ground for comparing each solution's pros and cons, by drawing a scoring envelope over eight major parameters of the LiDAR system, representing its performance, suitability for automotive applications, and business advantages.
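
To make the envelope idea concrete, here is a minimal sketch of how an eight-parameter scoring envelope could be computed and reduced to a single comparison figure (the area of the radar-chart polygon). The axis names and scores are illustrative assumptions, not the tool's actual parameters.

    import numpy as np

    # Eight illustrative evaluation axes; the tool's actual parameters may differ.
    AXES = ["range", "resolution", "fov", "frame_rate",
            "reliability", "cost", "size", "automotive_grade"]

    def envelope_area(scores):
        """Area of the radar-chart polygon spanned by eight 0-10 scores.

        Each score is a radius on an axis placed every 45 degrees; the polygon
        area is a simple single-number summary of the scoring envelope.
        """
        r = np.asarray(scores, dtype=float)
        theta = 2 * np.pi / len(r)
        # Sum of triangle areas between consecutive axes: 1/2 * r_i * r_{i+1} * sin(theta)
        return 0.5 * np.sin(theta) * np.sum(r * np.roll(r, -1))

    lidar_a = [9, 8, 6, 7, 8, 3, 4, 5]  # performance-oriented design
    lidar_b = [5, 5, 7, 6, 7, 9, 8, 8]  # cost/packaging-oriented design
    for name, s in [("A", lidar_a), ("B", lidar_b)]:
        print(f"LiDAR {name}: envelope area = {envelope_area(s):.1f}")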

Hear from:

Dima Sosnovsky
Principal System Architect
Huawei

This presentation introduces the general philosophy of the AUTOSAR SW framework and explains the characteristics of the two fully compatible SW platforms, Classic and Adaptive AUTOSAR. It also shows how AUTOSAR can be implemented in a so-called zonal architecture and explains the role of the AUTOSAR sensor interface.

Hear from:

Günter Reichart
Spokesperson
AUTOSAR

The ADAS and AV problem creates great complexities that, through experience, frustration, and blindsided expectations, can give way to simplicity, elegance, and synergies that could lead to the ubiquity of autonomous vehicles. This presentation offers background on some complex ADAS and AV situations that have led to simplifications and synergies: "symplexities", if you will. Development often brings about situations that call for a reality check, and when these reality checks cross well-meaning people from the developer level to the executive, they can bring frustration but also lead to spectacular results. This presentation covers a few key examples, past to current, and an approach that can change the paradigm from endless complexities to symplexity.

Hear from:

Shane Elwart
Founder
NIFT

The automotive market beyond 2030 will look quite different compared to today’s, and consequently so will traditional operating strategies. The development of a reliable and complete ecosystem is key for future success, both inside and outside the vehicle. The new functionalities found in cars today and expected in the future are not about hardware; instead the industry is entering the “software-defined everything” era. Software-defined vehicles and “functionality as a service” will continue to drive new revenue streams in the future, as well as create cost-reduction opportunities along the entire automotive value chain.

  • Hear the results of a Wards Intelligence/Dell Technologies automotive industry survey on future vehicle architectures
  • Learn how to put back-end processes and infrastructures in place to efficiently adjust and scale to the needs of the complex and broader ecosystem of the future

Hear from:

Larry Vivolo
Senior Business Development Manager for Automotive and Semiconductor Design and Manufacturing
Dell Technologies

Hear from:

Gor Hakobyan
Technical Project Manager
Bosch - invited

The industry was optimistic in its early projections of achieving full autonomy. However, several issues, including a lack of technological maturity and economic viability, have now pushed this timeline out considerably. Zendar believes that the road to full autonomy should be built on a solid platform of reliable ADAS systems that can act as a springboard. For that to happen, the perception problem needs to be solved by developing a reliable 4D perception system that does not add to the overall cost of the vehicle. Zendar's radar system is developed with that goal in mind.

Zendar's approach to high-resolution radar, which transfers hardware complexity to software, creates an opportunity to implement a superior radar-based ADAS system without increasing system cost. This presentation illustrates the unique architecture and techniques employed in this revolutionary radar system.

Hear from:

Vinayak Nagpal
Co-Founder
Zendar

This work describes a first-generation 8.3 MP, 2.1 µm dual conversion gain (DCG) pixel image sensor developed and released to the market. The sensor achieves HDR up to 140 dB with cinematographic image quality. Non-Bayer color filter arrays significantly improve low-light performance for front and surround ADAS cameras. This enables the transition from Level 2 to Level 3 autonomous driving and fulfills challenging Euro NCAP requirements.
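
For context, image sensor dynamic range in decibels is conventionally 20·log10 of the ratio between the largest and smallest resolvable signals, so a quick back-of-the-envelope conversion (not from the talk) shows what 140 dB implies:

    # Dynamic range: DR(dB) = 20 * log10(S_max / S_min)
    dr_db = 140.0
    ratio = 10 ** (dr_db / 20)  # invert the definition to get the signal ratio
    print(f"{dr_db:.0f} dB -> {ratio:,.0f}:1")  # 140 dB -> 10,000,000:1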

Hear from:

Sergey Velichko
Sr. Manager, ASD Technology and Product Strategy
onsemi

The rise of the retail-autonomy segment (between ADAS and the robotaxi) pushes the limits of sensing capabilities in many ways.

It is one thing to design dirt-cheap sensors for well-defined tasks, or performance beasts with little consideration of cost, packaging, etc.

It is a completely different challenge to hit all the spots at once, while keeping agility & scalability.

This presentation provides a high-level overview of this challenge from the OEM standpoint, discusses the different modalities within this context and what Gonen believes can make the retail-autonomy vision a reality, with a focus on radar.

Hear from:

Gonen Barkan
Group Leader, Radar Development & Domain Processing
GM

Not all pixels are created equal; neither are all lenses, sensors, or manufacturers. This causes a large variance in image quality between cameras with nominally the same fundamental specifications, such as pixel size, focal length, and f-number. Individual objective camera metrics can provide insight into, for example, the sharpness or noise performance of cameras, and instinctively we desire more of everything.

Far too often in papers exploring DNN performance, the description of the images used is limited to the pixel count, the total number of images, and the split between training and validation sets. This talk explores some desirable characteristics of image quality metrics, approaches to and pitfalls of combining them, and some strategies for ranking camera performance for use with autonomous systems.
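
To illustrate one pitfall of naively combining metrics: an arithmetic mean lets a strong score mask a disqualifying weakness, whereas a geometric mean over normalized scores penalizes any near-zero axis. The metric names and numbers below are illustrative assumptions, not values from the talk.

    import numpy as np

    def combined_score(scores):
        """Combine normalized per-metric scores (each in (0, 1]).

        Geometric mean: a camera cannot buy back a near-zero metric
        (e.g. terrible low-light SNR) with excellent sharpness, which an
        arithmetic mean would allow.
        """
        s = np.asarray(scores, dtype=float)
        return float(np.exp(np.mean(np.log(s))))

    # Illustrative normalized metrics: [sharpness (MTF50), SNR, dynamic range]
    camera_a = [0.90, 0.85, 0.80]
    camera_b = [0.99, 0.10, 0.95]  # great optics, very poor noise performance
    print(f"A: arith={np.mean(camera_a):.2f}, geo={combined_score(camera_a):.2f}")
    print(f"B: arith={np.mean(camera_b):.2f}, geo={combined_score(camera_b):.2f}")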

Hear from:

Robin Jenkin
Principal Image Quality Engineer
NVIDIA

This panel will address questions around whether it is possible to give customers what they want from HD maps when it comes to more accuracy, more coverage, and more features, or whether there is a middle ground with a medium-definition map. The panel will also touch on challenges and solutions relating to map making and maintenance, and on addressing scalability. Please note this panel was recorded under Chatham House rules.

Hear from:

Henning Lategahn
CEO
atlatec

Sinisa Durekovic
Architect, Extended Environment Model
CARIAD

Moderated by Phil Magney
Founder
VSI Labs

Ro Gupta
CEO CARMERA / Head of AMP North America
Woven Planet Group

Panellists from Khronos, EMVA, and other members of the Exploratory Group will discuss how a consistent set of interoperability standards and guidelines for embedded cameras and sensors will help solve the problems impeding growth in advanced sensor deployment. They will also share insights into the innovative Exploratory Group process that is bringing the industry together to generate consensus on catalyzing effective standardization initiatives. Please note this panel was recorded under Chatham House rules.

Hear from:

Neil Trevett
Vice President of Developer Ecosystems
NVIDIA, speaking as President, Khronos Group

Mayank Mangla
Imaging Systems Architect
Texas Instruments

Laurent Pinchart
Company Owner
Ideas on Board

Moderated by Chris Yates
President
EMVA

Thomas Hopfner
Product Manager Licensing and Interfaces
MVTec

Machines have outperformed humans in most tasks, but human vision remains far superior. For ADAS and ultimately AV applications, vision technology on par with human vision in efficiency, effectiveness, and to some extent reliability is necessary. Given the significant limitations of current imaging/camera technology, the industry has jumped prematurely into adding other types of sensors that are more power- and resource-hungry and less reliable. Humans drive vehicles safely, and they don't have lidars or radars.

As biology and nature have inspired much of our technology, developing imaging technology that mimics the human eye seems the more prudent path. The time will come when additional sensing modalities add unique value, but the lower-hanging fruit (performance versus price) is the proposed approach.

Unlike the photos and videos we collect for personal consumption, machine vision is not about pretty images or the greatest number of pixels. Machine vision simply needs to extract the "best" actionable information very efficiently (in time and energy) from the available signal (photons). We advocate that there is still significant room for improvement simply by optimizing the architecture, in particular the signal processing chain from capture to action, and human vision is a perfect example of what is possible. At Oculi, we have developed a new architecture for computer and machine vision that promises efficiency on par with human vision while outperforming it in speed.

Hear from:

Charbel Rizk
Founder, CTO and CEO
Oculi

Deep learning has led to remarkable progress in artificial intelligence and has significantly advanced performance across areas in vehicle perception. Will deep learning be central to the next stages of development or will the challenges of advanced AI and limitations of deep learning mean otherwise? Please note this panel was recorded under Chatham House rules.

Hear from:

Firas Lethaus
Head, Deep Learning Expert Center, Software Innovation Center, Data Lab
VW Group

Danil Prokhorov
Project Leader and Research Manager
Toyota Tech Center

Jonathan Horgan
Deep Learning and Computer Vision Architecture Manager/Valeo Senior Expert
Valeo Vision Systems

Szabolcs Fulop
Director of Engineering
Xperi Corporation

Moderated by Dominique Bonte
Managing Director and Vice President
ABI Research

Following the success of last year’s online Awards, we’re back, and running the ceremony interactively and live on Zoom, with a whole new set of categories to recognise achievement and creativity in our Community.

The shortlist has been revealed after some careful deliberation from our team of expert judges. Don’t forget to read through the ‘Most Exciting Start-Up’ shortlist as the Award will be decided, by you, in a live vote during the ceremony.

LIVE ON ZOOM | 7:00PM (GMT) | TUESDAY 23 NOVEMBER

Tickets to join the ceremony, for free, are now available!