AutoSensONLINE 2021 | Watch On-Demand
LG Electronics introduced an ADAS front camera that reaches beyond the functional scope of conventional mono cameras.
Based on the close collaboration between Mercedes-Benz and LG Electronics, this camera enables new innovations in the Mercedes-Benz "Solo System".
In this presentation, we will discuss the technical and other challenges that led to new innovations, as well as the key success factors for the development.
- Motivation for collaboration between Mercedes-Benz and LG Electronics
- Technical and other challenges leading to new innovations
- Key success factors (perception in a wide FoV, sensor fusion, system, validation, etc.) for an optimal yet comprehensive ADAS system
- Other behind-the-scenes stories (cultural, geographical, etc.)
Dr. Youngkyung Park
Vision AI Unit Leader
Dr. Benjamin Marx
Project Leader Multi-Purpose Camera
What does this mean when we talk about ICE and BEV cars? This presentation provides a focus on exterior lighting.
Exterior Lighting, Technical Leader
Alongside the rapid progress in image sensor-based autonomous driving capabilities and associated sensor technologies, human viewing applications are an increasingly common and important differentiator for consumers. This presentation will examine the challenges we face when the same image sensors are used for both purposes simultaneously, and how we can avoid sacrificing image quality for flexibility in our hardware choices and software algorithms.
Principal Image Quality Engineer
- Existing visual perception methods scale badly to adverse weather and lighting conditions
- Weather phenomenon simulation and image translation can generate effective training data for adverse conditions
- Other domain adaptation techniques such as domain flow and self-training can also increase the robustness of perception methods
- New benchmarks with real-world data, such as our ACDC dataset, are strongly needed for method training and evaluation
- Other robust sensors such as radar and microphones should be leveraged
Group Leader, Vision for Autonomous Systems Group
MPI for Informatics
Autonomous driving will be the future of transportation, and cameras play a vital role on the path to full autonomy. Artificial intelligence approaches are the stepping stone to achieving that goal. It is crucial that the data these approaches rely on is supplied with the highest information content, which means the best image quality. Optical sensors must therefore be manufactured within tight tolerances to meet these standards.
This presentation describes the challenges in manufacturing ADAS cameras. Based on practical examples, the need to actively take control of the chain of tolerances for the manufacturing process is illustrated. In addition, the level of manufacturing precision that is required for the current camera sensor technology is presented.
This presentation introduces a tool that summarizes the most crucial characteristics and provides common ground for comparing each solution's pros and cons. It does so by drawing a scoring envelope based on eight major parameters of the LiDAR system, representing its performance, suitability for automotive applications, and business advantages.
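As an illustrative aside, a scoring envelope of this kind can be sketched as a normalized per-parameter comparison. The eight parameter names and the 0-10 scale below are assumptions for illustration only; the talk's actual tool and parameters are not detailed here.

```python
# Hypothetical sketch of a LiDAR scoring envelope; parameter names and the
# 0-10 input scale are assumptions, not taken from the presentation.
LIDAR_PARAMETERS = [
    "range", "resolution", "field_of_view", "frame_rate",
    "reliability", "automotive_grade", "cost", "maturity",
]

def scoring_envelope(scores: dict) -> list:
    """Normalize raw 0-10 scores to 0-1, in fixed parameter order.
    The resulting list forms the vertices of a radar-chart-style envelope."""
    return [scores.get(p, 0) / 10 for p in LIDAR_PARAMETERS]

def compare(system_a: dict, system_b: dict) -> dict:
    """Per-parameter normalized difference between two candidate systems."""
    a, b = scoring_envelope(system_a), scoring_envelope(system_b)
    return {p: round(x - y, 2) for p, x, y in zip(LIDAR_PARAMETERS, a, b)}
```

Plotting the two envelopes on the same polar axes makes each system's trade-offs (e.g. range versus cost) visible at a glance.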
Principal System Architect
This presentation introduces the general philosophy of the AUTOSAR software framework and explains the characteristics of the two fully compatible software platforms, Classic and Adaptive AUTOSAR. It will also show how AUTOSAR can be implemented in a so-called zone architecture and explain the role of the AUTOSAR sensor interface.
The ADAS and AV problem creates great complexities that, through experience, frustration and blindsided expectations, can yield simplicity, elegance and synergies that could lead to the ubiquity of autonomous vehicles. This presentation will give a background on some complex ADAS and AV situations that have led to simplifications and synergies: symplexities, if you will. As we know, development often brings about situations that call for a reality check, and when these reality checks cross well-meaning people from the developer level to the executive, they can cause some frustration but can also lead to spectacular results. This presentation will cover a few key examples, past to current, and an approach that can change the paradigm from endless complexities to symplexity.
The automotive market beyond 2030 will look quite different compared to today’s, and consequently so will traditional operating strategies. The development of a reliable and complete ecosystem is key for future success, both inside and outside the vehicle. The new functionalities found in cars today and expected in the future are not about hardware; instead the industry is entering the “software-defined everything” era. Software-defined vehicles and “functionality as a service” will continue to drive new revenue streams in the future, as well as create cost-reduction opportunities along the entire automotive value chain.
- Hear the results of a Wards Intelligence/Dell Technologies automotive industry survey on future vehicle architectures
- Learn how to put back-end processes and infrastructures in place to efficiently adjust and scale to the needs of the complex and broader ecosystem of the future
Senior Business Development Manager for Automotive and Semiconductor Design and Manufacturing
The industry was optimistic in its early projections of achieving full autonomy. However, several issues including lack of technology maturity and economic viability have now pushed this timeline out considerably. Zendar believes that the road to full autonomy should be based on a solid platform of reliable ADAS systems that can act as a springboard. In order for that to happen, the perception problem needs to be solved by developing a reliable 4D perception system that does not add to the overall cost of the vehicle. Zendar’s radar system is developed with that goal in mind.
Zendar’s approach to high resolution radar, through transferring hardware complexity to software, creates an opportunity to implement a superior radar-based ADAS system without increasing system cost. This presentation illustrates the unique architecture and techniques employed in this revolutionary radar system.
This work describes a first-generation 8.3 MP 2.1 µm dual conversion gain (DCG) pixel image sensor developed and released to the market. The sensor has HDR up to 140 dB and cinematographic image quality. Non-Bayer color filter arrays significantly improve low-light performance for front and surround ADAS cameras. This enables transitioning from Level 2 to Level 3 AD and fulfilling challenging Euro NCAP requirements.
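To put the quoted 140 dB figure in linear terms: sensor dynamic range in dB is conventionally 20·log10(max/min signal), so 140 dB corresponds to a ten-million-to-one intensity ratio. A minimal sketch of that conversion:

```python
import math

def db_to_ratio(db: float) -> float:
    """Convert a dynamic-range figure in dB to a linear signal ratio
    (image-sensor convention: dB = 20 * log10(ratio))."""
    return 10 ** (db / 20)

def ratio_to_db(ratio: float) -> float:
    """Inverse conversion: linear signal ratio to dB."""
    return 20 * math.log10(ratio)
```

For example, `db_to_ratio(140)` gives 1e7, i.e. the brightest resolvable signal is ten million times the darkest.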
Sr. Manager, ASD Technology and Product Strategy
The rise of the retail-autonomy segment (between ADAS and the RoboTaxi) pushes the limits of sensing capabilities in many ways.
It is one thing to design dirt-cheap sensors for well-defined tasks, or performance beasts with little consideration of cost, packaging, etc.
It is a completely different challenge to hit all the spots at once, while keeping agility & scalability.
This presentation provides a high-level overview of this challenge from the OEM standpoint, discusses the different modalities within this context and what Gonen believes can make the retail-autonomy vision a reality, with a focus on radar.
Group Leader, Radar Development & Domain Processing
Not all pixels are created equal; neither are all lenses, sensors, or manufacturers. This causes a large variance in image quality between cameras with nominally the same fundamental specifications, such as pixel size, focal length and f-number. Individual objective camera metrics can provide insight into, for example, the sharpness or noise performance of cameras, and instinctively we desire more of everything.
Far too often in papers exploring DNN performance, the description of the images used is limited to the pixel count, the total number of images, and the split between training and validation sets. This talk explores some desirable characteristics of image quality metrics, approaches to and pitfalls of combining them, and some strategies for ranking camera performance for use with autonomous systems.
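One naive strategy for combining several objective metrics into a single ranking score is sketched below. This is an illustrative assumption, not the talk's method; the metric names and weights are invented for the example.

```python
# Hypothetical sketch: combine normalized (0-1) camera metrics into one score
# via a weighted geometric mean. Metric names and weights are illustrative.
def combined_score(metrics: dict, weights: dict) -> float:
    """Weighted geometric mean of normalized metrics. Unlike a simple
    weighted average, a camera that is very weak in any single metric
    drags the combined score down sharply, which is one reason the
    choice of combination rule matters when ranking cameras."""
    total_w = sum(weights.values())
    score = 1.0
    for name, w in weights.items():
        score *= metrics[name] ** (w / total_w)
    return score
```

For instance, a camera scoring 0.8 on sharpness but 0.5 on SNR (equal weights) lands at about 0.63 rather than the 0.65 a plain average would give, and the gap widens as any one metric approaches zero.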
Principal Image Quality Engineer
This panel will address questions around whether it is possible to give customers what they want from HD maps when it comes to more accuracy, more coverage and more features. Or is there going to be a middle ground with a medium-definition map? The panel will also touch upon challenges and solutions relating to map making and maintenance, and addressing scalability. Please note this panel was recorded under Chatham House rules.
Architect, Extended Environment Model
Moderated by Phil Magney
CEO CARMERA / Head of AMP North America
Woven Planet Group
Panellists from Khronos, EMVA, and other members of the Exploratory Group will discuss how a consistent set of interoperability standards and guidelines for embedded cameras and sensors will help solve the problems impeding growth in advanced sensor deployment and share insights into the innovative Exploratory Group process that is bringing the industry together to generate consensus on catalyzing effective standardization initiatives. Please note this panel was recorded under Chatham House rules.
Vice President of Developer Ecosystems
NVIDIA, speaking as President, Khronos Group
Imaging Systems Architect
Ideas on Board
Moderated by Chris Yates
Thomas Hopfner
Product Manager, Licensing and Interfaces
Machines have outperformed humans at most tasks, but human vision remains far superior. For ADAS and ultimately AV applications, vision technology on par with human vision in efficiency, effectiveness, and to some extent reliability is necessary. With significant limitations in current imaging/camera technology, the world has jumped prematurely into adding other types of sensors that are more power- and resource-hungry and less reliable. Humans drive vehicles safely, and they don't have lidars or radars. As biology and nature have inspired much of our technological innovation, developing imaging technology that mimics the human eye seems a more prudent path. The time will come when additional sensing modalities add unique value, but the lower-hanging fruit (performance versus price) is the proposed approach. Also, unlike the photos and videos we collect for personal consumption, machine vision is not about pretty images and the highest number of pixels. Machine vision simply needs to extract the “best” actionable information very efficiently (in time and energy) from the available signal (photons). We advocate that there is still significant room for improvement simply by optimizing the architecture, in particular the signal processing chain from capture to action, and human vision is a perfect example of what’s possible. At Oculi, we have developed a new architecture for computer and machine vision that promises efficiency on par with human vision but outperforms it in speed.
Founder, CTO and CEO
Deep learning has led to remarkable progress in artificial intelligence and has significantly advanced performance across areas in vehicle perception. Will deep learning be central to the next stages of development or will the challenges of advanced AI and limitations of deep learning mean otherwise? Please note this panel was recorded under Chatham House rules.
Head, Deep Learning Expert Center, Software Innovation Center, Data Lab
Project Leader and Research Manager
Toyota Tech Center
Deep Learning and Computer Vision Architecture Manager/Valeo Senior Expert
Valeo Vision Systems
Director of Engineering
Moderated by Dominique Bonte
Managing Director and Vice President
Following the success of last year’s online Awards, we’re back, and running the ceremony interactively and live on Zoom, with a whole new set of categories to recognise achievement and creativity in our Community.
The shortlist has been revealed after some careful deliberation from our team of expert judges. Don’t forget to read through the ‘Most Exciting Start-Up’ shortlist as the Award will be decided, by you, in a live vote during the ceremony.
LIVE ON ZOOM | 7:00PM (GMT) | TUESDAY 23 NOVEMBER
Tickets to join the ceremony, for free, are now available!