“AutoSens show report: 5 things we learned” by Marco Jacobs at videantis

Independent AutoSens review by Marco Jacobs, VP at videantis

This was the first AutoSens show ever, but it sure didn’t show. The organizer, Robert Stead of Sense Media, is a veteran at running technical conferences, and combining Rob’s experience and network with the very hot field of autonomous cars was sure to result in a great show. The location couldn’t have been more appropriate either: the Autoworld vintage car museum in the center of Brussels, Belgium, which has over 350 unique cars on display.

Day 1 started with workshops. In the morning, the newly formed IEEE P2020 working group, which specifies methods and metrics for measuring and testing image quality for automotive ADAS applications, had its first face-to-face meeting. After introducing the group’s goals and way of working, the officers gave an overview of the 10 subgroups they had defined, focusing on areas like image quality for viewing versus for computer vision, LED flicker measurement, ISO 26262, how image quality relates to SAE levels, and more.

The P2020 session was followed by a session held by the organizers of the Self Driving Track Days, which organizes open testing events for autonomous vehicle development. The rest of the afternoon was allocated to the workshop “Advanced sensing and image processing for automotive applications,” given by Prof. Albert Theuwissen of Harvest Imaging, a well-known expert in the field of CCD and CMOS image sensing. Albert also wrote a report about the show, which primarily highlights ST’s new sensor that has been specifically designed to deal with LED light sources.

The AutoSens show then really got started with a nice welcome reception in the evening. With more than 330 attendees and over 20 exhibitors, all focusing on automotive sensing and processing, we ran into many of our business contacts.

The following two days were filled with three tracks of talks, so it was hard to catch them all. The speakers came from a healthy mix of analysts, OEMs, Tier 1s, and semiconductor and software vendors. We presented a talk titled “Visual processing crucial to ADAS: applications, architectures and algorithms”. We’ll post a link to the slides once they’re available.

Here are five things we learned from the show.

Self-driving cars are hard

It seems that in the last few months we’ve had a bit of a setback here, perhaps initiated by the famous Tesla Autopilot accident, the resulting breakup between Mobileye and Tesla, and the new Autopilot software that’s more conservative about letting the driver let go of the wheel. At the conference, one analyst even mentioned 2040 as a possibility for driverless cars to become available: almost 25 years from now, and beyond the planning horizon for most technology companies. The general consensus at the show was that driverless cars will happen eventually, but that we should focus on what we can bring to market in the next few years.

Deep learning is hard

Similar to self-driving cars, reality has perhaps sunk in a bit on deep learning too. The nets need to be trained with huge amounts of image data, which needs to be properly annotated by hand; some companies have hundreds of people on staff performing this task.

Another problem is that it can be hard to understand how the nets perform in new scenarios that weren’t seen during training. In one of the conversations I had, an R&D engineer gave the example of a net that was trained to detect houses but, once in operation, suddenly classified a car as a house. The reason? Both the house and the car had the word “restaurant” on them, and that’s the feature the neural net had trained itself on. The fact that we don’t really understand how these deep neural networks work and behave means we don’t really know their limitations well either.

In addition, the memory and compute resources required to run these detectors are still beyond what embedded systems that can go into mainstream cars offer. Deep learning is not a silver bullet that can fulfill any computer vision task. At the same time, the networks are already beating humans at many computer vision tasks, and they’re bound to get smaller, faster and more accurate. It’ll be an exciting area to follow for quite some time.
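To make the resource point concrete, here’s a back-of-the-envelope sketch (my own illustration, not from any talk at the show) that counts the parameters and multiply-accumulate operations of a small convolutional detector; all layer shapes are hypothetical:

```python
def conv_cost(in_ch, out_ch, k, out_h, out_w):
    """Parameters and multiply-accumulates (MACs) for one k x k conv layer."""
    weights = in_ch * out_ch * k * k
    params = weights + out_ch           # weights plus per-channel biases
    macs = weights * out_h * out_w      # every output pixel reuses all weights
    return params, macs

# A hypothetical small detector running on a 1280x800 automotive camera frame.
layers = [
    # (in_ch, out_ch, kernel, out_h, out_w)
    (3,    32, 3, 400, 640),
    (32,   64, 3, 200, 320),
    (64,  128, 3, 100, 160),
    (128, 256, 3,  50,  80),
]

total_params = total_macs = 0
for layer in layers:
    p, m = conv_cost(*layer)
    total_params += p
    total_macs += m

print(f"{total_params / 1e6:.2f}M parameters, {total_macs / 1e9:.2f}G MACs per frame")
# Roughly 0.39M parameters and 4.8G MACs per frame: at 30 frames/s this toy
# net already needs well over 100 GMAC/s, before counting the memory bandwidth
# for weights and activations.
```

And this is a deliberately tiny network; the detectors discussed in the literature are orders of magnitude larger, which is exactly the gap embedded automotive silicon has to close.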

Image quality is hard

Another key aspect of any ADAS system is its image quality. But many different parameters relate to quality: resolution, color, frame rate, sensitivity, noise, distortion, vignetting, flicker, etc. In addition, many components contribute to the picture quality: the scene that’s being captured, the lens, the sensor, and the ISP (image signal processor). Most of these are interrelated, and then there’s the bigger question of how image quality impacts the computer vision algorithms downstream, or the user’s viewing experience for mirror, surround or rear camera systems. Image quality will remain a key topic for quite some time.
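As a minimal sketch of what measuring two of these parameters can look like (my own illustration; the file name and patch sizes are hypothetical), here’s how temporal SNR and corner vignetting might be estimated from a stack of flat-field captures:

```python
import numpy as np

# Hypothetical input: a stack of grayscale flat-field frames,
# shape (num_frames, height, width), captured of a uniform target.
frames = np.load("flat_field_stack.npy")

signal = frames.mean(axis=0)          # per-pixel mean over the stack
temporal_noise = frames.std(axis=0)   # per-pixel temporal noise

snr_db = 20 * np.log10(signal.mean() / temporal_noise.mean())

# Vignetting: brightness falloff of a corner patch relative to the center.
h, w = signal.shape
center = signal[h//2 - 8:h//2 + 8, w//2 - 8:w//2 + 8].mean()
corner = signal[:16, :16].mean()
falloff_pct = 100 * (1 - corner / center)

print(f"SNR: {snr_db:.1f} dB, corner falloff: {falloff_pct:.1f}%")
```

Efforts like IEEE P2020 exist precisely because every vendor today measures such quantities slightly differently.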

We need more sensors

Olivia Donnellan from Valeo mentioned in her talk that they’re already seeing OEMs design 12 cameras into their cars, and this number continues to go up. In addition, ultrasound is not going away, radar is growing, time-of-flight sensors are catching on, and night vision and lidar are continuing to make inroads. Adding more sensors and using different types of sensors to compensate for each other’s shortcomings is another trend that’s likely to continue for some time.

Surround view is replacing rear view

Several talks and booths highlighted surround view systems, which are quickly turning the rear view camera into a thing of yesteryear. Surround view systems combine the images from multiple cameras into a single integrated view of the car’s surroundings from above, making the task of parking or unparking the car much easier.
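The core step in such a system is warping each camera’s image onto the ground plane before stitching. Here’s a minimal sketch of that step, assuming OpenCV; the point coordinates are hypothetical and would come from an offline calibration in a real product:

```python
import cv2
import numpy as np

img = cv2.imread("rear_camera.jpg")

# Four points on the ground plane as seen in the camera image (pixels)...
src = np.float32([[420, 510], [860, 505], [1100, 700], [180, 710]])
# ...and where those same points sit in the top-down output (a known rectangle).
dst = np.float32([[300, 100], [500, 100], [500, 400], [300, 400]])

H = cv2.getPerspectiveTransform(src, dst)          # 3x3 ground-plane homography
topdown = cv2.warpPerspective(img, H, (800, 500))  # bird's-eye view, one camera

# A full surround view system repeats this for each of the four cameras and
# blends the overlapping regions into one composite image.
cv2.imwrite("topdown_rear.jpg", topdown)
```

Production systems also correct the cameras’ fisheye distortion first and blend the seams carefully, which is where much of the engineering effort goes.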

Conclusion

Everyone agreed it was a very good show, with the right people there and high-quality, relevant presentations. For a visual impression of the show, you can view the photos online.

The automotive and transportation industries are very large markets in rapid transformation. It all starts with the sensors, then come the algorithms, and then the embedded systems that implement them at low power and low cost. We’re excited to be part of this transformation and to deliver such a crucial visual processing component to the market. The next AutoSens show is tentatively scheduled to take place in the spring in the Detroit area. We’re looking forward to meeting everyone there again.
