Article with Immervision: Challenges of varying illumination for automotive

In this article Julie Buquet, AI Developer and Scientist at Immervision, explores the challenges of handling different illumination scenarios when designing intelligent vision systems for in-cabin monitoring. What is the impact on algorithmic performance? How can optimization be achieved?

Automotive constitutes one of the most demanding and booming industries for computer vision applications. Initially focused on road-facing images for tasks such as pedestrian or lane detection, computer vision applications are now expanding to cover the entire surroundings of the vehicle, including in-cabin monitoring. Such applications require combining many vision tasks and separate components. On the one hand, the driver is monitored through gaze tracking, facial recognition, emotion or health detection, and hands-on-wheel detection to identify potential distractions and reinforce safety on board. On the other hand, the entire cabin is monitored for a full understanding of the vehicle environment, using front-seat and back-seat occupancy detection and by identifying potentially dangerous behaviour of any vehicle occupant. As all these tasks have different image-quality requirements, camera design for in-cabin monitoring becomes a challenge, which can be addressed either with several cameras or detection devices, or through a more complex end-to-end design that optimizes a single camera. Yet in the context of the vehicle interior, one additional challenge remains: maintaining constant performance of all algorithms across the various illumination scenarios encountered within a vehicle's cabin.

Interested in this topic?


Join us for a free-to-access FOCUS session on Precision Imaging: Automotive Image Quality in Challenging and Complex Conditions with experts from Immervision, Owl AI, AVL, DXOMark and OMNIVISION

27th March 2024  |  3:30pm GMT

Unlike many computer vision tasks, the images used in automotive applications are captured under varying weather and illumination: from dazzling sunlight and streetlights to dark and hazy nights. This must not affect the vision algorithms' performance, which should remain constant. There are currently different ways of addressing this problem at different levels of the image formation pipeline (the lens, the sensor, the image signal processor (ISP), or the vision task itself). However, optimizing them together is crucial to ensure harmonious, constant performance while keeping costs down and the time-to-market realistic.

Most current automotive cameras are optimized for a narrow range of illumination. Hence, a camera optimized for daylight has a decreased signal-to-noise ratio (SNR) in low light, as not enough light enters the system. This leads to an image whose noise amplitude is closer to the signal amplitude, making it harder to identify features of interest. In the opposite situation, using a camera optimized for low light under strong illumination might result in saturation in the final image. As automotive is a safety-critical application, the faithfulness of the images used for vision tasks is crucial. Consequently, traditional de-noising operations in the ISP are undesirable, as they bias the image and increase the risk of information loss. To address this issue, the lens and the sensor must be optimized to provide constant performance over a broader range of illumination. This can be done, for example, by increasing the aperture size, lowering the f-number of the optical design, and/or optimizing the pixel size on the sensor to control the amount of light entering the system.
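The influence of f-number and pixel size on SNR can be sketched with a simple shot-noise model. This is an illustrative back-of-the-envelope calculation, not a sensor characterization: the conversion factor `k`, the read-noise figure and all camera parameters below are assumed values.

```python
import math

def pixel_snr(scene_lux, f_number, pixel_pitch_um, exposure_s,
              read_noise_e=3.0, k=5000.0):
    """Shot-noise-limited SNR estimate for a single pixel.

    k is an assumed lumped conversion factor (lux -> electrons per
    um^2 per second at f/1.0); real values depend on the sensor's
    quantum efficiency and the lens transmission.
    """
    # Image-plane signal scales with scene illumination and pixel
    # area, and inversely with the square of the f-number.
    signal_e = k * scene_lux * exposure_s * pixel_pitch_um**2 / f_number**2
    noise_e = math.sqrt(signal_e + read_noise_e**2)  # shot + read noise
    return signal_e / noise_e

# Daylight vs. a dim cabin with the same camera (f/2.8, 3 um, 10 ms):
day = pixel_snr(1000.0, 2.8, 3.0, 0.010)
night = pixel_snr(1.0, 2.8, 3.0, 0.010)
# Opening the aperture to f/1.6 recovers part of the SNR lost at night.
night_fast = pixel_snr(1.0, 1.6, 3.0, 0.010)
```

The quadratic dependence on f-number is why even a modest aperture increase pays off in low light; pixel area acts the same way, which is the trade-off behind larger low-light pixels.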

Furthermore, some illumination scenarios involve strong localized incident light, which can be reflected by surfaces of the vehicle's interior as well as within the camera itself (lens elements, mechanical parts or the sensor) to produce visual artefacts such as ghosts or stray light. A convenient way to deal with this at the optics level is to conduct proper simulation and analysis (stray light analysis) during the design phase, considering as many of the application scenarios as possible. Besides increasing the complexity of the lens design, this also places more constraints on the choice of sensor.

Although some combined RGB/IR sensor options are available on the market today, serving vision tasks and human vision applications at the same time is challenging. Using this combined RGB + IR sensor configuration can result in poorer machine perception, as it leads to a lower resolution for both the IR and RGB images, especially when using a traditional wide-angle lens. Consequently, improving a vision system for low light is a cumbersome optimization challenge, which often implies lowering the resolution due to the extended spectrum and the larger pixel size. To avoid repercussions on the performance of the perception stack, we perform smart pixel management at the optics level to increase the resolution in the regions of interest of the cabin (such as the driver's face). As a result, for the same field of view, the resolution around the driver's eyes is increased to improve gaze-tracking accuracy, while the area covering the occupants' seats covers fewer pixels, as the vision tasks performed in these zones (occupancy and distraction detection) are easier.
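As a rough illustration of how a lens mapping redistributes pixels across the field of view, the sketch below compares a classical f-theta (equidistant) projection, which spreads pixels uniformly per degree, with a hypothetical centre-weighted mapping. The mapping function, focal length and sensor pitch are illustrative assumptions, not Immervision's actual design.

```python
import math

def pixels_per_degree(mapping, theta_deg, sensor_px_per_mm, d_theta=0.1):
    """Local angular resolution: numerical derivative of the lens
    mapping r(theta) (image height in mm vs. field angle in radians),
    converted to pixels per degree of field angle."""
    r1 = mapping(math.radians(theta_deg))
    r2 = mapping(math.radians(theta_deg + d_theta))
    return (r2 - r1) / d_theta * sensor_px_per_mm

F_MM = 2.0  # assumed focal length

def f_theta(theta):
    """Classical equidistant mapping r = f * theta: uniform px/deg."""
    return F_MM * theta

def centre_weighted(theta):
    """Hypothetical mapping boosting magnification near the optical
    axis (e.g. the driver's face) at the expense of the field edges."""
    return F_MM * (theta + 0.5 * math.sin(theta))

PX_PER_MM = 200.0  # assumed sensor pitch (5 um pixels)
centre = pixels_per_degree(centre_weighted, 0.0, PX_PER_MM)
edge = pixels_per_degree(centre_weighted, 80.0, PX_PER_MM)
# Same sensor, same field of view: the centre of the field now
# receives more pixels per degree than the periphery.
```

The key point is that the pixel budget is fixed by the sensor; the mapping function decides where on the scene those pixels are spent.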

Finally, as improving low-light imaging for in-cabin monitoring implies a joint optimization of all components of the perception stack, the vision tasks themselves should also be adapted to perform consistently under various ranges of illumination. For instance, a filtering operation over a few frames can be applied to reduce temporal noise while maintaining real-time analysis of the entire scene. Besides, in the case of learning-based applications, using a well-distributed training dataset, which ensures the presence of images taken in many different scenarios, is crucial. As low-light images are often under-represented in available datasets, data augmentation processes that generate degraded, noisy or darker images can be a good way to even out performance across the different illuminations. This is also a use case where synthetic, augmented or converted image data can be of interest, as it allows complete control over the dataset distribution.
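A low-light augmentation step of the kind described above can be sketched as follows: reduce the exposure, apply a gamma curve, and inject sensor-like noise into a well-lit image. All parameter values (gain, gamma, noise level) are assumptions to be tuned against real night-time captures, and the 8-bit grayscale list-of-rows format is used only to keep the example self-contained.

```python
import random

def darken_and_add_noise(image, gain=0.25, gamma=2.2,
                         noise_sigma=8.0, seed=0):
    """Turn a well-lit 8-bit grayscale image (list of rows of ints)
    into a plausible low-light training sample."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    out = []
    for row in image:
        new_row = []
        for px in row:
            dark = 255.0 * (gain * px / 255.0) ** gamma  # under-exposure
            noisy = dark + rng.gauss(0.0, noise_sigma)   # sensor noise
            new_row.append(max(0, min(255, round(noisy))))
        out.append(new_row)
    return out

# Augmenting a tiny synthetic "image": mean brightness drops sharply,
# and noise becomes a significant fraction of the remaining signal.
bright = [[200] * 4 for _ in range(4)]
dark = darken_and_add_noise(bright)
```

Sweeping `gain` and `noise_sigma` over a range of values is a simple way to cover the spread of night-time conditions rather than a single operating point.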

In a nutshell, handling different illumination scenarios for in-cabin monitoring is a major challenge when designing intelligent vision systems. Yet it can be addressed at different levels of the perception stack. A harmonized, end-to-end optimization of the overall vision solution is a convenient way to reduce the cost and time-to-market of the complete system. Besides, as a substantial part of low-light imaging is handled at the optics and sensor level, leveraging the optimization of the vision algorithms is a good way to relax the requirements while maintaining an extended field of view, thereby avoiding multiplying the number of camera devices in the vehicle.

Find out more about Immervision here.

About the author

After earning her master's degree in Optics and Applied Computer Vision from the Institut d'Optique Graduate School in France, Julie Buquet attended the Laval University PhD program. Julie's PhD specializes in wide-angle imaging systems applied to supervised learning-based approaches. At Immervision, Julie's responsibilities include building and evaluating embedded algorithms for optimized performance of smart wide-angle imaging systems and applying her research results to industrial applications. In addition to her expertise in computer vision, Julie has been invited to speak at various industry conferences and has co-authored multiple publications and patents.
