Christophe Lavergne joined RENAULT in 1991, where he worked on driver drowsiness detection. In 1999, he joined the upstream ADAS department of RENAULT to develop an "image sensor and processing" laboratory dedicated to the specification, characterization and validation of the "Environment Perception Function" of camera sensors and camera systems for machine vision applications.
Christophe will be delivering a session on “The use of incremental SNR (iSNR) as criteria for the SOTIF of HDR front camera sensor once integrated in the vehicle” at AutoSens in Brussels. We were honoured to have the opportunity to ask him a few questions about his presentation and his work within the industry.
Q: Your presentation at AutoSens is about the use of iSNR. For those who may be unfamiliar with the term, could you please explain it?
A: iSNR stands for "Incremental Signal-to-Noise Ratio". This criterion is defined in the ISO 15739 standard and was first promoted for automotive machine vision applications by Dirk Hertel in the 2010s. Technically speaking, iSNR is the signal-to-noise ratio of the image sensor projected back into, and measured in, the luminance domain of the scene (the "photo space"). This criterion is really meaningful for measuring the luminance dynamic range over which an HDR image sensor is actually operational, that is, the range over which the sensor delivers pixel Digital Numbers that allow local contrasts in the scene to be detected with the required level of statistical "accuracy". iSNR makes it possible to link the purely technical performance of the image sensor to its functional performance, whether the OECF of the HDR image sensor is linear or not (OECF stands for Opto-Electronic Conversion Function). This criterion may be very useful for OEMs such as carmakers to specify and validate in-vehicle camera sensors. On the other hand, computing iSNR requires full knowledge of the HDR image sensor's OECF, and obtaining that full knowledge from our suppliers is still a challenge.
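To make the idea concrete, here is a minimal sketch of how iSNR can be evaluated in photo space from a tabulated OECF and a noise curve. This is an illustration of the principle, not the exact ISO 15739 measurement procedure, and the OECF and noise model below are invented for the example; iSNR at a scene luminance L is taken as L multiplied by the incremental gain of the OECF (dDN/dL) divided by the noise in Digital Numbers.

```python
import numpy as np

def isnr(luminance, dn, sigma_dn):
    """Incremental SNR in photo space.

    luminance : scene luminance samples (cd/m^2), ascending
    dn        : mean sensor output (Digital Numbers) at each luminance, i.e. the OECF
    sigma_dn  : temporal noise (standard deviation, in DN) at each luminance
    """
    # Incremental gain of the OECF, dDN/dL, estimated numerically;
    # np.gradient handles the non-uniform (logarithmic) luminance spacing.
    gain = np.gradient(dn, luminance)
    # iSNR(L) = L * (dDN/dL) / sigma_DN: the SNR of a small luminance
    # increment referred back to the scene, valid for linear or
    # non-linear OECFs alike.
    return luminance * gain / sigma_dn

# Illustrative (invented) data: a logarithmic HDR OECF with a simple noise model.
L = np.logspace(-2, 5, 200)              # 7 decades of scene luminance
dn = 500.0 * np.log10(1.0 + L / 0.01)    # hypothetical log-shaped OECF
sigma = 1.0 + 0.05 * np.sqrt(dn)         # hypothetical noise curve (DN)

snr = isnr(L, dn, sigma)

# "Operational" dynamic range: luminances where iSNR clears a chosen threshold.
threshold = 10.0
operational = L[snr >= threshold]
if operational.size:
    print(f"operational range ~ {operational.min():.3g} to {operational.max():.3g} cd/m^2")
```

The point of the exercise is the one made in the answer above: the result depends entirely on knowing the full OECF and noise behaviour, which is exactly the information that is hard to obtain from suppliers.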
Q: Why were you attracted to being involved with P2020, with this focus? What are the next steps?
A: The specification and validation of camera sensors and camera systems, whether for "machine vision" or "human vision" applications, are still in their early stages. This is true even for questions as simple as the range of luminance over which a camera sensor must be effective, or the minimum (Weber) contrast in photo space that must be detected for pattern-recognition algorithms to work properly. Because ADAS vision systems are now becoming a public safety issue, through the massive introduction of systems like AEB, the scientific and technical foundations that are the prerequisites for reliable industrialization of these systems must be clearly defined, shared and validated by all. Grey areas are no longer acceptable given the public safety stakes.
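The minimum-contrast question connects directly to iSNR. A back-of-the-envelope relation often used with iSNR-style criteria (a simplification, not a formula from the interview or from ISO 15739 itself) is that a Weber contrast C = dL/L produces an incremental signal of roughly C times iSNR noise units, so the smallest detectable contrast at a chosen confidence factor k is about C_min = k / iSNR:

```python
def min_weber_contrast(isnr_value, k=3.0):
    """Smallest detectable Weber contrast (dL/L) at a given luminance,
    assuming detection requires the incremental signal to exceed
    k noise standard deviations (k is a chosen confidence factor)."""
    return k / isnr_value

# With iSNR = 30 and a 3-sigma detection criterion:
print(min_weber_contrast(30.0))  # -> 0.1, i.e. a 10 % contrast
```

Under this reading, specifying a minimum Weber contrast to be detected is equivalent to specifying a minimum iSNR over the required luminance range, which is why convergence on the criterion matters.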
My work with P2020 experts on criteria like iSNR is all about defining and validating the scientific and technical foundations of automotive vision systems. This work at P2020 is focused on the camera sensor part of the camera system. From my point of view, the performance of the HDR camera sensor is just as important to the final performance of an automotive vision system as the image analysis algorithms; the camera sensor part is simply much less publicized than the "algorithmic" part of the vision system.
Ideally, the next step would be the convergence of the image sensor experts' community on a single criterion for HDR camera sensors for machine vision applications. But there is still a lot of work to be done to achieve this first fundamental step, without which it is quite impossible to make progress on this subject.
Q: Can you tell us more about your work in the ADAS department of RENAULT to develop an "image sensor and processing" laboratory? What are the biggest challenges you have faced, and are facing now?
A: Currently, for most carmakers, ADAS vision systems are very close to being black boxes. Unfortunately, these systems face "the world" and its infinite set of use cases. That is why it is very difficult and expensive for us to specify and validate an ADAS system. This becomes critical for safety ADAS applications like AEB, and impossible to manage when we jump to Autonomous Vehicle applications. Even if we assumed that a sensing technology for the autonomous vehicle were available today, which is far from being the case, we do not yet have the methods, tools and technologies to validate it. The goal of this "image sensor and processing" laboratory is to develop methods, tools and technologies to bring the validation of the Environment Perception Function of the vision system back to a finite set of use cases that can be validated, as much as possible, on "indoor" test benches. With this aim, we are currently focusing on the design and evaluation of HDR image sensors. The question is: "What features must the HDR image sensor's OECF have to make it possible to fully validate each vehicle-integrated camera sensor on an industrial bench?"
The past and current challenge is all about creating a tool for thorough validation of the Environment Perception Function of the machine vision camera system. The necessary precondition is to split the camera system into two blocks: the camera sensor block and the image processing block. The first step of the challenge is to find a way to bring to the machine vision market HDR camera sensors that have a fixed and known OECF, fully independent of the luminous road conditions. Technologies that enable this kind of fixed OECF are available, but it is very challenging, for many reasons, to ensure these technologies are adopted by our Tier 1s and camera system makers.
Q: You have also worked on driver drowsiness detection, and one of the panels at AutoSens will explore how sensing can change the future of the in-cabin experience. What do you see as the future here?
A: I'm probably not up to date on this subject anymore. What struck me most about driver drowsiness was the incredible variability of human behaviour, whether in driving parameters (steering angle, vehicle trajectory…) or physiological parameters (EEG, eyelid blinks…). So, if an Autonomous Vehicle application, at an intermediate stage of autonomy, required knowledge of the driver's level of alertness, my best guess would be the use of some kind of connected bracelet monitoring physiological parameters, with each driver having to tune this monitoring bracelet to his or her own values and thresholds.
Q: What are you most looking forward to about speaking in-person at AutoSens in Brussels?
A: Bringing the iSNR criterion to the attention of a greater number of people, and getting their points of view on the subject in return. And of course, meeting again people I have not seen for a while because of the pandemic, and meeting new ones.