
Video camera compression for neural network based perception in assisted and automated driving

One of the biggest challenges associated with sensing in assisted and automated driving is the amount of data produced by the environmental perception sensor suite, particularly at higher levels of autonomy that require numerous sensors and different sensor technologies. Sensor data need to be transmitted to the processing units with very low latency and without hindering the performance of perception algorithms, e.g. object detection, classification, segmentation, and prediction. However, the amount of data produced by a possible SAE J3016 Level 4 sensor suite can add up to 40 Gb/s, which cannot be supported by traditional vehicle communication networks. There is therefore a need for robust techniques to compress and reduce the data that each sensor transmits to the processing units. The most commonly used video compression standards have been optimised for human vision, but for assisted and automated driving functions the consumer of the data will likely be a perception algorithm based on well-established deep neural networks. This work demonstrates how lossy compression of video camera data can be used in combination with deep neural network based object detection.
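To make the idea concrete, the sketch below illustrates the general technique rather than the pipeline presented in this session: a camera frame is JPEG-compressed at a chosen quality, decoded, and the reconstruction is fed to a pretrained object detector, so the bandwidth saving and the effect on detections can be inspected. The detector, codec, quality setting, and input file name are all illustrative assumptions.

# A minimal sketch, assuming OpenCV for the lossy codec and a
# torchvision Faster R-CNN as an example "perception consumer".
# This is not the session's method; all choices below are assumptions.
import cv2
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained detector acting as the downstream consumer of the data.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = cv2.imread("camera_frame.png")  # hypothetical input frame

# Lossy compression step: encode to JPEG at quality 40, then decode.
ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 40])
decoded = cv2.imdecode(buf, cv2.IMREAD_COLOR)

# Bandwidth saving achieved by the lossy codec on this frame.
print(f"compression ratio: {frame.nbytes / buf.nbytes:.1f}x")

# Run detection on the reconstructed (compressed) frame.
rgb = cv2.cvtColor(decoded, cv2.COLOR_BGR2RGB)
with torch.no_grad():
    detections = model([to_tensor(rgb)])[0]
print(detections["boxes"].shape, detections["scores"][:5])

Sweeping the quality parameter and comparing detection scores against the uncompressed frame is one simple way to probe how much lossy compression a neural-network consumer can tolerate.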


Hear from:


Dr. Valentina Donzella
Associate Professor, Head of Intelligent Vehicles - Sensors Group
WMG, University of Warwick



