Video camera compression for neural network based perception in assisted and automated driving

Event: AutoSens Brussels | Published: September 2022

Hear from:

Valentina Donzella
Professor, University of Warwick

One of the biggest challenges associated with sensing in assisted and automated driving is the amount of data produced by the environmental perception sensor suite, particularly at higher levels of autonomy, which require numerous sensors and different sensor technologies. Sensor data need to be transmitted to the processing units with very low latency and without hindering the performance of perception algorithms, e.g. object detection, classification, segmentation, and prediction. However, the data produced by a possible SAE J3016 Level 4 sensor suite can add up to 40 Gb/s, which cannot be supported by traditional vehicle communication networks. There is therefore a need for robust techniques to compress and reduce the data that each sensor transmits to the processing units. The most commonly used video compression standards have been optimised for human vision, but in assisted and automated driving functions the consumer of the data will likely be a perception algorithm based on well-established deep neural networks. This work demonstrates how lossy compression of video camera data can be used in combination with deep neural network based object detection.
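The pipeline the abstract describes can be illustrated with a short sketch: compress each camera frame with an off-the-shelf lossy codec (JPEG here), measure the achieved compression ratio, and hand the decoded frame to the detection network. This is a minimal illustration under stated assumptions, not the talk's experimental setup; the file name, quality levels, and detector call are placeholders.

```python
import cv2

# Minimal sketch: lossy compression of a camera frame before DNN-based
# detection. "camera_frame.png" and the quality levels are illustrative.
frame = cv2.imread("camera_frame.png")
if frame is None:
    raise FileNotFoundError("camera_frame.png")
raw_bits = frame.nbytes * 8  # size of one uncompressed frame

for quality in (90, 50, 10):  # JPEG quality factor: lower = more lossy
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    if not ok:
        continue
    ratio = raw_bits / (buf.size * 8)
    print(f"quality={quality:3d}  compression ratio ~{ratio:5.1f}:1")

    # After transmission over the vehicle network, the processing unit
    # decodes the (now lossy) frame; this, not the pristine sensor
    # output, is what the object detector actually consumes.
    decoded = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    # detections = detector(decoded)  # hypothetical pretrained detector
```

Sweeping the quality factor in this way is one simple means of probing how much lossiness a given detection network tolerates before its accuracy degrades.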
