
Redefining Radar – Camera Sensor Fusion: A Leap Towards Autonomous Driving Without LiDAR

Andras Palffy
Co-Founder, Perciv AI

Released on October 05, 2023

In an ever-evolving autonomous driving landscape, the need for efficient, reliable, and cost-effective sensor fusion strategies is paramount. At Perciv AI, we have been developing a novel approach to this challenge, leveraging the power of next-generation 4D radar and a monocular RGB camera to generate 'pseudo-LiDAR' 3D point clouds. Cameras and radars have been used for driver assistance for decades; our solution pushes the fusion of these sensors to the next level. The way we combine them will challenge the performance of LiDAR sensors at roughly 20% of the cost. This technique presents a potential path to a future where high-end LiDARs are needed only for research and evaluation, not in consumer vehicles. Our fusion paradigm is modular by design and can be trained without reliance on expensive manual annotations. The outcome is a point cloud that mirrors the density of a LiDAR-generated one while also seamlessly integrating semantic information from the camera and velocity data from the radar. The variety and depth of this data offer a wide scope of potential applications in autonomous driving: the point cloud can be used directly for multiple downstream tasks such as object detection, free road estimation, or SLAM.
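As a rough illustration of the kind of fusion the abstract describes, the sketch below projects sparse radar returns into a camera image and attaches a per-pixel semantic label to each point, yielding points that carry position, radial velocity, and semantics. This is a minimal assumption-laden simplification (the function name, a pinhole intrinsic matrix `K`, and a precomputed `semantic_map` are all hypothetical), not Perciv AI's actual pipeline, which additionally densifies the point cloud and trains without manual annotations.

```python
import numpy as np

def fuse_radar_camera(radar_points, radar_velocities, semantic_map, K):
    """Hypothetical sketch: attach camera semantics to radar points.

    radar_points:     (N, 3) 3D points in the camera frame
    radar_velocities: (N,)   radial velocities measured by the radar
    semantic_map:     (H, W) per-pixel class labels from the camera
    K:                (3, 3) pinhole camera intrinsic matrix
    Returns an (M, 5) array: x, y, z, radial velocity, semantic class.
    """
    # Project 3D points into pixel coordinates with the pinhole model.
    proj = (K @ radar_points.T).T
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)

    # Keep only points in front of the camera that land inside the image.
    h, w = semantic_map.shape
    valid = (radar_points[:, 2] > 0) & \
            (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Look up the semantic class at each projected pixel.
    labels = semantic_map[v[valid], u[valid]]
    return np.column_stack([radar_points[valid],
                            radar_velocities[valid],
                            labels])
```

Downstream tasks can then filter this fused cloud cheaply, e.g. selecting only points labeled as vehicles and thresholding their radial velocity to find moving traffic.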
