Fusion of Data from Lidar and Camera in Self Driving Cars

Sensor data fusion is one of the key solutions to the perception problem in self-driving cars. The main aim is to enhance the perception of the system without losing real-time performance; it is therefore a trade-off problem, and it is often observed that models with strong environment perception cannot run in real time.
In this paper we discuss how to address this problem using a 3D detector model (Complex-YOLOv3) and a 2D detector model (YOLOv3), and then applying an image-based fusion method that combines Lidar and camera information with a fast and efficient late fusion technique, discussed in detail in this paper.
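The fusion rule itself is detailed in the paper; as an illustration of the general idea behind image-based late fusion, the sketch below projects 3D Lidar detections into the image plane and matches them with 2D camera detections by IoU, averaging the confidences of matched pairs. All names (project_box_to_image, P, late_fuse, etc.) are hypothetical and not taken from the paper's code.

# Hedged sketch of image-based late fusion between a 3D and a 2D detector.
import numpy as np

def box_iou(a, b):
    # IoU of two axis-aligned 2D boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def project_box_to_image(corners_3d, P):
    # Project 8 box corners (3x8, already expressed in the camera frame)
    # through the 3x4 projection matrix P; return the enclosing 2D box.
    pts = P @ np.vstack([corners_3d, np.ones((1, 8))])
    pts = pts[:2] / pts[2]
    return pts[0].min(), pts[1].min(), pts[0].max(), pts[1].max()

def late_fuse(dets_3d, dets_2d, P, iou_thr=0.5):
    # Keep a 3D detection if a 2D detection of the same class overlaps its
    # projection; average the confidences of the matched pair.
    fused = []
    for corners_3d, cls_3d, conf_3d in dets_3d:
        proj = project_box_to_image(corners_3d, P)
        for box_2d, cls_2d, conf_2d in dets_2d:
            if cls_3d == cls_2d and box_iou(proj, box_2d) >= iou_thr:
                fused.append((corners_3d, cls_3d, 0.5 * (conf_3d + conf_2d)))
                break
    return fused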
We then use the mean average precision (mAP) metric to evaluate the object detection models and to compare the proposed fusion approach against them.
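For reference, the sketch below shows how per-class average precision is commonly computed from ranked detections using all-point interpolation; mAP is then the mean over classes. This is an illustrative, standard formulation and not necessarily the exact KITTI evaluation protocol used in the paper; the function name and arguments are hypothetical.

# Hedged sketch of per-class average precision (AP).
import numpy as np

def average_precision(scores, is_true_positive, num_ground_truth):
    # scores: confidences of all detections of one class.
    # is_true_positive: 1/0 flag per detection (matched to an unused GT box).
    # num_ground_truth: number of annotated boxes of that class.
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(num_ground_truth, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # Make precision monotonically non-increasing, then integrate over recall.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, precision):
        ap += (r - prev_r) * p
        prev_r = r
    return ap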
Finally, we present results on the KITTI dataset as well as on our real hardware setup, which show that the proposed approach works efficiently in real time.

Authors:


Mohamed Ahmed (Robotics Institute, Innopolis University, o.ahmed@innopolis.university)

Alexandr Klimchik (Robotics Institute, Innopolis University, A.Klimchik@innopolis.ru)

Riby Abraham Boby (Mechanical Engineering, IIT Madras, ribyab@gmail.com)

In Proceedings of the Third International Conference Nonlinearity, Information and Robotics 2022, August 24, 2022