Multimodal 3D object detection for autonomous driving

Institute
Lehrstuhl für Fahrzeugtechnik
Type
Semesterarbeit
Content
experimental
Description

In the context of autonomous driving, perception is a crucial task that provides the necessary input for successful navigation. One of its key components is the accurate localization of objects in 3D, which allows downstream algorithms to plan the desired trajectory and control the vehicle. In a real-world application, however, these models cannot consume all the available computing power and must run in real time. For this task, vehicles can draw on various sensors such as lidar, cameras, and radar.

The goal of this project is to implement a performant object detector for the EDGAR vehicle, subject to the constraints of a real-world application. If you are interested in applying your knowledge of autonomous driving and deep learning to a real vehicle, this is your chance to work with cutting-edge technology and gain experience in an applied automotive project.

The first step of the project is a literature review of single- and multi-sensor object detection algorithms. The second step is the implementation of several 3D detector modules in a ROS2 environment. The third step is a comparison of the implemented modules in terms of accuracy and execution time on established datasets, followed finally by the deployment on the EDGAR vehicle.

Work packages:

  • Literature research on object detection for autonomous driving
  • ROS2 implementation of selected multi-sensor object detection architectures
  • Quantitative analysis of the implemented approach in terms of accuracy and execution times
  • Implementation of the architectures in a ROS2 node compatible with the EDGAR software stack
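For the quantitative analysis work package, per-frame execution time can be measured and compared against a real-time budget. The sketch below is a minimal, hypothetical example: `dummy_detector` stands in for an actual detector's forward pass, and the `budget_ms` value is an assumed placeholder, not a requirement of the EDGAR stack.

```python
import time
import statistics

# Hypothetical stand-in for a 3D detector's forward pass; in the real
# project this would be a PyTorch/TensorFlow model consuming sensor data.
def dummy_detector(frame):
    return [x * 0.5 for x in frame]

def benchmark(detector, frames, budget_ms=100.0):
    """Measure per-frame latency and compare it to a real-time budget.

    budget_ms is an assumed example value; the actual timing constraint
    would come from the vehicle software stack.
    """
    latencies = []
    for frame in frames:
        start = time.perf_counter()
        detector(frame)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return {
        "mean_ms": statistics.mean(latencies),
        "max_ms": max(latencies),
        "within_budget": max(latencies) <= budget_ms,
    }

# Example run on synthetic frames.
frames = [list(range(1000)) for _ in range(50)]
report = benchmark(dummy_detector, frames)
print(report)
```

In a ROS2 deployment, the same measurement would typically wrap the node's subscription callback rather than a standalone loop.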
Prerequisites
  • Programming experience in Python or C++
  • Experience with PyTorch or TensorFlow
  • Basic knowledge of deep learning and computer vision
  • Experience with ROS2/ROS1
Software packages
Python, PyTorch, TensorFlow, ROS2
Tags
FTM Studienarbeit, FTM AV, FTM AV Perception, FTM Rivera, FTM Informatik
Possible start
immediately
Contact
Esteban Rivera, M.Sc.
Room: MW 3508
esteban.rivera@tum.de
Announcement