Automated camera dataset labelling for autonomous driving
- Institute
- Lehrstuhl für Fahrzeugtechnik
- Type
- Semester thesis / Master's thesis
- Content
- experimental
- Description
In the context of the EDGAR autonomous driving project, we record perception data from various sensors, such as cameras, LiDAR, and radar, in the Garching and Munich regions.
The data we acquire serves a crucial purpose in the later stages of autonomous driving development, including tracking, prediction, and planning. However, labelling this data can be quite challenging, especially when dealing with 3D bounding boxes in camera images, due to the lack of depth information in this domain.
Several approaches already exist that label 3D bounding boxes in the LiDAR domain using semi-supervised learning techniques. The goal of this project is to transfer the information from the LiDAR point clouds into the camera domain to obtain reliable labels for training camera detection models. For this, the LiDAR point clouds and their labels are projected into the camera image. The challenge to be tackled is dealing with real-world implementation issues, such as ego-motion compensation and the lack of synchronization between sensors.
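The core of the projection step described above can be sketched as a standard pinhole-camera projection. The following is a minimal sketch, not the project's actual module; the function name, the identity extrinsics, and the intrinsic values are illustrative assumptions:

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic transform (LiDAR frame -> camera frame).
    K: 3x3 camera intrinsic matrix.
    Returns pixel coordinates for the points in front of the camera,
    plus the boolean mask selecting those points.
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the image plane (z > 0).
    front = pts_cam[:, 2] > 0
    # Perspective projection with the pinhole model.
    uv = (K @ pts_cam[front].T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, front

# Hypothetical example: identity extrinsics, simple intrinsics.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 10.0]])  # one point 10 m ahead of the camera
uv, mask = project_lidar_to_image(pts, T, K)  # projects to the principal point (960, 540)
```

In practice, the extrinsics and intrinsics would come from the EDGAR sensor calibration, and the projected points or box corners would then be clipped to the image bounds.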
Work packages:
- Design a module to transfer LiDAR-detected bounding boxes into camera images
- Optimize the module for the Munich scenario and the EDGAR sensors
- Label an image-based 3D detection dataset
- Assess and evaluate label quality
- Prerequisites
- Programming experience in Python
- Experience with PyTorch/TensorFlow and Docker
- Knowledge of computer vision
- Experience with deep learning, CNNs, object detection, semantic segmentation
- Desired: Experience with OpenPCDet or OpenMMLab
- Software packages
- Python, PyTorch, TensorFlow, ROS 2
- Tags
- FTM Studienarbeit, FTM AV, FTM AV Perception, FTM Rivera, FTM Informatik
- Possible start
- Immediately
- Contact
Esteban Rivera, M.Sc.
Room: MW 3508
esteban.rivera@tum.de
- Announcement