Context-Aware Motion Prediction with VLMs for Autonomous Driving
- Institute
- Lehrstuhl für Fahrzeugtechnik
- Type
- Semester thesis / Master's thesis
- Content
- experimental / theoretical
- Description
Motion prediction of surrounding objects is a key requirement for autonomous driving. Current state-of-the-art algorithms typically rely on past trajectories and map data to forecast the future movements of vehicles, cyclists, and pedestrians. While this setup provides strong performance, it neglects critical contextual cues from the scene and the objects themselves, such as turn signals, brake lights, or dynamic interactions, which human drivers naturally incorporate into their decision-making. As a result, prediction models can fail in complex urban scenarios where these cues matter.
In this thesis, we aim to improve motion prediction performance by introducing context-awareness to the model. Specifically, vision-language models (VLMs) shall be used to extract context attributes from image data, which are then fed as additional input to the motion prediction algorithm. The idea is depicted in Figure 1, and a minimal illustrative sketch of the pipeline is given below.
Possibility of a publication in case of excellent work.
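To make the intended pipeline concrete, the sketch below shows how VLM-derived context attributes (e.g. turn-signal and brake-light states) could be encoded and passed alongside past trajectories to a motion predictor. This is a minimal sketch under assumed interfaces: ContextAttributes, query_vlm, encode_context, and predict_trajectory are hypothetical names, the VLM call returns fixed dummy values, and the predictor is a simple constant-velocity rollout standing in for a learned model.

```python
"""Illustrative sketch of a context-aware prediction pipeline.
All names are placeholders, not part of any existing codebase."""
from dataclasses import dataclass
import numpy as np


@dataclass
class ContextAttributes:
    """Scene/object cues a VLM could extract from a camera image."""
    turn_signal: str      # "none" | "left" | "right"
    brake_lights: bool
    description: str      # free-form scene summary


def query_vlm(image: np.ndarray, prompt: str) -> ContextAttributes:
    """Placeholder for a VLM call (e.g. a VQA or captioning model).
    Returns dummy values to keep the sketch self-contained."""
    return ContextAttributes(
        turn_signal="left",
        brake_lights=True,
        description="vehicle slowing down before an intersection",
    )


def encode_context(attrs: ContextAttributes) -> np.ndarray:
    """Map categorical VLM outputs to a numeric feature vector that can be
    concatenated with the usual trajectory/map features."""
    signal = {"none": 0.0, "left": 1.0, "right": 2.0}[attrs.turn_signal]
    return np.array([signal, float(attrs.brake_lights)])


def predict_trajectory(past_xy: np.ndarray, context: np.ndarray) -> np.ndarray:
    """Stand-in for the learned motion predictor: a constant-velocity
    rollout that ignores the context vector, shown only for the interface."""
    velocity = past_xy[-1] - past_xy[-2]
    horizon = 30  # number of predicted steps
    return past_xy[-1] + np.arange(1, horizon + 1)[:, None] * velocity


if __name__ == "__main__":
    past = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])   # past positions
    image = np.zeros((224, 224, 3), dtype=np.uint8)          # dummy camera crop
    attrs = query_vlm(image, "Describe the lead vehicle's indicators.")
    future = predict_trajectory(past, encode_context(attrs))
    print(attrs, future.shape)
```

In the actual thesis, query_vlm would wrap a real vision-language model and predict_trajectory a learned prediction network that consumes both the trajectory/map features and the encoded context vector.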
- Requirements
- Very good programming skills in Python.
- High personal motivation and an independent working style.
- Very good language proficiency in German, English, or French.
- Software
- Python
- Tags
- FTM Studienarbeit, FTM AV, FTM AV Perception, FTM Stratil, FTM Informatik, FTM IDP
- Possible start
- immediately
- Contact
Loïc Stratil, M.Sc.
Room: MW 3508
Phone: +49.89.289.15898
loic.stratil@tum.de