Diffusion-based safety-critical scenario generation

Institute
Professur für autonome Fahrzeugsysteme (TUM-ED)
Type
Semester thesis / Master's thesis
Content

Description

Motivation

Safety-critical scenario generation is a cornerstone of validating autonomous driving and ADAS systems. Real-world datasets such as nuScenes capture valuable long-tail events, but they are inherently limited in coverage, diversity, and controllability. At the same time, classical simulation scenarios often lack photorealism, limiting their usefulness for perception-centric evaluation.

Recent advances in diffusion models enable controllable, high-fidelity generation and editing of complex scenes. By conditioning diffusion models on semantic, geometric, or risk-related constraints, it becomes feasible to systematically create safety-critical scenarios that are both realistic and diverse. In particular, diffusion models allow:

  • Editing real recorded data (e.g., introducing hazardous interactions into nuScenes scenes)

  • Translating synthetic or abstract scenarios into photorealistic sensor data suitable for perception testing

This thesis explores diffusion-based methods to generate and transform safety-critical driving scenarios, bridging the gap between simulation, real-world data, and perception-level validation.
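
As a rough, purely illustrative sketch of what conditioning a diffusion model on risk-related constraints can look like in practice, the snippet below applies classifier-free guidance with a risk-conditioning vector; all names (RiskConditionedDenoiser, risk_vec, guidance_scale) are hypothetical placeholders and not part of an existing codebase or of this topic's specification.

import torch
import torch.nn as nn

# Minimal sketch of classifier-free guidance with a risk-related condition.
# All module and variable names below are hypothetical placeholders.

class RiskConditionedDenoiser(nn.Module):
    """Toy denoiser that predicts noise from a noisy scene latent, a
    diffusion timestep, and a risk-related conditioning vector."""

    def __init__(self, latent_dim=64, cond_dim=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1 + cond_dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, x_t, t, cond):
        # Concatenate noisy latent, normalized timestep, and condition.
        t = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x_t, t, cond], dim=-1))

def guided_noise_prediction(model, x_t, t, risk_vec, guidance_scale=3.0):
    """Classifier-free guidance: blend conditional and unconditional noise
    predictions; a larger guidance_scale pushes samples more strongly toward
    the risk-related condition (e.g., an encoding of desired proximity)."""
    eps_uncond = model(x_t, t, torch.zeros_like(risk_vec))  # null condition
    eps_cond = model(x_t, t, risk_vec)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

if __name__ == "__main__":
    model = RiskConditionedDenoiser()
    x_t = torch.randn(4, 64)          # noisy scene latents
    t = torch.randint(0, 1000, (4,))  # diffusion timesteps
    risk_vec = torch.randn(4, 8)      # hypothetical risk encoding
    print(guided_noise_prediction(model, x_t, t, risk_vec).shape)  # torch.Size([4, 64])

In a full pipeline this guided prediction would sit inside a standard sampling loop (e.g., DDPM or DDIM), with the conditioning vector derived from the semantic, geometric, or risk-related constraints mentioned above.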


Goal

Develop an end-to-end pipeline to generate safety-critical driving scenarios using diffusion models, focusing on both dataset editing and photorealistic synthesis.

Specifically, the thesis aims to:

  • Generate new safety-critical scenarios by editing existing real-world datasets (e.g., nuScenes)

  • Transform abstract or synthetic scenarios into photorealistic sensor representations

  • Enable controllable generation based on risk-related constraints (e.g., proximity, collision likelihood, agent interactions); see the risk-scoring sketch after this list
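
The sketch below illustrates, under simple assumptions, how a proximity-based risk score could be computed from two agents' planned 2D trajectories and then used to select or condition generated scenarios; the function name min_gap_and_ttc, the timestep, and the example trajectories are hypothetical and serve only as illustration.

import numpy as np

# Illustrative only: quantify proximity-related risk between two agents
# given their planned 2D trajectories. Names and values are hypothetical.

def min_gap_and_ttc(traj_ego, traj_other, dt=0.1):
    """Return the minimum center-to-center distance over time and a crude
    time-to-collision estimate based on the closing speed at each step.

    traj_ego, traj_other: arrays of shape (T, 2) with x/y positions [m].
    dt: time between consecutive positions [s].
    """
    dist = np.linalg.norm(traj_ego - traj_other, axis=1)  # distance per step
    min_gap = float(dist.min())

    closing = -(np.diff(dist) / dt)  # closing speed per step [m/s]
    ttc_candidates = [d / v for d, v in zip(dist[:-1], closing) if v > 1e-3]
    ttc = float(min(ttc_candidates)) if ttc_candidates else float("inf")
    return min_gap, ttc

if __name__ == "__main__":
    t = np.arange(0.0, 5.0, 0.1)
    ego = np.stack([10.0 * t, np.zeros_like(t)], axis=1)          # ego moving along x
    other = np.stack([60.0 - 2.0 * t, np.zeros_like(t)], axis=1)  # oncoming agent
    gap, ttc = min_gap_and_ttc(ego, other)
    print(f"min gap: {gap:.1f} m, min TTC: {ttc:.1f} s")

Such scalar scores (minimum gap, time-to-collision) are one possible way to express the proximity and collision-likelihood constraints listed above, either as a filter on generated scenarios or, after embedding, as a conditioning signal for the diffusion model.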

 

Expected Deliverables

  • Diffusion-based scenario generation / editing pipeline

  • Safety-critical scenario dataset (edited real-world + synthesized scenarios)

  • Training and inference scripts with clear documentation

  • Evaluation report on realism, diversity, and safety relevance


Required Skills

  • Excellent English or German proficiency

  • Strong Python skills; familiarity with PyTorch and basic computer vision/ML

  • Interest in generative models (diffusion models, conditioning, evaluation)

  • Familiarity with autonomous driving datasets (nuScenes) and simulators (CARLA) is helpful but not mandatory

 

Start

Work can begin immediately. If you are interested in this topic, please first have a look at our recent survey paper: https://ieeexplore.ieee.org/document/11370877

Then send a brief cover letter explaining why you are fascinated by this subject, along with a current transcript of records and your CV to: yuan_avs.gao@tum.de

Tags
AVS Gao
Possible start
immediately
Contact
Yuan Gao
yuan_avs.gao@tum.de