Explainable AI (XAI) for VLM/VLA Decision-Making in Edge Cases
- Institute
- Professorship of Autonomous Vehicle Systems
- Type
- Semester Thesis, Master's Thesis
- Approach
- experimental, theoretical, constructive
- Beschreibung
Join us in developing cutting-edge Explainable AI (XAI) techniques to demystify how Vision-Language Models (VLMs) and Vision-Language-Action models (VLAs) make decisions in autonomous vehicles, especially in complex, safety-critical "edge-case" scenarios!
Are you driven to make advanced AI models transparent and trustworthy? Do you want to work on bridging the gap between intricate neural networks and human understanding in a high-stakes environment like autonomous driving? This project offers a unique opportunity to enhance the safety, auditability, and regulatory compliance of self-driving systems.
The "black-box" nature of many large AI models, particularly in unpredictable edge cases, poses a significant challenge for autonomous vehicles. It's not enough to know what the vehicle did; we need to understand why it did it, especially when decisions deviate from human intuition, or in situations leading to near-misses or incidents. This research is fundamentally about building trust in AI outcomes where human lives are at stake. Our goal is to create XAI systems that can not only articulate the model's internal state but also contextualize its actions within traffic rules, social norms, and human expectations, potentially leveraging the reasoning capabilities of LLMs.
You'll investigate methods for generating natural language explanations or producing interpretable visual cues (like attention maps, saliency maps) that illuminate the VLM/VLA's decision-making process in response to unusual or hazardous driving situations. This could involve developing "post-hoc" explanations for detailed accident analysis or "local interpretability" techniques for real-time understanding of specific decisions. A key aspect of your research will be making these explanations actionable for human operators, engineers, and even regulatory bodies, enabling effective debugging, auditing, and continuous improvement of autonomous systems.
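To make the saliency-map idea above concrete, here is a minimal gradient-based saliency sketch in PyTorch. The model is a toy CNN stand-in (an assumption for illustration, not one of the chair's VLM/VLA models); the same gradient-of-score-with-respect-to-pixels pattern applies to a real vision backbone.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vision model standing in for a VLM/VLA backbone (assumption):
# an RGB image is mapped to scores over 4 discrete driving actions.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 4),
)

def saliency_map(model, image, action_idx):
    """Gradient of the chosen action's score w.r.t. the input pixels."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, action_idx]
    score.backward()
    # Taking the max over channels yields one heat value per pixel.
    return image.grad.abs().max(dim=0).values

img = torch.rand(3, 32, 32)
sal = saliency_map(model, img, action_idx=2)
print(sal.shape)  # torch.Size([32, 32])
```

Pixels with large values in `sal` are those to which the chosen action's score is most sensitive; overlaying this map on the input frame is the simplest form of the visual cues described above.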
Example Thesis Topics
- Post-Hoc XAI for VLM/VLA Incident Analysis: Develop a framework to generate natural language explanations and visual evidence for VLM/VLA decisions immediately following near-misses or accidents, facilitating root-cause analysis and system debugging.
- Real-Time Local Interpretability for Edge Cases: Implement and evaluate techniques (e.g., LIME, SHAP adapted for VLMs/VLAs, or gradient-based methods) that provide real-time, localized explanations for an autonomous vehicle's specific actions in ambiguous or hazardous driving situations.
- Integrating LLM Reasoning for Contextualized XAI: Research how to leverage LLMs to enrich VLM/VLA explanations by incorporating broader contextual knowledge, traffic laws, and social norms, making them more human-understandable and actionable.
- Human-Centric Evaluation of XAI for Autonomous Driving: Design and conduct user studies to evaluate the effectiveness and usefulness of different XAI modalities (e.g., natural language, visual overlays, counterfactuals) for various stakeholders (engineers, drivers, regulators) in understanding AV behavior.
- Counterfactual Explanations for VLM/VLA Decision-Making: Explore methods to generate counterfactual explanations for AV decisions, illustrating what the VLM/VLA would have done if specific input conditions had been slightly different, aiding in understanding model sensitivities.
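To illustrate the counterfactual topic above: one common formulation searches for the smallest input perturbation that pushes the model toward a different action. The sketch below is a toy instance (a linear "policy" over 4 abstract scene features is an assumption standing in for a real VLA action decoder), optimizing the perturbation by gradient descent with a size penalty.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy policy standing in for a VLA action decoder (assumption):
# 4 abstract scene features -> scores over 3 discrete driving actions.
policy = nn.Linear(4, 3)

def counterfactual_delta(policy, features, target_action,
                         lr=0.5, steps=200, reg=0.1):
    """Search for a small input change that pushes the policy
    toward target_action."""
    delta = torch.zeros_like(features, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        logits = policy(features + delta)
        # Trade off reaching the target action against keeping the edit small.
        loss = F.cross_entropy(
            logits.unsqueeze(0), torch.tensor([target_action])
        ) + reg * delta.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()

x = torch.rand(4)
original_action = policy(x).argmax().item()
target_action = (original_action + 1) % 3
delta = counterfactual_delta(policy, x, target_action)
print("edit magnitude:", delta.norm().item())
```

The resulting `delta` answers "what would have had to be different for the model to choose another action?", which is the question a counterfactual explanation poses for an AV decision.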
Technologies Used
Python, PyTorch/TensorFlow, Autonomous Driving, Vision-Language Models (VLMs), Vision-Language-Action Models (VLAs), Explainable AI (XAI) frameworks (e.g., Captum, LIME, SHAP), Natural Language Processing (NLP), Computer Vision, Deep Learning, Attention Mechanisms, Saliency Mapping, Counterfactual Explanations, Simulation (e.g., CARLA), Driving Datasets (e.g., Waymo Open Dataset, nuScenes), Data Visualization Libraries.
Requirements
We're looking for students with a strong background in deep learning and a keen interest in making AI systems transparent and trustworthy.
- Solid understanding of deep learning frameworks (PyTorch/TensorFlow).
- Experience with computer vision and/or natural language processing (NLP).
- Familiarity with Explainable AI (XAI) concepts and methods.
- Proficiency in Python.
- Motivation to work on high-impact, safety-critical applications.
If you're ready to make a tangible impact on the future of autonomous vehicles, send us your application.
Please include:
- A short motivation letter highlighting your interest in Explainable AI, VLMs/VLAs, and autonomous driving.
- Your CV.
- A recent transcript of records.
- (Optional) Any project work or code samples demonstrating your experience in related fields.
- Possible Start
- immediately
- Contact
Roberto Brusnicki
roberto.brusnicki@tum.de