Bias Mitigation and Fairness in VLM/VLA Perception and Decision-Making

Institute
Professorship of Autonomous Vehicle Systems
Type
Semester Thesis / Master's Thesis
Content
experimental / theoretical / constructive
Description

Join us in pioneering research to identify and mitigate inherent biases in Vision-Language Model (VLM) and Vision-Language-Action (VLA) perception and decision-making for autonomous vehicles, ensuring fair and equitable performance across diverse human populations and varying environmental conditions!

Are you passionate about building ethical, inclusive, and socially responsible AI systems? Do you want to address critical challenges where AI bias can have real-world consequences, particularly in safety-critical applications like autonomous driving? This project offers a unique opportunity to directly contribute to the trustworthiness and public acceptance of future transportation.

AI systems, including large models, are known to inherit and amplify biases from their training data, leading to unfair or discriminatory outcomes. This is particularly critical for autonomous vehicles, where differential performance based on demographic factors (e.g., varying skin tones, age groups, disabilities) or environmental conditions (e.g., adverse weather) is unacceptable. For instance, studies have shown that facial recognition accuracy can decrease for darker-skinned subjects, and concerns exist around automatic gender recognition algorithms. In autonomous driving, a system that performs worse for certain groups or under specific, yet common, conditions poses a significant safety and ethical risk. Our research will move beyond simple "bias mitigation" to focus on proactive design for fairness and inherent robustness to demographic and environmental variations, with the goal of guaranteeing minimum performance thresholds for all relevant subgroups.

You will focus on a specific type of bias, such as demographic bias in pedestrian detection (e.g., performance disparities for different age groups or ethnicities) or environmental bias (e.g., differential performance in varying lighting conditions, rain, or snow). Your project will involve systematically analyzing existing datasets to quantify biases, developing advanced data augmentation techniques (potentially leveraging generative AI) to create more balanced and representative datasets, and proposing novel fairness-aware training algorithms for VLMs and VLAs that explicitly aim to reduce discriminatory outcomes. You will also contribute to developing new metrics and evaluation protocols to quantify and continuously monitor bias in the real-world performance of autonomous vehicles.
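As a first step toward quantifying such disparities, per-subgroup evaluation can be as simple as computing recall separately for each group and reporting the worst-case gap. A minimal sketch in plain Python (the subgroup tags and toy data below are illustrative, not drawn from any real dataset):

```python
from collections import defaultdict

def subgroup_recall(labels, predictions, groups):
    """Recall (detection rate on true positives), computed per subgroup.

    labels/predictions: 1 = pedestrian present / detected, 0 = absent / missed.
    groups: a subgroup tag per sample (e.g. a hypothetical age-group annotation).
    """
    hits, positives = defaultdict(int), defaultdict(int)
    for y, p, g in zip(labels, predictions, groups):
        if y == 1:
            positives[g] += 1
            hits[g] += p
    return {g: hits[g] / positives[g] for g in positives}

def recall_gap(labels, predictions, groups):
    """Worst-case disparity: difference between best and worst subgroup recall."""
    r = subgroup_recall(labels, predictions, groups)
    return max(r.values()) - min(r.values())

# Toy example: the detector misses children more often than adults.
labels = [1, 1, 1, 1, 1, 1]
preds  = [1, 1, 0, 1, 0, 0]
groups = ["adult", "adult", "adult", "child", "child", "child"]
print(subgroup_recall(labels, preds, groups))  # recall per group (2/3 vs. 1/3)
print(recall_gap(labels, preds, groups))       # worst-case gap, here 1/3
```

In practice this would be applied per detection class over a held-out benchmark; the same pattern extends to environmental subgroups (weather, lighting) by swapping the group tags.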


Example Thesis Topics

  • Demographic Bias Detection and Mitigation in Pedestrian VLMs: Develop methods to identify performance disparities in VLM-based pedestrian detection (e.g., accuracy, recall) across different demographic groups (e.g., age, ethnicity, clothing styles) and implement fairness-aware loss functions or adversarial debiasing techniques during VLM training.

  • Generative Data Augmentation for Equitable Environmental Robustness: Utilize generative AI (e.g., Diffusion Models, GANs) to synthesize diverse and representative driving scenes under challenging environmental conditions (e.g., heavy snow, dense fog, low-light) that are under-represented in existing datasets, specifically targeting fairness across conditions.

  • Fairness-Constrained Planning for VLA Decision-Making: Integrate fairness constraints directly into the learning objective of VLA planning modules, ensuring that the autonomous vehicle's decisions (e.g., yielding, path planning) exhibit equitable behavior towards all road users, regardless of their visual appearance or predicted demographics.

  • Novel Metrics and Benchmarking for Bias in Autonomous Driving: Propose and evaluate new quantifiable metrics and benchmarking protocols to systematically measure and compare fairness across VLM/VLA perception and decision-making for various demographic and environmental subgroups.

  • Cross-Domain Bias Transfer and Adaptation for AVs: Investigate how biases present in large pre-trained VLMs (trained on internet-scale data) manifest in autonomous driving tasks, and develop adaptation strategies (e.g., domain-specific fine-tuning with fairness regularization) to mitigate these transferred biases.
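Several of these topics would use re-weighting as a baseline mitigation technique before moving to fairness-aware losses or adversarial debiasing. A common starting point is inverse-frequency sample weighting, so that under-represented subgroups or conditions contribute equally to the training loss in expectation. A minimal sketch (the condition labels are illustrative):

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Per-sample weights that equalize each subgroup's total contribution.

    With n samples and k subgroups, each subgroup's weights sum to n / k,
    so a weighted training loss no longer favors the majority group.
    """
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Toy example: 'clear' weather dominates the dataset 3:1 over 'snow'.
conditions = ["clear", "clear", "clear", "snow"]
weights = inverse_frequency_weights(conditions)
print(weights)  # each 'clear' sample gets 2/3, the lone 'snow' sample gets 2.0
```

These weights plug directly into standard per-sample loss reductions (e.g., the `weight` argument of PyTorch's BCE losses).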


Technologies Used

Python, PyTorch/TensorFlow, Autonomous Driving, Vision-Language Models (VLMs), Vision-Language-Action Models (VLAs), Deep Learning, Bias Mitigation Techniques (e.g., adversarial debiasing, re-weighting, fairness-aware regularization), Fairness Metrics, Data Augmentation, Generative AI (optional for data augmentation), Ethical AI Frameworks, Computer Vision, Simulation (e.g., CARLA, Waymo Open Dataset, nuScenes), Demographic Attribute Estimation (for analysis).


Prerequisites

We're looking for students with a strong background in deep learning and a passion for building fair, equitable, and socially responsible AI systems.

  • Solid understanding of deep learning frameworks (PyTorch/TensorFlow).

  • Experience with computer vision and/or natural language processing (NLP).

  • Familiarity with bias and fairness in AI, causal inference, or robustness metrics.

  • Proficiency in Python.

  • Motivation to work on ethical and societal challenges in AI.


If you're ready to make a tangible impact on the future of autonomous vehicles, send us your application.

Please include:

  • A short motivation letter highlighting your interest in ethical AI, bias mitigation, and autonomous driving.

  • Your CV.

  • A recent transcript of records.

  • (Optional) Any relevant project work or code samples demonstrating your experience in relevant fields.

Possible Start
immediately
Contact
Roberto Brusnicki
roberto.brusnicki@tum.de