Quantifying Sim2Sim and Sim2Real Gaps using High-Precision Motion Capture
- Institute
- Professur für autonome Fahrzeugsysteme (TUM-ED)
- Type
- Master's thesis
- Content
- experimental, theoretical, constructive
- Beschreibung
Background
Join our team to build the ultimate ground-truth evaluation framework for advanced robotics, closing the loop between theoretical AI and physical reality! Are you driven by data, metrics, and rigorous evaluation? Do you want to be the engineer who mathematically proves whether an AI model is actually doing what it claims to do? This project offers a unique opportunity to act as the central verification hub for our entire robotics pipeline, ensuring the safety and accuracy of our deployed systems.

As we train complex Reinforcement Learning policies and Vision-Language-Action (VLA) models, a critical question arises: How do we accurately evaluate their performance in the real world? Relying solely on the robot's internal sensors is insufficient, as those same sensors are used by the policy itself. We need an objective, external source of truth to measure exactly how far the robot's physical execution deviates from its simulated or intended trajectory.

You will utilize our high-precision Motion Capture (MoCap) system to track the exact kinematics of the Unitree robots during the execution of AI policies. You will develop an automated pipeline to mathematically compare real-world trajectories against the intended trajectories from simulators (MuJoCo/Isaac Sim) or the VLA's action outputs. Your work will provide the critical feedback loop needed to quantify the Sim-to-Real gap, directly informing how we improve our data collection, simulation, and model training.
Example Thesis Topics (subject to availability):
- Automated Pipeline for Sim-to-Real Gap Quantification: Develop a software framework that seamlessly aligns and compares MoCap data from physical robot executions with the corresponding trajectories generated in Isaac Sim/MuJoCo.
- Evaluating VLA Action Precision in Dexterous Manipulation: Use external MoCap to measure the sub-millimeter precision and jitter of a Unitree Z1 arm controlled entirely by a deployed VLA model during complex grasping tasks.
- Identifying Unmodeled Dynamics through Trajectory Divergence: Analyze the specific points where real-world MoCap data diverges from simulated data to mathematically identify and isolate unmodeled physical parameters (e.g., unexpected joint friction or center-of-mass errors).
- Runtime Assurance via External MoCap Monitoring: Investigate the use of real-time MoCap data as an external safety monitor to immediately trigger emergency stops if an RL or VLA policy exhibits unsafe or out-of-distribution behavior in the real world.
Technologies Used Python, ROS 2, Motion Capture Systems (e.g., OptiTrack, Vicon), Data Analysis, Kinematics, 3D Geometry, Simulation (Isaac Sim, MuJoCo), Evaluation Metrics.
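To give a flavor of the kind of comparison such a pipeline performs, here is a minimal sketch of a trajectory-gap metric: the real (MoCap) trajectory is resampled onto the simulator's timestamps and the per-sample Euclidean error is summarized. All names, the function interface, and the choice of metrics are illustrative assumptions, not the project's actual framework.

```python
import numpy as np

def trajectory_gap(t_sim, p_sim, t_real, p_real):
    """Quantify the gap between a simulated and a real (MoCap) trajectory.

    t_sim : (N,) timestamps of the simulated trajectory, seconds
    p_sim : (N, 3) simulated end-effector/body positions, meters
    t_real: (M,) MoCap timestamps, seconds
    p_real: (M, 3) MoCap positions, meters (assumed already expressed
            in the same frame as the simulation)
    """
    # Linearly interpolate each Cartesian axis of the MoCap trajectory
    # at the simulator's timestamps so the two are sampled identically.
    p_real_resampled = np.stack(
        [np.interp(t_sim, t_real, p_real[:, k]) for k in range(3)], axis=1
    )
    # Per-sample Euclidean deviation between intended and executed motion.
    err = np.linalg.norm(p_sim - p_real_resampled, axis=1)
    return {
        "rmse_m": float(np.sqrt(np.mean(err ** 2))),
        "max_m": float(err.max()),
        "mean_m": float(err.mean()),
    }
```

In a real pipeline this would be preceded by spatial registration of the MoCap frame to the simulation frame (e.g., a rigid-body alignment) and by time synchronization, both of which are deliberately omitted here.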
Your Benefits: Join a High-Performance Robotics Team
- Impactful Research: Work on a project where your code doesn't live in a silo; it is a critical gear in an end-to-end pipeline. Your results will directly enable robots to perform complex tasks.
- Top-Tier Hardware Stack: Gain exclusive hands-on experience with NVIDIA DGX (training), Jetson Thor (inference), and Unitree Humanoids/Quadrupeds - a stack very similar to that used by industry leaders like Tesla, Figure AI, and Physical Intelligence.
- Scientific Publication: We aim for high-impact results. If your work meets the quality standards, we will co-author and submit a paper to top-tier robotics/AI conferences (e.g., ICRA, IROS, CoRL, or CVPR).
- Professional Career Launchpad: This thesis is designed to mirror the workflow of elite AI labs. We provide dedicated mentorship and professional support to help you land roles at top-tier robotics startups or Big Tech AI labs.
- Dynamic Lab Culture: You will be part of a "squad" of motivated Master’s students working in parallel, fostering a collaborative, fast-paced, and supportive environment.
Requirements
We are looking for students who see their thesis not just as a degree requirement, but as a career-defining project.
Must-Have:
- English Proficiency: High level of written and spoken English (the language of our research and documentation).
- Proactive Mindset: You embrace a "fail fast, learn fast" approach and are comfortable solving hands-on hardware/software integration challenges.
- Independence: Ability to own a technical module and drive it forward while communicating effectively with the rest of the team.
- Growth Path: A passion for Robotics/AI and an eagerness to learn new technologies.
Nice-to-Have (The "Plus"):
- Technical Foundation: Proficiency in Python and/or C++.
- Domain Experience: Prior exposure to PyTorch, ROS 2, or physics simulators (Isaac Sim/MuJoCo).
- Hardware Skills: Experience working with robotic hardware, sensors, or VR systems.
Ready to build the future of Embodied AI? Send your CV, a recent transcript, and a brief email explaining why you are the right fit for this specific "squad" and outlining your career goals.
- Possible start
- immediately
- Contact
Roberto Brusnicki
roberto.brusnicki@tum.de