Real-Time 3D Reconstruction and Digital Twins for Real2Sim Transfer
- Institute
- Professorship of Autonomous Vehicle Systems (TUM-ED)
- Type
- Semester thesis
- Content
- experimental, theoretical, constructive
- Description
Background
Join our team to build the spatial intelligence layer of our next-generation robots by developing real-time 3D reconstruction pipelines and digital twin environments, pushing the boundaries of Real2Sim transfer! Are you fascinated by the intersection of Visual SLAM, physics simulation, and embodied robotics? Do you want to build systems that not only perceive the world but also continuously model it in real time, so that robots can train, adapt, and deploy faster?
This project offers a unique opportunity to work on a tightly integrated pipeline spanning simulation, hardware deployment, and robot system integration. We are moving toward a unified Real2Sim framework where digital twins, built from live sensor data, serve as the foundation for synthetic data generation, reinforcement learning, and real-world deployment. Your work will directly feed into the training and deployment stack of our Unitree G1 humanoid and B2 quadruped platforms.
Open Positions
Group 1: RL and Simulation
Position 1: Digital Twin and Synthetic Data Pipeline
The Goal: Build a Sim2Real bridge by creating a synchronized data generation pipeline that mirrors our physical lab in simulation.
Tasks:
- Develop a modular environment in Isaac Sim that replicates the layout of our physical lab
- Programmatically generate VSLAM datasets including RGB-D, IMU, and ground truth trajectories using Isaac Lab
- Implement sensor noise models to make simulated camera and IMU data match the real G1 and B2 hardware
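To give a feel for the sensor-noise task, here is a minimal sketch of the standard white-noise-plus-bias-random-walk IMU model that most VSLAM front-ends assume. The function name and all noise densities are illustrative placeholders, not calibrated G1/B2 parameters:

```python
import numpy as np

def add_imu_noise(gyro, accel, dt, rng=None,
                  gyro_noise=1.7e-4,   # white-noise density [rad/s/sqrt(Hz)], placeholder
                  accel_noise=2.0e-3,  # white-noise density [m/s^2/sqrt(Hz)], placeholder
                  gyro_walk=1.9e-5,    # bias random-walk density, placeholder
                  accel_walk=3.0e-3):  # bias random-walk density, placeholder
    """Corrupt clean simulated IMU samples (N x 3 arrays) with white noise
    plus a slowly drifting additive bias."""
    rng = rng or np.random.default_rng(0)
    n = len(gyro)
    sdt = np.sqrt(dt)
    # Bias drifts as a random walk; white noise std scales with 1/sqrt(dt)
    gyro_bias = np.cumsum(rng.normal(0.0, gyro_walk * sdt, (n, 3)), axis=0)
    accel_bias = np.cumsum(rng.normal(0.0, accel_walk * sdt, (n, 3)), axis=0)
    noisy_gyro = gyro + gyro_bias + rng.normal(0.0, gyro_noise / sdt, (n, 3))
    noisy_accel = accel + accel_bias + rng.normal(0.0, accel_noise / sdt, (n, 3))
    return noisy_gyro, noisy_accel
```

In practice the densities would be identified from static logs of the real hardware (e.g. via an Allan variance analysis) rather than chosen by hand.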
Position 2: Sim2Real VSLAM Benchmarking and RL Integration
The Goal: Use simulation to stress-test SLAM solutions under varied and challenging conditions, and integrate them into a reinforcement learning loop.
Tasks:
- Integrate VSLAM as a state estimator within an Isaac Lab reinforcement learning environment
- Test SLAM robustness against dynamic obstacles, lighting changes, and sensor degradation scenarios
- Quantify the Sim2Real gap by comparing simulation SLAM results against real-world captures from the G1 and B2
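A common metric for the trajectory comparison above is the Absolute Trajectory Error (ATE) after rigid alignment. The following numpy-only sketch (a hypothetical helper, not our benchmarking code) aligns an estimated trajectory to ground truth with an SVD-based rigid fit and reports the RMSE:

```python
import numpy as np

def ate_rmse(est, gt):
    """ATE RMSE between (N, 3) estimated and ground-truth positions,
    after a best rigid (rotation + translation) alignment."""
    est_c = est - est.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    # Orthogonal Procrustes: rotation mapping est onto gt
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = (U @ S @ Vt).T
    t = gt.mean(axis=0) - R @ est.mean(axis=0)
    aligned = est @ R.T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
```

A full benchmark would also report relative pose error (RPE) and align scale for monocular systems; this sketch covers the rigid SE(3) case only.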
Group 2: HW Deployment
Position 3: Real-Time Edge Optimizer (Jetson Thor / Orin / DGX)
The Goal: Ensure VSLAM runs at high frequency and low latency on the robot's onboard compute, making real-time spatial awareness possible in the field.
Tasks:
- Deploy and optimize VSLAM back-ends on the Jetson Thor and Jetson Orin platforms
- Manage the high-bandwidth data flow from the G1 and B2 onboard cameras to the edge device
- Implement Hardware-in-the-Loop (HIL) testing using the DGX as a compute-heavy server for off-board SLAM processing
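For scoping the data-flow task, a back-of-envelope bandwidth budget is useful. The resolution and frame rate below are illustrative defaults, not the actual G1/B2 camera specs:

```python
def rgbd_bandwidth_mbps(width=640, height=480, fps=30):
    """Uncompressed bandwidth of one RGB-D stream in megabits per second."""
    rgb_bytes = width * height * 3    # 3 bytes per 8-bit RGB pixel
    depth_bytes = width * height * 2  # 2 bytes per 16-bit depth pixel
    return (rgb_bytes + depth_bytes) * fps * 8 / 1e6

print(rgbd_bandwidth_mbps())  # 368.64 -> ~369 Mb/s for a single VGA RGB-D stream
```

Numbers like this motivate on-robot compression or GPU-direct transport when multiple cameras feed a single edge device.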
Position 4: Robot System Integration and Deployment
The Goal: Get VSLAM working reliably on the actual robots in real-world conditions, closing the loop between simulation and physical deployment.
Tasks:
- Integrate VSLAM outputs with the robots' low-level controllers via ROS 2
- Handle vibration and motion blur compensation specific to the G1's walking gait and the B2's locomotion patterns
- Field testing: map the lab and building using the robot's internal sensors and compare against the digital twin created by Group 1
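As one illustration of handling gait-induced jitter, a first-order low-pass (exponential moving average) over the VSLAM position estimates damps high-frequency vibration at the cost of a small lag. The function and the smoothing factor are made-up examples; a real deployment would fuse this into the state estimator rather than post-filter poses:

```python
import numpy as np

def smooth_positions(positions, alpha=0.3):
    """Exponential moving average over an (N, 3) array of position estimates.
    Smaller alpha -> stronger smoothing but more lag."""
    out = np.empty_like(positions)
    out[0] = positions[0]
    for i in range(1, len(positions)):
        out[i] = alpha * positions[i] + (1.0 - alpha) * out[i - 1]
    return out
```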
Technologies Used
Python, C++, ROS 2, Isaac Sim, Isaac Lab, Visual SLAM, Reinforcement Learning, Sim2Real Transfer, NVIDIA DGX, Jetson Thor, Jetson Orin, Unitree G1, Unitree B2, Edge Computing, Hardware-in-the-Loop Testing, Sensor Fusion, Digital Twins, Synthetic Data Generation.
Your Benefits: Join a High-Performance Robotics Team
- Impactful Research: Work on a project where your contributions are a critical part of an end-to-end pipeline. Your results will directly enable robots to perceive, model, and interact with the physical world.
- Top-Tier Hardware Stack: Gain hands-on experience with NVIDIA DGX for training, Jetson Thor and Orin for inference, and Unitree humanoid and quadruped platforms, the same stack used by industry leaders like Tesla, Figure AI, and Physical Intelligence.
- Scientific Publication: We aim for high-impact results. If your work meets the quality standards, we will co-author and submit a paper to top-tier robotics and AI conferences such as ICRA, IROS, CoRL, or CVPR.
- Professional Career Launchpad: This thesis is designed to mirror the workflow of elite AI and robotics labs. We provide dedicated mentorship and professional support to help you land roles at top-tier robotics startups or Big Tech AI labs.
- Dynamic Lab Culture: You will be part of a squad of motivated Master's students working in parallel, fostering a collaborative, fast-paced, and supportive environment.
Requirements
We are looking for students who see their thesis not just as a degree requirement, but as a career-defining project.
Must-Have:
- English Proficiency: High level of written and spoken English, the language of our research and documentation.
- Proactive Mindset: Comfortable with a fail-fast, learn-fast approach and with hands-on hardware and software integration challenges.
- Independence: Ability to own a technical module and drive it forward while communicating effectively with the rest of the team.
- Growth Path: A passion for Robotics and AI and an eagerness to learn new technologies.
Nice-to-Have:
- Technical Foundation: Proficiency in Python and/or C++.
- Domain Experience: Prior exposure to ROS 2, Isaac Sim, Isaac Lab, or physics simulators such as MuJoCo.
- Hardware Skills: Experience working with robotic hardware, sensors, or real-time embedded systems.
Ready to build the future of Embodied AI? Send your CV, a recent transcript, and a brief email explaining why you are the right fit for this specific "squad" and outlining your career goals.
- Tags
- AVS Brusnicki
- Possible start
- immediately
- Contact
- Roberto Brusnicki
roberto.brusnicki@tum.de