Learning-based Hybrid Control in Autonomous Driving

Institute
Lehrstuhl für Robotik, Künstliche Intelligenz und Echtzeitsysteme (TUM-CIT)
Type
Semester Thesis /
Content
Description

Autonomous driving systems require both robust motion planners [1, 2, 3] and reliable controllers [4]. Our chair's open-source project CommonRoad [5], a motion planning and benchmarking suite, has become a widely recognized and used framework over the years.

We are in the process of enriching our open-source project with CommonRoad-Control, a
toolbox that implements several state-of-the-art controllers. Few open-source projects offer a
comprehensive, closed-loop perspective spanning motion planning and control, so we aim to be
the first to close this gap from a practical coding perspective. We have designed a modular
architecture for developing control algorithms, integrating them with different dynamics models,
and closing the loop with state-of-the-art motion planners.
Now, we want to investigate different strategies for automatically configuring, tuning, and
evaluating the controllers. In this project, you will research, develop, implement, and evaluate
advanced methods tackling these tasks.


This thesis combines deep knowledge of control with an interest in open-source software
development and the application of learning techniques. Good skills and understanding in
control and reasonable practical knowledge of learning methods are prerequisites. With this
thesis, you have the chance to learn how to write code that can actually be released as a PyPI
package. We plan to (officially) release the toolbox at the end of the thesis.
At a high level, we want you to develop a two-faceted approach that (1) uses learning methods
to automatically tune relevant parameters of the controller and the dynamics model based on
feedback from their execution; and (2) uses a compatible and scalable method to automatically
generate reference trajectories for the learning task.
We already have a concept based on dynamics models for the second part. For the first part,
promising methods include reinforcement learning [6, 7, 8, 9] and genetic algorithms [10, 11].
However, a literature deep dive and your own interests might expand this list. Note that we use
the term learning loosely here, so deep learning and reinforcement learning are not the only
options.


Tasks
• Familiarization with the current state of the art in motion planning and control.
• Literature deep dive into (learning-based) methods for automatic parameter identification
and tuning.
• Selection, implementation, training, and evaluation of the most promising method(s).
• Implementation and evaluation of our method for creating reference trajectories without
scenarios.
• Evaluation of the overall concept of automatically tuning our controllers and dynamics
models.

Requirements

If you are interested in the topic, please send an email to the contact listed on the right and
attach a short CV, your current grade report, and your Bachelor's grade report.
Required skills are a good understanding of control, good Python 3 coding skills, practical
knowledge of Ubuntu, and an interest in applying novel learning algorithms and practical deep
learning.

Possible start
Immediately
Contact
Tobias Mascetta, M.Sc.
Room: 5607.03.060 (MPI Building, Floor 3, Part 7)
tobias.mascetta@tum.de