Automated Multi-View Motion Magnification and 3D Reconstruction for Quantitative Structural Dynamics

Institute
Lehrstuhl für Angewandte Mechanik
Type
Content

Description

Topic:

This thesis builds on our published pipeline that combines learning-based motion magnification with Dynamic 3D Gaussian Splatting to obtain full-field visualizations of operational deflection shapes from multi-view video recordings. The previous work established the conceptual framework and demonstrated feasibility on synthetic and real-world datasets. The focus now shifts from methodological exploration to creating a reliable, automated, and quantitatively meaningful toolchain.

A first objective is to automate the complete data acquisition and processing pipeline. This includes system excitation, multi-view video capture, synchronization, motion magnification, COLMAP pose estimation, and Dynamic Gaussian Splatting. You will design routines that automatically determine optimal processing parameters, so that the pipeline becomes repeatable and user-independent.
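As an illustration of what such automation could look like, the sketch below threads a shared context through an ordered list of stages. The configuration fields, stage names, and placeholder implementations are hypothetical stand-ins for excitation, capture, synchronization, magnification, COLMAP pose estimation, and Dynamic Gaussian Splatting, not the actual toolchain interface:

```python
# Hypothetical sketch of an automated pipeline driver. Stage names,
# configuration fields, and the placeholder stages are illustrative
# assumptions, not the actual toolchain interface.
from dataclasses import dataclass

@dataclass
class PipelineConfig:
    excitation_hz: float = 10.0   # excitation frequency for the test structure
    magnification: float = 20.0   # motion magnification factor
    n_cameras: int = 4            # size of the multi-view camera rig

def run_pipeline(cfg, stages):
    """Run each named stage in order, threading a shared context through,
    so a single call reproduces the whole acquisition/processing chain."""
    ctx = {"config": cfg}
    for name, stage in stages:
        ctx[name] = stage(ctx)
    return ctx

# Placeholders standing in for capture, synchronization, and magnification;
# a real pipeline would append COLMAP and Dynamic Gaussian Splatting stages.
stages = [
    ("capture", lambda ctx: [f"cam{i}.mp4" for i in range(ctx["config"].n_cameras)]),
    ("sync",    lambda ctx: {"offset_frames": 0}),
    ("magnify", lambda ctx: [v.replace(".mp4", "_mag.mp4") for v in ctx["capture"]]),
]

result = run_pipeline(PipelineConfig(), stages)
```

Structuring the chain this way makes each stage's parameters explicit in one configuration object, which is a prerequisite for the automatic parameter-selection routines described above.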

The robustness of the motion magnification stage must be improved for complex motion, occlusions, and non-harmonic dynamics. This involves adapting or extending the existing learning-based approach to handle spatially varying motion, varying illumination, and scenes with richer temporal behaviour than those explored so far.
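The core principle that any magnification variant builds on, amplifying only a temporal frequency band of interest, can be illustrated on a 1D time series. A real system operates on per-pixel phase or learned representations rather than a single signal; the function and parameter names here are illustrative:

```python
import numpy as np

def magnify_band(signal, fs, f_lo, f_hi, alpha):
    """Amplify only the temporal frequency band [f_lo, f_hi] of a signal.
    This mirrors the core idea of Eulerian motion magnification, shown on
    a 1D time series instead of per-pixel phase; names are illustrative."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spec[band] *= (1.0 + alpha)               # boost in-band motion by (1 + alpha)
    return np.fft.irfft(spec, n=len(signal))

fs = 100.0                                    # frames per second
t = np.arange(0, 2.0, 1.0 / fs)
x = 0.01 * np.sin(2 * np.pi * 5.0 * t)        # tiny 5 Hz vibration
y = magnify_band(x, fs, 4.0, 6.0, alpha=9.0)  # in-band amplitude grows 10x
```

Non-harmonic or spatially varying motion breaks the fixed-band assumption in this sketch, which is exactly why the thesis targets adaptive, learning-based extensions.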

To move from purely visual inspection toward quantitative analysis, the method will be fused with reference measurements. You will integrate accelerometers, embedded IMUs, or laser vibrometry and develop strategies to align, validate, and scale the reconstructed 3D deflection shapes. This fusion aims to turn the visual output into a metric depiction of motion amplitude and frequency content.
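One simple scaling strategy, shown as a sketch under strong assumptions (a single dominant harmonic, synchronized and collocated measurements), matches the spectral amplitude of the pixel-domain displacement against an accelerometer reference; all names below are hypothetical:

```python
import numpy as np

def metric_scale(pixel_disp, accel, fs):
    """Estimate metres-per-pixel by matching spectral amplitudes at the
    dominant vibration frequency. Illustrative sketch only: assumes one
    dominant harmonic and synchronized, collocated measurements."""
    n = len(pixel_disp)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    P = 2.0 * np.abs(np.fft.rfft(pixel_disp)) / n   # pixel-domain amplitudes
    A = 2.0 * np.abs(np.fft.rfft(accel)) / n        # acceleration amplitudes
    k = int(np.argmax(P[1:])) + 1                   # dominant bin, skipping DC
    omega = 2.0 * np.pi * freqs[k]
    disp_metric = A[k] / omega**2                   # |a| = omega^2 |d| for a harmonic
    return disp_metric / P[k]                       # metres per pixel

fs, f = 200.0, 8.0
t = np.arange(0, 4.0, 1.0 / fs)
d_true = 2e-3 * np.sin(2 * np.pi * f * t)           # 2 mm structural motion
pixel_disp = d_true / 5e-4                          # camera resolves 0.5 mm/pixel
accel = -(2 * np.pi * f) ** 2 * d_true              # ideal collocated accelerometer
scale = metric_scale(pixel_disp, accel, fs)         # recovers ~5e-4 m/pixel
```

Real recordings add noise, leakage, and sensor misalignment, so the thesis work would need windowing, interpolation between bins, and a per-sensor alignment step on top of this idea.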

Finally, the thesis will address the detrimental effect of reflections, which currently degrade both motion magnification and 3D reconstruction. The work includes investigating how reflections propagate through the pipeline and designing mitigation strategies—ranging from acquisition-side adjustments (lighting, surface treatment, filters) to algorithmic corrections (masking, adaptive weighting, reflection-robust filtering).
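A minimal version of the algorithmic side, masking and adaptive weighting, is sketched below: pixels that ever saturate, or whose temporal variation is far above the scene median, are downweighted before later processing stages. The thresholds and function name are illustrative assumptions, not a validated detector:

```python
import numpy as np

def reflection_weights(frames, sat_thresh=0.95, flicker_factor=5.0):
    """Build per-pixel weights that suppress likely reflections: pixels
    that ever saturate, or whose temporal std is far above the scene
    median. A simple stand-in for masking / adaptive weighting with
    illustrative thresholds. `frames` has shape (T, H, W) in [0, 1]."""
    saturated = (frames > sat_thresh).any(axis=0)       # specular highlights
    temporal_std = frames.std(axis=0)
    flicker = temporal_std > flicker_factor * np.median(temporal_std)
    weights = np.ones(frames.shape[1:])
    weights[saturated | flicker] = 0.0                  # exclude from later stages
    return weights

rng = np.random.default_rng(0)
frames = rng.uniform(0.2, 0.6, size=(10, 8, 8))         # diffuse scene
frames[:, 2, 3] = 1.0                                   # a saturated reflection
w = reflection_weights(frames)                          # w[2, 3] becomes 0
```

Such weights could feed both the magnification stage and the Gaussian Splatting optimization, which is where studying how reflections propagate through the pipeline becomes relevant.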

The outcome will be a more reliable and semi-automated system capable of producing stable 3D motion visualizations and, with sensor fusion, quantifiable operational deflection information.


This topic ties into active research that I aim to publish. Depending on the strength of your thesis contributions, you may be invited to join as a co-author.


Application:

Please send all your transcripts of records to date and your full CV to tomas.slimak@tum.de with the subject Application MA/SA/BA "name of thesis". Write a few sentences about what motivates you to apply for this thesis and why you would be a good fit for the topic. Please be specific about the experience and expertise you have that is relevant to this topic.


After you submit your application, it will be reviewed, and if you are deemed suitable for the topic, you will be invited to an interview. Please be aware that part of this interview will be an oral evaluation of your background and your understanding of concepts relevant to the thesis. The more preparation, creativity, and initiative you can show, the better. Note that given the interesting but very advanced nature of this topic, only correspondingly strong students will be selected.


If you are interested in multiple theses that I am offering, do not send multiple applications; instead, name all the titles in a single mail. Applications and theses may be written in either English or German.


Possible start
immediately
Contact
Tomas Slimak, M.Sc.
Room: 3104
Phone: +49 (89) 289 - 15226
tomas.slimak@tum.de
Announcement