Starting point
Collecting machine data from several companies makes it possible to train generalisable condition monitoring models that all participating companies can use. However, this data may contain industrial secrets. Federated learning can protect these secrets by allowing companies to train a common model without sharing their data. It is already used successfully to train models in healthcare, on smartphones, and in autonomous driving. Nevertheless, so-called model inversion attacks can still reconstruct the training data and thereby disclose industrial secrets. Because there is so far little evidence on whether federated learning is a sufficient protective measure, companies remain reluctant to participate in such collaborative approaches.
Job description
AI models for tool condition monitoring will be trained using federated learning, and a reconstruction of the original training data will then be attempted, using methods such as generative adversarial networks. The reconstruction will be carried out under different protection strengths in order to identify the best trade-off between protection strength and model performance.
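To illustrate the basic setting, the following is a minimal sketch of federated averaging in which each company trains locally and only model updates are aggregated; a tunable Gaussian noise level stands in for the protection mechanism whose strength would be varied in the thesis. The linear model, the synthetic data, and all parameter names are illustrative assumptions, not the actual setup of the project.

```python
# Minimal federated-averaging sketch: three simulated "companies" train a
# simple linear model locally; only (optionally noised) weight updates are
# shared and averaged. noise_std is a stand-in for the protection strength.
import numpy as np

rng = np.random.default_rng(0)


def local_update(weights, X, y, lr=0.01, epochs=20):
    """One client's local gradient-descent update on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


def federated_round(global_w, client_data, noise_std=0.0):
    """Aggregate client updates; larger noise_std means stronger protection."""
    updates = []
    for X, y in client_data:
        w = local_update(global_w, X, y)
        # Perturb the update before sharing it with the server.
        updates.append(w + rng.normal(0.0, noise_std, size=w.shape))
    return np.mean(updates, axis=0)


# Synthetic per-company data standing in for tool condition measurements.
true_w = np.array([1.5, -2.0, 0.5])
client_data = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    client_data.append((X, y))

# Stronger protection (more noise) degrades the common model's performance.
for noise_std in [0.0, 0.1, 1.0]:
    w = np.zeros(3)
    for _ in range(30):
        w = federated_round(w, client_data, noise_std)
    err = np.mean([np.mean((X @ w - y) ** 2) for X, y in client_data])
    print(f"noise_std={noise_std:4.1f}  mean squared error={err:.4f}")
```

Running the sketch shows the trade-off the thesis is meant to quantify: with no noise the averaged model fits all companies' data well, while increasing the noise level protects the shared updates at the cost of model accuracy.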
Requirements
High interest in
If you are interested, please apply with your CV and a current transcript of records.