Exploration of LLMs and Prompt Engineering in Electrical Drive Testing

Institute
Lehrstuhl für Nachhaltige Mobile Antriebssysteme
Type
Semesterarbeit / Masterarbeit
Content
experimental / constructive
Description

Testing of electrical drives generates large amounts of data. Beyond the measurements themselves, the value of these data is often underestimated and overlooked: they sit in mountains of unstructured records whose sheer volume and lack of structure hinder both individual inspection and automatic processing.

Large Language Models (LLMs) have emerged in recent years as a Natural Language Processing (NLP) solution for summarizing unstructured data. With proper prompting and clear human instructions, LLMs can process documents in place of manual work. This thesis investigates the ability of LLMs to process domain-specific documents, especially code and visualizations of test results. In parallel, it explores how Prompt Engineering can be used to improve the precision of LLM outputs.

Your tasks:

  • Local deployment of LLMs in a Python environment
  • Benchmarking LLM performance in summarizing domain-specific documents generated on electrical test benches
  • Design of task-specific prompts for LLMs
  • Prompt Engineering experiments with the designed prompts on selected LLMs
  • Documentation of the benchmarking and Prompt Engineering experiments
  • Building an automatic document-processing algorithm
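As a starting point, task-specific prompt design can be prototyped in plain Python before any model is deployed. The sketch below shows one possible prompt template for summarizing a testbench document; the function name, template wording, and parameters are illustrative assumptions, not part of the thesis specification:

```python
def build_summary_prompt(document: str,
                         doc_type: str = "test report",
                         max_words: int = 100) -> str:
    """Compose an instruction-style prompt asking an LLM to
    summarize a document from an electrical drive testbench."""
    # System-style instruction: role, task, length limit, and focus.
    instruction = (
        f"You are an assistant for electrical drive testing.\n"
        f"Summarize the following {doc_type} in at most {max_words} words.\n"
        f"Focus on measured quantities, test conditions, and anomalies.\n"
    )
    # Delimiters separate instructions from data, so the model does not
    # confuse document content with the task description.
    return (
        f"{instruction}\n"
        f"--- DOCUMENT ---\n{document}\n--- END ---\n"
        f"Summary:"
    )

prompt = build_summary_prompt("Torque ripple measured at 2000 rpm ...")
```

The resulting string would then be passed to a locally deployed model, for example through a Hugging Face transformers text-generation pipeline, and variations of the template (length limits, focus instructions, few-shot examples) form the basis of the Prompt Engineering experiments.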
Prerequisites
  • Experience with Python
  • Experience in electrical drive testing is welcome
  • Interest in Machine Learning and LLMs
  • Self-initiative
  • Good knowledge of English and German
Possible start
immediately
Contact
M.Sc. Kai Cui
Raum: 2107.EG.008
Tel.: +49 8928924108
k.cui@tum.de