Niclas Flehmig
About
I completed my bachelor's degree in Mechanical Engineering at the Technical University of Munich (TUM) and continued my academic journey at TUM with a master's degree in Mechatronics and Robotics. During this period, I spent a semester abroad on Svalbard, where I studied Arctic Engineering at the University Centre in Svalbard. After this, I returned to Norway for a joint project between my home university (TUM) and the Norwegian University of Science and Technology (NTNU) to write my master's thesis.
During my master's, I specialized in applied machine learning for technical processes, for example using Gaussian processes for quality assurance in the automotive industry and investigating their applicability for predictive maintenance in the fish farming industry.
Currently, I am part of the SUBPRO-Zero project at NTNU as a PhD candidate. My research area is incorporating AI in safety-critical systems.
Research
My research is focused on how we can incorporate AI into safety-critical systems. For me, the question is not whether we can use it, because there are plenty of applications where AI can be helpful, such as image recognition for cracks in pipelines, predictive maintenance for valves, or predicting leakage of liquid hydrogen. The research aims more at how we can ensure that, during deployment, the AI operates in the same way as during training and testing, and how we can prevent any misbehavior of the AI. In short, we want to make AI safe during operation. This can be achieved by adequate training and testing, monitoring during operation, and an appropriate system architecture that embeds the AI into a safe environment.
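To give an idea of what "embedding the AI into a safe environment" can mean in practice, here is a minimal sketch in Python. All names, limits, and the dummy model are purely hypothetical; the point is only the pattern: the AI proposes an action, and a simple, deterministic safety layer decides whether to accept it or fall back to a conservative default.

# Illustrative sketch only: a deterministic safety layer around an ML model.
# All names, signatures, and thresholds are hypothetical, not an existing API.

from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    max_valve_opening: float = 0.8   # hard limit enforced regardless of the AI
    safe_fallback: float = 0.0       # conservative action if the AI is rejected

    def within_limits(self, action: float) -> bool:
        # Deterministic, verifiable check that does not depend on the AI
        return 0.0 <= action <= self.max_valve_opening

def safe_control(model, sensor_reading: float, envelope: SafetyEnvelope) -> float:
    """The AI only proposes; the envelope decides."""
    proposed = model(sensor_reading)        # AI suggestion, e.g. a valve setpoint
    if envelope.within_limits(proposed):
        return proposed                     # accept the AI output
    return envelope.safe_fallback           # otherwise degrade to a safe default

def dummy_model(x: float) -> float:
    return 1.2 * x                          # stand-in for a trained model

if __name__ == "__main__":
    envelope = SafetyEnvelope()
    print(safe_control(dummy_model, 0.5, envelope))   # 0.6  -> accepted
    print(safe_control(dummy_model, 0.9, envelope))   # 1.08 -> fallback 0.0

The value of such a pattern is that the part of the system enforcing the safety limits stays small and verifiable, independent of how the AI behaves.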
The objectives for this research are:
- Investigating the Current State: Based on ISO/IEC TR 5469, we identify challenges for AI in safety-critical systems and take a first look at possible solutions. Moreover, we investigate the current state of AI in safety-critical systems, which so far is mainly focused on autonomous systems.
- Building a Safe System Architecture: One way to increase safety is to set up an architecture around the AI that enhances its safety. We want to develop a framework for such an architecture that can help us design a safety-critical system with AI.
- Monitoring our AI: Just as for parts in a machine, we want to know what is going on in and around the AI. So, we want a real-time monitoring tool that tells us something about the inputs, the model itself, and the outputs. This can help the operator make decisions on the use of the AI (a small sketch of this idea follows after this list).
- Appropriate Training and Know Your Limits: Before deploying the AI, our training and testing should be as good as possible, especially for safety-critical systems. We have to make sure that our data contains all potential hazards so that our AI knows about them. If the data does not cover everything, it is good to know that too, so that we are aware of our limitations.
- Impacts and Benefits Assessment: In addition to these technical aspects, we want to evaluate the beneficial impacts of AI in this field, but also its potential downsides.
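As referenced in the monitoring objective above, the following minimal example illustrates what a simple runtime monitor could look like. The thresholds, names, and the crude distribution statistics used here are simplifying assumptions for illustration, not a finished method.

# Illustrative sketch only: a simple runtime monitor for an ML classifier.
# Thresholds, names, and the notion of "in distribution" are simplified assumptions.

import numpy as np

class RuntimeMonitor:
    def __init__(self, training_inputs: np.ndarray, z_limit: float = 3.0,
                 min_confidence: float = 0.9):
        # Summary statistics of the training data, used as a crude reference
        self.mean = training_inputs.mean(axis=0)
        self.std = training_inputs.std(axis=0) + 1e-12
        self.z_limit = z_limit
        self.min_confidence = min_confidence

    def check_input(self, x: np.ndarray) -> bool:
        # Flag inputs far away from anything seen during training
        z = np.abs((x - self.mean) / self.std)
        return bool(np.all(z < self.z_limit))

    def check_output(self, probabilities: np.ndarray) -> bool:
        # Flag predictions the model itself is not confident about
        return float(probabilities.max()) >= self.min_confidence

    def verdict(self, x: np.ndarray, probabilities: np.ndarray) -> str:
        if not self.check_input(x):
            return "input outside training experience - do not trust"
        if not self.check_output(probabilities):
            return "low model confidence - ask the operator"
        return "ok"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(1000, 3))
    monitor = RuntimeMonitor(train)
    print(monitor.verdict(np.array([0.1, -0.2, 0.3]), np.array([0.95, 0.05])))
    print(monitor.verdict(np.array([8.0, 0.0, 0.0]), np.array([0.95, 0.05])))

A real monitor would use more suitable measures of distribution shift and model-specific diagnostics, but the structure is the same: check the inputs, check the outputs, and give the operator a clear verdict.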
Outreach
2024
- Lecture: Lundteigen, Mary Ann; Myklebust, Thor; Flehmig, Niclas (2024). AI and Functional safety – Pain or gain or both? International Society of Automation, Safety and Security Division, ISA SAFESEC event, online, 2024-10-09.