Incorporating artificial intelligence (AI) in safety-critical systems for CO2 capture - SUBPRO-Zero
Incorporating artificial intelligence (AI) in safety-critical systems for CO2 capture, injection, and storage
Detailed project description
Motivation and background
Offshore industrial facilities, such as oil and gas installations, offshore wind farms, and (in the future) blue hydrogen production and CO2 capture, are becoming more digitalized, with increased automation, more autonomy, and more intelligent sensors. While these technologies enable remote monitoring and operation, building trustworthy situational awareness and decision support at the remote locations becomes essential. The availability of large amounts of data can be used for this purpose to improve safety compared to traditional plant operation. The data can be used to provide optimal and real-time responses by operators (or an autonomous controller) in an emergency, early notification of drifts towards hazardous states, and predictions of potential escalation scenarios. Some of these features need to be part of the safety-instrumented systems, as these systems are responsible for detecting and mitigating the consequences of hazardous events.
The opportunities for improved safety mentioned above rely on the ability to use machine learning (ML) and other artificial intelligence (AI) technologies. AI (including ML) can be trained to detect patterns, causations, and correlations, and to keep learning as new data become available. The current standards that govern the design of safety-instrumented systems, like IEC 61508 [33] and IEC 61511 [34], explicitly exclude the use of AI. One of the major challenges is the lack of transparency in the algorithms and the self-learning capability that, as a side effect, can result in unpredicted or unexpected behavior that cannot be tolerated in safety systems.
Yet, it is believed that the challenges of using AI for safety can be overcome and compensated for. It is expected that future versions of functional safety standards, like IEC 61508, may be less restrictive on the use of AI for some applications, considering that the same community is drafting a technical report, IEC TR 5469, on AI and functional safety. Based on the latter, the effort seems to point at two alternative paths: one is to apply tools and methods that ensure sufficient trustworthiness of the AI (including ML) algorithms themselves; the other is to build an architecture consisting of an AI safety controller, a supervisor monitoring the performance of the AI, and a conventional safety-related controller that can take over if the supervisor detects faulty behavior (a sketch of this architecture is given after the list below). For AI to be accepted for safety in industrial applications, it is important to find ways to satisfy the goals and intents of the requirements in the functional safety standards, such as:
- How to estimate the safety performance of AI algorithms?
- How to incorporate features like fault tolerance and diagnostic coverage within AI algorithms, and in combination with conventional controllers?
- How to collect data to support performance analyses?
- How to prevent and reveal systematic faults during specification, training, and use of AI algorithms?
In short, measures and requirements on the specification and realization of AI algorithms must be integrated into the concept of safety integrity and aligned with the four safety integrity levels (SIL).
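As an illustration of the second path mentioned above (an AI controller under supervision, with a conventional fallback), the following is a minimal sketch of how the three elements could interact in one scan cycle. All class names, methods, and limits are hypothetical and chosen for the example only; this is not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class Command:
    """A control command, e.g. a valve setpoint in percent open."""
    setpoint: float

class AIController:
    """Hypothetical AI-based controller (in practice, a trained ML model)."""
    def propose(self, sensors: dict) -> Command:
        # Placeholder for a model inference call.
        return Command(setpoint=50.0)

class ConventionalController:
    """Simplified, certifiable fallback logic (e.g. trip on high pressure)."""
    def propose(self, sensors: dict) -> Command:
        # Deterministic rule: fully open relief if pressure exceeds the limit.
        return Command(setpoint=100.0 if sensors["pressure"] > 95.0 else 0.0)

class Supervisor:
    """Monitors the AI output against a conventionally defined safe envelope."""
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high
    def is_acceptable(self, cmd: Command) -> bool:
        return self.low <= cmd.setpoint <= self.high

def control_step(sensors, ai, supervisor, fallback) -> Command:
    """One scan cycle: use the AI command only if the supervisor accepts it."""
    cmd = ai.propose(sensors)
    if supervisor.is_acceptable(cmd):
        return cmd
    return fallback.propose(sensors)  # conventional controller takes over
```

The design intent is that the supervisor and the fallback controller stay simple enough to be specified and verified with conventional IEC 61508 techniques, so that the safety claim does not rest on the AI component alone.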
State of the art of AI and safety
The interest in AI, and more specifically in ML, for safety-critical applications has grown steadily in recent years, alongside the work in, e.g., the automotive industry to develop autonomous cars. The introduction of higher autonomy in other industries has led to some useful position papers on the topic, for example by DNV [35, 36]. The following sections provide a brief overview of some of the literature identified.
Standardization and recommended practices
One of the most relevant standardization initiatives is the ISO/IEC joint technical committee JTC1, which is developing a new technical guideline, IEC TR 5469, on AI and functional safety [37]. This guideline discusses opportunities as well as challenges of using AI in safety-critical systems and suggests possible architectures where conventional and AI-based safety-critical systems operate side by side.
The automotive industry has developed the guideline ISO/PAS 21448 [38] on safety of the intended functionality. The guideline proposes methods and tools for identifying possibly unsafe scenarios that can arise with the increasing number of automated driving functionalities, where AI and ML may to some extent be incorporated, in particular in sensor technologies.
A recent technical report on software testing, ISO/IEC AWI TS 29119-11 [39], provides guidelines for the testing of AI-based systems. The report gives a brief summary of some of the challenges of using AI in safety-critical systems (clause 4.3.2) and provides an overview of standardization initiatives on the topic (clause 4.3.3.2.3).
IEEE gives an overview of standardization in artificial intelligence systems (AIS) on the IEEE SA webpage [40]. A detailed review of these standards has not been carried out, but one can be mentioned: IEEE P2802, currently covering only terminology, is the first step in developing future standards for medical devices using AI, where safety is also an important aspect.
DNV has already published a position paper on trustworthy AI [36] in response to the increased application of AI in general, and is currently developing a recommended practice on the same topic.
EU legal frameworks
The European Commission has emphasized the ability to “boost research and industry capacity while ensuring safety and fundamental rights” [41]. The Commission has developed a European AI strategy [42] and a complementary whitepaper on the European approach to excellence and trust [43]. Of most concrete relevance in our context are the requirements that will come with legal initiatives such as:
- A European legal framework for AI to address fundamental rights and safety risks specific to AI systems: Safety-instrumented systems will be part of the high-risk technologies according to the proposed classification of AI usage, for which additional requirements are expected.
- A proposal for a product liability directive: The current product liability directive is implemented in the machinery directive, which already applies to safety-instrumented systems. One may expect, even if not explicitly stated, that these directives will be updated to address the use of AI and ML.
The whitepaper [43] explains the reasoning behind these legislative initiatives in more detail.
Academic status
Most of the literature addressing AI and safety is quite new. Eldevik [35] provides a good summary of challenges related to the application of AI in safety-critical systems and gives examples of how AI algorithms can be made more trustworthy by combining physics-based and data-driven approaches when creating datasets for the training of AI systems.
Functional safety concerns how systems are specified, built, installed, and operated safely, and is important to address as it goes beyond the process of developing the AI algorithms. Few authors discuss this topic. One exception is Braband and Schäbe [44], who give some very preliminary views and thoughts on the applicability of IEC 61508 to E/E/PE safety-related systems that incorporate AI. However, the paper is not very thorough, and its contribution to further research is therefore limited.
McDermid and Jia [45] discuss how greater trustworthiness of AI and ML can be achieved by applying safety and software engineering practices. They refer to the work by Amodei et al. [46], which lists several undesirable behaviors of AI technologies that must be handled. McDermid and Jia [45] have introduced a potential collaborative model that shows how the mentioned disciplines can be applied to ensure safe performance of AI/ML (see figure to the left).
Rudolph et al. [47] propose a set of technical safety methods for the three types of process models incorporated in an AI system, as shown in the figure to the right: knowledge-based, rule-based, and skill-based. In addition, the authors discuss how goal structuring notation (GSN) diagrams can be used in the safety demonstration process, allowing overall goals to be decomposed into sub-goals with assumptions, context, justifications, and solutions, as illustrated below.
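As a minimal illustration of such a decomposition, a GSN argument can be represented as a tree of typed nodes. The structure and statements below are invented for the example and are not taken from [47].

```python
from dataclasses import dataclass, field

@dataclass
class GsnNode:
    """A node in a goal structuring notation (GSN) argument tree."""
    kind: str   # "goal", "strategy", "solution", "context", "assumption", "justification"
    statement: str
    children: list["GsnNode"] = field(default_factory=list)

# Hypothetical decomposition of a top-level safety goal for an AI component.
argument = GsnNode("goal", "The AI-based detection function is acceptably safe", [
    GsnNode("context", "Operating envelope: normal production, defined sensor set"),
    GsnNode("strategy", "Argue over training data quality and runtime supervision", [
        GsnNode("goal", "Training data are representative of the operating envelope", [
            GsnNode("solution", "Dataset coverage analysis report"),
        ]),
        GsnNode("goal", "Unsafe AI outputs are detected and overridden at runtime", [
            GsnNode("solution", "Supervisor verification and test evidence"),
        ]),
    ]),
])
```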
Reference [48] provides specific examples of methods required by the current version of IEC 61508 that are not compatible with the application of AI and ML. The paper also explains some possible ways to overcome these challenges, based on a literature review.
Other literature, found using search strings like “functional safety + artificial intelligence” on Google Scholar, seems to address AI and functional safety for autonomous systems, for example how to inject faults and run tests to assess the ability of the AI system to prevent undesired behavior. However, many of the papers remain at a conceptual level, and few experiments and little real experience have been reported. Available research results from this effort will be explored further in the project.
Aim of the proposed research
To conclude, it seems that standardization groups are in the process of proposing best industrial practices also for AI applied to safety-critical systems, while the EU is working in parallel on legal requirements that will eventually result in new or revised directives and revisions of harmonized standards. Many of the concepts discussed are quite general, except for some literature that is more specific on applications for road safety. To my knowledge, few initiatives except the ongoing PhD project in SFI SUBPRO are looking into how AI and ML can be incorporated with safety-instrumented systems in hazardous process industries. Addressing a specific application domain is important to ensure that domain knowledge and aspects of safety are preserved when introducing new technologies.
The overarching research question is:
How to apply AI and ML with safety-instrumented systems to enhance performance while achieving the same level of safety integrity as conventional systems?
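To make “the same level of safety integrity” concrete: for low-demand safety functions, IEC 61508 ties each SIL to a band of average probability of failure on demand (PFDavg), from 10^-2 to 10^-1 for SIL 1 down to 10^-5 to 10^-4 for SIL 4. The sketch below uses the common single-channel (1oo1) approximation PFDavg = lambda_DU * tau / 2, where lambda_DU is the dangerous undetected failure rate and tau the proof-test interval. The failure rate, diagnostic coverage, and test interval are illustrative numbers only; how such quantities should even be defined for a trained ML model is itself part of the research question.

```python
def pfd_avg_1oo1(lambda_d: float, dc: float, tau_hours: float) -> float:
    """PFDavg for a single (1oo1) channel in low-demand mode.
    lambda_d:  dangerous failure rate per hour
    dc:        diagnostic coverage (fraction of dangerous failures detected)
    tau_hours: proof-test interval in hours
    Uses the common approximation PFDavg = lambda_DU * tau / 2."""
    lambda_du = lambda_d * (1.0 - dc)  # dangerous undetected failure rate
    return lambda_du * tau_hours / 2.0

def sil_band(pfd: float) -> str:
    """Map a PFDavg value to a SIL band per IEC 61508 (low-demand mode)."""
    if 1e-5 <= pfd < 1e-4:
        return "SIL 4"
    if 1e-4 <= pfd < 1e-3:
        return "SIL 3"
    if 1e-3 <= pfd < 1e-2:
        return "SIL 2"
    if 1e-2 <= pfd < 1e-1:
        return "SIL 1"
    return "outside SIL 1-4 bands"

# Illustrative numbers: lambda_D = 2e-6 per hour, 60% diagnostic coverage,
# annual proof test (8760 hours) -> PFDavg = 3.5e-3, i.e. within the SIL 2 band.
pfd = pfd_avg_1oo1(lambda_d=2e-6, dc=0.60, tau_hours=8760)
print(f"PFDavg = {pfd:.1e} -> {sil_band(pfd)}")
```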
Specific challenges to address
- What are the relevant use cases of AI to enhance the performance of safety-instrumented systems in SUBPRO-Zero types of facilities? For which systems can AI provide a safety benefit?
- What are the specific challenges of using AI for typical “on demand” safety functions?
- How can all aspects of the requirements for safety functions be translated into requirements for AI algorithms?
- How can new tasks and methods be applied for safety demonstration within the framing of IEC 61508 and SIL?
- How to incorporate a strategy to tackle degradations and limitations in AI performance, using e.g. simplified conventional controller algorithms to detect unacceptable drift in AI performance and take over (a sketch of one such monitor follows the list)?
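One way the last point could be operationalized is to monitor the residuals between the AI output and a simplified conventional reference model, handing over control when the accumulated deviation grows too large. The following is a minimal sketch using a one-sided CUSUM-style statistic; the slack and threshold parameters are illustrative assumptions, not recommended values.

```python
class DriftMonitor:
    """One-sided CUSUM-style monitor on residuals between the AI output
    and a simplified conventional reference model."""

    def __init__(self, slack: float, threshold: float):
        self.slack = slack          # tolerated per-sample deviation
        self.threshold = threshold  # cumulative deviation triggering takeover
        self.cusum = 0.0

    def update(self, ai_output: float, reference_output: float) -> bool:
        """Return True if the conventional controller should take over."""
        residual = abs(ai_output - reference_output)
        self.cusum = max(0.0, self.cusum + residual - self.slack)
        return self.cusum > self.threshold

# Illustrative use: compare the AI and reference outputs each scan cycle.
monitor = DriftMonitor(slack=0.5, threshold=5.0)
for ai_out, ref_out in [(10.1, 10.0), (12.3, 10.2), (14.8, 10.1)]:
    if monitor.update(ai_out, ref_out):
        print("Drift limit exceeded: switching to conventional controller")
```

A cumulative statistic of this kind reacts to sustained drift rather than single outliers, which fits the notion of detecting gradual degradation in AI performance rather than isolated prediction errors.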