TRUST - Trustworthy AI
- Establish trust in safe and responsible AI
- Ensure privacy preservation in AI technologies
- Create guidelines for sustainable and beneficial use of AI
- Develop principles for explainable and transparent AI
- Develop principles for independent assurance of AI deployment
Trust in AI is a necessary condition for the scalability and societal acceptance of these technologies; without trust, innovation can stall. This research investigates, from an interdisciplinary perspective, the multiple dimensions of trust raised by the deployment of AI, and builds tools, methods, and a framework for assuring the safe and responsible deployment of AI in industry and society. This work package aims to answer the question: How can such tools address the safety and needs of individuals, organizations, and society at large, covering both non-technical and technical issues? The research will address issues related to safety, explainability, transparency, bias, privacy, and robustness, as well as human-machine interaction and co-behaviour, all in the context of industry regulations and societal expectations.
A review of current trust-in-AI and assurance guidelines and regulations affecting the various NorwAI work package applications and innovation pilots.
Available here: https://sfi-norwai.github.io/regreview/iso/
To trust technology is to understand its performance
Frank Børre Pedersen, VP and Programme Director at DNV, says that the starting point for trusting the technology is understanding its performance.
2022-04-28
Research activities - Visions and plans
2021-04-20