EXAIGON
The recent rapid advances of Artificial Intelligence (AI) hold promise for multiple benefits to society in the near future. AI systems are becoming ubiquitous and disruptive to industries such as healthcare, transportation, manufacturing, robotics, retail, banking, and energy. However, before AI systems can be deployed in social environments and in industry- and business-critical applications, several challenges related to their trustworthiness must be addressed: lack of transparency and interpretability, lack of robustness, and inability to generalize to situations beyond their past experiences.
Explainable AI (XAI) aims to remedy these problems by developing methods for understanding how black-box models make their predictions and what their limitations are. The call for such solutions comes from the research community, industry, and high-level policy makers, who are concerned about the impact of deploying AI systems in the real world in terms of efficiency, safety, and respect for human rights.
The EXAIGON project (2020-2024) will deliver research and competence building on XAI, including algorithm design and human-machine co-behaviour, to meet society's and industry's standards for deployment of trustworthy AI systems in social environments and business-critical applications.
Key researchers
- Helge Langseth, Professor, Department of Computer Science (helge.langseth@ntnu.no)
- Eric Monteiro, Professor, Department of Computer Science (eric.monteiro@ntnu.no, +47-73596751, +47-95213088)
- Adil Rasheed, Professor, Department of Engineering Cybernetics (adil.rasheed@ntnu.no, +47-90291771)
- Inga Strumke, Associate Professor, Department of Computer Science (contact-ailab@ntnu.no)
International collaborators
Industry collaborators
Vacant positions
4 PhD positions: Explainable Artificial Intelligence (XAI) for critical applications
Application deadline: April 27, 2020