Webinar series

Previous webinars/seminars in the NorwAI&NAIL Series on AI Research & Innovation


Speaker Topic

Idelfonso Nogueira, Associate Professor, Department of Chemical Engineering and PSE Group, NTNU, and Vinicius Santana, Postdoctoral Researcher, NTNU.

 


Idelfonso Nogueira is an Associate Professor in the Department of Chemical Engineering at the Norwegian University of Science and Technology. He leads an AI-focused lab within the Process Systems Engineering Group, dedicated to developing domain-aware, robust, reliable, and interpretable AI solutions for enhanced process control, optimization, and modeling. His work emphasizes the new paradigms of digitalization and automation, aiming for a sustainable transition towards a circular Industry 5.0.

 

Vinicius Santana is a Postdoctoral Researcher at NTNU in the Department of Chemical Engineering and the PSE Group. Vinicius is a passionate and dedicated professional in data science, machine learning, and process systems engineering. He completed his PhD at the University of Porto with institutional support from the MIT Portugal program, under the supervision of professors Idelfonso Nogueira (NTNU), Chris Rackauckas (Julia Lab, MIT), and Ana Mafalda Ribeiro (UPorto). His doctoral work focused on applying data science in Industrial AI, particularly in Scientific Machine Learning (SciML) and Hybrid Modeling.

 


Scientific Machine Learning: Bridging Domain Knowledge and Artificial Intelligence for Sustainable Industrial Transition


 


Recent advances in machine learning have catalyzed innovations across various sectors. Yet their applicability to domain-specific data remains limited, primarily because traditional machine learning approaches do not fulfil the specific demands of scientific applications. These include the need for generalization, interpretability, scalability, and uncertainty quantification. Scientific Machine Learning (SciML) is an emerging field that develops methodologies to incorporate domain knowledge into machine learning, producing scalable, domain-aware, robust, reliable, and interpretable solutions. This seminar will explore how machine learning can be effectively combined with process systems domain knowledge to ensure reliable and safe applications in scenarios where accurate extrapolation, interpretability, and uncertainty quantification are crucial. In the second part of the seminar, we will briefly demonstrate these concepts and their implementation in the Julia language.
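
To make the hybrid-modeling idea concrete, here is a minimal sketch in which a known physics term (first-order decay) is combined with a small neural network that learns the unmodeled residual dynamics. The seminar's own demonstrations are in Julia; this sketch uses Python/PyTorch for brevity, and the toy ODE, rate constant, and architecture are illustrative assumptions rather than the speakers' actual models.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    k = 0.5                                  # assumed known physical rate constant
    residual = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    def hybrid_rhs(x):
        return -k * x + residual(x)          # physics term + learned correction

    # synthetic "measurements" from a system with an extra unmodeled term
    dt, steps = 0.05, 100
    xs = [torch.tensor([[2.0]])]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-k * xs[-1] + 0.3 * torch.sin(xs[-1])))
    data = torch.cat(xs)

    opt = torch.optim.Adam(residual.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        x, preds = data[0:1], [data[0:1]]
        for _ in range(steps):
            x = x + dt * hybrid_rhs(x)       # explicit Euler rollout
            preds.append(x)
        loss = torch.mean((torch.cat(preds) - data) ** 2)
        loss.backward()
        opt.step()
    print("final rollout MSE:", loss.item())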


Speaker Topic

Benjamin Dunn, Associate Professor at the Department of Mathematical Sciences at NTNU

 


Benjamin Dunn is an Associate Professor in (Neural) Data Science at the Department of Mathematical Sciences at NTNU. His background includes applied mathematics, aerospace engineering, neuroscience, some physics, and other such things, but now he mainly thinks about how brains might work and about methods for understanding and explaining neural networks. Ben also enjoys sailing.

 


Assembling doughnuts in the brain from parts


 


Ben talked about his group's efforts to uncover neural representations (e.g. the torus behind the "grid cells") from recordings of many neurons over time. A big difficulty with this type of data is that, for any given data set, neither all neurons nor all of the time points will be involved in a common representation; rather, there will be many representations, each involving different subsets of neurons and time points. This problem is made more interesting by the fact that such representations often take nice shapes such as circles and tori. It is through that observation that the group is furthering the methods used to study neural representations (by assembling them from parts). As a worked example, Ben discussed how they used these ideas to find doughnuts in rat pups and why that suggests that the Little Prince would be lost all the time.
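
For readers curious about how circles and tori can be detected in population activity at all, the following is a minimal sketch using persistent homology (via the ripser package) on synthetic data lying on a noisy circle; the group's actual pipeline for assembling representations from parts is considerably more involved.

    import numpy as np
    from ripser import ripser  # pip install ripser

    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, 300)       # latent circular variable
    X = np.c_[np.cos(theta), np.sin(theta)]      # population activity on a ring
    X += 0.1 * rng.standard_normal(X.shape)      # measurement noise

    dgms = ripser(X)["dgms"]                     # persistence diagrams (H0, H1)
    h1 = dgms[1]                                 # 1-dimensional features: loops
    lifetimes = h1[:, 1] - h1[:, 0]
    print("most persistent loop lifetime:", lifetimes.max())
    # One long-lived loop signals an underlying circle; a torus (the grid-cell
    # "doughnut") would show two prominent H1 loops plus an H2 void (maxdim=2).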


Speaker Topic

Pål Haugen, product developer at Furuno Norway and researcher in the DeepInMotion project at NTNU

 


Pål has a varied background, spanning service in the Norwegian army, an MSc in biology (parasitology), and a PhD in medical research. He has started his own business, worked in a software company, and is currently working at Furuno Norway and NTNU in Trondheim. Pål is a genuine geek with a love for the weird sides of life, running a podcast on parasites and playing drums to black metal.

 


Life after PhD, Failtastic!


 


Pål presented his lessons learned from going from a PhD in statistics and medical research to creating his own start-up: his view on the "Big Data" and "AI" hype in the industry, working in a software company as a data scientist, and how to cope with the lone-wolf syndrome in small and medium-sized businesses.


Speaker Topic

Odd Erik Gundersen, Chief AI Officer at Aneo AS and Associate Professor, the Department of Computer Science, NTNU

 


Odd Erik Gundersen is the Chief AI Officer at Aneo AS and an associate professor at the Department of Computer Science at NTNU. He has applied AI in industry since 2006, mostly in start-ups, and lately in renewable energy. Odd Erik dreams about reproducibility, and sometimes he publishes papers on the topic as well.

 


Nothing to see here; is reproducibility even relevant for computer science?


 


The focus of the scientific community at large has shifted towards large language models (LLMs), especially the most competent ones, such as OpenAI's ChatGPT and Google's Gemini. This is not surprising, as LLMs represent another paradigm shift in AI. Apparently, they can unlock huge value and increase productivity in many different domains. However, most research based on commercial LLMs is probably not reproducible. Where does this get us? Is this just a problem for systems operated by commercial entities, or are there deeper problems with computer science? In this talk, we will investigate reproducibility issues in computer science with examples from AI.


Speaker Topic

Keith Downing, Professor, Department of Computer Science, NTNU

 


Keith Downing is a professor of Artificial Intelligence (AI) and Artificial Life (Alife) at NTNU, with a strong interest in the connections between life and intelligence in both natural and artificial systems. His 40 years of research in these fields has culminated in two books with MIT Press: Intelligence Emerging (2015), and Gradient Expectations (2023). Most of Keith's work involves evolutionary computation and artificial neural networks, and he teaches courses in AI Programming and Deep Learning at NTNU. He enjoys lecturing on AI and ALife to diverse audiences throughout Norway.

 


The Ascent of AI: From Toys to Tools to Terror


 


Since its inception in the 1950s, the field of Artificial Intelligence has experienced a roller-coaster ride of popularity, hype, and disillusionment, owing to researchers who promise too much and a media willing to inflate those claims even more. Unfortunately, not long after AI finally advanced from the "toys" stage to that of legitimate and widely applicable "tools", the adaptability and unpredictability of some of the more potent techniques caused great concern among AI researchers, journalists, and a wide range of scientists and engineers, including some of the world's most famous and brilliant minds. This talk covers a brief history of AI and several of its recent successes before delving into some of the long- and short-term dangers, none of which can be fully debunked, but some of which do stretch the imagination.


Speaker Topic

Jun Wang, Professor, Department of Computer Science and School of Data Science, City University of Hong Kong

 


Jun Wang is the Chair Professor of Computational Intelligence in the Department of Computer Science and School of Data Science at City University of Hong Kong. Prior to this position, he held various academic positions at Dalian University of Technology, Case Western Reserve University, University of North Dakota, and the Chinese University of Hong Kong. He also held various short-term visiting positions at USAF Armstrong Laboratory, RIKEN Brain Science Institute, Huazhong University of Science and Technology, Shanghai Jiao Tong University, and Swinburne University of Technology. He received a B.S. degree in electrical engineering and an M.S. degree from Dalian University of Technology and his Ph.D. degree from Case Western Reserve University. He was the Editor-in-Chief of the IEEE Transactions on Cybernetics. He is an IEEE Life Fellow, IAPR Fellow, and a foreign member of Academia Europaea. He is a recipient of the APNNA Outstanding Achievement Award, IEEE CIS Neural Networks Pioneer Award, CAAI Wu Wenjun AI Achievement Award, and IEEE SMCS Norbert Wiener Award, among other distinctions.

 


Advances in Neurodynamic Optimization


 


The past four decades witnessed the birth and growth of neurodynamic optimization, which has emerged as a potentially powerful problem-solving tool for constrained optimization due to its inherent biological plausibility and parallel, distributed information processing. Despite this success, until a few years ago almost all existing neurodynamic approaches worked well only for optimization problems with convex or generalized-convex functions; effective neurodynamic approaches to optimization problems with nonconvex functions and discrete variables were rarely available. In this talk, advances in neurodynamic optimization will be presented. Specifically, in the proposed collaborative neurodynamic optimization framework, multiple neurodynamic optimization models with different initial states are employed for scattered searches. In addition, a meta-heuristic rule from swarm intelligence (such as PSO) is used to reposition neuronal states upon their local convergence, to escape local minima toward global optima. Experimental results will be elaborated to substantiate the efficacy of several specific paradigms in this framework for supervised/semi-supervised feature selection, supervised learning, vehicle-task assignment, financial portfolio selection, and energy load dispatching.
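
As a rough illustration of the framework described above, the sketch below runs several gradient-flow "neurodynamic models" from scattered initial states on a nonconvex function and applies a PSO-style rule to reposition them after local convergence. It is a toy stand-in under simplifying assumptions, not the speaker's exact formulation.

    import numpy as np

    rng = np.random.default_rng(1)

    def f(x):   # nonconvex Rastrigin function with many local minima
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def grad(x):
        return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

    n_models, dim = 10, 2
    X = rng.uniform(-5, 5, (n_models, dim))   # scattered initial states
    pbest = X.copy()                          # personal best of each model
    gbest = min(X, key=f).copy()              # global best across models

    for _ in range(30):
        # 1) run each neurodynamic model (a gradient flow) to local convergence
        for i in range(n_models):
            x = X[i].copy()
            for _ in range(1000):
                x = x - 0.002 * grad(x)
            X[i] = x
            if f(x) < f(pbest[i]):
                pbest[i] = x.copy()
            if f(x) < f(gbest):
                gbest = x.copy()
        # 2) PSO-style repositioning toward personal and global bests
        r1 = rng.random((n_models, dim))
        r2 = rng.random((n_models, dim))
        X = X + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)

    print("best value found:", f(gbest), "at", gbest)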


Speaker Topic

Lemei Zhang & Peng Liu, Postdoctoral Fellows, Department of Computer Science, NTNU

 


Lemei Zhang is a postdoctoral fellow at the Norwegian Research Center for AI Innovation (NorwAI) at NTNU, Norway. Her research topics include natural language processing, recommender systems, and user modeling. Specifically, she is focusing on training a Norwegian Generative Model for news summarization. She is also working on applying data mining and machine learning techniques to design effective algorithms that enhance the performance of recommender systems in various domains, such as e-commerce and social networks.

 

Peng Liu works as a postdoctoral fellow at the Norwegian Research Center for AI Innovation (NorwAI) at NTNU. His research focuses on natural language processing and recommender systems. He is the technical lead for the NorGLM (Norwegian Generative Language Modelling) project. His primary interests lie in large-scale language model training and its applications, especially news summarization. He is also working on sentiment analysis, topic modeling, lexical semantics, and recommendation algorithms based on data streams and multimodal contexts such as text, images, and so forth.


Empowering News Summarization with Pre-trained Language Models


 


Given the advancements and widespread adoption of ChatGPT and other variants of large language models (LLMs), we are witnessing significant progress in natural language understanding and generation. These LLMs have revolutionized the way we interact with AI systems and have found applications in various domains. In this presentation, we explore the development of the Norwegian Generative Language Model (NorGLM) on the news summarization task. In particular, we will discuss the underlying architecture, training methodologies, and the data employed in training. Furthermore, we uncover the limitations of our model, paving the way for future research endeavors.
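
A generic pattern for abstractive summarization with a pre-trained generative LM, in the spirit of the work described, might look as follows using the Hugging Face transformers library. The model name is a placeholder assumption; NorGLM checkpoints and prompting details are not specified here.

    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "your-org/norwegian-gpt"  # placeholder; substitute a real checkpoint
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    article = "..."                        # a Norwegian news article
    prompt = article + "\nTL;DR:"          # summarization cast as text continuation

    inputs = tok(prompt, return_tensors="pt", truncation=True, max_length=1024)
    out = model.generate(**inputs, max_new_tokens=80, no_repeat_ngram_size=3)
    summary = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                         skip_special_tokens=True)
    print(summary)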

Speaker Topic

Gavin Taylor, Professor, Computer Science Department, US Naval Academy

 

Gavin Taylor is a Professor of Computer Science at the US Naval Academy.  He graduated with his PhD in Computer Science from Duke University in 2011, and has done research in a wide variety of topics in ML, including reinforcement learning, data poisoning, high performance computing, and neural network optimization.  He has won numerous teaching awards, and is the chair of the Naval Academy’s effort to build an undergraduate major in Data Science.

 

Opting out of facial recognition

 

Despite the great empirical success of deep neural networks, they remain vulnerable to focused attacks at training time (data poisoning) and at testing time (adversarial attacks). While this is unfortunate for people who want to make robust ML models, it provides an opportunity for those who would prefer to opt out of the constant surveillance that large-scale ML enables. This talk will discuss LowKey, a technical approach to manipulating images so that the image appears very similar to human eyes while being useless for facial recognition algorithms.
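
To give a flavor of how such image manipulation can work, here is a sketch of a PGD-style perturbation that pushes an image's embedding away from its original value while an L-infinity bound keeps the change nearly invisible. The ResNet feature extractor stands in for a real face recognition model; this is not the actual LowKey method.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    extractor = models.resnet18(weights=None)   # stand-in for a face recognizer
    extractor.fc = torch.nn.Identity()          # penultimate features as embedding
    extractor.eval()

    img = torch.rand(1, 3, 224, 224)            # stand-in for a face photo
    with torch.no_grad():
        target = extractor(img)                 # embedding to move away from

    delta = torch.zeros_like(img, requires_grad=True)
    eps, step = 8 / 255, 2 / 255                # perceptibility budget, step size
    for _ in range(20):                         # PGD-style iterations
        loss = -F.cosine_similarity(extractor(img + delta), target).mean()
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # push embedding away from target
            delta.clamp_(-eps, eps)             # keep the change nearly invisible
            delta.grad.zero_()

    cloaked = (img + delta).detach().clamp(0, 1)
    print("max pixel change:", delta.abs().max().item())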

Speaker Topic

Steffen Mæland, associate professor at the Western Norway University of Applied Sciences

 

Steffen Mæland is an associate professor at the Western Norway University of Applied Sciences and works in the ATLAS experiment at CERN. For his PhD he studied the properties of the Higgs boson, which proved to be a difficult matter, ushering him towards using machine learning to improve the prospects of the analysis. After discovering that proper application of machine learning methods is in fact difficult too, he continued researching the statistical interpretation of machine learning applied to different types of data, before recently returning to the field of particle physics.

 

Small particles, big data: Machine learning at the Large Hadron Collider

 

Exploring the fundamental laws of the universe involves a bottle of hydrogen, the world's biggest machine, and some 300 petabytes of data. At CERN, the international research collaboration of which Norway has been a member since its foundation, several experiments are looking for extremely rare events in huge amounts of data. Out of both interest and necessity, machine learning is used at practically all levels of the experiments, from data selection, reconstruction, and compression to analysis. This talk gives an overview of the challenges involved in the experiments at CERN and how modern machine learning methods are used to tackle them.

Speaker Topic

Professor Marija Slavkovik (Faculty for Social Sciences of the University of Bergen)

 

Marija Slavkovik is a Professor with the Faculty for Social Sciences of the University of Bergen. She is a computer scientist who does research in AI. She publishes in collective reasoning and decision making, specifically in the sub-area of multi-agent systems. She has been doing research in machine ethics since 2012. Machine ethics studies how moral reasoning can be automated. Marija works on formalising ethical collective decision-making. She has held several seminars, tutorials and graduate courses on AI ethics (http://slavkovik.com/teaching.html). Marija is interested in the phenomenon of autonomous systems increasingly becoming moral arbitrators by virtue of the dissipation of the machine-society segregation. Automation, particularly of cognition, is not always possible without automating aspects of ethical reasoning or values. The problem then is whose moral values should have standing, what moral opinions and values should be elicited, how that should be done, and what is the right way to aggregate these "measurements"?

 

 

The AI Privacy Problem

 

Privacy is often brought up as a value that AI should uphold or as a risk created by AI. How can we know if an algorithm or a computational system is aligned with privacy? Can we measure how privacy-preserving a particular model is, for instance? To be able to do such things, we need to understand what kind of epistemic citizen privacy is. Here is where the problems start. Beyond differential privacy, we do not have a systematic analysis of privacy. In this talk I will discuss the various aspects of privacy that are relevant for AI and the challenges in transforming calls for respect for privacy into properties of algorithms that we can mathematically verify. There will be no answers, just questions which we need to pay attention to in our work with AI.

Speaker Topic

Professor Lars Ailo Bongo (Department of Computer Science, UiT)

 

Lars Ailo Bongo is a professor at the Department of Computer Science at UiT – The Arctic University of Norway, in Tromsø. He is the principal investigator of the Health Data Lab at UiT, where they provide the systems, methods, and tools needed to analyze and interpret complex health datasets. They combine experimental computer science with real problems, applications, and data obtained from their biomedical research collaborators. Bongo is also a co-founder of 3StepBio, which does cloud-based bioinformatics analyses. Furthermore, he is a co-founder of Medsensio, which uses deep learning to understand lung sounds.

 

 

Medical machine learning: from basic research to startups

 

In this talk Bongo will present lessons learned from research and innovation projects in which his group has both succeeded and failed to develop machine learning solutions for several medical domains. Their projects cover basic research, applied research, clinical innovation projects, and startups. In the Norwegian Women and Cancer project, they developed machine learning methods for metastasis prediction using a small but unique dataset of gene expression data. In SFI Visual Intelligence and the Consortium for Patient-centered AI at UiT, they are developing computational pathology methods for cancer prognosis and prediction on very large whole-slide images. In collaboration with BreastScreen Norway, they are setting up a project for validating commercial AI solutions for mammography analyses. Finally, Bongo will present their research on machine learning for abnormal lung and heart sounds that resulted in the Medsensio startup.

Speaker Topic

Associate Professor Rosina Weber (Drexel University, USA)

 

Rosina Weber is an Associate Professor of Information Science in the College of Computing and Informatics at Drexel University, USA. Her lab investigates explainable artificial intelligence (XAI) methods through the lens of use-inspired research. She has co-chaired four international workshops in the last four years, i.e., at IJCAI-19, -20, -22, and AAAI-21. Weber has over 100 peer-reviewed papers in case-based reasoning and textual methods, including a widely adopted textbook. Currently, she is responsible for one of the reasoning agents for the NIH-NCATS Biomedical Data Translator and investigates XAI methods in small-data classifiers for the DARPA point-of-care ultrasound (POCUS-AI) program. Her current visit to Europe is part of a collaboration with Mälardalens University, funded by the Swedish Vinnova foundation, where she is the international XAI expert advising on how to equip AI applications with explanatory capabilities.

 

 

XAI: Roadblocks, Trends, and Directions

 

The sub-field of artificial intelligence (AI) known as explainable AI (XAI) faces several roadblocks that prevent its advancement. This talk introduces and contextualizes roadblocks stemming from a lack of consensus on foundational term definitions, biased motivations, inconsistent evaluations, and dependencies on multiple disciplines. The talk also presents the latest trends, such as personalized XAI and the re-orientation of XAI methods to improve performance. These trends, however, steer away from resolving said roadblocks. These considerations lead to new directions that help alleviate existing roadblocks and promote the resilience of the field.

 


Speaker Topic

Professor Nirmalie Wiratunga (NTNU)

 

Nirmalie Wiratunga is a Professor with 20+ years of research experience in the field of Artificial Intelligence (AI). She has a 20% adjunct professor role at NTNU (Norwegian University of Science and Technology) in their initiative on women in AI. She is best known for her work on textual CBR, in particular the use of ontologies and language modelling to generate semantic representations for case retrieval. In her recent work with the selfBACK (H2020) project, she leads the work package on user adherence monitoring, and her team has developed deep metric learning to reason with sensor data to deliver interactive motivational content for users in the self-management of chronic diseases. She was the keynote speaker for ICCBR'20, co-chaired ICCBR'12, and has been a senior PC member for IJCAI, AAAI, and ICCBR for 5+ years.

 

 

The iSee project: Building the AI you trust

 

 

Speaker Topic

Professor Mikko Kurimo (Aalto University, Finland)

 

Mikko Kurimo is a professor in speech and language processing at Aalto University, Finland. He has led Aalto's speech recognition research group since 2000, as well as several national and international research projects. He has published over 200 scientific articles, and his most cited works are in ASR and language modeling for morphologically complex languages such as Finnish, Estonian, Hungarian, and Arabic.

 

 

Machine learning in ASR at Aalto University

 

I will introduce recent results from our current research projects in automatic speech recognition (ASR). These include the collection, analysis, and benchmarking of the new open-access Finnish conversational speech data (Donate Speech and Parliament Sessions) and the assessment of speaking skills in Finnish and Swedish on new data collected from language learners. The methods include neural subword language models, self-supervised and end-to-end models, and curriculum learning.

Speaker Topic

Professor Keith L. Downing (NTNU)

 

Keith L. Downing is a professor of Artificial Intelligence (AI) in the Department of Computer Science (IDI) at NTNU.  His main research interests are in the "Sciences of the Artificial": AI and Artificial Life (ALife), with his doctorate (1990) firmly embedded in the former but the decades that followed inspired equally by both.  The most popular methods in his own toolkit are Evolutionary Computation and Neural Networks (from both Connectionist Deep Learning and more biologically-based Computational Neuroscience).  His work at the crossroads of AI, ALife and Neuroscience culminated in the book "Intelligence Emerging" (2015, MIT Press), which has motivated his more recent investigations into the role of neural systems, both natural and artificial, as predictive machines.

 

 

Stupidity Emerging

 

Are you aware of the constant presence of Artificial Intelligence (AI) in your life, and do you appreciate the advice, recommendations and hints that it provides?  Most of us are, and do.  But have you considered the effects that this bombardment of behavioral "nudges" might be having on your cognitive capabilities, your problem-solving skills, your capacity for "deep thought", and, in general, your intelligence?

In this lecture, based on the writings of several prominent technology experts, I will elaborate the view that the vast majority of AI-generated, online stimuli are designed to appeal to the more primitive (reptilian) areas of our brains, often referred to as "System 1", where quick pattern recognition and reactive behavior dominate (often with an emotional accelerant), while the deep, rational thought of "System 2" (the sapien brain) rarely gets involved, to the great benefit of the businesses buying ads on the platforms that employ these AI oracles. When suggestions stem from choices made by other people "like you", the population of users trends toward homogeneous clusters, which, again, are a thing of beauty for businesses who want your money and strongly prefer that you "just do it" instead of "thinking it through".

Designed to be provocative, this talk will emphasize these potential problems that directly relate to the daily activities of many of us, as AI researchers, consumers and members of society.  I will then suggest a few solutions to the problem, but I hope to get many more (and better) non-AI-generated suggestions from the audience, as part of our discussion.

Speaker Topic

Lisa Reutter (NTNU)

 

Lisa Reutter is a PhD fellow at the Department of Sociology and Political Science at the Norwegian University of Science and Technology. She researches the datafication of public administration, with a special focus on how data-driven technology is produced and alters the welfare state. Her research is located at the intersection of public administration, sociology, and science and technology studies.

 

 

The Norwegian Data Rush: A Brief Introduction to the Datafication of Public Administration

 

The idea of data-driven public administration consists of two interwoven processes: the use of more and different data (big government data) and the recirculation of data through ever more complex technologies (AI, machine learning). Interest in the socio-technical imaginary of data-driven public administration is growing globally, and governments translate and mediate this vision in a variety of ways. Norway is a particularly interesting case to research in this context. There is a widespread trust in government which 'intersects with a popular belief that technological progress is inevitable, apolitical, and problem free' (Sandvik 2020:2). This technology optimism, coupled with the enormous amount of detailed data on citizens collected by the welfare state over decades, has recently culminated in a massive push for datafication, in both policy and practice.

 

The presentation will provide you with a unique empirical insight into the inner workings of public administration datafication in Norway. It draws on fieldwork in the Norwegian Tax Administration and the Norwegian Labor and Welfare Administration and a general mapping of datafication efforts across the public sector. 

 

This research project aims to unpack the social, political, and material aspects of datafication in Norway and directs its analytical attention away from the hype of big data and artificial intelligence and towards mundane work processes, people and organizations that are bringing this process into being. Turns out, AI doesn’t solve everything.

Speaker Topic

Simen Eide, FINN.no

 

Simen Eide is an industrial PhD candidate at the University of Oslo and FINN.no, focusing on recommender systems, decision making, and model uncertainty. Prior to the PhD, he worked as a practitioner building recommender systems and other machine learning products for the FINN.no marketplace. He has a master's in mathematics with a focus on mathematical finance and has also worked on building portfolio risk systems for the Oslo and Swiss exchanges.

 

 

Recommender systems, bandits and Bayesian neural networks

 

Internet platforms consist of millions or billions of different items that users can consume. To help users navigate in this landscape, recommender systems have become an important component in many platforms.

 

The aim of a recommender system is to suggest the most relevant content on the platform to the user, based on the user's previous interactions with the platform.

 

A model used in a recommender system faces multiple sources of uncertainty: there are limited interactions per user; the signals a user gives may be noisy and may not always reflect her preferences; and newly introduced items come with few signals as well. This talk will focus on model uncertainty and decision making in the recommender systems domain, with a focus on the Norwegian marketplace FINN.no. We will discuss various ways to quantify, reduce, and exploit these uncertainties through the use of Bayesian neural networks, hierarchical priors, and different recommender strategies.
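
The decision-making side of this can be illustrated with the simplest possible uncertainty-aware recommender: Thompson sampling with a Beta-Bernoulli model per item. The speaker's work uses Bayesian neural networks and hierarchical priors; this toy keeps only the idea of sampling from a posterior to balance exploration and exploitation, and the click rates below are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    true_ctr = np.array([0.02, 0.05, 0.10])   # unknown click rates of three items
    a = np.ones(3)                            # Beta posterior parameters
    b = np.ones(3)

    clicks = 0
    for _ in range(5000):
        theta = rng.beta(a, b)                # sample a belief about each item
        item = int(np.argmax(theta))          # recommend the best sampled item
        reward = float(rng.random() < true_ctr[item])  # did the user click?
        a[item] += reward                     # conjugate posterior update
        b[item] += 1.0 - reward
        clicks += reward

    print("clicks:", int(clicks), "posterior mean CTRs:", a / (a + b))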

Speaker Topic

Professor Mark Keane (University College of Dublin)

 

 

Augmenting the Weather: Using counterfactuals to deal with dataset drift caused by climate change

 

In recent years, counterfactuals have become very popular for explaining the predictions of black-box AI systems. For example, if you are refused a loan by an AI and ask "why", a counterfactual explanation might tell you, "well, if you had asked for a smaller loan, then you would have been granted it". These counterfactuals are generated by methods that perturb the feature values of the original query (e.g., we perturb the value of the loan) and are typically synthetic data-points that did not originally occur in the dataset.

This aspect of counterfactual methods prompted us to consider whether they might also work for data augmentation; that is, the supplementation of a dataset with generated (rather than actual) data-points. Data augmentation is important to Deep Learning models (where there may be a scarcity of data) and to prediction problems where there is data-drift or concept-drift (because the data is changing over time). A classic case of the latter is climate change. As our climate changes, past data on the weather is drifting towards more (and often more extreme) climate-disrupted events. 

If we are to predict phenomena using climate data, we need to be able to track these changes. We report work we have done using counterfactual methods to augment data to improve prediction in the face of such drifting. We also show that this method seems to generalise to imbalanced datasets and does better than very popular data augmentation methods (such as SMOTE). (Joint work with Mohammed Temraz & Barry Smyth)
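
A minimal sketch of counterfactual-style augmentation for a scarce class might look like the following: each minority point is perturbed part of the way toward its nearest majority-class neighbor, and the interpolated points are added as synthetic minority examples. This is an illustration of the general idea under invented data, not the authors' exact algorithm.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    majority = rng.normal(0.0, 1.0, (200, 2))   # plentiful class
    minority = rng.normal(3.0, 1.0, (20, 2))    # scarce class (e.g. rare weather)

    # for each minority point, find its nearest "guide" in the majority class
    nn = NearestNeighbors(n_neighbors=1).fit(majority)
    _, idx = nn.kneighbors(minority)
    guides = majority[idx[:, 0]]

    # interpolate part of the way toward the guide to create boundary-region
    # synthetic minority examples
    lam = rng.uniform(0.2, 0.6, (len(minority), 1))
    synthetic = minority + lam * (guides - minority)
    augmented = np.vstack([minority, synthetic])
    print("minority size after augmentation:", len(augmented))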

Speaker Topic
  • Anastasios Lekkas (Associate Professor)
  • Inga Strümke (Postdoctoral fellow)
  • Vilde Gjærum  (PhD student)
  • Sindre Remman (PhD student)

Explainable AI

 

The EXAIGON project (2020-2024) delivers research and competence building on Explainable AI, including algorithm design and human-machine co-behaviour, to meet society's and industry's standards for deployment of trustworthy AI systems in social environments and business-critical applications.

Speaker Topic

Jurica Šprem, GE Healthcare

 

Jurica Šprem has a Bachelor's degree in Computing from the University of Zagreb and a Master's degree in ICT with a focus on signal processing, also from the University of Zagreb.

In his PhD, he worked on enhanced cardiovascular risk prediction with machine learning at the Image Science Institute, UMC Utrecht. After obtaining his PhD in 2019, Jurica joined GE Healthcare in Oslo as AI Tech Lead, focusing on combining AI with cardiac ultrasound. He is currently a Product Owner of AI within cardiac ultrasound, where he continues to pursue his interest in combining machine learning and AI with medical imaging.

Making Healthcare More Human With AI

 

We have experienced significant changes in our everyday life in the past years. We see new technologies emerge almost daily, surrounding us with different tools and solutions aimed at improving our lifestyle. Artificial intelligence (AI) is one such technology, and it has become entangled in almost all aspects of our daily lives.

But can AI help healthcare professionals do their jobs the way they always wanted to by providing them with time and tools to focus on what matters and build a more efficient and intelligent ecosystem for patient care? Here we discuss problems and solutions AI brings within healthcare, how the cardiac ultrasound division at GE has adopted AI, and how AI is making health care more human.

Speaker Topic

Armin Catovic, Schibsted

 

Armin Catovic graduated in 2007 with a double degree in Computer Science and Telecommunications Engineering from Swinburne University, Melbourne, Australia. He worked for a number of startups from 2005 before joining Ericsson in 2008. He spent the next 13 years at Ericsson working in various roles, including radio engineer, system tester, software engineer, and machine learning engineer, and in various countries: Australia, Indonesia, Bangladesh, Singapore, the US, and Sweden. He has been working as a data scientist at Schibsted since the beginning of 2021, focusing on natural language processing. He currently lives in Stockholm with his wife and two kids.

Machine Learning in Contextual Advertising

 

Contextual advertising is a form of targeted advertising where the content of an ad segment is directly correlated with the content of a news article or a web page. In this presentation we walk through our use of unsupervised topic models to optimally map demand to our inventory. We discuss caveats and challenges when working with unsupervised models in production. We also look into future work combining machine learning and contextual advertising.
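
As a sketch of the unsupervised-topic-model idea, the following fits LDA on a toy corpus and serves an ad segment on articles whose dominant topic matches the segment. The corpus and the segment mapping are invented for illustration.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    articles = [
        "electric cars charging battery range road",
        "stock market shares interest rates bank",
        "football match goal league players coach",
        "hybrid vehicles emissions fuel engine cars",
    ]
    X = CountVectorizer().fit_transform(articles)

    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
    doc_topics = lda.transform(X)                # topic mixture per article

    # an ad segment is mapped to a topic; serve it where that topic dominates
    ad_topic = int(doc_topics[0].argmax())       # assume a motoring ad segment
    eligible = [i for i, d in enumerate(doc_topics)
                if int(d.argmax()) == ad_topic]
    print("serve the motoring segment on articles:", eligible)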

Speaker Topic

Sara Malacarne, Telenor & Massimiliano Ruocco, SINTEF & NTNU

 

Sara Malacarne is a Research Scientist in the Analytics and AI team at Telenor Research. Sara has a PhD in pure mathematics. Since joining Telenor Research in 2018, she has been developing an interest in AI/DL methods for solving tasks in the time-series domain for 4G/5G telco data. She is a collaborator in the ML4ITS "Machine Learning for Irregular Time Series" project led by Massimiliano Ruocco (PI).
 
Massimiliano Ruocco is a Senior Researcher at SINTEF Digital and an Adjunct Associate Professor at the Department of Computer Science (IDI) at NTNU.

Generative Adversarial Networks for Anomaly Detection on Telco multivariate time series

 

Anomaly detection is the process of identifying interesting events that deviate from the data’s “normal” behaviour and has many important applications to real case scenarios. In the telecommunications domain, efficient and accurate anomaly detection is vital to be able to continuously monitor the network infrastructure key performance indicators (KPIs) and alert for possible incidents in time.
 
Network KPIs are in the form of multivariate time series which, for cost reasons, are not labelled. The main challenges for performing anomaly detection on network data are the following: 1) it is an unsupervised learning problem, 2) temporal and feature-wise correlations have to be exploited in order to reduce false positives, 3) anomalies are not necessarily rare events in the data, and 4) the data is high-dimensional.
 
This work is a first attempt to simultaneously address the first three challenges listed above, with the use of a novel Generative Adversarial Network (GAN) called RegGAN. GANs in the literature, such as MAD-GAN, BeatGAN, and TadGAN, have serious drawbacks on highly contaminated data, that is, data with frequent abnormal events. RegGAN was specifically built to overcome this issue, and it has proven robust in contamination experiments performed on open benchmark datasets.
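
The scoring side of GAN-based anomaly detection can be sketched as follows: a window is scored by how poorly a trained generator can reconstruct it and by how "unreal" the discriminator finds it. G and D below are assumed already trained; RegGAN's contamination-robust training itself is not reproduced here.

    import torch
    import torch.nn as nn

    window, n_features, latent = 32, 8, 16

    G = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                      nn.Linear(64, window * n_features))      # assumed trained
    D = nn.Sequential(nn.Linear(window * n_features, 64), nn.ReLU(),
                      nn.Linear(64, 1), nn.Sigmoid())          # assumed trained

    def anomaly_score(x_window, steps=200, lam=0.5):
        """x_window: (window, n_features) tensor of KPI measurements."""
        x = x_window.reshape(1, -1)
        z = torch.zeros(1, latent, requires_grad=True)
        opt = torch.optim.Adam([z], lr=0.05)
        for _ in range(steps):              # invert G: find the best latent code
            opt.zero_grad()
            ((G(z) - x) ** 2).mean().backward()
            opt.step()
        recon_err = ((G(z) - x) ** 2).mean()  # high when G cannot reproduce x
        realness = D(x)                       # low when x looks unrealistic
        return (lam * recon_err + (1 - lam) * (1 - realness)).item()

    print("anomaly score:", anomaly_score(torch.randn(window, n_features)))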

Speaker Topic

Ole Jakob Mengshoel, NTNU & Ritchie Lee, NASA Ames Research Center

 

Dr. Ole Jakob Mengshoel is a Professor in Artificial Intelligence at the Department of Computer Science (IDI) at the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. At NTNU, he is affiliated with the Norwegian Open AI Lab. Dr. Mengshoel has published over 100 articles and papers in peer-reviewed journals and conferences, and holds 4 U.S. patents. He holds a Ph.D. in Computer Science from the University of Illinois, Urbana-Champaign. His undergraduate degree is in Computer Science from the Norwegian Institute of Technology, Trondheim (now NTNU).

Ritchie Lee is a research scientist in the Robust Software Engineering (RSE) group at NASA Ames Research Center.
His research interests are in safety validation and testing, decision-making systems, machine learning, and controls.  Of particular interest is developing algorithmic tools for the design and analysis of safety-critical cyber-physical systems including aircraft collision avoidance systems, air traffic automation, autonomous ground vehicles, and unmanned aerial systems (UASs). Ritchie holds a Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University, an M.S. degree in Aeronautics and Astronautics Engineering from Stanford University, and a B.S. degree in Electrical Engineering from University of Waterloo.

Developing Complex but Trustworthy Computing Systems with Artificial Intelligence

 

This talk centers on the development of complex and trustworthy computing systems with artificial intelligence (AI).  Clearly, one can focus on or mean different things when discussing "AI and trustworthiness."  A first possible meaning is the development of trustworthy complex computing systems, systems that use AI, likely along with other computational methods.  Here, the development method itself may or may not use AI. A second meaning is to use AI methods to develop trustworthy complex computing systems, systems that may or may not contain or use AI themselves. 

In this case, the use of AI during development of a complex engineered system is front and center. In this talk we discuss both types of approaches. A particular focus is on a method called adaptive stress testing (AST), which falls in the second category mentioned above. Using simulations, AST finds likely failure events of complex aerospace systems by means of reinforcement learning and other AI techniques.  The AST method has, for example, been used to validate next-generation aircraft anti-collision systems.  

The development and validation of complex engineered systems with AI and different degrees of autonomy, up to fully autonomous systems, are key to developing trustworthy and human-centric AI. As these complex engineered systems proliferate and interact with each other and with humans, we consider trustworthiness to be an essential topic in both fundamental and applied AI.
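
A toy illustration of the stress-testing idea: search over a simulator's disturbance sequences for the most likely sequence that drives the system into a failure state. Real AST uses reinforcement learning or Monte Carlo tree search over the simulator; here a plain random search and a one-dimensional system stand in, both invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(disturbances):
        """Toy 1-D system; 'failure' means the state exceeds a safety limit."""
        x, log_likelihood = 0.0, 0.0
        for d in disturbances:
            x = 0.9 * x + d
            log_likelihood += -0.5 * d**2    # Gaussian disturbance log-density
            if x > 3.0:
                return True, log_likelihood  # failure reached
        return False, log_likelihood

    best_ll, best_seq = -np.inf, None
    for _ in range(20000):
        d = rng.normal(0.0, 1.0, 50)         # a candidate disturbance sequence
        failed, ll = simulate(d)
        if failed and ll > best_ll:          # keep the most plausible failure
            best_ll, best_seq = ll, d

    print("most likely failure found, log-likelihood:", best_ll)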

Speaker Topic

Stein H. Danielsen

 

Stein H. Danielsen is Co-founder and Chief Solutions Officer at Cognite.

Towards smart and autonomous industry

 

The NorwAI Innovate conference took place in Trondheim on October 20-21. The conference brought together AI enthusiasts to present great examples of innovation research and industrial excellence within AI. In addition to the conference, Cognite and NTNU hosted a hackathon for students as a side program to the main event.

One week after the conference and hackathon days, we're happy to invite you to an open webinar on Friday, October 29th. In this webinar, you'll learn about the hackathon challenge, and the winning team will pitch their solution. After that, Stein H. Danielsen from Cognite will give a talk about Cognite's mission to work towards smart and autonomous industry:

When we envision technology in the future we often think about robots - and their ability to solve complex tasks that would normally only be possible by humans. For industrial companies, robots are an essential part of their digitalization efforts, as their core business revolves around physical assets. To avoid repetitive, boring, and dangerous tasks, it is necessary for computer systems to interact with the real world. 

Stein Danielsen has always been passionate about robots and is now co-founder and CSO of Cognite. He will tell us what robots can already do today and show how robots are being put to use at Cognite. Furthermore, Stein will discuss how Cognite develops human-like understanding of the industrial reality for robots, and how we can exploit robots' superhuman capabilities. Finally, Stein will tell us where he thinks we are heading in the future.

Speaker Topic

Martin Tveten

 

Martin Tveten is a research scientist at the Norwegian Computing Center and a former PhD student at the Department of Mathematics, University of Oslo, specialising in methods and algorithms for change and anomaly detection. His applied interests currently include real-time monitoring of IT and industrial systems.

Introduction to change detection

 

In this seminar, I will introduce some basic ideas underlying statistical change detection methods. Such methods are important for answering questions of the form "has some statistical property of the data changed over time?" and, if so, "when did the change(s) occur?". An important AI-related application of change detection methods is anomaly detection in streaming data, for example from sensor networks.
 
Both the offline and online versions of the change detection problem will be considered. In the offline problem, the aim is to retrospectively estimate the points in time where some statistical properties of a time series change. In the online problem, streaming data is processed in real time with the aim of detecting a change as quickly as possible. A few simple real and simulated data examples will guide the presentation throughout, as the focus is on giving an intuition for the general methodology. I will also briefly present my own research.
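
For the online setting, a classic starting point is the CUSUM statistic for detecting an upward shift in the mean, sketched below on simulated data; the threshold and allowance values are tuning assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    stream = np.concatenate([rng.normal(0.0, 1.0, 500),    # in-control data
                             rng.normal(1.5, 1.0, 500)])   # mean shifts at t = 500

    h, k = 5.0, 0.5   # detection threshold and allowance (about half the shift)
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + x - k)   # one-sided CUSUM for an upward mean shift
        if s > h:
            print("change detected at t =", t)
            break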

Trustworthy Complex and Intelligent Systems Webinar Series

This series is a collaboration between the European Safety, Reliability & Data Association (ESReDA), the ETH Zürich Chair of Intelligent Maintenance Systems, the ETH Risk Center, ETH Zürich-SUSTech Institute of Risk Analysis, Prediction and Management (Risks-X), the Norwegian Research Center for AI Innovation (NorwAI) and DNV.

Webinars explore the themes of trust, ethics and applications of AI and novel technology in complex and safety critical intelligent systems.

Previous webinars in the Trustworthy Complex and Intelligent Systems Webinar Series

 

Speaker Topic

Maziar Raissi, University of Colorado Boulder

 

Maziar Raissi is currently an Assistant Professor of Applied Mathematics at the University of Colorado Boulder. Dr. Raissi received a Ph.D. in Applied Mathematics & Statistics, and Scientific Computations from the University of Maryland, College Park. He then moved to Brown University to carry out postdoctoral research in the Division of Applied Mathematics. Dr. Raissi worked at NVIDIA in Silicon Valley for a little more than one year as a Senior Software Engineer before moving to Boulder. His expertise lies at the intersection of Probabilistic Machine Learning, Deep Learning, and Data-Driven Scientific Computing. In particular, he has been actively involved in the design of learning machines that leverage the underlying physical laws and/or governing equations to extract patterns from high-dimensional data generated from experiments.

Data-Efficient Deep Learning using Physics-Informed Neural Networks 

 

A grand challenge with great opportunities is to develop a coherent framework that enables blending conservation laws, physical principles, and/or phenomenological behaviors expressed by differential equations with the vast data sets available in many fields of engineering, science, and technology. At the intersection of probabilistic machine learning, deep learning, and scientific computations, this work is pursuing the overall vision to establish promising new directions for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data. To materialize this vision, this work is exploring two complementary directions: 

  1. designing data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time-dependent and non-linear differential equations, to extract patterns from high-dimensional data generated from experiments (a minimal sketch of this direction follows after this list), and
  2. designing novel numerical algorithms that can seamlessly blend equations and noisy multi-fidelity data, infer latent quantities of interest (e.g., the solution to a differential equation), and naturally quantify uncertainty in computations.
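
A minimal sketch of the first direction: a physics-informed neural network for the ODE u'(t) = -u(t) with u(0) = 1, whose solution is exp(-t). The loss blends the differential-equation residual at random collocation points with the initial condition, so no solution data is required; the architecture and hyperparameters are illustrative.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for _ in range(3000):
        t = torch.rand(64, 1, requires_grad=True)   # collocation points in [0, 1]
        u = net(t)
        du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
        physics = ((du + u) ** 2).mean()            # residual of u' = -u
        initial = ((net(torch.zeros(1, 1)) - 1.0) ** 2).mean()  # u(0) = 1
        loss = physics + initial
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("u(1) =", net(torch.tensor([[1.0]])).item(), "vs exp(-1) = 0.3679")
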
Speaker Topic

Enrico Zio

CRC MINES ParisTech, France

Politecnico di Milano, Italy

 

Enrico Zio is a full professor at the Centre for Research on Risk and Crises (CRC) of MINES ParisTech, PSL University, France, and a full professor and President of the Alumni Association at Politecnico di Milano, Italy.

 

His research focuses on the modeling of the failure-repair-maintenance behavior of components and complex systems, for the analysis of their reliability, maintainability, prognostics, safety, vulnerability, resilience and security characteristics, and on the development and use of Monte Carlo simulation methods, artificial intelligence techniques and optimization heuristics. 

 

In 2020, he was awarded the prestigious Humboldt Research Award from the Alexander von Humboldt Foundation in Germany.

Prognostics and Health Management for Condition-based and Predictive Maintenance: A Look In and a Look Out

 

A number of methods of Prognostics and Health Management (PHM) have been developed (and more are being developed) for use in diverse engineering applications. Yet, there are still a number of critical problems which impede full deployment of PHM and its benefits in practice. In this lecture, we look in at some of these PHM challenges and look out to advancements for PHM deployment.

 

Speaker Topic

Øyvind Smogeli, CTO Zeabuz

 

Øyvind Smogeli is the CTO and co-founder of Zeabuz and an Adjunct Professor at NTNU. Øyvind received his PhD from NTNU in 2006, and has spent his career working on modeling, simulation, testing and verification, complex cyber-physical systems, and assurance of digital technologies. He has previously held positions as CTO, COO and CEO of Marine Cybernetics and as Research Program Director for Digital Assurance in DNV.

Zeabuz: Providing trust in a zero emission autonomous passenger ferry

 

Zeabuz is developing a new urban mobility system based on zero-emission, autonomous passenger ferries. This endeavour comes with a huge trust challenge: how to prove trustworthiness to passengers, authorities, municipalities, and mobility system operators? This trust challenge has many facets and many stakeholders. There is a need to balance safety and usefulness, technical safety and perceived safety, and the various stakeholder needs. To solve this, an assurance case is being established that can capture a wide range of claims and evidence in a structured way. This talk introduces the Zeabuz mobility concept and the autonomy architecture, and then focuses on the many layers of trust and how to achieve them. The various components of the autonomy system and the simulation technology used to build trust in the autonomy are explained. An approach to building trust in the simulators through field experiments and regular operation will be presented. Finally, it will be shown how this all fits into the larger assurance case.

Speaker Topic

Martin Vechev, ETH Zürich

 

Martin Vechev is an Associate Professor at the Department of Computer Science, ETH Zurich. His work spans the intersection of machine learning and symbolic methods, with applications to topics such as safety of artificial intelligence, quantum programming, and security. He has co-founded three start-ups in the space of AI and security, the latest of which, LatticeFlow, aims to build and deploy trustworthy AI models.

Certified Deep Learning

 

In this talk I will discuss some of the latest progress we have made in the space of certifying AI systems, ranging from certification of deep neural networks to entire deep learning pipelines. In the process I will also discuss new neural architectures that are more amenable to certification as well as mathematical impossibility and complexity results that help guide new kinds of certified training methods.
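
One concrete certification primitive is interval bound propagation: propagate an input box through the network to obtain provable output bounds. The sketch below shows the principle on a tiny random ReLU network; the certifiers discussed in the talk use much tighter relaxations, and the network here is an invented example.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0, 1, (8, 2)), np.zeros(8)   # tiny 2-layer ReLU network
    W2, b2 = rng.normal(0, 1, (2, 8)), np.zeros(2)

    def interval_linear(lo, hi, W, b):
        # sound bounds of W @ x + b over the box [lo, hi], splitting W by sign
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

    x, eps = np.array([0.5, -0.2]), 0.1
    lo, hi = x - eps, x + eps
    lo, hi = interval_linear(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)    # ReLU is monotone
    lo, hi = interval_linear(lo, hi, W2, b2)

    # the prediction "class 0" is certified for ALL inputs in the box when its
    # worst-case logit still beats class 1's best case
    print("certified:", lo[0] > hi[1], "bounds:", lo, hi)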

 

Speaker Topic

Asun Lera St.Clair, DNV & André Ødegårdstuen, DNV

 

Dr. Asun Lera St.Clair, philosopher and sociologist, is Director of the Digital Assurance Program in DNV Group Research and Development and Senior Advisor for the Earth Sciences unit of the Barcelona Supercomputing Center (BSC). She has over 30 years of experience with designing and directing interdisciplinary, user-driven, and solutions-oriented research for global challenges at the interface of sustainable development and climate change, and more recently on the provision of trust in digital technologies and on leveraging these for sustainable development.

André Ødegårdstuen works as a Senior Researcher at DNV where he focuses on the assurance of machine learning. André is active in the area of computer vision for drone surveys of industrial assets and monitoring of animal welfare. He has a background in physics and experience from the Point-of-Care diagnostic industry.

Trustworthy Industrial AI Systems

 

Trust in AI is a major concern of many societal stakeholders. These concerns relate to the delegation of decisions to technologies we do not fully understand, to the misuse of those technologies for illegal, unethical, or rights-violating purposes, or to the actual technical limitations of these cognitive technologies as we rush to deploy them into society. There is a fast-emerging debate around these questions, often referred to as responsible AI, AI ethics, or explainable AI. However, there is less discussion of what should be considered a trustworthy AI system in industrial contexts. AI introduces complexity and creates digital risks.

While complexity in traditional mechanical systems is naturally limited by physical constraints and the laws of nature, complexity in integrated, software-driven systems – which do not necessarily follow well-established engineering principles – seems to easily exceed human comprehension.

In this presentation we will unpack the idea that the trustworthiness of an AI system is not very different from that of a leader or an expert to whom, or an organization to which, we delegate our authority to make decisions or provide recommendations to reach a particular goal. Similarly, we argue that AI systems should be subjected to the same quality assurance methods and principles we use for any other technology.

 

Speaker Topic

Peter Battaglia

 

Peter Battaglia is a research scientist at DeepMind working on approaches for reasoning about and interacting with complex systems.

Structured models of physics, objects, and scenes

 

This talk will describe various ways of using structured machine learning models for predicting complex physical dynamics, generating realistic objects, and constructing physical scenes. The key insight is that many systems can be represented as graphs with nodes connected by edges, which can be processed by graph neural networks and transformer-based models. By considering the underlying structure of the problem and imposing inductive biases within our models that reflect it, we can often achieve more accurate, efficient, and generalizable performance than if we avoid using principled assumptions.
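
The graph-based view can be made concrete with one round of learned message passing over a toy particle system: messages are computed per edge from the endpoint states, summed per receiving node, and used to update node states. The dimensions, MLPs, and edge list are illustrative assumptions.

    import torch
    import torch.nn as nn

    n_nodes, node_dim, edge_dim = 5, 4, 4
    nodes = torch.randn(n_nodes, node_dim)   # e.g. particle positions/velocities
    senders = torch.tensor([0, 1, 2, 3])     # edge list: who interacts with whom
    receivers = torch.tensor([1, 2, 3, 4])

    edge_mlp = nn.Sequential(nn.Linear(2 * node_dim, edge_dim), nn.ReLU())
    node_mlp = nn.Linear(node_dim + edge_dim, node_dim)

    # 1) compute a message on every edge from its endpoint states
    messages = edge_mlp(torch.cat([nodes[senders], nodes[receivers]], dim=-1))

    # 2) sum incoming messages per node, then update node states (residual)
    agg = torch.zeros(n_nodes, edge_dim).index_add_(0, receivers, messages)
    nodes = nodes + node_mlp(torch.cat([nodes, agg], dim=-1))
    print(nodes.shape)  # torch.Size([5, 4])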

22 January 2021

Speaker Topic

Joseph Sifakis

 

Hear 2007 Turing Award winner Joseph Sifakis explain the challenges raised by the vision of trustworthy autonomous systems for the autonomous-vehicle case, and outline his hybrid design approach, which combines model-based and data-based techniques and seeks trade-offs between performance and trustworthiness.

Why is it so hard to make self-driving cars? (Trustworthy autonomous systems)

 

Why is the problem of self-driving autonomous control so hard? Despite the enthusiastic involvement of big technology companies and investments of billions of dollars, optimistic predictions about the realization of autonomous vehicles have yet to materialize.