Detailed programme for Workshop on PDEs, Spatio-Temporal Statistics, and Data-Driven Methods in Neuroscience and Fluid Mechanics

Tuesday (March 12, 2024)

Title: Revolutionizing science and engineering in the age of machine learning

Abstract: A major challenge in the study of dynamical systems is that of model discovery: turning data into reduced-order models that are not just predictive, but provide insight into the nature of the underlying dynamical system that generated the data. We introduce a number of data-driven strategies for discovering nonlinear multiscale dynamical systems and their coordinates from data. We consider two canonical cases: (i) systems for which we have full measurements of the governing variables, and (ii) systems for which we have incomplete measurements. For systems with full state measurements, we show that the recent sparse identification of nonlinear dynamics (SINDy) method can discover governing equations with relatively little data, and we introduce a sampling method that allows SINDy to scale efficiently to problems with multiple time scales, noise and parametric dependencies. For systems with incomplete observations, we show that the Hankel alternative view of Koopman (HAVOK) method, based on time-delay embedding coordinates and the dynamic mode decomposition, can be used to obtain linear models and Koopman-invariant measurement systems that nearly perfectly capture the dynamics of nonlinear quasiperiodic systems. Neural networks are used in targeted ways to aid in the model reduction process. Together, these approaches provide a suite of mathematical strategies for reducing the data required to discover and model nonlinear multiscale systems.
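As a rough illustration of the SINDy idea described above, the following self-contained sketch recovers dx/dt = -2x from synthetic data by sequentially thresholded least squares over a library of candidate terms. This is a toy example under simplifying assumptions (noiseless data, exact derivatives, a hand-picked threshold); the actual method and its scalable sampling variants are considerably more elaborate.

```python
import numpy as np

# Synthetic data from dx/dt = -2x, x(0) = 1 (derivative given exactly here;
# in practice it is estimated numerically from the trajectory).
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-2.0 * t)
dx = -2.0 * np.exp(-2.0 * t)

# Library of candidate terms: [1, x, x^2, x^3]
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Sequentially thresholded least squares: fit, zero out small coefficients,
# refit on the surviving terms, repeat.
xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]

print(np.round(xi, 3))  # ≈ [0., -2., 0., 0.]: only the x term survives
```

The sparsity-promoting thresholding is what turns an ordinary regression into model discovery: the surviving library terms are the discovered governing equation.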

Title: From Machine Learning to Physics and back: Some Perspectives on Physical-Statistical Models, PDEs and Neural Networks

Abstract: It is widely acknowledged that the relentless surge in the Volume, Velocity and Variety of data, as well as the simultaneous increase in computational resources, has stimulated the development of data-driven methods with unprecedented flexibility and predictive power. However, not every study entails a large data set: many applications have a high associated sampling cost and are instead constrained by physics-informed partial differential equations. While the principle of embedding physical information in a statistical model is not new, the machine learning literature has recently renewed interest in this topic by formulating neural networks informed by partial differential equations, hence providing a unified framework for a whole spectrum of problems ranging from data-rich/context-poor to data-poor/context-rich. In this talk, I will present this framework and discuss some of the most recent efforts to reformulate it as a stochastic model-based approach (as opposed to an algorithmic formulation) with reservoir computing methods.

Title: Stochastic adventures in space and time

Abstract: The standard geostatistics toolbox includes methods for modelling spatial dependence between georeferenced observations, as well as methods for modelling the occurrence of random points. The core model building blocks are often some form of Gaussian random field.

The easiest approach to constructing space-time models is to take the product of a spatial covariance kernel and a temporal covariance kernel; these are called covariance-separable models. An alternative that may better capture the spatio-temporal dynamics is to take inspiration from physics-motivated partial differential equations, such as the heat equation, which leads to non-separable models. Non-separable models are in general more computationally expensive, but one can sometimes exploit the model structure to retain much of the simplicity of separable models, for example allowing these models to be used as components of larger hierarchical generalised additive models. For point process observations, such as observations of a moving animal, the temporal dynamics pose an additional challenge.

I will discuss some of these aspects, including the construction of non-separable space-time models available in the INLAspacetime R package.
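The separable construction mentioned above can be sketched in a few lines: on a space-time grid, a product covariance becomes a Kronecker product, and expensive quantities such as log-determinants factor into cheap spatial and temporal pieces. This is a minimal illustration with exponential kernels and invented ranges, not code from the INLAspacetime package.

```python
import numpy as np

def exp_cov(pts, range_):
    """Exponential covariance exp(-|d|/range) on a set of 1-D locations."""
    d = np.abs(pts[:, None] - pts[None, :])
    return np.exp(-d / range_)

s = np.linspace(0.0, 1.0, 5)   # spatial locations (1-D for simplicity)
t = np.linspace(0.0, 1.0, 4)   # time points
Ks = exp_cov(s, range_=0.5)    # spatial covariance factor
Kt = exp_cov(t, range_=0.3)    # temporal covariance factor
K = np.kron(Ks, Kt)            # 20x20 separable space-time covariance

# Separability pays off: det(Ks ⊗ Kt) = det(Ks)^nt * det(Kt)^ns, so the
# 20x20 log-determinant reduces to two tiny ones.
nt, ns = len(t), len(s)
logdet = np.linalg.slogdet(K)[1]
logdet_fast = nt * np.linalg.slogdet(Ks)[1] + ns * np.linalg.slogdet(Kt)[1]
print(np.isclose(logdet, logdet_fast))  # True
```

Non-separable models forgo this Kronecker structure, which is one reason they are generally more expensive to work with.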

Title: Physics-Informed Statistical Machine Learning

Abstract: Physics-informed machine learning (PIML) is an emerging field of research that integrates physical laws into the machine learning framework. Most existing methods treat the physical model as a perfect representation of reality. However, the idealized physical model might rely on simplified assumptions, while reality is more complex. Additionally, the physics might not be fully known. Even if the physics is fully known, the computational cost of a high-fidelity model might be prohibitive for a specific application, and therefore a simplified, low-fidelity, and less accurate model might be a practical alternative. In such cases, it is important to account for model discrepancies, i.e. the differences between the predictions of the physical model and the actual observed data, in the model formulation. Addressing these discrepancies and quantifying the uncertainties for the quantities of interest are crucial steps in improving the accuracy and reliability of PIML applications.

In this talk, I will present fully probabilistic PIML methods that incorporate model discrepancies directly into the model formulation and quantify the uncertainties for the quantities of interest, such as physical parameters in inverse problems and uncertainties in model predictions. These methods combine existing ideas for incorporating model discrepancy with recent advances in PIML and Bayesian computation. Additionally, I will discuss the application of physical models in digital twin technologies, specifically presenting a fully probabilistic formulation that allows information sharing between individual models with the aim of reducing uncertainty in physical parameters when accounting for model discrepancy.
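The model-discrepancy idea referred to above is often written, following Kennedy and O'Hagan, roughly as follows (a generic sketch, not necessarily the exact formulation used in the talk):

```latex
y_i = \eta(x_i, \theta) + \delta(x_i) + \varepsilon_i,
\qquad \delta \sim \mathcal{GP}(0, k_\delta),
\quad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),
```

where \eta(x,\theta) is the physical model with parameters \theta, \delta is a Gaussian-process discrepancy term capturing the systematic mismatch between model and reality, and \varepsilon_i is observation noise.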

Title: Coupled interaction between surface waves and a vertically sheared flow

Abstract: The coupled system of ocean surface waves and flow is key to understanding the dynamics of the upper ocean. A Nonlinear Schrödinger (NLS) equation-based theoretical model is derived in Li & Chabchoub (2024, Geophys. Res. Lett.) for deep-water waves in the presence of a background flow. The flow propagates in the horizontal plane, with its profile magnitude and direction being depth-dependent. In this talk, a few novel features attributable to the nonlinear coupling between surface waves and a (wave-correlated and wave-uncorrelated) vertically sheared flow will be presented, based on Li & Chabchoub (2024). Stokes drift velocity, second-order wave-induced Eulerian flows, and amplified or suppressed wave steepness are particularly highlighted in the presence of a vertically sheared flow. Mathematical conditions are provided for the flow to either trigger or suppress the modulation instability of Stokes waves, and the associated relevance to the Craik-Leibovich type 2 instability will be discussed.
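For orientation, the classical deep-water NLS equation for the wave envelope A(x,t), without any background flow and in one common sign convention, reads:

```latex
\mathrm{i}\left(\frac{\partial A}{\partial t} + c_g \frac{\partial A}{\partial x}\right)
- \frac{\omega_0}{8 k_0^2}\,\frac{\partial^2 A}{\partial x^2}
- \frac{\omega_0 k_0^2}{2}\,|A|^2 A = 0,
```

with carrier frequency \omega_0, carrier wavenumber k_0 and group velocity c_g = \omega_0/(2k_0). The model of Li & Chabchoub (2024) extends an equation of this type to account for the vertically sheared background flow.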

Wednesday (March 13, 2024)

Title: On the SCI hierarchy and generalised hardness of approximation in computation — Spectra, PDEs, computer assisted proofs and AI

Abstract: The problem of finding algorithms for computing spectra of operators has both fascinated and frustrated mathematicians since the seminal work by H. Goldstine, F. Murray, and J. von Neumann in the 1950s. W. Arveson pointed out in the 1990s that despite the plethora of papers on the subject of computing spectra, the general computational spectral problem remained unsolved. We will provide the solution to the computational spectral problem and show how it can only be solved through the Solvability Complexity Index (SCI) hierarchy. The theory of the SCI hierarchy also encompasses the issue of generalised hardness of approximation (GHA): the phenomenon that one can easily compute an epsilon-approximation to a solution of a computational problem for epsilon larger than some positive epsilon_0 (known as the approximation threshold), but, for epsilon < epsilon_0, it suddenly becomes hard or impossible. The GHA phenomenon happens in spectral computations and PDEs, and is particularly relevant in the recently announced computer-assisted proof of one of the long-standing problems in non-linear PDEs: blow-up of the 3D Euler equations with smooth initial data. Finally, GHA appears even for linear PDEs, as well as in problems involving computing neural networks for the purposes of both scientific computing and AI.

Title: Dynamics of spontaneous, large-scale activity propagation across cortical and hippocampal networks

Abstract: Despite substantial differences in their formulation and details, most theories of memory formation and consolidation recognize the hippocampus and the cerebral cortex as the two main actors involved in the processing and reorganization of memories. These two centers are thought to often play complementary roles, and the successful assimilation of novel information is thought to rely on their interplay over extended periods of time. Neural reactivations during sleep periods are considered central in organizing these interactions between the hippocampus and a diverse set of cortical centers, but very little is known about the role played by the characteristics of cortical dynamics during these events.

I will present a set of analyses of wide-field imaging data addressing the large-scale organization of spontaneous cortical activity. I will focus on the statistical properties of activity transients, their organization in separate anatomical networks and their selective involvement in establishing bi-directional communication with the hippocampus. I will then discuss the relationship between theories of cortical criticality and the observed spatio-temporal organization of activity transients at different levels of resolution. In particular, I will show how different signatures of criticality are modulated across cortical networks and sleep stages.

Title: Computing solutions to stochastic partial differential equations on surfaces

Abstract: Looking around us, many surfaces, including that of the Earth, are not plain Euclidean domains but special cases of Riemannian manifolds. One way of describing uncertain physical phenomena on these surfaces is via stochastic partial differential equations. In this talk, I will explain how to compute approximations of solutions to such equations and give convergence results that characterize the quality of the approximations. Furthermore, I will show how these solutions on surfaces are a first step towards the computation of time-evolving stochastic manifolds.
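A prototypical example of the kind of equation involved (an illustration only, not necessarily the talk's exact setting) is a stochastic heat equation on a compact surface \mathcal{M}:

```latex
\mathrm{d}X(t) = \Delta_{\mathcal{M}} X(t)\,\mathrm{d}t + \mathrm{d}W(t),
\qquad X(0) = X_0,
```

where \Delta_{\mathcal{M}} is the Laplace-Beltrami operator on \mathcal{M} and W is a Wiener process taking values in a function space over \mathcal{M}; the geometry enters through \Delta_{\mathcal{M}}, which is what distinguishes the surface setting from the Euclidean one.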

Title: Steady water waves with vorticity

Abstract: Steady (or travelling) water waves are waves moving on the water surface with constant speed and without change of shape. Although an idealisation, such waves do provide a good description in some physical settings. The mathematical problem has been studied for over two centuries, both based on simplified models, such as the KdV equation, and directly using the governing equations of fluid mechanics, namely the Euler or Navier-Stokes equations. I will consider the latter approach, where the waves are modelled using the Euler equations with a free surface and viscosity is neglected. Historically, most of the mathematical theory is based on the assumption of irrotational flow. In the last two decades there has been a renewed interest in water waves with non-zero vorticity, which is for example important for modelling wave-current interactions. There is by now a rich theory in the two-dimensional setting (that is, when the fluid flow is essentially two-dimensional and uniform in the transverse direction) based on bifurcation theory for an elliptic free boundary problem, but so far very little is known in three dimensions.

In my talk I will give an overview of the state of the art of the steady water wave problem with vorticity, both in two and three dimensions, and try to explain why the latter is particularly challenging. Also, in connection with the themes of IMod, I will mention interesting questions about how to fit the models to observations. In particular, in the two-dimensional setting the usual mathematical approach is to fix the so-called vorticity function, but it is not clear from the mathematics alone how to choose it.

Title: Wave Breaking: Insights from Mathematical Modeling, Laboratory Experiments and Field Data

Abstract: Wave breaking is one of the most ubiquitous and well-known phenomena in fluid mechanics, but our current understanding of when and how it happens is still limited. One of the open issues surrounding wave breaking is its very definition. There is no single clear criterion that signals the commencement of wave breaking, and in field situations in particular, discerning whether or not a wave breaks is complicated by various challenges in data acquisition and interpretation. In this lecture, we will review some work attempting to predict and quantify wave breaking in various settings.

Dinner will take place at Osteria Moderna.

Thursday (March 14, 2024)

Title: Convex neural codes

Abstract: Neural codes allow the brain to represent, process, and store information about the world. Combinatorial codes, composed of binary patterns of neural activity, encode information via the collective behavior of populations of neurons. A code is called 'convex' if its codewords correspond to regions defined by an arrangement of convex open sets in Euclidean space. What makes a neural code convex? That is, how can we tell from the intrinsic structure of a code whether there exists a corresponding arrangement of convex open sets? In this talk, we will exhibit topological, algebraic, and geometric approaches to answering this question.
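The correspondence between convex sets and codewords can be illustrated in one dimension, where convex open sets are simply open intervals. This toy sketch runs the easy forward direction (from an arrangement to its code); the talk concerns the much harder converse question of deciding, from a code alone, whether such an arrangement exists. All names and the specific intervals are illustrative.

```python
import numpy as np

# Three convex "receptive fields" as open intervals on the real line.
intervals = [(0.0, 2.0), (1.0, 3.0), (2.5, 4.0)]

# Sample points densely and record, for each point, which sets contain it.
# The set of distinct 0/1 membership patterns is the combinatorial code.
points = np.linspace(-0.5, 4.5, 1001)
code = {tuple(int(a < p < b) for (a, b) in intervals) for p in points}

print(sorted(code))
# six codewords: 000, 001, 010, 011, 100, 110 (as 0/1 tuples)
```

Note which patterns are absent: for instance 101 never occurs, because the first and third intervals are disjoint. The converse problem asks when a prescribed set of such patterns can be realized by some arrangement of convex open sets.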

Title: Investigating noise induced behavior of neurons in a deterministic framework

Abstract: A Fokker-Planck-like partial differential equation was recently proposed to represent stochastic neural fields with the aim of better understanding the impact of noise on grid cells. This representation allows a study of noise-induced behaviors in a deterministic framework. I will talk about the derivation of this model, and discuss some challenges related to it.
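For reference, the generic one-dimensional Fokker-Planck equation for the probability density \rho of a diffusion \mathrm{d}X_t = \mu(X_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t reads (the neural-field equation of the talk is a more elaborate relative of this):

```latex
\frac{\partial \rho}{\partial t}(x,t)
= -\frac{\partial}{\partial x}\bigl(\mu(x)\,\rho(x,t)\bigr)
+ \frac{\sigma^2}{2}\,\frac{\partial^2 \rho}{\partial x^2}(x,t).
```

The deterministic viewpoint comes from this change of object: instead of stochastic trajectories, one evolves their probability density with a deterministic PDE.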

Title: Gaussian random fields on metric graphs

Abstract: There is a growing interest in the statistical modeling of data on compact metric graphs, such as street or river networks, based on Gaussian random fields. In this work, we introduce the Whittle-Matérn fields, a class of models specified as solutions to a fractional-order stochastic differential equation on the metric graph. In contrast to earlier covariance-based approaches for specifying Gaussian fields on metric graphs, the Whittle-Matérn fields are well-defined for any compact metric graph and can provide Gaussian processes with differentiable sample paths, provided that the fractional exponent is large enough.
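In the notation of the related literature, the defining equation takes roughly the following form (a sketch; see the associated papers for the precise formulation):

```latex
(\kappa^2 - \Delta_{\Gamma})^{\alpha/2}\,(\tau u) = \mathcal{W}
\quad \text{on } \Gamma,
```

where \Gamma is the compact metric graph, \Delta_{\Gamma} is a Laplacian on \Gamma (with suitable, e.g. Kirchhoff, vertex conditions), \mathcal{W} is Gaussian white noise, \kappa > 0 controls the practical correlation range, \tau > 0 the variance, and the fractional exponent \alpha the sample-path smoothness mentioned above.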

We present some of the main statistical properties of the models and show how they can be used to perform exact likelihood-based inference and prediction. We then discuss non-stationary and spatio-temporal extensions and finally illustrate the usage of the models through an application to traffic data, where we use the recently introduced MetricGraph R package to fit and compare different models.

Title: Mean Field Games and applications

Abstract: Mean Field Game theory, introduced by J.-M. Lasry and P.-L. Lions in 2006, offers a general model to analyze strategic interactions in large populations of rational agents. Following the paradigm of statistical physics, this gives a mean-field version of concepts such as Nash equilibrium within systems of interacting individuals. In terms of PDEs, this results in the study of coupled systems of (diffusive or deterministic) Hamilton-Jacobi and Fokker-Planck equations. In the talk, I will describe some mathematical aspects related to the stability and long-time behavior of the equilibria; if time permits, I will also mention the application of some of these ideas to the study of balanced growth paths in macro-economic models of endogenous growth driven by knowledge diffusion.
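The coupled system mentioned above has the following prototypical form (standard notation; the talk's setting may differ in details): a backward Hamilton-Jacobi equation for the value function u coupled to a forward Fokker-Planck equation for the distribution m of agents,

```latex
\begin{cases}
-\partial_t u - \nu \Delta u + H(x, Du) = F(x, m), \\
\;\;\,\partial_t m - \nu \Delta m - \operatorname{div}\!\bigl(m\, D_p H(x, Du)\bigr) = 0, \\
\;\;\, m(0) = m_0, \qquad u(T) = G(x, m(T)),
\end{cases}
```

where H is the Hamiltonian, \nu \ge 0 the diffusion (the deterministic case being \nu = 0), and F, G describe how each agent's cost depends on the population distribution. The forward-backward structure in time is the source of many of the system's analytical difficulties.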

Titles and abstracts (alphabetic)

David Bolin: "Gaussian random fields on metric graphs"

There is a growing interest in the statistical modeling of data on compact metric graphs, such as street or river networks, based on Gaussian random fields. In this work, we introduce the Whittle-Matérn fields, a class of models specified as solutions to a fractional-order stochastic differential equation on the metric graph. In contrast to earlier covariance-based approaches for specifying Gaussian fields on metric graphs, the Whittle-Matérn fields are well-defined for any compact metric graph and can provide Gaussian processes with differentiable sample paths, provided that the fractional exponent is large enough.

We present some of the main statistical properties of the models and show how they can be used to perform exact likelihood-based inference and prediction. We then discuss non-stationary and spatio-temporal extensions and finally illustrate the usage of the models through an application to traffic data, where we use the recently introduced MetricGraph R package to fit and compare different models.

Stefano Castruccio: "From Machine Learning to Physics and back: Some Perspectives on Physical-Statistical Models, PDEs and Neural Networks"

It is widely acknowledged that the relentless surge in the Volume, Velocity and Variety of data, as well as the simultaneous increase in computational resources, has stimulated the development of data-driven methods with unprecedented flexibility and predictive power. However, not every study entails a large data set: many applications have a high associated sampling cost and are instead constrained by physics-informed partial differential equations. While the principle of embedding physical information in a statistical model is not new, the machine learning literature has recently renewed interest in this topic by formulating neural networks informed by partial differential equations, hence providing a unified framework for a whole spectrum of problems ranging from data-rich/context-poor to data-poor/context-rich. In this talk, I will present this framework and discuss some of the most recent efforts to reformulate it as a stochastic model-based approach (as opposed to an algorithmic formulation) with reservoir computing methods.

Anders Hansen: "On the SCI hierarchy and generalised hardness of approximation in computation — Spectra, PDEs, computer assisted proofs and AI"

The problem of finding algorithms for computing spectra of operators has both fascinated and frustrated mathematicians since the seminal work by H. Goldstine, F. Murray, and J. von Neumann in the 1950s. W. Arveson pointed out in the 1990s that despite the plethora of papers on the subject of computing spectra, the general computational spectral problem remained unsolved. We will provide the solution to the computational spectral problem and show how it can only be solved through the Solvability Complexity Index (SCI) hierarchy. The theory of the SCI hierarchy also encompasses the issue of generalised hardness of approximation (GHA): the phenomenon that one can easily compute an epsilon-approximation to a solution of a computational problem for epsilon larger than some positive epsilon_0 (known as the approximation threshold), but, for epsilon < epsilon_0, it suddenly becomes hard or impossible. The GHA phenomenon happens in spectral computations and PDEs, and is particularly relevant in the recently announced computer-assisted proof of one of the long-standing problems in non-linear PDEs: blow-up of the 3D Euler equations with smooth initial data. Finally, GHA appears even for linear PDEs, as well as in problems involving computing neural networks for the purposes of both scientific computing and AI.

Henrik Kalisch: "Wave Breaking: Insights from Mathematical Modeling, Laboratory Experiments and Field Data"

Wave breaking is one of the most ubiquitous and well-known phenomena in fluid mechanics, but our current understanding of when and how it happens is still limited. One of the open issues surrounding wave breaking is its very definition. There is no single clear criterion that signals the commencement of wave breaking, and in field situations in particular, discerning whether or not a wave breaks is complicated by various challenges in data acquisition and interpretation. In this lecture, we will review some work attempting to predict and quantify wave breaking in various settings.

Nathan Kutz: "Revolutionizing science and engineering in the age of machine learning"

A major challenge in the study of dynamical systems is that of model discovery: turning data into reduced-order models that are not just predictive, but provide insight into the nature of the underlying dynamical system that generated the data. We introduce a number of data-driven strategies for discovering nonlinear multiscale dynamical systems and their coordinates from data. We consider two canonical cases: (i) systems for which we have full measurements of the governing variables, and (ii) systems for which we have incomplete measurements. For systems with full state measurements, we show that the recent sparse identification of nonlinear dynamics (SINDy) method can discover governing equations with relatively little data, and we introduce a sampling method that allows SINDy to scale efficiently to problems with multiple time scales, noise and parametric dependencies. For systems with incomplete observations, we show that the Hankel alternative view of Koopman (HAVOK) method, based on time-delay embedding coordinates and the dynamic mode decomposition, can be used to obtain linear models and Koopman-invariant measurement systems that nearly perfectly capture the dynamics of nonlinear quasiperiodic systems. Neural networks are used in targeted ways to aid in the model reduction process. Together, these approaches provide a suite of mathematical strategies for reducing the data required to discover and model nonlinear multiscale systems.

Annika Lang: "Computing solutions to stochastic partial differential equations on surfaces"

Looking around us, many surfaces, including that of the Earth, are not plain Euclidean domains but special cases of Riemannian manifolds. One way of describing uncertain physical phenomena on these surfaces is via stochastic partial differential equations. In this talk, I will explain how to compute approximations of solutions to such equations and give convergence results that characterize the quality of the approximations. Furthermore, I will show how these solutions on surfaces are a first step towards the computation of time-evolving stochastic manifolds.

Yan Li: "Coupled interaction between surface waves and a vertically sheared flow"

The coupled system of ocean surface waves and flow is key to understanding the dynamics of the upper ocean. A Nonlinear Schrödinger (NLS) equation-based theoretical model is derived in Li & Chabchoub (2024, Geophys. Res. Lett.) for deep-water waves in the presence of a background flow. The flow propagates in the horizontal plane, with its profile magnitude and direction being depth-dependent. In this talk, a few novel features attributable to the nonlinear coupling between surface waves and a (wave-correlated and wave-uncorrelated) vertically sheared flow will be presented, based on Li & Chabchoub (2024). Stokes drift velocity, second-order wave-induced Eulerian flows, and amplified or suppressed wave steepness are particularly highlighted in the presence of a vertically sheared flow. Mathematical conditions are provided for the flow to either trigger or suppress the modulation instability of Stokes waves, and the associated relevance to the Craik-Leibovich type 2 instability will be discussed.

Finn Lindgren: "Stochastic adventures in space and time"

The standard geostatistics toolbox includes methods for modelling spatial dependence between georeferenced observations, as well as methods for modelling the occurrence of random points. The core model building blocks are often some form of Gaussian random field.

The easiest approach to constructing space-time models is to take the product of a spatial covariance kernel and a temporal covariance kernel; these are called covariance-separable models. An alternative that may better capture the spatio-temporal dynamics is to take inspiration from physics-motivated partial differential equations, such as the heat equation, which leads to non-separable models. Non-separable models are in general more computationally expensive, but one can sometimes exploit the model structure to retain much of the simplicity of separable models, for example allowing these models to be used as components of larger hierarchical generalised additive models. For point process observations, such as observations of a moving animal, the temporal dynamics pose an additional challenge.

I will discuss some of these aspects, including the construction of non-separable space-time models available in the INLAspacetime R package.

Alessio Porretta: "Mean Field Games and applications"

Mean Field Game theory, introduced by J.-M. Lasry and P.-L. Lions in 2006, offers a general model to analyze strategic interactions in large populations of rational agents. Following the paradigm of statistical physics, this gives a mean-field version of concepts such as Nash equilibrium within systems of interacting individuals. In terms of PDEs, this results in the study of coupled systems of (diffusive or deterministic) Hamilton-Jacobi and Fokker-Planck equations. In the talk, I will describe some mathematical aspects related to the stability and long-time behavior of the equilibria; if time permits, I will also mention the application of some of these ideas to the study of balanced growth paths in macro-economic models of endogenous growth driven by knowledge diffusion.

Michail Spitieris: "Physics-Informed Statistical Machine Learning"

Physics-informed machine learning (PIML) is an emerging field of research that integrates physical laws into the machine learning framework. Most existing methods treat the physical model as a perfect representation of reality. However, the idealized physical model might rely on simplified assumptions, while reality is more complex. Additionally, the physics might not be fully known. Even if the physics is fully known, the computational cost of a high-fidelity model might be prohibitive for a specific application, and therefore a simplified, low-fidelity, and less accurate model might be a practical alternative. In such cases, it is important to account for model discrepancies, i.e. the differences between the predictions of the physical model and the actual observed data, in the model formulation. Addressing these discrepancies and quantifying the uncertainties for the quantities of interest are crucial steps in improving the accuracy and reliability of PIML applications.

In this talk, I will present fully probabilistic PIML methods that incorporate model discrepancies directly into the model formulation and quantify the uncertainties for the quantities of interest, such as physical parameters in inverse problems and uncertainties in model predictions. These methods combine existing ideas for incorporating model discrepancy with recent advances in PIML and Bayesian computation. Additionally, I will discuss the application of physical models in digital twin technologies, specifically presenting a fully probabilistic formulation that allows information sharing between individual models with the aim of reducing uncertainty in physical parameters when accounting for model discrepancy.

Federico Stella: "Dynamics of spontaneous, large-scale activity propagation across cortical and hippocampal networks"

Despite substantial differences in their formulation and details, most theories of memory formation and consolidation recognize the hippocampus and the cerebral cortex as the two main actors involved in the processing and reorganization of memories. These two centers are thought to often play complementary roles, and the successful assimilation of novel information is thought to rely on their interplay over extended periods of time. Neural reactivations during sleep periods are considered central in organizing these interactions between the hippocampus and a diverse set of cortical centers, but very little is known about the role played by the characteristics of cortical dynamics during these events.

I will present a set of analyses of wide-field imaging data addressing the large-scale organization of spontaneous cortical activity. I will focus on the statistical properties of activity transients, their organization in separate anatomical networks and their selective involvement in establishing bi-directional communication with the hippocampus. I will then discuss the relationship between theories of cortical criticality and the observed spatio-temporal organization of activity transients at different levels of resolution. In particular, I will show how different signatures of criticality are modulated across cortical networks and sleep stages.

Susanne Solem: "Investigating noise induced behavior of neurons in a deterministic framework"

A Fokker-Planck-like partial differential equation was recently proposed to represent stochastic neural fields with the aim of better understanding the impact of noise on grid cells. This representation allows a study of noise-induced behaviors in a deterministic framework. I will talk about the derivation of this model, and discuss some challenges related to it.

Erik Wahlén: "Steady water waves with vorticity"

Steady (or travelling) water waves are waves moving on the water surface with constant speed and without change of shape. Although an idealisation, such waves do provide a good description in some physical settings. The mathematical problem has been studied for over two centuries, both based on simplified models, such as the KdV equation, and directly using the governing equations of fluid mechanics, namely the Euler or Navier-Stokes equations. I will consider the latter approach, where the waves are modelled using the Euler equations with a free surface and viscosity is neglected. Historically, most of the mathematical theory is based on the assumption of irrotational flow. In the last two decades there has been a renewed interest in water waves with non-zero vorticity, which is for example important for modelling wave-current interactions. There is by now a rich theory in the two-dimensional setting (that is, when the fluid flow is essentially two-dimensional and uniform in the transverse direction) based on bifurcation theory for an elliptic free boundary problem, but so far very little is known in three dimensions.

In my talk I will give an overview of the state of the art of the steady water wave problem with vorticity, both in two and three dimensions, and try to explain why the latter is particularly challenging. Also, in connection with the themes of IMod, I will mention interesting questions about how to fit the models to observations. In particular, in the two-dimensional setting the usual mathematical approach is to fix the so-called vorticity function, but it is not clear from the mathematics alone how to choose it.

Nora Youngs: "Convex neural codes"

Neural codes allow the brain to represent, process, and store information about the world. Combinatorial codes, composed of binary patterns of neural activity, encode information via the collective behavior of populations of neurons. A code is called 'convex' if its codewords correspond to regions defined by an arrangement of convex open sets in Euclidean space. What makes a neural code convex? That is, how can we tell from the intrinsic structure of a code whether there exists a corresponding arrangement of convex open sets? In this talk, we will exhibit topological, algebraic, and geometric approaches to answering this question.

Web page maintained by

Geir-Arne Fuglstad