Roudi - Kavli Institute for Systems Neuroscience
Kavli Institute research: SPINOr - Statistical Physics of Inference and Network Organization
Aim
To understand the properties of neural communication at the network level.
Background
As technologies grow more sophisticated, neuroscientists are gathering larger and larger datasets, with recordings from hundreds up to thousands of neurons at a time. On their own, large datasets are not very informative. However, match them with a theoretician, and it becomes possible to extract relevant patterns, mechanisms and universal principles from the data that enable scientists to explain behaviour across several scales in a meaningful way.
These researchers are interested in understanding, at a general level, the properties of information transmission and coding in neuronal and other biological systems.
Key Research Questions
- What can global patterns of neuronal activity tell us about how the brain works?
- What are the principles governing network communications?
Tools and Methods
Roudi’s team uses mathematical tools from theoretical physics to analyse big datasets, to develop models that draw out the neural mechanisms underlying such data, and to identify and describe universal principles in biological systems.
Research
- Shouting or listening: How living systems grapple with noisy conversations
Whether we consider neurons communicating through electrical impulses, bacteria communicating through quorum sensing, or humans communicating through language – all biological populations rely on clear communication.
In the past year, Roudi’s team published a study on the properties of a biological communication network. First, let’s think about a biological population in terms of a network of interacting agents. The network of agents could stand for a neural population, a bacterial culture, a human society, or another biological population.
All agents want to understand their environmental conditions, from temperature, pH, and nutrients, to more complex features. Agents can gather information about their environment by sensing the external world directly and by communicating with other sensing agents. They communicate by signalling to each other what they think the state of the world is.
Based on the information received from other agents and on their own sensory apparatus tuned to the world, the agents continuously make decisions about the current state of the environment and communicate these decisions to other agents. Alas, this system is also subject to noise.
Signals on all levels are prone to noise: the sensory information signal that the agent acquires from the environment may be incorrect; the agent may miscommunicate its information signal to other agents; or the communicated information signal may be misinterpreted by the agent on the receiving end.
Roudi’s team set out to characterise how noise at these various stages of signalling shapes the outcome, in order to identify under what circumstances the network as a whole arrives at the correct conclusion about its environment, under what circumstances it settles on the wrong conclusion, and under what circumstances conflicting beliefs spread randomly across agents without any general consensus forming in the network.
They found that the most important factor is where in the flow of communication the noise arises. Noise arising in an agent’s production of a signal (speaking) is more harmful than noise in an agent’s comprehension of signals (listening). Since biological organisms have limited resources to devote to noise reduction, Roudi’s team proposes that, evolutionarily, it is more advantageous to make oneself understandable than to understand.
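To build intuition for why these two noise sources act differently, here is a minimal toy simulation, not the published model: the fully connected topology, the majority-vote update rule and all parameters are illustrative assumptions. The key structural difference is that a production (speech) error corrupts the one message a sender broadcasts to everyone, so all listeners receive the same wrong signal, whereas a comprehension (listening) error corrupts each listener's copy independently.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_network(p_speak, p_listen, n_agents=21, n_rounds=200,
                sense_acc=0.8, n_trials=200):
    """Toy majority-vote network: fraction of agents holding the
    correct belief (+1) after n_rounds, averaged over trials."""
    final_acc = []
    for _ in range(n_trials):
        # Initial beliefs come from noisy private sensing of the true state (+1).
        beliefs = np.where(rng.random(n_agents) < sense_acc, 1, -1)
        for _ in range(n_rounds):
            # Production (speech) noise: ONE flip per sender, shared by all listeners.
            speak_flip = rng.random(n_agents) < p_speak
            messages = np.where(speak_flip, -beliefs, beliefs)
            # Comprehension (listening) noise: independent flip per sender-listener pair.
            listen_flip = rng.random((n_agents, n_agents)) < p_listen
            heard = np.where(listen_flip, -messages[None, :], messages[None, :])
            np.fill_diagonal(heard, 0)  # agents do not hear themselves
            votes = heard.sum(axis=1)
            # Majority vote over heard messages; ties keep the old belief.
            beliefs = np.where(votes == 0, beliefs, np.sign(votes))
        final_acc.append((beliefs == 1).mean())
    return np.mean(final_acc)

# Same marginal noise level, placed at different stages of communication.
print("speech noise only:   ", run_network(p_speak=0.3, p_listen=0.0))
print("listening noise only:", run_network(p_speak=0.0, p_listen=0.3))
```

In this toy, the correlated speech errors occasionally flip the whole network's consensus at once, so accuracy drifts towards chance over many rounds, while the independent listening errors average out across a listener's many inputs and accuracy stays close to the agents' sensing accuracy.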
So, where in nature would we expect to observe this asymmetry between being understandable and understanding others? Their model points to populations living in small groups with high connectivity, like primate populations and early human societies.
This phenomenon has already been reported, for instance in signalling games and in human language learning, where children tend to produce correct language signals before they can correctly comprehend speech.
Proposing a universal explanation for a frequently observed pattern in nature
In another recent work, Roudi’s team provides a novel explanation for broad distribution patterns. Broad distribution patterns show up time and time again across different natural systems – from the co-activity of neurons, to the distribution of tree species, to the sizes of cities.
This is a highly debated topic in the field, and the ubiquity of these patterns begs for a universal explanation. Ryan Cubero et al. propose that they are the result of a mathematically well-defined notion of optimal information transmission called Minimum Description Length (MDL). The principle states that the best description of data is the one that compresses the data the most. In other words, broad distribution patterns occur so often in nature because they are the best way to transfer information.
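To make the MDL principle itself concrete, here is a small, generic textbook-style illustration (not the analysis from the paper): in a two-part code, each candidate model is charged for the bits needed to describe the model plus the bits needed to encode the data under that model, and the model with the shortest total description is preferred.

```python
import numpy as np

def two_part_mdl(data: np.ndarray) -> str:
    """Crude two-part MDL comparison for a binary sequence:
    a fair coin (no parameters) vs a biased coin (one fitted parameter).
    Returns the model with the shorter total description length in bits."""
    n = data.size
    # Model 0: fair coin. No parameter to encode; every symbol costs 1 bit.
    len_fair = n * 1.0
    # Model 1: biased coin with maximum-likelihood estimate p_hat.
    p_hat = np.clip(data.mean(), 1e-9, 1 - 1e-9)
    nll = -(data.sum() * np.log2(p_hat) + (n - data.sum()) * np.log2(1 - p_hat))
    # Standard (k/2) * log2(n) cost for encoding k = 1 parameter.
    len_biased = 0.5 * np.log2(n) + nll
    return "fair coin" if len_fair <= len_biased else "biased coin"

rng = np.random.default_rng(1)
print(two_part_mdl(rng.integers(0, 2, 1000)))               # ~50/50 data -> "fair coin"
print(two_part_mdl((rng.random(1000) < 0.9).astype(int)))   # skewed data -> "biased coin"
```

The parameter cost keeps the richer model honest: it only wins when the extra compression it buys on the data exceeds the bits needed to state the fitted parameter.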
For more information, please visit our lab’s external website: spinorkavli.org/
Kavli Communications Hub
The Dimensionality Reduction and Population Dynamics in Neural Data conference was held at Nordita in Stockholm on 11-14 February 2020. Most parts of the conference were recorded (see links below).
About the conference
The brain represents and processes information through the activity of many neurons whose firing patterns are correlated with each other in non-trivial ways. These correlations, in general, imply that the activity of a population of neurons involved in a task has a lower-dimensional representation. Naturally, then, discovering and understanding such representations are important steps in understanding the operations of the nervous system, and theoretical and experimental neuroscientists have been making interesting progress on this subject. The aim of the conference was to gather a number of key players in the effort to develop methods for dimensionality reduction in neural data and to study the population dynamics of networks of neurons from this angle. We aimed to review the current approaches to the problem, identify the major questions that need to be addressed in the future, and discuss how to move forward with those questions.
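As a concrete illustration of what a lower-dimensional representation means here (a generic sketch, not any speaker's method): when the activity of many neurons is driven by a few shared latent signals, most of the population's variance concentrates in the first few principal components, which is exactly the structure dimensionality-reduction methods exploit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic population: 100 neurons, 2000 time bins, driven by 3 shared latents.
n_neurons, n_bins, n_latents = 100, 2000, 3
latents = rng.normal(size=(n_bins, n_latents))        # shared latent signals
loadings = rng.normal(size=(n_latents, n_neurons))    # how each neuron reads them
activity = latents @ loadings + 0.5 * rng.normal(size=(n_bins, n_neurons))

# PCA via the eigenvalues of the population covariance matrix.
centered = activity - activity.mean(axis=0)
cov = centered.T @ centered / (n_bins - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]               # sort descending
explained = eigvals / eigvals.sum()
print("variance in first 3 PCs:", explained[:3].sum())  # most of it, despite 100 neurons
```

Even though the recording has 100 dimensions, the first three principal components capture the bulk of the variance, because the correlations between neurons are inherited from the three underlying latents.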
See recordings from the conference here:
Conference in Stockholm playlist
Tuesday 11/02/2020
Sara Solla (Northwestern University) Neural manifolds for the stable control of movement
Matteo Marsili (ICTP) Multiscale relevance and informative encoding in neuronal spike trains
Srdjan Ostojic (ENS) Disentangling the roles of dimensionality and cell classes in neural computations (Lecture not recorded)
Wednesday 12/02/2020
Taro Toyoizumi (Riken) A local synaptic update rule for ICA and dimensionality reduction
Soledad Gonzalo Cogno (Kavli Institute, NTNU) Stereotyped population dynamics in the medial entorhinal cortex (Lecture not recorded)
Tatiana Engel (CSHL) Discovering interpretable models of neural population dynamics from data
Thursday 13/02/2020
Benjamin Dunn (Math Department, NTNU) TBA (Lecture not recorded)
Sophie Deneve (ENS) TBA (Lecture not recorded)
Barbara Feulner (Imperial College London) Learning within and outside of the neural manifold
Friday 14/02/2020
Mark Humphries (University of Nottingham) Strong and weak principles for neural dimension reduction
Devika Narain (Erasmus University Medical Center) Bayesian time perception through latent cortical dynamics