29th Annual Computational Neuroscience Meeting
CNS*2020, Online
Information-Theoretic Models in Psychology and Neuroscience
An online workshop for computational neuroscientists and mathematical psychologists taking place on 21st & 22nd July 2020.
About
Information-theoretic models describe behavior and neural dynamics in intelligent agents. They have arisen through fruitful interactions between mathematical psychology, cognitive neuroscience and other fields. However, opportunities for such interactions are currently scarce. This workshop aims to fill this gap by bringing together researchers with different backgrounds but a common goal: to understand information processing in the human and animal brain.
The workshop will discuss information sampling, encoding and decoding during sensory processing, time perception and higher cognitive functions. It will review state-of-the-art techniques based on deep neural networks, probabilistic inference and dynamical systems. It will also provide updates on recent results using these techniques to understand the biology and behavior of intelligent information processing.
The workshop will be of interest to members of the CNS community who are keen on model-driven explanations of sensory perception and higher cognition.
Organisers
When
- Tuesday 21st July: 09:30 - 13:55 ET / 15:30 - 19:55 CET
- Wednesday 22nd July: 09:00 - 13:55 ET / 15:00 - 19:55 CET
Speakers
- Vijay Balasubramanian, University of Pennsylvania
- Peter Balsam, Columbia University
- Beth Buffalo, University of Washington
- Karl Friston, University College London
- Randy Gallistel, Rutgers University
- Larry Maloney, NYU
- Earl Miller, MIT
- Devika Narain, Erasmus MC
- Bill Phillips, University of Stirling
- Jonathan Pillow, Princeton University
- Dimitris Pinotsis, City, University of London & MIT
- Tomas Ryan, Trinity College Dublin
- Noga Zaslavsky, MIT
Schedule
This is the list of talks to be held at the workshop. Each entry gives the talk title and abstract, along with a link to the speaker's homepage.
Session 1: Tuesday 21st July 09:30 - 13:55 ET / 15:30 - 19:55 CET
Session 2: Wednesday 22nd July 09:00 - 13:55 ET / 15:00 - 19:55 CET
For any inquiries related to this workshop, please contact the organizers at pinotsis@mit.edu.
- 9:30 ET / 15:30 CET - Opening Remarks. A few words before we begin.
- 9:35 ET / 15:35 CET - Jonathan Pillow (Princeton): Connecting Perceptual Bias and Discriminability with Power-Law Efficient Codes. Recent work from Wei & Stocker (2017) proposed a new "perceptual law" relating perceptual bias and discrimination threshold. This law was shown to arise under an information-theoretically optimal allocation of Fisher Information in a neural population. In this talk, I will discuss recent work with Mike Morais that generalizes and extends these results. Specifically, we show that the same law arises under a much larger family of optimal neural codes, which we call "power-law efficient codes". This family includes neural codes that are optimal for minimizing L_p error for any p, indicating that the lawful relationship observed in human psychophysical data does not require information-theoretically optimal neural codes. Moreover, our framework provides new insights into "anti-Bayesian" perceptual biases, in which percepts are biased away from the center of mass of the prior. Power-law efficient codes provide a unifying framework for understanding the relationship between perceptual bias, discriminability, and the allocation of neural resources. Speaker homepage here.
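The core relationship in this abstract can be illustrated numerically. The sketch below is our own toy example, not the speakers' code: it assumes a Gaussian stimulus prior, allocates Fisher information as a power of that prior, J(x) ∝ p(x)^q, and evaluates the Wei & Stocker law, which predicts bias proportional to the slope of the squared discrimination threshold, b(x) ∝ d/dx[D(x)^2] with D(x) ∝ 1/sqrt(J(x)).

```python
import numpy as np

# Toy sketch (assumptions: Gaussian prior, arbitrary proportionality
# constants) of a power-law efficient code, J(x) ∝ p(x)^q.
x = np.linspace(-3.0, 3.0, 601)
prior = np.exp(-0.5 * x**2)     # unnormalized Gaussian prior (assumed)
prior /= prior.sum()            # discrete normalization

def predicted_bias(q):
    J = prior**q                # power-law allocation of Fisher information
    D2 = 1.0 / J                # squared discrimination threshold, D^2 ∝ 1/J
    return np.gradient(D2, x)   # the law: bias ∝ d(D^2)/dx

# For any exponent q, the predicted bias vanishes at the prior peak and
# points away from it on either side (repulsion from the prior mode).
for q in (0.5, 1.0, 2.0):
    b = predicted_bias(q)
    mid = len(x) // 2           # index of x = 0, the prior peak
    assert abs(b[mid]) < 1e-6
    assert b[mid + 50] > 0 and b[mid - 50] < 0
```

The exponent q indexes the family of codes; the qualitative shape of the predicted bias is the same across q, which is the sense in which the law does not pin down a single information-theoretically optimal code.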
- 10:05 ET / 16:05 CET - Break. We will take a short break between speakers.
- 10:10 ET / 16:10 CET - Vijay Balasubramanian (UPenn): Complexity, Information, and Statistical Inference by Humans. All animals face the challenge of making inferences about current and future states of the world from uncertain sensory information. One might think that animals would perform better in such tasks by using more complex algorithms and models to extract and process pertinent information. But, in fact, theory predicts circumstances where simpler models of the world are more effective than complex ones, even if the latter more closely approximate the truth. Using information theory, we demonstrate this point in two ways. First, we show that when data is sparse or noisy, less complex inferred models give better predictions for the future. In this form of Occam's razor, a model family is more complex if it has more parameters, describes a greater number of distinguishable models, or is more sensitive in its parameter dependence. Second, even in situations where complex models give better predictions, cognitive and computational costs typically grow with complexity, subjecting the models to a law of diminishing returns. To conclude, we present experimental results showing that human inference behavior matches our theoretical predictions. Speaker homepage here.
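The Occam's-razor point in this abstract has a standard toy demonstration. The sketch below is our own illustration, not the speaker's analysis: with sparse, noisy observations of a simple function, a low-degree polynomial family predicts held-out data better than a high-degree family, even though the high-degree family contains closer approximations of the truth.

```python
import numpy as np

# Illustration (assumed setup: sin(2x) ground truth, Gaussian noise,
# polynomial model families of increasing complexity).
rng = np.random.default_rng(0)
x_tr = np.linspace(-1.0, 1.0, 10)                      # sparse training data
y_tr = np.sin(2 * x_tr) + 0.5 * rng.standard_normal(10)
x_te = np.linspace(-1.0, 1.0, 200)                     # dense held-out grid
y_te = np.sin(2 * x_te)                                # noiseless truth

def heldout_mse(degree):
    # Least-squares fit of a degree-`degree` polynomial to the noisy data,
    # scored against the true function on held-out points.
    coeffs = np.polyfit(x_tr, y_tr, degree)
    return np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)

# Degree 9 exactly interpolates the 10 noisy points and oscillates wildly;
# the simpler degree-3 family generalizes better.
assert heldout_mse(3) < heldout_mse(9)
```

Here "complexity" is just parameter count; the abstract's information-theoretic version also penalizes families that describe more distinguishable models or depend more sensitively on their parameters.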
- 10:40 ET / 16:40 CET - Break. We will take a short break between speakers.
- 10:45 ET / 16:45 CET - Devika Narain (Erasmus MC): Neural Circuits Underlying Bayesian Inference in Time Perception. Animals possess the ability to effortlessly and precisely time their actions even though information received from the world is often ambiguous, corrupted by noise and inadvertently transformed as it traverses neural circuitry. With such uncertainty pervading our nervous systems, we could expect that much of human and animal behavior relies on inference that incorporates an important additional source of information: prior knowledge of the environment. These concepts have long been studied under the framework of Bayesian inference, with substantial corroboration over the last decade that human time perception is consistent with such models. However, we know little about the neural mechanisms that enable Bayesian signatures to emerge in temporal perception. I will present our work on three facets of this problem: how Bayesian estimates are encoded in neural populations; how these estimates are used to generate time intervals; and how prior knowledge for these tasks is acquired and optimized by neural circuits. We trained monkeys to perform an interval reproduction task and found their behavior to be consistent with Bayesian inference. Using insights from electrophysiology and in silico models, we propose a mechanism by which cortical populations encode Bayesian estimates and utilize them to generate time intervals. In the second part of my talk, I will present a circuit model for how temporal priors can be acquired by cerebellar machinery, leading to estimates consistent with Bayesian theory. Based on electrophysiology and anatomy experiments in rodents, I will provide some support for this model. Overall, these findings attempt to bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation and production of timing behaviours. Speaker homepage here.
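The Bayesian signature referred to in this abstract is the "central tendency" of interval reproduction: estimates are pulled toward the mean of the prior. A minimal sketch, assuming a Gaussian prior over intervals and Gaussian measurement noise (the interval-timing literature often uses scalar, interval-dependent noise instead; all numbers below are illustrative):

```python
# Illustrative parameters (assumed, in milliseconds).
mu_prior, var_prior = 800.0, 150.0**2   # prior over sample intervals
var_meas = 100.0**2                     # measurement-noise variance

def bayes_estimate(t_measured):
    # Posterior mean for a Gaussian prior times a Gaussian likelihood:
    # a precision-weighted average of the measurement and the prior mean.
    w = var_prior / (var_prior + var_meas)   # weight on the measurement
    return w * t_measured + (1 - w) * mu_prior

short, long = bayes_estimate(600.0), bayes_estimate(1000.0)
# Both estimates regress toward the 800 ms prior mean.
assert 600.0 < short < 800.0 and 800.0 < long < 1000.0
```

Behavior matching this shrinkage pattern, with biases growing for intervals far from the prior mean, is the kind of evidence cited for Bayesian inference in monkey and human timing.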
- 11:15 ET / 17:15 CET - One-Hour Break. We will take a one-hour break before our next speaker.
- 12:15 ET / 18:15 CET - Earl Miller (MIT): Working Memory 2.0. Working memory is the sketchpad of consciousness, the fundamental mechanism the brain uses to gain volitional control over its thoughts and actions. For the past 50 years, working memory has been thought to rely on cortical neurons that fire continuous impulses that keep thoughts "online". However, new work from our lab has revealed more complex dynamics. The impulses fire sparsely and interact with brain rhythms of different frequencies. Higher-frequency gamma (> 35 Hz) rhythms help carry the contents of working memory while lower-frequency alpha/beta (~8-30 Hz) rhythms act as control signals that gate access to and clear out working memory. In other words, a rhythmic dance between brain rhythms may underlie your ability to control your own thoughts. Speaker homepage here.
- 12:45 ET / 18:45 CET - Break. We will take a short break between speakers.
- 12:50 ET / 18:50 CET - Beth Buffalo (U Washington): Reconciling the Spatial and Mnemonic Views of the Hippocampus. For decades, our understanding of the hippocampus has been framed by two landmark discoveries: the discovery by Scoville and Milner that hippocampal damage causes profound and persistent amnesia, and the discovery by O'Keefe and Dostrovsky of hippocampal place cells in rodents. However, it has been unclear to what extent spatial representations are present in the primate brain and how to reconcile these representations with the known mnemonic function of this region. I will discuss a series of experiments that have examined neural activity in the hippocampus and adjacent entorhinal cortex in monkeys performing behavioral tasks, including spatial memory tasks in a virtual environment. These data demonstrate that spatial representations can be identified in the primate hippocampus, and that behavioral task structure has a significant influence on hippocampal activity, with neurons responding to all salient events within the task. Together, these data are consistent with the idea that activity in the hippocampus tracks ongoing experience in support of memory formation. Speaker homepage here.
- 13:20 ET / 19:20 CET - Break. We will take a short break between speakers.
- 13:25 ET / 19:25 CET - Dimitris Pinotsis (London): Characterizing Sensory and Abstract Representations in Neural Ensembles. Many recent advances in artificial intelligence (AI) are rooted in visual neuroscience. However, ideas from more complicated tasks like decision-making are used less. At the same time, decision-making tasks that are hard for AI are easy for humans. Thus, understanding human brain dynamics during these tasks could improve AI performance. Here we modelled some of these dynamics. We investigated how neural ensembles flexibly represented and distinguished between sensory processing and categorization in two sensory domains: motion direction and color. We used two different approaches for understanding neural representations. We compared brain responses to 1) the geometry of a sensory or category domain (domain selectivity) and 2) predictions from deep neural networks (computation selectivity). Both approaches gave us similar results. Using the first approach, we found that neural representations changed depending on context. We then trained deep recurrent neural networks to perform the same tasks as the animals. Using the second approach, we found that computations in different brain areas also changed flexibly depending on context. Both approaches yielded the same conclusion: decision making in the color domain appeared to rely more on sensory processing, while decision making in the motion domain relied more on abstract representations. Finally, using biophysical modeling and data from a spatial delayed response task, we characterized cortical connectivity in neural ensembles and explained a well-known behavioral effect in psychophysics, the oblique effect. Overall, this talk will introduce an approach for studying the computations and neural representations taking place in neural ensembles by exploiting a combination of machine learning, biophysics and brain imaging. Speaker homepage here.