Pau Vilimelis Aceituno

Max Planck Institute for Mathematics in the Sciences, Leipzig

The structure of complex neural networks and its effects on learning

Abstract: Reservoir Computing (RC) is a computing paradigm that can be used both as a theoretical neuroscience model (Maass et al., 2002) and as a machine learning tool (Jaeger and Haas, 2004). The key feature of the RC paradigm is its reservoir, a directed and weighted network that represents the connections between neurons. Despite extensive research efforts, the impact of the reservoir topology on RC performance remains unclear. Here we explore this fundamental question and show, both analytically and computationally, how structural features determine the type of tasks that these recurrent neural networks can perform.
We focus on two network properties: First, by studying the correlations between neurons we demonstrate how the degree distribution affects the short-term memory of the reservoir. Second, after showing that adapting the reservoir to the frequency of the time series to be processed increases the performance, we demonstrate how this adaptation is dependent on the abundance of short cycles in the network.
Based on these results we create an optimization strategy to improve time-series forecasting performance. We validate our results on various benchmark problems, in which we surpass state-of-the-art RC implementations. Our approach provides a new way of designing more efficient recurrent neural networks and of understanding the computational role of common network properties such as degree distributions and network motifs.
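For readers unfamiliar with the paradigm, the RC pipeline summarized above can be sketched in a few lines of NumPy. The block below is a minimal echo state network with an unstructured random reservoir and a ridge-regression readout; the network size, spectral radius, input scaling, and regularization are illustrative choices, not the optimized topologies discussed in the abstract.

```python
import numpy as np

# Minimal echo state network: fixed random reservoir + trained linear readout.
# All parameter values are illustrative, not taken from the abstract.
rng = np.random.default_rng(0)
N = 200
W = rng.normal(size=(N, N))                       # unstructured random reservoir
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, size=N)             # input weights

u = np.sin(0.2 * np.arange(1000))                 # toy time series to forecast
x = np.zeros(N)
states = []
for t in range(len(u) - 1):
    x = np.tanh(W @ x + w_in * u[t])              # reservoir state update
    states.append(x.copy())
X = np.array(states)
y = u[1:]                                         # one-step-ahead targets

# Only the readout is trained (ridge regression); the reservoir stays fixed.
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ w_out
rmse = np.sqrt(np.mean((pred[200:] - y[200:]) ** 2))  # error after washout
```

In this sketch the reservoir is a dense random matrix; the abstract's point is precisely that replacing it with a network whose degree distribution and cycle structure are adapted to the task changes the memory and frequency response of the system.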

Alexis Dubreuil

Ecole Normale Supérieure, Paris

Sensorimotor computation underlying phototaxis in zebrafish

Abstract: Animals continuously gather sensory cues in order to move towards favorable environments. Efficient goal-directed navigation requires sensory perception and motor commands to be intertwined in a feedback loop, yet the neural substrate underlying this sensorimotor task in the vertebrate brain remains elusive. Here we present analysis of whole-brain calcium recordings together with circuit modeling to reveal the neural mechanisms through which zebrafish perform phototaxis, i.e. actively orient towards a light source. Key to this process is a self-oscillating hindbrain network that acts as a pacemaker for ocular saccades and controls the orientation of successive swim bouts. It further integrates visual stimuli in a state-dependent manner, i.e. its response to visual inputs varies with the motor context, a mechanism that manifests itself in the phase-locked entrainment of the circuit by periodic stimuli. A rate model is developed that reproduces our observations and demonstrates how this sensorimotor processing eventually biases the animal's trajectory towards bright regions.

Gregory Dumont

Ecole Normale Supérieure, Paris

Phase Locking States of Bidirectionally Coupled Spiking Neural Networks

Abstract: Macroscopic oscillations of different brain regions show phase relations that are persistent across time. Termed communication through coherence, such phase locking is believed to be implicated in a number of cognitive functions. This suggests that there are mechanisms at the cellular level that influence the network's dynamics and structure its macroscopic firing patterns. The question is then to identify the biological properties of neurons that permit such motifs to arise. Unfortunately, there is not yet a general method for studying macroscopic phase locking, and one of the central issues we face in mathematical neuroscience is to arrive at a clear understanding of the synaptic mechanisms responsible for its emergence.

To address this issue, we use a semi-analytic modeling approach. We investigate the dynamical emergence of phase locking within two bidirectionally delay-coupled spiking networks. While not explicitly a model of any specific brain areas, the design captures the essentials of many communicating cortical and sub-cortical regions where macroscopic locking takes place. The circuits are made up of excitatory and inhibitory cells coupled in an all-to-all fashion. Each cell is described by a well-established spiking neuron model – the quadratic integrate-and-fire – which is known to replicate the dynamical features of the neural voltage. Taking advantage of a mean-field approach combined with an exact reduction method, we break down the description of each spiking network into a low-dimensional nonlinear system. Bifurcation analysis of the reduced system enables us to reveal how synaptic interactions and inhibitory feedback permit the emergence of macroscopic rhythms. The adjoint method is then applied, and a semi-analytical expression for the macroscopic infinitesimal phase-resetting curve is derived.
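An exact mean-field reduction of this kind is likely in the spirit of the Lorentzian-ansatz reduction of quadratic integrate-and-fire networks to two macroscopic variables, the population firing rate r and mean voltage v (Montbrió, Pazó and Roxin, 2015). A minimal numerical sketch of the resulting low-dimensional system for a single population, with illustrative parameters, is:

```python
import numpy as np

# Low-dimensional mean-field of a QIF population (Lorentzian ansatz):
#   dr/dt = Delta/pi + 2 r v
#   dv/dt = v^2 + eta + J r - (pi r)^2
# Parameter values are illustrative, not taken from the abstract.
Delta, eta, J = 1.0, -5.0, 15.0   # heterogeneity width, mean drive, coupling
dt, T = 1e-3, 40.0
r, v = 0.1, -2.0
rates = []
for _ in range(int(T / dt)):      # forward-Euler integration
    dr = Delta / np.pi + 2.0 * r * v
    dv = v**2 + eta + J * r - (np.pi * r) ** 2
    r += dt * dr
    v += dt * dv
    rates.append(r)
```

Bifurcation analysis and the adjoint (phase-resetting) computation described in the abstract would then be carried out on this two-dimensional system rather than on the full spiking network.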

From there, we study the dynamical emergence of macroscopic phase locking of the two bidirectionally delay-coupled spiking networks within the framework of weakly coupled oscillators. The weak-coupling condition allows us to reduce the description of the bidirectionally coupled circuits to a phase equation. An analysis of the phase equation sheds new light on the synaptic mechanisms enabling circuits to bind together, and it uncovers the decisive role played by delay and coupling strength in the dynamical rise of a variety of synchronization modes.

Ramon Guevara Erra

Universite Paris Descartes, Paris

Consciousness as a global property of brain dynamic activity

Abstract: We seek general principles of the structure of the cellular collective activity associated with conscious awareness. Can we obtain evidence for features of the optimal brain organization that allow for adequate processing of stimuli and that may guide the emergence of cognition and consciousness? Analyzing brain recordings in conscious and unconscious states, we initially followed the classic approach in physics to the collective behavior of systems composed of a myriad of units: assessing the number of possible configurations (microstates) that the system can adopt, for which we used a global entropic measure associated with the number of connected brain regions. Having found maximal entropy in conscious states, we then inspected the microscopic nature of the configurations of connections using an adequate complexity measure, and found higher complexity not only in states characterized by conscious awareness but also in those with subconscious cognitive processing, such as sleep stages. Our observations indicate that conscious awareness is associated with maximal global (macroscopic) entropy and with high short-time-scale (microscopic) complexity of the configurations of connected brain networks. The global measure distinguishes conscious states from pathological unconscious states (seizures and coma), whereas the microscopic view captures the high complexity of physiological unconscious states (sleep) in which information processing continues. As such, our results support the global nature of conscious awareness, as advocated by several theories of cognition. We hope that these studies represent preliminary steps towards revealing aspects of the structure of cognition that lead to conscious awareness.


Inria Sophia Antipolis-Méditerranée, Sophia Antipolis Cedex, France

Anticipation via canards in excitable systems

Abstract: Anticipated synchronisation is a counter-intuitive form of synchronisation observed in a wide range of dynamical systems, from biology to engineering applications. It can occur in unidirectionally interacting systems where the receiver is subject to a self-delayed feedback in addition to the signal coming from the sender. This particular interaction permits the receiver to predict the future trajectory of the sender. In this study, we focus on anticipated behaviour in excitable systems, particularly in type-II neuron models, and link it with another counter-intuitive phenomenon, namely canards. Canard trajectories structure the excitability and synchronisation properties of multiple-timescale systems exhibiting excitable dynamics. By developing a theoretical framework enhanced by numerical continuation, we show that the underlying canard structure in excitable systems can be responsible for delaying sub-threshold solutions while anticipating the spiking ones.
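The sender-receiver scheme described above can be sketched numerically with a Voss-type coupling, in which the receiver gets the sender's signal together with a negative self-delayed feedback term. The FitzHugh-Nagumo model below stands in for a type-II excitable neuron; all parameter values (coupling gain, delay, oscillatory drive) are illustrative assumptions, not the configurations studied in the abstract.

```python
import numpy as np

# Sender: autonomous FitzHugh-Nagumo oscillator.
# Receiver: identical dynamics plus the coupling K*(vs(t) - vr(t - tau)),
# the self-delayed feedback scheme that permits anticipation.
eps, a, b, I = 0.08, 0.7, 0.8, 0.5   # FHN parameters (oscillatory regime)
K, tau, dt = 0.5, 1.0, 0.01          # coupling gain, feedback delay, time step
steps, lag = 60000, int(tau / dt)

vs, ws = -1.0, -0.5                  # sender state
vr, wr = -1.2, -0.4                  # receiver state
hist = np.full(lag, vr)              # circular buffer holding vr(t - tau)
Vs, Vr = [], []
for t in range(steps):
    dvs = vs - vs**3 / 3 - ws + I
    dws = eps * (vs + a - b * ws)
    dvr = vr - vr**3 / 3 - wr + I + K * (vs - hist[t % lag])
    dwr = eps * (vr + a - b * wr)
    hist[t % lag] = vr               # overwrite after reading the delayed value
    vs += dt * dvs; ws += dt * dws
    vr += dt * dvr; wr += dt * dwr
    Vs.append(vs); Vr.append(vr)
```

On the anticipating manifold the receiver satisfies vr(t) ≈ vs(t + tau), i.e. it runs ahead of the sender by the feedback delay; the abstract's contribution is to explain, via the canard structure, when excitable systems anticipate the spiking solutions but delay the sub-threshold ones.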

Ulisse Ferrari

Sorbonne Universite, Paris

Separating intrinsic interactions from extrinsic correlations in a network of sensory neurons

Abstract: Correlations in sensory neural networks have both extrinsic and intrinsic origins. Extrinsic or stimulus correlations arise from shared inputs to the network, and thus depend strongly on the stimulus ensemble. Intrinsic or noise correlations reflect biophysical mechanisms of interactions between neurons, which are expected to be robust to changes of the stimulus ensemble. Despite the importance of this distinction for understanding how sensory networks encode information collectively, no method exists to reliably separate intrinsic interactions from extrinsic correlations in neural activity data, limiting our ability to build predictive models of the network response. In this paper we introduce a general strategy to infer population models of interacting neurons that collectively encode stimulus information. The key to disentangling intrinsic from extrinsic correlations is to infer the couplings between neurons separately from the encoding model, and to combine the two using corrections calculated in a mean-field approximation. We demonstrate the effectiveness of this approach on retinal recordings. The same coupling network is inferred from responses to radically different stimulus ensembles, showing that these couplings indeed reflect stimulus-independent interactions between neurons. The inferred model predicts accurately the collective response of retinal ganglion cell populations as a function of the stimulus.

Lionel Gil

Inphyni, Sophia Antipolis, France

Stage 2 retinal waves

Jennifer Goldman

EITN, CNRS,  Paris, France

Spectral structure of human brain activity: an analogy with thermodynamics

Abstract: Neural electromagnetic (EM) signals recorded non-invasively from individual human subjects vary in complexity and magnitude between brain states. Accordingly, variations in neural activity across different brain functions have been difficult to quantify and interpret, owing to their complex, broad-band features in the frequency domain. Studying signals recorded with magnetoencephalography (MEG) from healthy young adult subjects while resting and while performing cognitively demanding tasks, we apply a systematic framework inspired by thermodynamics to neural EM signals. Despite considerable inter-subject variation, the data support the existence of a robust and linear relationship that defines an effective state equation conserved across the brain states examined here. Physical relationships between state variables are further investigated using a Kuramoto model, revealing that an interplay between noise and coupling strength can account for the coherent variation of the empirically observed state variables across brain states.
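The role of noise and coupling strength in the Kuramoto model can be illustrated with a small numerical sketch: the phase coherence (order parameter) rises with coupling and falls with noise. The parameter values below are illustrative, not those fitted to the MEG data.

```python
import numpy as np

def order_parameter(K, sigma, N=500, steps=2000, dt=0.02, seed=1):
    """Simulate a noisy Kuramoto model and return the final phase coherence r."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, N)          # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)   # initial phases
    for _ in range(steps):
        z = np.exp(1j * theta).mean()        # complex order parameter r*e^{i*psi}
        r, psi = np.abs(z), np.angle(z)
        # mean-field form of the Kuramoto coupling, plus phase noise
        theta += dt * (omega + K * r * np.sin(psi - theta))
        theta += np.sqrt(dt) * sigma * rng.normal(size=N)
    return np.abs(np.exp(1j * theta).mean())

r_coherent = order_parameter(K=3.0, sigma=0.2)    # strong coupling, weak noise
r_incoherent = order_parameter(K=0.2, sigma=1.0)  # weak coupling, strong noise
```

Sweeping K and sigma in this way traces out how macroscopic coherence varies with microscopic parameters, which is the mechanism the abstract invokes to account for the covariation of the empirical state variables.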

Tobias Kuehn

Research Centre Juelich, INM-6 (Institute for Computational and Systems Neuroscience), IAS-6 (Institute for Advanced Simulation) and INM-10 (JARA Brain), Juelich, Germany

TAP-Approximation and beyond with Feynman diagrams

Abstract: Originally invented to describe magnetism and extensively studied in spin-glass theory, the Ising model is now in use in many other disciplines, such as socioeconomic physics, machine learning [1] and the analysis of neural data [2,3,4]. To justify the use of the mean-field approximation and to calculate corrections to it, one needs the Gibbs free energy or effective action of the Ising model. Its computation is a nontrivial task, even if one proceeds perturbatively in the interaction. Still, it has been accomplished with techniques specialized to the Ising model, both nondiagrammatically, e.g. in [5], and diagrammatically [6]. These techniques were lacking a link to field theory in the sense that it was in general unclear which types of diagrams represent the effective action if the unperturbed theory is not Gaussian. In our work [7], we provide this link by showing that two types of diagrams contribute: those that are irreducible in the sense that they cannot be decomposed into two parts, each containing interactions, by detaching one leg of an interaction; and a second type specific to non-Gaussian theories. Applying these novel diagrammatic rules, we reproduce the results for the Ising model obtained earlier and find that the rules simplify the calculation compared to non-diagrammatic techniques.
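As a concrete illustration of the mean-field hierarchy discussed above, the sketch below compares naive mean-field magnetizations with the TAP equations (which add the Onsager reaction term, the first correction obtained from the Georges-Yedidia expansion of the free energy) against exact enumeration on a small random-coupling Ising model. The system size, coupling scale, and damping are illustrative choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
N = 6
J = rng.normal(0.0, 0.15, (N, N))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)          # symmetric couplings, no self-coupling
h = rng.normal(0.0, 0.3, N)       # local fields

# Exact magnetizations by enumeration of all 2^N spin configurations
states = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)
energies = -0.5 * np.einsum('si,ij,sj->s', states, J, states) - states @ h
p = np.exp(-energies)
p /= p.sum()
m_exact = p @ states

# Naive mean field: m_i = tanh(h_i + sum_j J_ij m_j), damped fixed-point iteration
m_mf = np.zeros(N)
for _ in range(2000):
    m_mf = 0.9 * m_mf + 0.1 * np.tanh(h + J @ m_mf)

# TAP: adds the Onsager reaction term  -m_i * sum_j J_ij^2 (1 - m_j^2)
m_tap = np.zeros(N)
for _ in range(2000):
    m_tap = 0.9 * m_tap + 0.1 * np.tanh(
        h + J @ m_tap - m_tap * (J**2 @ (1 - m_tap**2)))

err_mf = np.max(np.abs(m_mf - m_exact))
err_tap = np.max(np.abs(m_tap - m_exact))
```

In the weak-coupling regime used here the TAP estimate is systematically closer to the exact magnetizations, since the Onsager term removes the leading mean-field error; the diagrammatic rules of [7] organize exactly such corrections to all orders.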


[1] Hinton, G.E. and Salakhutdinov, R.R. (2006): Reducing the dimensionality of data with neural networks. Science 313, 504–507

[2] Tkacik, G., Schneidman, E., Berry II, M.J. and Bialek, W. (2008): Ising models for networks of real neurons. arXiv:q-bio/0611072

[3] Roudi, Y., Tyrcha, J. and Hertz, J.A. (2009): Ising model for neural data: model quality and approximate methods for extracting functional connectivity. Phys. Rev. E 79, 051915

[4] Hertz, J.A., Roudi, Y. and Tyrcha, J. (2011): Ising models for inferring network structure from spike data. arXiv:1106.1752

[5] Georges, A. and Yedidia, J.S. (1991): How to expand around mean-field theory using high-temperature expansions. J. Phys. A 24, 2173–2192

[6] Vasiliev, A.N. and Radzhabov, R.A. (1974): Legendre transforms in the Ising model. Theoretical and Mathematical Physics 21, 963–970

[7] Kühn, T. and Helias, M. (2017): Expansion of the effective action around non-Gaussian theories. arXiv:1711.05599


This work was partially supported by HGF young investigator’s group VH-NG-1028, Helmholtz portfolio theme SMHB, Juelich Aachen Research Alliance (JARA), and EU Grant 604102 (Human Brain Project, HBP).

Christian Keup

Juelich Research Centre, RWTH Aachen, Germany

Dynamics of cell assemblies in binary neuronal networks

Abstract: Connectivity in local cortical networks is far from random: reciprocal connections are over-represented, and there are subgroups of neurons that are more strongly connected among each other than to the remainder of the network [1,2]. These observations provide growing evidence for the existence of neuronal assemblies, that is, groups of neurons with stronger and/or more numerous connections between members than to non-members. To study the dynamics of these building blocks quantitatively, we consider a single assembly of binary neurons embedded in a larger randomly connected EI network and explore its properties by analytical methods and simulation. In dynamical mean-field theory [3], we obtain expressions for the mean activities, auto- and cross-correlations, and the response to input fluctuations using a Gaussian closure. For sufficiently strong assembly self-feedback, a bifurcation from a mono-stable to a bistable regime exists. The critical regime around the bifurcation is of particular interest, as input variations can drive the assembly to high or low activity states and large spontaneous fluctuations are present. These could be a source of the neuronal avalanches observed in cortex, and the robust response to input could constitute attractor states supporting classification in sensory perception. In this regime, however, the Gaussian approximation is not accurate due to large fluctuation corrections. We therefore work on a path-integral formulation of such systems, built on recent developments in the application of statistical field theory to neuronal networks [4]. This formulation allows the derivation of an effective potential, a systematic treatment of approximations, and the quantification of the response to inputs.
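The bifurcation from a mono-stable to a bistable regime can be illustrated with a one-dimensional self-consistency caricature of the mean-field picture: the assembly's mean activity m must satisfy m = phi(w*m - theta), where w is the effective self-feedback. The gain, threshold, and feedback values below are illustrative assumptions, not the parameters of the model in the abstract.

```python
import numpy as np

def n_fixed_points(w, theta=0.5, beta=8.0):
    """Count self-consistent mean activities m = phi(w*m - theta)
    for self-feedback w, threshold theta, and sigmoidal gain beta."""
    m = np.linspace(0.0, 1.0, 100001)
    f = 1.0 / (1.0 + np.exp(-beta * (w * m - theta))) - m
    return int(np.sum(np.diff(np.sign(f)) != 0))   # zero crossings of f(m)

weak = n_fixed_points(w=0.3)    # weak self-feedback: a single activity state
strong = n_fixed_points(w=1.2)  # strong self-feedback: three crossings, i.e.
                                # two stable states separated by an unstable one
```

Near the transition between these two regimes, small input variations can switch the assembly between the low and high activity branches, which is the critical behaviour the abstract highlights as a possible source of avalanches and attractor-like responses.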

1. Ko H, Hofer SB, Pichler B, Buchanan KA, Sjöström PJ, Mrsic-Flogel TD (2011) Functional specificity of local synaptic connections in neocortical networks. Nature 473: 87-91
2. Perin R, Berger TK, Markram H (2011) A synaptic organizing principle for cortical neuronal groups. PNAS 108: 5419-5424
3. Helias M, Tetzlaff T, Diesmann M (2014) The Correlation Structure of Local Neuronal Networks Intrinsically Results from Recurrent Dynamics. PLoS Comput Biol 10(1): e1003428
4. Schücker J, Goedeke S, Dahmen D, Helias M (2016) Functional methods for disordered neural networks. arXiv:1605.06758v2