Royal Society Publishing

The sleeping brain as a complex system

Eckehard Olbrich, Peter Achermann, Thomas Wennekers


‘Complexity science’ is a rapidly developing research direction with applications in a multitude of fields that study complex systems consisting of a number of nonlinear elements with interesting dynamics and mutual interactions. This Theme Issue ‘The complexity of sleep’ aims at fostering the application of complexity science to sleep research, because the brain in its different sleep stages adopts different global states that express distinct activity patterns in large and complex networks of neural circuits. This introduction discusses the contributions collected in the present Theme Issue. We highlight the potential and challenges of a complex systems approach to develop an understanding of the brain in general and the sleeping brain in particular. We focus on two topics: the complex networks approach to understand the changes in the functional connectivity of the brain during sleep, and the complex dynamics of sleep, including sleep regulation. We hope that this Theme Issue will stimulate and intensify the interdisciplinary communication to advance our understanding of the complex dynamics of the brain that underlies sleep and consciousness.

1. Introduction

There is no clearly defined and universally accepted notion of ‘complexity’ or ‘complex systems’. However, in the last decades, a new interdisciplinary field has developed—now labelled ‘complexity science’—that studies how emergent properties arise in systems consisting of few or many interacting elements. The first such models were studied in physics and usually consisted of simple elements with equally simple, but nonlinear, interactions. Early paradigmatic models of enormous impact were low-dimensional deterministic dynamical systems that showed aperiodic dynamical behaviour called ‘deterministic chaos’ [1]. High-dimensional and distributed systems were also studied; examples include elementary cellular automata [2], grids and networks of nonlinear oscillators [3], reaction–diffusion equations for pattern formation in chemical preparations [4], lasers [5] and other systems [6]. These theoretical models have further been applied to biological systems, for example, in morphogenesis and embryogenesis, where they are used to describe different aspects of cell differentiation in growing tissue [7–9], or to genetic and metabolic networks as another prominent area of application [10,11]. Similar approaches have been used in models of brain function based on principles of self-organization, for example, associative memory models [12,13] or coupled oscillator systems for cognitive processes [14].

In many biological systems, a complication not accounted for in early approaches to complex systems becomes apparent: the elements of biological systems are themselves complex; they reveal behaviour that can be seen as emerging from interacting subsystems at a finer scale. This renders the definition of many biological systems problematic, as the identification of ‘levels’ of description and of discrete parts is not at all obvious. Such a separation of scales has been very successful in thermodynamics, where, for example, the macroscopic and microscopic scales of a gas and its constituent particles are clearly separable, the respective laws governing either scale are well established, and their correspondence is very well understood and can be expressed by the classical statistical ensembles of thermodynamics.

In strong contrast, the brain and the physiological processes it carries reveal spatial and temporal structure on wide and intermingled ranges of scales: the brain is subdivided into various phylogenetically differentiated parts, of which the most recent and highly developed neocortex hosts somewhere around a hundred areas specialized for various different sensory, cognitive and motor functions [15]. These cortical areas are commonly believed to consist of subunits, but there is currently no agreement about what these more elementary units precisely are. They may be defined as local groups or pools of cells acting in a coordinated way [16,17], columns [18,19], cell assemblies [20,21], canonical micro-circuits [22] or just individual nerve cells. These different concepts may indeed address more than one level of complexity, but this, as said, is a matter of debate. It is commonly believed that individual neurons define basic units at their own level of scale (often called ‘microscopic’), but again they can be subdivided into compartments, synapses, channels, molecules and so forth, with relevant physiological and dynamical properties on all levels.

The temporal dynamics of the brain is similarly rich, spanning many orders of magnitude: many synaptic processes fall into the sub-millisecond to millisecond range, perceptual and thought processes happen in fractions of seconds to perhaps hours, and memory phenomena last for seconds to many years. Faster as well as slower processes have been identified, like molecular or evolutionary ones. It can be safely assumed that no process acts in isolation; rather, any process is embedded into a network of intricate mutual dependences across time.

The definition of spatial and temporal scales in the brain is further complicated for statistical/thermodynamical reasons. Laws of large numbers act nicely in physical systems like gases, solids or lasers, where they allow one to reduce the high-dimensional microscopic dynamics to state equations or equations for ‘order parameters’. Typically, the larger the system size is, the better isolated are the two scales of description from each other. ‘Emergent’ behaviour on one level is assumed to allow a self-sufficient description, with the processes from the lower scales entering the description either as parameters or as noise. Thereby, the dimensionality of the system description becomes drastically reduced and the lower-level physics is, informally speaking, ‘abstracted away’. This may well be quite different in biological complex systems and especially in the brain, where mean-field arguments have been brought forward on a theoretical base, e.g. [16,23–25], but system sizes may not be big enough for thermodynamic limits to apply crisply. Reduced state equations may then not be good approximations. In order to understand complex behaviour on one level, say, activity patterns in a local slab of cells, as emerging from the nonlinear interaction between elements like single neurons, the complexity on the lower level has to be reduced appropriately. Given the complicated anatomical structure and high-dimensional dynamics of biological neurons, this is an almost impossible task that requires significant approximations of often uncertain quality. Moreover, bridging scales in thermodynamics makes use of the virtually infinite number of particles in, say, an ounce of matter. In contrast, even though the number of neurons in a slab the size of a pearl is high, it must be considered finite, perhaps between 10^4 and 10^6 for human cortical principal cells. The numbers are thus much less favourable than in solids or gases, with, for example, 6×10^23 particles per gram of hydrogen.
A cortical neuron, in contrast, has just about 10^5 synapses [26], often sparsely activated in space and time across its dendrites; the number of neurons in a column is between 100 and 10^5 (depending on definition, species and author, e.g. [19]); the number of columns in the macaque striate cortex is perhaps 2000 (estimated from Horton & Adams [19]); and the number of areas in the brain is somewhere around 100. All these numbers are debatable, but they are certainly not large enough to assume by default that thermodynamic limit arguments are valid to bridge between scales. Given, in addition, the omnipresence of noise-like processes in the brain (e.g. quantal noise in vesicle release, fast channel dynamics or Poisson-like neural firing patterns), the heterogeneity of its elements, the sparseness of neural activity and the highly structured functional neural circuits spanning laterally and across the laminar cortical architecture [27], some doubt can be cast on the concept of clearly defined subsystems and order parameters resulting from laws of large numbers or other averaging arguments [25,28]. It may also be said that, after all, the purpose of the brain is to process information, not to average it away. Understanding the brain as a complex interacting network poses a number of issues that can draw on previous and ongoing research. It may also raise new challenges for complexity science.

A major goal of brain science is to understand cognitive processes (and perhaps even our mind). Given the above argumentation, cognitive systems should be large-scale, multi-scale, nonlinear, highly heterogeneous and highly interactive. To some degree, this reflects psychological experience, the multifaceted plethora of thoughts and behaviours the brain is capable of producing. Assuming congruence between psychological and physiological processes, it therefore makes sense to develop a complexity theoretical approach towards mind and brain. We propose such an approach in the present Theme Issue in a more limited form, that is, with special consideration of the process of sleep and its regulation. A number of papers will be presented that introduce ideas from complexity science and their application to analyse the brain during sleep. Of course, the selection can only cover a few aspects, but we hope that the previous paragraphs have highlighted the potential of bringing together complexity science and sleep research.

We have chosen to focus mainly on the brain during sleep, because this has a number of advantages over a more general approach: a major reason is that it apparently allows one to simplify and constrain the range of target problems. Sleep is characterized by two distinct states—rapid eye movement (REM) and non-REM sleep—reflected in specific activation patterns across the brain in electroencephalographic/magnetoencephalographic (EEG/MEG) recordings or functional magnetic resonance imaging (fMRI). Furthermore, non-REM sleep is subdivided into light and deep sleep. Interestingly, different states are characterized by distinct spectral patterns of the EEG, reflecting characteristic sleep oscillations, such as slow waves and sleep spindles. It is this nested set of oscillations and waves that makes the sleeping brain interesting for complexity science: the spatio-temporal properties of the underlying complex networks change with state, but there is only a small number of key mechanisms, which can be addressed by physiological experiments and computational models. The physiology of sleep regulation, that is, the control of the dynamics of alternation between wakefulness, non-REM and REM sleep, also allows the application of concepts from complexity science. For instance, one may try to understand state transitions as related to bifurcations in dynamical systems. Finally, a further interesting aspect of sleep is that it is closely related to the question of the origin of conscious experience, because non-REM sleep, REM sleep and wakefulness relate to different levels of consciousness. Indeed, Tononi [29] suggested that the level of structural ‘complexity’, measured as simultaneous integration and segregation of activity during different brain states, is a direct measure of the level of consciousness the brain experiences.

This Theme Issue aims at providing an overview of both the current state of the art as well as future challenges in applications of complex system science to sleep research. The issue comprises new results and methods from different research fields, such as experimental brain research, mathematical physics, data analysis and computational modelling, which all approach the complexity of the sleeping brain from different perspectives. The following sections introduce various relevant topics in more depth and provide an integrative view of how the presented papers contribute to progress in the field.

2. Complex networks, functional connectivity and complexity measures

A very popular field in complexity science that has particular relevance for neuroscience in general and the analysis of the sleeping brain in particular is that of ‘complex networks’. Here, a complex system is represented by a graph, with the elements being the nodes and the edges corresponding to a binary relation between the nodes. This approach is founded on the observation that large classes of systems can be represented by networks that share universal features such as a degree distribution following a power law (the so-called ‘scale-free’ graphs), the small-world property and a modular structure. The contribution of Zamora-López et al. [30] reviews some of these findings for brain and mind networks. In brain networks, the nodes are neurons, groups of neurons or cortical regions, and the edges represent anatomical connections (structural brain networks) or dynamical correlations during particular tasks or states (functional brain networks). The term ‘mind networks’ refers to networks related to the organization of the human mind, i.e. a description at the mental level. Examples are memory networks or linguistic networks, with words being the nodes and associations or co-occurrences represented by the edges. Zamora-López et al. [30] point out that both brain and mind networks show (i) a broad distribution of node degrees, with some nodes being hubs, (ii) small-world properties and (iii) organization into modular and hierarchical structures.
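The small-world property mentioned above—high clustering combined with short path lengths—can be made concrete with a toy computation. The following sketch (plain Python; graph sizes and shortcut placements are purely illustrative, not taken from any of the papers in this issue) builds a ring lattice and adds a few long-range shortcuts in the spirit of the Watts–Strogatz construction, then compares the two standard diagnostics:

```python
from collections import deque

def clustering(adj):
    """Average local clustering coefficient of an undirected graph."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # Count edges among the neighbours of v (each pair once).
        links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs (BFS from each node)."""
    n, total = len(adj), 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

def ring_lattice(n, k):
    """Each node connected to its k nearest neighbours on either side."""
    return {v: {(v + d) % n for d in range(-k, k + 1) if d != 0}
            for v in range(n)}

ring = ring_lattice(60, 3)
small_world = ring_lattice(60, 3)
# A few long-range shortcuts give the small-world character.
for a, b in [(0, 30), (10, 45), (20, 50), (5, 35)]:
    small_world[a].add(b)
    small_world[b].add(a)

print(clustering(ring), avg_path_length(ring))
print(clustering(small_world), avg_path_length(small_world))
```

The shortcuts barely reduce the clustering coefficient but markedly shorten the average path length—the combination that characterizes small-world networks, including the brain and mind networks reviewed by Zamora-López et al. [30].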

How different brain regions are functionally connected and how the interplay between brain regions is related to vigilance states (awake and sleep states) and cognitive functioning is of growing interest in neuroscience. An influential approach in this field is the idea that the complexity of the functional brain network can be related to the level of consciousness in the corresponding vigilance state [31], as formulated in the information integration theory of consciousness [29]. Using a complexity measure for networks that can be related to one of the early measures used by Tononi et al. [32], Zamora-López et al. [30] show that the complexity of brain networks depends essentially on the existence of hubs and their modular structure. In this issue, Spoormaker et al. [33] highlight changes in large-scale functional brain networks between the awake state, light sleep and deep sleep. In accordance with the information integration theory of consciousness, they find a breakdown of large-scale connectivity in deep sleep. By contrast, they describe an increased functional connectivity in light sleep when compared with the awake state and discuss possible implications.

The contribution of Seth et al. [34] reviews the state of the art of the development of quantitative measures that are sensitive to the level of consciousness and also presents recent results from the group. In particular, they study the Granger causality and its nonlinear generalizations that allow for the definition of a causal density, which is hypothesized to be a candidate for a quantitative measure of the level of consciousness. So far, such measures have been shown to be plausible only using generic dynamics on abstract network models. Seth et al. [34] also highlight both the technical difficulties and the conceptual problems that have to be overcome before a successful application of these measures to experimental data can be expected. One of the problems that have to be solved is the identification of the elements whose interaction should be studied, i.e. the identification of the nodes of the functional network. In the case of EEG measurements, individual channels are a poor choice for these elements, because the results would depend on the chosen reference, and there is no direct causal dependence between the EEG channels but rather common sources. The contribution of Pascual-Marqui et al. [35] presents a possible solution to this problem: cortical current density estimation. They present a specific solution to the inverse problem, called exact low-resolution electromagnetic tomography (eLORETA), that attains exact localization for point sources under ideal (no-noise) conditions, and compare it with alternative solutions of the inverse problem.

3. The complex dynamics of the sleeping brain

The brain during sleep may be considered as a complex dynamical system largely independent of external stimulation. Even though external input is scarce, the brain shows a rich repertoire of internally generated dynamic patterns and state transitions. Oscillations of brain activity observed during sleep are complex and span multiple temporal and spatial scales, ranging from characteristic oscillations such as slow waves and sleep spindles, which span several seconds in duration, to transitions between the different sleep stages, which take place on a time scale of minutes, to the dynamical processes of sleep regulation, with characteristic time constants in the range of hours. The challenge of modelling the dynamics on these different time scales is discussed from a time-series analysis point of view in the contribution of Olbrich et al. [36].

Models of neural dynamics are inherently nonlinear owing to voltage-dependent ion channels, firing thresholds, nonlinearities in synaptic transmission and other mechanisms. Even in rate-based models, which describe neural behaviour in terms of average rates of action potentials only, the firing rate of a neuron is given by a nonlinear (sigmoidal) function of the membrane potential. Therefore, the concepts and methods from nonlinear dynamics could be an important tool for the understanding of neural dynamics. In order to apply these tools, however, one has to overcome a serious obstacle. Because of the large number of neurons in the brain, neural dynamics happens in a very high-dimensional state space, while the tools of nonlinear dynamics are easily applicable only to low-dimensional or moderately high-dimensional systems.

From the viewpoint of data analysis, there was the hope that the macroscopic dynamics of the brain, which is, for instance, reflected in the EEG, might be essentially low dimensional, at least under certain conditions, such as during sleep. However, early findings of low-dimensional attractors, for instance in the sleep EEG, turned out to be spurious [37]. Therefore, time-series analysis of EEG data developed in two directions. On the one hand, there is a revival of linear models (autoregressive models, state space models) and methods (Granger causality, coherence-based measures). On the other hand, nonlinear measures—in particular, synchronization and interdependence measures—are applied without having a full model of the dynamics of the underlying system. In this issue, Olbrich et al. [36] use linear time-series models to describe short segments of the sleep EEG, Seth et al. [34] employ the Granger causality to define causal density as a quantitative measure that is sensitive to levels of consciousness, and Pascual-Marqui et al. [35] propose the combination of source localization with coherence-based measures for the analysis of functional connectivity.
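The logic of linear, bivariate Granger causality—a signal y ‘Granger-causes’ x if y's past improves the prediction of x beyond what x's own past provides—can be illustrated on synthetic data. The sketch below is a minimal AR(1) version in plain Python, not the implementation used by Seth et al. [34]; the coupling strengths, noise levels and sample size are arbitrary choices for demonstration:

```python
import math, random

def gc(x, y):
    """Granger causality y -> x: log ratio of restricted to full residual variance."""
    T = range(1, len(x))
    # Restricted model: x[t] = a*x[t-1] + e
    a = sum(x[t] * x[t-1] for t in T) / sum(x[t-1] ** 2 for t in T)
    var_r = sum((x[t] - a * x[t-1]) ** 2 for t in T)
    # Full model: x[t] = a*x[t-1] + b*y[t-1] + e  (2x2 normal equations)
    sxx = sum(x[t-1] * x[t-1] for t in T)
    syy = sum(y[t-1] * y[t-1] for t in T)
    sxy = sum(x[t-1] * y[t-1] for t in T)
    rx = sum(x[t] * x[t-1] for t in T)
    ry = sum(x[t] * y[t-1] for t in T)
    det = sxx * syy - sxy * sxy
    af = (rx * syy - ry * sxy) / det
    bf = (ry * sxx - rx * sxy) / det
    var_f = sum((x[t] - af * x[t-1] - bf * y[t-1]) ** 2 for t in T)
    return math.log(var_r / var_f)

# Synthetic data: y drives x with a one-step lag, but not vice versa.
random.seed(1)
x, y = [0.0], [0.0]
for _ in range(4000):
    y.append(0.6 * y[-1] + random.gauss(0, 1))
    x.append(0.5 * x[-1] + 0.4 * y[-2] + random.gauss(0, 1))

print('GC y->x:', gc(x, y))   # clearly positive: y's past helps predict x
print('GC x->y:', gc(y, x))   # near zero: x carries no extra information about y
```

Causal density, as discussed by Seth et al. [34], aggregates such pairwise quantities over all node pairs of a network; in practice, statistical significance testing and model-order selection are additional, nontrivial steps omitted here.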

From the viewpoint of modelling, a dimension reduction can be achieved by considering only the dynamics of firing rates of neural populations. This approach leads to so-called neural field models and is used in this issue by Bojak et al. [38], Fleshner et al. [39] and Robinson et al. [40]. It ignores the precise firing times of neurons, which can be a reasonable approximation in neural populations for processes that are slow compared with the spike dynamics. Alternatively, systems of neurons can be considered, including much more biophysical detail, as in Bazhenov et al. [41] in this issue. The number of neurons in their model is low compared with the real thalamocortical system under study, but the model reflects the anatomical connectivity scheme of cells and therefore allows the flow of currents across cortical layers to be predicted. Taking cells as representatives of larger local pools that act coherently, such models can predict electrical activity at a high level of spatial resolution. The contribution by Crunelli et al. [42] in this issue studies the same corticothalamic system using a similar approach, but addresses the origin of slow oscillations and ‘up’ and ‘down’ states, two related phenomena of high ongoing interest in neuroscience. This paper advances the new hypothesis that ‘the full expression of the slow oscillation in an intact thalamocortical network requires the balanced interaction of oscillator systems in both the neocortex and thalamus' [42]. This hypothesis will now have to be tested by experimenters.
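The rate-based description underlying neural field models can be illustrated with a minimal two-population sketch in the spirit of Wilson–Cowan-type models: each population is summarized by a single firing rate driven through a sigmoid of its net input. All weights, thresholds and time constants below are illustrative placeholders, not parameters of any model in this issue:

```python
import math

def sigmoid(u):
    """Sigmoidal response function mapping net input to a rate in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-u))

# Illustrative coupling weights between an excitatory (E) and an
# inhibitory (I) population, thresholds and a population time constant.
wEE, wEI, wIE, wII = 12.0, 10.0, 10.0, 2.0
thE, thI = 3.0, 4.0
tau, dt = 10.0, 0.5          # time constant and Euler step (ms)

E, I = 0.1, 0.05             # initial population rates
traj = []
for _ in range(2000):
    # Each rate relaxes towards the sigmoid of its net synaptic input.
    dE = (-E + sigmoid(wEE * E - wEI * I - thE)) / tau
    dI = (-I + sigmoid(wIE * E - wII * I - thI)) / tau
    E, I = E + dt * dE, I + dt * dI
    traj.append(E)
```

Depending on the parameters, such a system settles to a fixed point or a limit cycle; the neural field models in this issue extend this idea to spatially continuous sheets of cortical tissue with realistic connectivity kernels and delays.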

(a) Sleep oscillations

Slow waves are a salient feature of the EEG during non-REM sleep and reflect sleep intensity. They are thought to underlie the restorative function of sleep. According to the synaptic homeostasis hypothesis of Tononi & Cirelli [43], they reflect the synaptic downscaling occurring during non-REM sleep. The term ‘slow waves’, however, is used in a rather loose way and comprises different rhythmic components, such as slow oscillations (less than 1 Hz) and delta activity (1–4 Hz). In intracellular recordings, slow oscillations appear as periods of neuronal firing (up states) and periods of neuronal silence (down states) alternating with a frequency around 1 Hz, and in multi-unit recordings as sequences of ‘on’ (spiking) and ‘off’ (no spikes) periods. Generally, slow oscillations are considered as having a cortical origin, as they survived thalamectomy [44] but not the interruption of cortico-cortical connections [45]. However, Crunelli et al. [42] in this issue propose a complex interplay between thalamus and cortex to generate slow oscillations based on empirical data and modelling.

Micro-physiological recordings in the human cortex revealed that slow oscillations reflect rhythmic oscillations of widespread cortical activation and silence [46] with a specific distribution of current sources and sinks across cortical layers. Bazhenov et al. [41] in this issue address this matter with models of slow oscillations.

Sleep spindles are another prominent oscillatory event observed in the non-REM sleep EEG. They are a hallmark of light sleep and are generated in the thalamus by interactions between thalamic reticular inhibitory neurons and excitatory thalamocortical neurons that project to the cortex [47].

Spoormaker et al. [33] in this issue review evidence for potential cortical and subcortical generators of sleep oscillations, such as slow waves and sleep spindles, as assessed in studies combining EEG recordings and imaging (positron emission tomography or fMRI).

On the level of description of neural field models, sleep oscillations, like sleep spindles or alpha activity in non-REM sleep, can be understood as resonances, i.e. by analysing the linear response of the stationary solution to stochastic perturbations. Robinson et al. [48] formulated a neural field model of the thalamocortical system during sleep (reviewed also briefly in this issue [40]), which reproduced spectral characteristics of the sleep EEG in the different sleep stages. They related the different sleep oscillations to corresponding Hopf bifurcations in the model, with normal awake and sleep stages being located in the region where the fixed point is stable. The different sleep stages correspond to different values of the gains in different cortical and corticothalamic feedback loops. A given sleep oscillation becomes more pronounced the closer the current state is to the corresponding bifurcation. There are also attempts to understand K-complexes and the slow oscillation as related to a cortical bistability in a neural field model of the cortex [49].
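The growth of an oscillation as the state approaches a bifurcation can be illustrated generically—this is not the Robinson et al. model, merely a noise-driven damped linear oscillator, the simplest caricature of a stable fixed point with a resonance. As the damping (the distance to the instability) shrinks, the fluctuation variance and the spectral peak grow:

```python
import math, random

def simulate(damping, freq=1.0, dt=0.01, steps=20000, seed=0):
    """Noise-driven damped harmonic oscillator, semi-implicit Euler scheme."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    xs = []
    for _ in range(steps):
        # Restoring force plus damping plus white-noise drive on the velocity.
        v += dt * (-2.0 * damping * v - freq * freq * x) + math.sqrt(dt) * rng.gauss(0, 1)
        x += dt * v
        xs.append(x)
    return xs

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((u - m) ** 2 for u in xs) / len(xs)

far = variance(simulate(damping=1.0))    # well inside the stable region
near = variance(simulate(damping=0.1))   # close to the instability: much larger
print(far, near)
```

For this linear system the stationary variance scales inversely with the damping, so the same resonance becomes increasingly prominent as the bifurcation is approached—qualitatively the mechanism by which different sleep stages enhance different spectral peaks in the neural field picture.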

(b) The nonlinear dynamics of sleep regulation

Three distinct processes underlie sleep regulation at a phenomenological level (for a recent review, see [50]): a homeostatic process, reflecting prior history of sleep and wakefulness; a circadian process, based on cellular clock-like mechanisms; and an ultradian process, reflected by the cyclic alternation of non-REM and REM sleep (on a time scale of minutes to hours in humans and minutes in rodents). The circadian and homeostatic processes were formalized phenomenologically in the two-process model of sleep regulation [51,52]. According to this model, a homeostatic process (process S; relaxation oscillator) and a circadian process (process C) interact to generate sleep–wake patterns (on a time scale of hours to days in humans and minutes in rodents). In recent years, more and more facts about the underlying neurophysiological mechanisms have been uncovered by experimental work [53]. Based on these findings, several attempts have been made to develop mathematical models of the neural dynamics underlying sleep regulation. In these models, the activity of the specific neural populations involved in sleep regulation is described by coupled nonlinear ordinary differential equations that can produce limit cycles, show hysteresis effects and undergo bifurcations. In this issue, Fleshner et al. [39] concentrate on the interaction between the circadian and the homeostatic regulatory mechanisms by modelling the neurotransmitter-mediated interactions between the corresponding neural populations, whereas Robinson et al. [40] focus on the reaction of the regulatory dynamics to external influences, such as auditory stimuli, sleep deprivation or caffeine.
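The phenomenological core of the two-process model is easy to sketch: process S rises exponentially towards an upper asymptote during waking and decays exponentially during sleep, and transitions occur when S crosses thresholds that are modulated by the circadian process C. The time constants, thresholds and modulation depth below are illustrative choices, not the fitted values of [51,52]:

```python
import math

# Illustrative parameters (time in hours).
tau_rise, tau_decay = 18.0, 4.0   # build-up of S during waking, decay during sleep
S_upper = 1.0                     # asymptote of S during waking
amp = 0.12                        # circadian modulation amplitude of the thresholds
H0, L0 = 0.65, 0.25               # mean upper (sleep onset) and lower (waking) thresholds

def circadian(t):
    """Process C: a simple 24 h sinusoid."""
    return math.sin(2 * math.pi * t / 24.0)

S, awake = 0.4, True
dt, t = 0.05, 0.0
episodes = []                      # list of (time, transition) tuples
while t < 72.0:
    if awake:
        S += dt * (S_upper - S) / tau_rise       # homeostatic build-up
        if S > H0 + amp * circadian(t):          # upper threshold crossed
            awake = False
            episodes.append((round(t, 2), 'sleep onset'))
    else:
        S += dt * (-S) / tau_decay               # homeostatic decay
        if S < L0 + amp * circadian(t):          # lower threshold crossed
            awake = True
            episodes.append((round(t, 2), 'wake onset'))
    t += dt

print(episodes)
```

Even this crude sketch reproduces the characteristic alternation of consolidated sleep and wake episodes; the neuronal models of Fleshner et al. [39] and Robinson et al. [40] replace these phenomenological variables with interacting neural populations while preserving the same regulatory logic.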

Both Robinson et al. [40] and Fleshner et al. [39] model homeostasis in the framework of the adenosinergic system. In these models, however, the homeostatic component has no link to the neuronal substrates generating slow waves and thus no relation to slow-wave activity. An alternative hypothesis was proposed by Tononi & Cirelli [43,54], who relate the homeostatic process to an increase of synaptic weights during the awake state and a decrease during slow-wave sleep, with the slow oscillations playing the crucial role for synaptic downscaling [55]. Further work is needed to unravel the interaction between the nuclei governing this regulatory dynamics and the thalamocortical system, which would provide the link to an understanding of the changing patterns of occurrence of the sleep oscillations.

4. Conclusions and outlook

To summarize, the present Theme Issue aims at bringing together complexity science and sleep research. On the one hand, the mathematical concepts and tools developed in complexity science appear to be highly useful for the study of the sleeping brain in various aspects: computer modelling allows one to study brain networks at different levels, with trends going towards large multi-scale systems of high biological faithfulness; mathematical analyses provide insight into the dynamics and bifurcations of model activity, their robustness and generic mechanisms; complex network studies can characterize the connectivity structure of anatomical networks as well as functional networks during different sleep states; and advanced data analysis methods are invaluable in linking models and theories to real data, which is often high-dimensional and extraordinarily rich. On the other hand (virtually flip-sides of the advantages above), sleep research provides a large number of interesting aspects to complexity scientists as well: modelling the brain is certainly one of the greatest challenges of current science, and new levels of understanding seem possible given recent progress in modelling and computer technologies; the subsystems that contribute to different sleep phenomena are complicated but approachable, such that they can serve as real-world testbeds for theoretical work; the connectivity of the brain is just being discovered on levels of previously unknown detail, awaiting a characterization and analysis in terms of advanced concepts in complex network science; and experiments in sleep research also provide data that require the most powerful data-analysis methods available.

There is little doubt that, to advance an integrated understanding of the complex dynamics of the brain that underlie sleep and consciousness, scientists from multiple disciplines have to work together. This comprises experimental brain research, medicine, psychology, complex systems science, data analysis and large-scale computational modelling. This Theme Issue will hopefully stimulate and intensify the interdisciplinary communication and contribute to eroding existing barriers.


We thank the Max Planck Institute for the Physics of Complex Systems (Dresden, Germany) for its generous support, which enabled a meeting on ‘Complex dynamics in large-scale interacting brain systems: towards physical models of sleep and consciousness’ that was the origin of this Theme Issue.


