Although it is difficult to point to a definite origin of the field of nonlinear dynamics, it evolved in part from attempts to understand two fundamentally different types of deterministic system: on one side, very simple systems with as few as three variables that generate erratic, non-periodic dynamics, exemplified by the three-body problem; on the other, nearly infinite-dimensional systems, such as fluids, that pass from very well-behaved dynamics through a range of increasingly complex space- and time-varying dynamics, exemplified by the transition from laminar to turbulent hydrodynamic flow.

By the 1980s, the nonlinear dynamics community fitted roughly into two camps: those studying and harnessing low-dimensional chaos, the emergence of complex, unpredictable behaviour from systems with few degrees of freedom; and those studying pattern formation, the emergence of order, or coherent structured dynamics in space and time, from systems with infinitely many degrees of freedom. The classic experimental systems for the study and application of chaos were circuits and pendulums, lasers and population dynamics. Pattern formation was studied in fluid flows, crystal growth dynamics, chemical reactions and biology. There was often cross-pollination between the two camps in theoretical tools, computational hardware, data analysis and visualization technology, and experimental approach. The overall objectives were both to understand the origins of these phenomena and to provide insight into how to harness complexity, exemplified in the 1990s by the development of chaos control techniques.

The current issue is a cross-sectional sampling of the broad experimental applications now addressed in the nonlinear dynamics community. It starts with a solution to a classic hydrodynamic problem in Metcalfe *et al.* (2010). The challenge addressed is how to mix fluids optimally, critical, for example, in chemical reactors and heat exchangers, using simple laminar, low-energy flows. Efficient mixing brings arbitrary pairs of fluid particles together. The simplest such flows lack the mixture of convergent and divergent points needed to do so. But, as the authors demonstrate, superpositions of such flow patterns, achieved by alternating between boundary conditions, yield highly chaotic fluid trajectories and therefore ideal mixing.
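The mechanism can be illustrated with the classic "blinking vortex" toy model of chaotic advection (an illustrative sketch, not the authors' actual apparatus; stirrer positions and strengths below are chosen purely for demonstration). Each half-period a tracer is rotated about one of two alternating stirrers, through an angle that falls off with distance; either flow alone is regular, but their alternation produces chaotic trajectories:

```python
import math

def vortex_step(p, center, strength):
    """Advect a tracer one half-period in the flow of a point vortex.
    The rotation angle depends on squared distance to the vortex, so the
    map is a twist map, not a rigid rotation."""
    x, y = p[0] - center[0], p[1] - center[1]
    r2 = x * x + y * y
    theta = strength / (2.0 * math.pi * r2)
    c, s = math.cos(theta), math.sin(theta)
    return (center[0] + c * x - s * y, center[1] + s * x + c * y)

def blinking_flow(p, periods, strength=10.0):
    """Alternate the active stirrer between (-1, 0) and (1, 0)."""
    for k in range(2 * periods):
        center = (-1.0, 0.0) if k % 2 == 0 else (1.0, 0.0)
        p = vortex_step(p, center, strength)
    return p

# Two tracers released a distance 1e-6 apart separate rapidly: the
# signature of chaotic advection, and hence of efficient mixing.
a = blinking_flow((0.0, 0.5), 100)
b = blinking_flow((0.0, 0.5 + 1e-6), 100)
```

At this stirring strength the flow is well inside the chaotic regime, so initially neighbouring tracers decorrelate within a few dozen periods.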

The next two articles address issues and applications in chaotic dynamics. In the 1990s, atomic physics was essentially reborn with the perfection of techniques to cool and trap individual, or small numbers of, atoms (Letokhov *et al.* 1995). Such systems provide real experiments with well-defined potentials and minimal thermal noise, which should yield the purest expression of chaotic dynamics. Hennequin & Verkerk (2010) present novel work addressing how to measure such dynamics.

Buscarino *et al.* (2010) use chaotic dynamics to programme the trajectories of robots for the same reasons that Metcalfe *et al.* (2010) use them for mixing: better search and discovery within unknown landscapes. The challenge addressed in their paper is how to synchronize multiple robotic entities. Because the individual trajectories are chaotic, and therefore sensitive to initial conditions, the entities must actively synchronize their computed dynamics or risk diverging. Rusin *et al.* (2010) extend the discussion of synchronization to networks of interacting dissimilar chaotic elements in their work with chemical oscillators.
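Why active coupling is necessary can be seen in even the simplest chaotic systems. The sketch below is a generic textbook illustration, not the authors' robot model: two fully chaotic logistic maps started a hair apart drift onto completely different trajectories, while mutual diffusive coupling above a critical strength pulls them into exact synchrony:

```python
def logistic(x, r=4.0):
    """Fully chaotic logistic map on [0, 1]."""
    return r * x * (1.0 - x)

def step_pair(x, y, c):
    """Mutually coupled update: each map is nudged toward the other.
    c = 0 leaves the maps independent; c = 0.4 lies above the
    synchronization threshold for r = 4."""
    fx, fy = logistic(x), logistic(y)
    return (1.0 - c) * fx + c * fy, (1.0 - c) * fy + c * fx

def run(c, steps=200, x0=0.3, y0=0.3 + 1e-9):
    """Iterate the pair and record the gap |x - y| at every step."""
    x, y = x0, y0
    gaps = []
    for _ in range(steps):
        x, y = step_pair(x, y, c)
        gaps.append(abs(x - y))
    return gaps

uncoupled = run(0.0)  # gap grows until it saturates at the attractor size
coupled = run(0.4)    # gap contracts by at least a factor of 0.8 per step
```

The contraction in the coupled case is guaranteed here because the one-step gap satisfies |d'| = |1 − 2c| · r · |1 − x − y| · |d| ≤ 0.8 |d| for these parameters; a robot swarm exchanging state information plays the role of the coupling term.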

Networks abound both in nature and in our man-made world. Understanding the underlying structure of such networks, how the individual elements’ dynamics determines and is modulated by the group dynamics, how information is processed and transmitted through such networks and how robust they are to changes in connections is currently a major undertaking. Kocarev *et al.* (2010) address one aspect of this field in their analysis of vulnerability of dynamic networks both in computational models and in real infrastructure and power grids.
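A toy version of such a vulnerability analysis (a generic sketch, not the method of Kocarev *et al.* (2010); the star graph below is invented for illustration) removes one node at a time and measures how badly connectivity degrades; hubs whose removal fragments the network are its vulnerable points:

```python
from collections import deque

def largest_component(adj, removed=frozenset()):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(), 0
    for start in adj:
        if start in seen or start in removed:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:                      # breadth-first search
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen and v not in removed:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def impact(adj, node):
    """Drop in the largest connected component when `node` is removed."""
    return largest_component(adj) - largest_component(adj, frozenset([node]))

# A star network: node 0 is the hub, nodes 1-4 are leaves.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
```

Here `impact(star, 0)` is 4 (removing the hub isolates every leaf) while `impact(star, 1)` is 1; real analyses of infrastructure and power grids add dynamics on the nodes, but the structural ranking is the same idea.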

Inhomogeneous networks of inherently complex elements abound in nature as well. Nonlinear scientists from both camps have made great strides using their tools to advance the understanding of the complex machinery underlying the function, and sometimes dysfunction, of the body. The final four articles represent aspects of this effort: two from investigations of heart dynamics and two from investigations of brain dynamics.

The article by Bittihn *et al.* (2010) addresses the spatio-temporal dynamics of the travelling activity waves that underlie cardiac fibrillation, which can appear as spiral waves encircling point defects (Witkowski *et al.* 1998), and explicitly investigates computationally how stimulation can be used to dislodge such waves and allow the resumption of laminar contraction-generating waves.

Riedl *et al.* (2010) address cardiac dynamics on a completely different scale and level of clinical application. Taking more of a dynamic network approach, they examine the coupling between respiration, systolic and diastolic blood pressure, and heart rate using nonlinear dynamical tools, in an effort both to detect the dangerous pregnancy-related condition of pre-eclampsia and to understand the systems involved.

Arguably, the most complex, and to many the most compellingly interesting, dynamical system is the human brain. The human brain is composed of multiple connected networks comprising neurons and the support structures of glial and vascular networks, each with their own dynamics that are generated both internally and in response to the others, and determined differently at different length scales. The neurons themselves number over 100 billion, with anywhere from 10 to 10 000 interconnections (synapses) per neuron in an architecture that has both near and far connections. Information from one neuron diverges to many other neurons, and also converges from many other neurons. It is now commonplace to measure single-neuron activity from the intact brain (in animals and humans) and to decode information from those recordings that reliably matches sensory information and predicts behaviour. Decoding the full neural correlates of brain function will require an understanding not only of how information is coded and transformed, but also of how it is stored in the short term and how the network transforms itself to remember in the long term. Feldt *et al.* (2010) address in their article novel analysis techniques both for extracting related-element dynamics from many-neuron recordings and for tracking memory formation in evolving or learning networks.

In the final article in this issue, Schiff (2010) discusses models and model-based control for Parkinson’s disease (PD). As with the discussion of the cardiovascular system in Riedl *et al.* (2010), the brain regions involved in PD interact as a complex network of inhomogeneous dynamic elements. The disease itself stems from the atrophy of one of these elements, harking back to the vulnerability issues addressed by Kocarev *et al.* (2010). But Schiff (2010) has a clinical bent: he is fundamentally interested in cures for neurological diseases. Schiff and his colleagues have over the last decade spearheaded the application of model-based control to the highly nonlinear models of the nervous system. Such techniques were initially developed in control engineering for linear (or linearizable) systems, but have been adapted for highly nonlinear and highly complex models, for example in weather prediction (Evensen & van Leeuwen 2000). In this scheme, one assimilates sparse and noisy measurements into the iteration of a computational model to keep the model state as close as possible to that of the real system. The advantages are threefold: first, when successful, it allows unobserved state variables to be reconstructed; second, forward iterations of the model provide predictions of the future state of the system that are updated with new observations; and third, when perturbations or feedback are incorporated into the model and system, it allows prescription of control input. Schiff’s article provides a complete tutorial in this programme, from generating a model, to implementing data assimilation, to the future prescription of controllers for a host of nonlinear systems.
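The flavour of assimilation-based state reconstruction can be conveyed with a drastically simplified sketch. This is an illustrative toy using direct observation substitution, not the statistical data-assimilation machinery the article describes, and all parameters are invented for the example: a Lorenz-63 "truth" is observed only through its x variable, and a model copy assimilates that measurement each step by overwriting its own x. The model's unobserved y and z then converge to the truth, while an unassimilated free-running copy drifts away:

```python
def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

truth = (1.0, 1.0, 1.0)
assim = (1.0, 6.0, 6.0)   # model with wrong initial y and z
free = (1.0, 6.0, 6.0)    # identical model, never corrected

err_assim, err_free = [], []
for _ in range(10000):
    truth = lorenz_step(truth)
    free = lorenz_step(free)
    assim = lorenz_step(assim)
    # Assimilation step: overwrite the model's observed variable
    # with the measurement taken from the truth.
    assim = (truth[0], assim[1], assim[2])
    err_assim.append(abs(assim[1] - truth[1]))
    err_free.append(abs(free[1] - truth[1]))
```

The driven (y, z) subsystem of Lorenz-63 is uniformly contracting, so the error in the unobserved variables decays exponentially; this is the simplest instance of the reconstruction advantage described above, with Kalman-type filters adding proper handling of observation noise and model error.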

The above articles were contributed by participants of the Tenth Experimental Chaos Conference (ECC), which took place in Catania, Italy, in June 2008. The ECC has for nearly two decades been a forum to highlight the forefront of experimental work in the fields of chaos and nonlinear dynamics and to disseminate novel methodology to a broad range of fields of study. Although early work in nonlinear dynamics centred on chaotic circuits, fluid dynamics and dendritic crystal growth, it is now usefully applied to questions in neuroscience, cellular biology, epidemiology, astrophysics, sociology and clinical prosthetic design, to name a few. This Theme Issue presents a cross section of this ongoing work.

## Acknowledgements

On behalf of the Guest Editors, I thank all the authors, and the reviewers, for their efforts to make this Theme Issue a success.

## Footnotes

One contribution of 10 to a Theme Issue ‘Experiments in complex and excitable dynamical systems’.

- © 2010 The Royal Society