## Abstract

Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast–slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.

## 1. Introduction

Recent success in the simulation of large-scale cortical networks (Rulkov & Bazhenov 2008), even reaching the size of the human brain (Izhikevich & Edelman 2008), has stimulated work with map-based (also called discrete-time) neuron models. Compared with ordinary differential equation (ODE)-based neuron networks (Hodgkin & Huxley 1952; FitzHugh 1969; Hindmarsh & Rose 1984; Chay 1985; Abarbanel *et al.* 1996; Hoppensteadt & Izhikevich 1997), map-based neuron models have the advantages of requiring shorter computation time and fewer resources, using completely transparent iteration algorithms, and being capable of reproducing all kinds of dynamic regimes observed in real neurons.

Neuron networks are made up of two kinds of components: the individual neurons (called nodes in complex network parlance) (Watts & Strogatz 1998), and the synapses between them (also called connections or edges). The topological structure of the network and the dynamics of the nodes interact in complex ways to produce behaviours that can be observed in real neural systems. A connection scheme that has received attention as a simple model of chemical synapses, called fast threshold modulation (FTM) (Somers & Kopell 1993; Skinner *et al.* 1994), transforms the dynamic model of the network into a non-smooth or hybrid dynamical system (di Bernardo *et al.* 2001; di Bernardo & Tse 2002; Lygeros 2004).

Non-smooth or hybrid dynamical systems are defined by a combination of evolution equations (either ODEs or maps) and jump conditions producing discontinuous switching. Examples of hybrid systems are: a bouncing ball, defined by a two-dimensional system of ODEs with one state-change condition (the rebound); a gear shift design with both continuous and discrete controls; an automated highway system; and a computer-controlled system involving a physical process (modelled as a continuous-time system) and a computer (which is fundamentally a finite-state machine). Hybrid modelling is a natural approach for: mechanical systems, where continuous motion may be interrupted by sudden collisions; electrical circuits, where continuous phenomena such as the charging of capacitors are interrupted by switches opening and closing, or diodes going on or off; chemical processes, where the continuous evolution of chemical reactions is controlled by valves and pumps; and embedded computation, where a digital computer interacts with a mostly analogue environment. In all these examples, the continuous components (charging capacitors, chemical reactions) are instantaneously changed by the discrete components (switches, valves, computers) (di Bernardo *et al.* 2001; di Bernardo & Tse 2002; Lygeros 2004).

In the case of neural networks, modelling the connections by means of FTM turns neurons into automata with two or more states, induced by the two possible states of their presynaptic neighbours: ‘on’ and ‘off’. This kind of on–off switching is reminiscent of binary neuron models; it may be said that a neuron coupled via FTM sees its presynaptic neighbours as binary switches. The criterion for switching usually depends on the membrane potential of the presynaptic neuron. Methods and techniques used to deal with hybrid dynamical systems can be fruitfully applied to study these neuronal models.

In this paper, we shall review mathematical modelling of hybrid map-based neuron networks. In the first place, models of hybrid map-based neurons and networks are introduced. Then the simplest neuron network is analysed, first with just chemical synapses and then with both chemical and electrical synapses. Finally, larger networks are discussed. Throughout, we describe techniques that have been used to deal with these systems, such as fast–slow decomposition, phase-plane analysis and master stability functions.

## 2. Description of hybrid map-based neuron networks

As mentioned above, hybrid map-based neuron networks are composed of two parts: the isolated neuron model (nodes) and the coupling configuration of the network, including the topological structure and the dynamics of the links (edges, connections). In this section, we will first introduce some popular map-based neuron models, and then describe the FTM scheme used to build hybrid networks.

### (a) Map-based neuron models

Classic models of biological neurons have traditionally been described by ODEs. In recent years, however, low-dimensional map-based neuron models have gained acceptance as versatile and computationally efficient alternatives to their ODE-based counterparts. Several such models have been proposed: the Aihara model (Aihara *et al.* 1990), the Chialvo model (Chialvo & Apkarian 1993; Chialvo 1995), Rulkov models (Rulkov 2001, 2002), Izhikevich models (Izhikevich 2003, 2004) and the Courbage model (Courbage *et al.* 2007). The Aihara model in particular is, on top of map-based, a hybrid model, and we discuss it next because it illustrates well some useful techniques. Subsequently, we will describe other models, which are not intrinsically hybrid but turn so when connected appropriately.

### (b) The Aihara model

The Aihara model derives from a lineage that has its source in the formalism proposed by McCulloch & Pitts (1943), later refined by Caianiello (1961) and Nagumo & Sato (1972). Consider the following neuron equations:
*x*_{n+1} = *H*(*S*_{n} − *α* Σ_{r=0}^{n} *k*^{r} *x*_{n−r} − *θ*).  (2.1)
Here *H* is the Heaviside step function (*H*(*x*)=0 for *x*≤0, *H*(*x*)=1 otherwise), *S*_{n} is an external stimulus at time *n*, *θ* is a threshold value for firing and *x*_{n} represents the state of the neuron, which can be either 0 (not fired) or 1 (fired) at each time step. The meaning of the other parameters can be found in Nagumo & Sato (1972).

The system is clearly discrete-time, and the Heaviside step function makes it hybrid. To simplify the analysis, Caianiello (1961) and Nagumo & Sato (1972) introduced a new variable as follows:
*y*_{n} = *S*_{n−1} − *α* Σ_{r=0}^{n−1} *k*^{r} *x*_{n−1−r} − *θ*,
so that equation (2.1) can be modified into
*y*_{n+1} = *k* *y*_{n} − *α* *H*(*y*_{n}) + *a*_{n}
and
*x*_{n} = *H*(*y*_{n}).
Here *a*_{n} is the transformed stimulus (Nagumo & Sato 1972), *x*_{n}=*H*(*y*_{n}) still represents the firing state of the neuron, while *y*_{n} denotes an internal state that measures how close the neuron is to firing.

The equation for *y*_{n} is easier to understand, but it is still hybrid: the Heaviside step function makes the return map discontinuous. What Aihara *et al.* (1990) proposed was to modify the model by replacing the Heaviside step function *H*(*y*) with a steep sigmoid *F*(*y*)=1/(1+e^{−y/σ}), with small *σ*. In this way, the return map still captures the essential features of neuron firing, but is now continuous, and Lyapunov exponents become available to discuss the stability of orbits as a function of the parameter *a*_{n}=*a*. Positive exponents imply the existence of chaotic orbits, a result consistent with observations in the giant axon of the squid (Matsumoto *et al.* 1987). A detailed study of the Aihara model can be found in Pasemann (1997). Substituting steep continuous functions for discontinuities is a useful resource in the analysis of hybrid dynamical systems (Shilnikov & Rulkov 2003, 2004; Belykh *et al.* 2005).
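As a minimal illustration, the smoothed map can be iterated directly. The sketch below assumes the reduced form *y*_{n+1} = *k* *y*_{n} − *α* *F*(*y*_{n}) + *a* with the Heaviside function replaced by the steep sigmoid; the particular parameter values (*k*, *α*, *a*, *σ*) are chosen for illustration and are not taken from any specific study.

```python
import math

def aihara_step(y, k=0.5, alpha=1.0, a=0.6, sigma=0.02):
    """One iterate of the smoothed Aihara-type map.

    The Heaviside firing function of the Nagumo-Sato model is replaced
    by a steep sigmoid F(y) = 1/(1 + exp(-y/sigma)), which makes the
    return map continuous. Parameter values are illustrative only.
    """
    F = 1.0 / (1.0 + math.exp(-y / sigma))
    return k * y - alpha * F + a

# Iterate the map and record the firing state x_n = H(y_n) at each step
# (1 if the internal state is positive, 0 otherwise).
y = 0.0
trace = []
for _ in range(200):
    y = aihara_step(y)
    trace.append(1 if y > 0.0 else 0)
```

With these illustrative values the internal state oscillates around zero, so the recorded firing sequence alternates between the two states, mimicking the on–off character of the original hybrid model.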

### (c) Two-dimensional, fast–slow map-based neuron model

Many map-based neuron models are two-dimensional, with one variable representing membrane voltage and the other encapsulating slow recovery mechanisms. Throughout the rest of this paper, the chaotic Rulkov map-based model will be used as an illustrative example (Rulkov 2001).

The Rulkov models are defined by the following equations (Rulkov 2001, 2002):
*x*_{n+1} = *f*_{α}(*x*_{n}, *y*_{n}),
*y*_{n+1} = *y*_{n} − *η*(*x*_{n} − *σ*).  (2.2)
There are several variants of the model depending on the form of the nonlinear function *f*_{α}(*x*,*y*). The particular variant we will use, the chaotic Rulkov neuron, has
*f*_{α}(*x*, *y*) = *α*/(1 + *x*²) + *y*.
The fast variable *x*_{n} is a scaled version of the transmembrane voltage of each neuron, and the slow variable *y*_{n} denotes the slow gating process; its time scale is set by the small parameter *η* (0<*η*≪1); *α* and *σ* are control parameters. The widely differing time scales of each variable allow us to separate the system into subsystems (Fenichel 1979; Guckenheimer *et al.* 2000; Rubin & Terman 2002; Terman & Izhikevich 2008): the so-called fast subsystem, consisting of the *x* equation alone, with variable *y* considered as a fixed parameter *γ*; and the slow subsystem, consisting of the *y* equation alone, now with a parameter *γ* that evolves according to average values of *x*. This is the discrete-time counterpart of the perturbation method of averaging (Sanders & Verhulst 1985), a standard analysis technique extensively used for the purposes of theoretical neuroscience (Rinzel & Ermentrout 1989).
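The fast–slow structure is easy to observe by direct iteration. The sketch below assumes the chaotic Rulkov form *f*_{α}(*x*,*y*) = *α*/(1+*x*²)+*y* with the parameter values used in this paper (*α*=4.15, *σ*=−1.2 and a small *η*):

```python
def rulkov_step(x, y, alpha=4.15, eta=0.001, sigma=-1.2):
    """One iterate of the chaotic Rulkov map: fast x, slow y (0 < eta << 1)."""
    x_next = alpha / (1.0 + x * x) + y   # fast subsystem f_alpha(x, y)
    y_next = y - eta * (x - sigma)       # slow subsystem; slow nullcline at x = sigma
    return x_next, y_next

# Iterate long enough to see several bursts of chaotic spikes separated
# by quiescent phases on the stable branch of the fast nullcline.
x, y = -1.0, -2.9
xs = []
for _ in range(20000):
    x, y = rulkov_step(x, y)
    xs.append(x)
```

The trace alternates between chaotic spiking, with excursions to high *x* values, and quiescence near the stable branch (*x* around −2), reproducing the bursting mechanism described above.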

Detailed analyses of Rulkov neuron models can be found in Rulkov (2001), de Vries (2001), de Vries & Sherman (2001), Tanaka *et al.* (2006) and Ibarz *et al.* (2007*a*,*b*; 2008). In figure 1, we display the main properties of the chaotic model. As seen in figure 1*a*, the one-dimensional fast subsystem (remember that *γ* is a fixed-value substitute for *y*_{n}) exhibits chaotic orbits. When the full system is considered, *y*_{n} decreases while *x*_{n} is in the chaotic regime (spiking), down to values where the *x*_{n} chaotic orbit, due to an external crisis bifurcation, falls down to a stable fixed point (quiescence). Then *y*_{n} slowly increases until the fixed point vanishes in a saddle-node bifurcation, *x*_{n} jumps to chaotic spiking again, and the cycle repeats. The full bursting orbit is superimposed onto the phase plane in figure 1*b*, where the fast nullcline with its stable and unstable branches is also represented; the slow nullcline would be the horizontal line at *x*_{n}=*σ*=−1.2; figure 1*d* shows the *x* orbit through time. The key to bursting is therefore the existence of an interval of *y*_{n} values of bistability, delimited by the left-hand external crisis bifurcation *y*_{n}=*γ*_{cr} and the right-hand saddle-node bifurcation *y*_{n}=*γ*_{sn} (Wiggins 1990). The dependence of these bifurcations on parameter *α* is shown in the parameter plane (*γ*,*α*) in figure 1*c*, where the region of bistability is hatched. Notice that bursting is possible only for *α*>4, where the external crisis bifurcation exists. In what follows, *α*=4.15 will be used.

It is worth noting that the parameter *σ*, which can be interpreted as an external stimulus, plays a vital role in producing the bursting waves. In fact, three different regimes can be reproduced depending on the value of *σ*. When the line *x*_{n}=*σ* intersects the stable branch *N*_{s} of the fast nullcline, the intersection point is a stable fixed point corresponding to the regime of silence. When the line *x*_{n}=*σ* goes beyond the critical point near the intersection of the stable and unstable branches, a regime of bursting kicks in, where *y*_{n} alternately increases while *x*_{n}<*σ*, building up for spiking, and decreases during the spikes, with *x*_{n}>*σ*. The repetitive transitions between silence and spiking are shown in figure 1*d*. If *σ* increases further, the spiking regime becomes permanent.
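The dependence on *σ* can be probed numerically. The sketch below, again assuming the chaotic Rulkov form *f*_{α}(*x*,*y*) = *α*/(1+*x*²)+*y*, compares a value of *σ* on the stable branch (silence) with one in the bursting region; for still larger *σ*, the text notes that spiking becomes permanent.

```python
def simulate_x(sigma, alpha=4.15, eta=0.001, n_steps=20000, x=-1.0, y=-2.9):
    """Trace of the fast variable for a given external stimulus sigma."""
    xs = []
    for _ in range(n_steps):
        # Tuple assignment: both updates use the current (x, y).
        x, y = alpha / (1.0 + x * x) + y, y - eta * (x - sigma)
        xs.append(x)
    return xs

silence = simulate_x(-2.0)    # line x = sigma crosses the stable branch
bursting = simulate_x(-1.2)   # value used throughout this paper
```

In the first case the orbit settles onto a fixed point on the stable branch; in the second it alternates between chaotic spiking and quiescence.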

### (d) Hybrid map-based neuron networks

A network is formed when *N* map-based neurons are synaptically coupled. The topological arrangement of nodes and links together with their dynamics determines the behaviour of the network, which may approximate that of biological neuron systems. In general, *N* coupled identical map-based neuron models can be written in the following form:
*x*_{n+1,i} = *f*_{α}(*x*_{n,i}, *y*_{n,i} + *β*^{c}_{n,i} + *β*^{e}_{n,i}),
*y*_{n+1,i} = *y*_{n,i} − *η*(*x*_{n,i} − *σ*),  *i* = 1, …, *N*,  (2.3)
where *g*_{c} and *g*_{e} stand for the chemical and electrical coupling strengths, respectively. The function *f*_{α} and the parameters *η* and *σ* have the same meaning as in equations (2.2). In (2.3), *β*^{c}_{n,i} and *β*^{e}_{n,i} denote, respectively, the chemical and electrical synaptic driving forces, which are simply modelled as follows:
*β*^{c}_{n,i} = *g*_{c} (*ν* − *x*_{n,i}) Σ_{j∈C_{i}} *H*(*x*_{n,j} − *θ*),
*β*^{e}_{n,i} = *g*_{e} Σ_{j∈E_{i}} (*x*_{n,j} − *x*_{n,i}).  (2.4)
Here *C*_{i} is the set of neurons that send chemical synapses to neuron *i*, while *E*_{i} is the analogous set for electrical connections. Electrical connections are just linear resistive terms defined by a single conductance *g*_{e}. Chemical interaction is slightly more involved. Parameter *ν* is a reversal potential, which is determined by the nature of the postsynaptic ionic channels. A high value corresponds to excitatory coupling, a low value to inhibition. In (2.4), *θ* is the presynaptic threshold for chemical synaptic interaction, and *H*(*x*) is once again the Heaviside step function. It is this discontinuous function in the coupling that makes the full system hybrid. When a presynaptic neuron voltage *x*_{n,j} is above *θ*, the postsynaptic neuron equations will include an additional term of excitatory or inhibitory current. Otherwise, the fast chemical synapse does not affect the network. Thus, the nodes switch between different states, forming a complicated discrete-time hybrid system. This coupling scheme was first discussed by Somers & Kopell (1993) and Skinner *et al.* (1994), who named it fast threshold modulation (FTM). They used it to analyse the synchronization of two ODE-based relaxation oscillators. In §3, we present a similar analysis for the Rulkov neuron model, exemplifying useful techniques for hybrid systems.
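A synchronous update of such a network can be sketched in a few lines. The code below assumes the chaotic Rulkov form of *f*_{α} and that the coupling terms enter additively through the slow argument, as in the FTM scheme just described; coupling matrices and parameter values are illustrative.

```python
import numpy as np

def ftm_network_step(x, y, Gc, Ge, g_c=0.1, g_e=0.0,
                     alpha=4.15, eta=0.001, sigma=-1.2,
                     nu=-0.5, theta=-1.4):
    """One synchronous update of N FTM-coupled chaotic Rulkov neurons.

    Gc[i, j] = 1 if neuron j sends a chemical synapse to neuron i;
    Ge[i, j] = 1 if neurons i and j share an electrical gap junction.
    The chemical current is gated by the Heaviside condition x_j > theta
    (fast threshold modulation). Parameter values are illustrative.
    """
    fired = (x > theta).astype(float)               # H(x_j - theta)
    beta_c = g_c * (nu - x) * (Gc @ fired)          # chemical driving force
    beta_e = g_e * (Ge @ x - Ge.sum(axis=1) * x)    # diffusive electrical term
    x_new = alpha / (1.0 + x * x) + y + beta_c + beta_e
    y_new = y - eta * (x - sigma)
    return x_new, y_new

# Two mutually coupled neurons with excitatory synapses, no gap junctions.
Gc = np.array([[0.0, 1.0], [1.0, 0.0]])
Ge = np.zeros((2, 2))
x = np.array([-1.0, -2.0])
y = np.array([-2.9, -2.8])
peak = -np.inf
for _ in range(20000):
    x, y = ftm_network_step(x, y, Gc, Ge)
    peak = max(peak, x.max())
```

The update is vectorized over neurons, so the same function serves for the two-neuron examples of §3 and for the larger networks of §5.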

## 3. Two identical map-based Rulkov neurons coupled by fast threshold modulation

Following Ibarz *et al.* (2008), we begin with the simplest network composed of two identical Rulkov neurons coupled exclusively by chemical synapses with FTM. Subsequently, this simple network is extended to include also electrical synapses.

The result of particularizing the network equations (2.3) to two identical neurons without electrical coupling (*g*_{e}=0) is
*x*_{n+1,1} = *f*_{α}(*x*_{n,1}, *y*_{n,1} + *g*_{c}(*ν* − *x*_{n,1}) *H*(*x*_{n,2} − *θ*)),
*y*_{n+1,1} = *y*_{n,1} − *η*(*x*_{n,1} − *σ*),
together with the symmetric pair of equations for neuron 2.
Depending on the state of neuron 2, neuron 1 switches between two possible modes. While *x*_{n,2} is below the threshold *θ*, neuron 1 follows the equations of an isolated neuron (equations (2.2)). On the other hand, if *x*_{n,2} is above *θ*, neuron 1 follows the modified equations
*x*_{n+1,1} = *f*_{α}(*x*_{n,1}, *y*_{n,1} + *g*_{c}(*ν* − *x*_{n,1})),
*y*_{n+1,1} = *y*_{n,1} − *η*(*x*_{n,1} − *σ*).  (3.1)
We refer to equations (3.1) as the shifted system. The important point to notice is that, while in either the isolated or shifted mode, neuron 1 is autonomous, i.e. no interaction term dependent on neuron 2 appears in the equations. Interaction is mediated by the switching times. Of course, neuron 2 is affected by neuron 1 in the same way. The simple network is a hybrid automaton exhibiting transitions between isolated and shifted behaviours in each element; switching coincides with the alternation between silence and spiking if the threshold *θ* lies just below spike initiation (we will use *θ*=−1.4 in our simulations, unless otherwise specified). Incidentally, the rapid switching of the two states is the rationale for the term FTM.
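The hybrid-automaton view can be made explicit in code: each neuron follows its isolated equations while the partner is below threshold and its shifted equations while the partner is above. The sketch below assumes the chaotic Rulkov form of *f*_{α} and a shifted term *g*_{c}(*ν* − *x*); parameter values are illustrative.

```python
def neuron_step(x_self, y_self, x_other, g_c=0.1, nu=-0.5, theta=-1.4,
                alpha=4.15, eta=0.001, sigma=-1.2):
    """One FTM-coupled Rulkov neuron seen as a hybrid automaton: it runs
    the isolated map while the partner is below threshold and the shifted
    map while the partner is above. Parameter values are illustrative."""
    if x_other > theta:                      # partner 'on': shifted mode
        x_next = alpha / (1.0 + x_self ** 2) + y_self + g_c * (nu - x_self)
    else:                                    # partner 'off': isolated mode
        x_next = alpha / (1.0 + x_self ** 2) + y_self
    y_next = y_self - eta * (x_self - sigma)
    return x_next, y_next

# Sanity check of the symmetric coupling: two identical neurons started
# in exactly the same state receive identical inputs at every step and
# therefore remain perfectly synchronized.
s1 = s2 = (-1.0, -2.9)
for _ in range(10000):
    s1, s2 = neuron_step(*s1, s2[0]), neuron_step(*s2, s1[0])
```

Note that, in either branch, the update is autonomous: the partner's state enters only through the choice of branch, which is exactly the switching-time interaction described above.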

We now discuss the use of phase-plane techniques to determine the regimes of synchronization of the two-neuron system. Synchronization of bursts can be understood by comparing the isolated (equations (2.2)) and shifted (equations (3.1)) systems in terms of the position of their nullclines and curves of minimum iterates, and of the limiting values *γ*_{sn}, the *y* value where the stable and unstable branches of the fast nullcline meet in a saddle-node bifurcation, and *γ*_{cr}, the *y* value of the external crisis bifurcation that terminates spiking.

The nullclines in the excitatory and inhibitory cases are depicted in figure 2. In the excitatory case (figure 2*a*), the *x*-nullcline of the isolated subsystem lies always to the right of the *x*-nullcline of the shifted subsystem. In the inhibitory case (figure 2*b*), the situation is the opposite.

The excitatory case, with the shifted nullclines to the left of the isolated nullclines, favours in-phase synchrony. Indeed, suppose that both neurons are silent and moving rightward along the slow branch of the fast nullcline. When the neuron with the higher value of *y* reaches the saddle-node bifurcation *γ*_{sn}, it will jump up into the spiking regime, above the threshold *x*=*θ*. The phase plane of the other neuron then switches to the shifted mode, its fast nullclines pushed leftwards, and it is either immediately driven into spiking or its distance to the saddle-node bifurcation is shortened. Later on, when the spiking neuron with the lower value of *y* reaches the external crisis bifurcation *γ*_{cr}, it jumps down into the silent regime below *x*=*θ*. This switches the phase plane of the second neuron into the isolated mode, shifting nullclines and driving it closer, in turn, to silence. In the inhibitory case, the shifted nullclines are to the right of the isolated nullclines. A similar reasoning demonstrates that this favours anti-phase synchronization (Ibarz *et al.* 2008). The two regimes of synchronization, in-phase and anti-phase, are illustrated in figure 3.

Therefore, the relative position in the *y* axis of the saddle-node and external crisis bifurcations in the isolated and shifted systems can be used to predict burst synchronization. Figure 4*a* shows the dependence of these points on coupling strength *g*_{c} for three values of reversal potential *ν*. For inhibitory synapses (*ν*=−2), both saddle-node and external crisis points shift to the right with respect to the isolated system, the more so, the stronger the coupling: anti-phase synchronization ensues. For excitatory synapses (*ν*=−0.5), the opposite is true. For intermediate values (*ν*=−1.4), the saddle node is shifted to the left while the external crisis is shifted to the right. This means that burst initiation tends to synchronize bursts in-phase, while burst termination tends to split bursting in anti-phase. As discussed in Ibarz *et al.* (2008), this opens up the possibility of controlling synchrony by means of the external excitation *σ*. This effect is quantified in figure 4*b*, where the average cross-correlation along time of the *x*_{n} values of the two neurons is represented as a function of external stimulation *σ*. Positive values are indicative of mostly in-phase bursting and vice versa.
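The left and right shifts of the saddle-node point seen in figure 4*a* can be checked numerically. The sketch below assumes the chaotic Rulkov fast map *x* → *α*/(1+*x*²) + *γ*, with the shifted mode adding a term *g*_{c}(*ν* − *x*); the saddle-node position is located by bisection on the tangency condition. The map form and parameter values are illustrative assumptions.

```python
def gamma_sn(g_c=0.0, nu=0.0, alpha=4.15):
    """y-position of the saddle-node bifurcation of the fast map
    x -> alpha/(1+x^2) + gamma + g_c*(nu - x).

    At the saddle-node the map is tangent to the diagonal, i.e.
    -2*alpha*x/(1+x^2)^2 - g_c = 1. The left-hand side minus g_c is
    monotonically increasing on [-3, -0.58], so bisection applies.
    """
    target = 1.0 + g_c
    lo, hi = -3.0, -0.58
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        slope = -2.0 * alpha * mid / (1.0 + mid * mid) ** 2
        if slope > target:
            hi = mid
        else:
            lo = mid
    x_sn = 0.5 * (lo + hi)
    # Fixed-point condition f(x_sn) = x_sn solved for gamma.
    return x_sn - alpha / (1.0 + x_sn ** 2) - g_c * (nu - x_sn)

g_iso = gamma_sn()                    # isolated system
g_exc = gamma_sn(g_c=0.1, nu=-0.5)    # excitatory shifted system
g_inh = gamma_sn(g_c=0.1, nu=-2.0)    # inhibitory shifted system
```

An excitatory reversal potential moves the saddle-node to the left of the isolated one (favouring in-phase bursting), while an inhibitory one moves it to the right, in agreement with the reasoning above.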

## 4. Coupling with both chemical and electrical synapses

When *g*_{e}>0 in equations (2.3), the dynamics of each neuron can no longer be analysed in terms of switching between two modes of autonomous evolution, because the electrical interaction is permanent. However, thanks to the fast–slow decomposition of the system, an averaged version of the interaction can be used to transform the system into a switching one, taking advantage of the technique we have previously described to predict and control burst synchrony.

The idea consists simply of averaging the value of *x* during the spiking phase and the silent phase of a burst to obtain an averaged electrical interaction that can be treated just as the chemical term. While neuron 2 is spiking, the electrical current on neuron 1, *β*^{e}_{n,1}, described in equations (2.4), averages out to
*β*^{e}_{n,1} ≈ *g*_{e}(*μ*_{up} − *x*_{n,1}),
where *μ*_{up} is the average value of *x* during the spiking phase of a burst. Although this value depends somewhat on the coupling, for our approximate treatment, it is enough to use the isolated neuron level. With *α*=4.15 and *σ* in the bursting region, *μ*_{up}≈−0.35. Similarly, while neuron 2 is silent, neuron 1 receives an average electrical interaction current
*β*^{e}_{n,1} ≈ *g*_{e}(*μ*_{down} − *x*_{n,1}).
With our parameters, *μ*_{down}≈−1.85. In these cases, averaging is not strictly justified, since the *x* value of the quiescent neuron 2 follows along the stable branch of the nullcline with the same time scale as the slow variable *y*. Nevertheless, because variation of *x* along this branch is not so pronounced, the approximation yields good results.
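The two levels *μ*_{up} and *μ*_{down} can be estimated from a long simulation of an isolated neuron. The sketch below assumes the chaotic Rulkov form of *f*_{α} and, for simplicity, splits samples by the firing threshold *θ* rather than by explicit burst segmentation; this detail is an illustrative choice.

```python
def estimate_levels(alpha=4.15, eta=0.001, sigma=-1.2, theta=-1.4,
                    n_steps=200000, n_transient=20000):
    """Estimate the mean x level of the spiking phase (mu_up) and of the
    silent phase (mu_down) of an isolated bursting Rulkov neuron,
    classifying samples by the firing threshold theta."""
    x, y = -1.0, -2.9
    up, down = [], []
    for n in range(n_steps):
        x, y = alpha / (1.0 + x * x) + y, y - eta * (x - sigma)
        if n >= n_transient:
            (up if x > theta else down).append(x)
    return sum(up) / len(up), sum(down) / len(down)

mu_up, mu_down = estimate_levels()
```

With the parameters used in this paper, the estimates should land in the vicinity of the quoted values *μ*_{up}≈−0.35 and *μ*_{down}≈−1.85.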

We can now proceed as in §3, considering two modes for the dynamics of neuron 1; the new ‘isolated’ mode, which we will more appropriately call ‘low-input’, since it differs from the true isolated mode of equations (2.2), follows equations
*x*_{n+1,1} = *f*_{α}(*x*_{n,1}, *y*_{n,1} + *g*_{e}(*μ*_{down} − *x*_{n,1})),
*y*_{n+1,1} = *y*_{n,1} − *η*(*x*_{n,1} − *σ*),  (4.1)
while the new shifted mode now takes the form
*x*_{n+1,1} = *f*_{α}(*x*_{n,1}, *y*_{n,1} + *g*_{c}(*ν* − *x*_{n,1}) + *g*_{e}(*μ*_{up} − *x*_{n,1})),
*y*_{n+1,1} = *y*_{n,1} − *η*(*x*_{n,1} − *σ*).  (4.2)
Thus, we have managed to recast the system in terms of elements that behave autonomously, except for their switching between modes. We can now predict the regime of synchrony depending on *g*_{e}, *g*_{c} and *ν*. The simplest case, *g*_{c}=0, is depicted in figure 5, much in the vein of figure 4*a*; it gives an idea of the effect of electrical synapses alone. For any value of *g*_{e}, both the saddle-node and the external crisis bifurcations shift towards the left when switching from the low-input to the shifted mode. As explained in the previous section, this favours in-phase synchronization. Notice, indeed, that when *g*_{c}=0, equations (4.2), the electrical shifted mode, are identical to equations (3.1), the chemical shifted mode with an equivalent reversal potential *ν*_{eq}=*μ*_{up}. Because *μ*_{up} is high, the electrical synapses produce currents of an excitatory nature while the other neuron is spiking. The low-input case (equations (4.1)) is identical to the chemical shifted mode with *ν*_{eq}=*μ*_{down}, an inhibitory value; because it enters the neuron while the other is silent, it also helps synchrony.

We see, therefore, that electrical coupling is strongly favourable to in-phase synchrony. Indeed, in the absence of chemical coupling, a value of *g*_{e} as low as 10^{−3} is enough to induce in-phase bursting. If excitatory chemical synapses are added, the tendency is simply reinforced. A more interesting case arises, however, when electrical coupling is combined with inhibitory chemical synapses. This is a relevant combination, because cortical fast-spiking neurons are known to inhibit each other while at the same time sharing gap junctions; the interplay between both types of connection is believed to mediate synchrony in the cortex (Tamas *et al.* 2000). What electrical coupling strength is necessary to produce in-phase synchrony in the presence of chemical inhibitory coupling? And what happens for intermediate values, where their driving forces cancel each other?

We can attempt to answer these questions with the same simple reasoning that was the rationale for figures 4*a* and 5. We check the sign of *γ*_{sn}(*g*_{e},*g*_{c},*ν*)−*γ*′_{sn}(*g*_{e},*g*_{c},*ν*), and of *γ*_{cr}(*g*_{e},*g*_{c},*ν*)−*γ*′_{cr}(*g*_{e},*g*_{c},*ν*), that is, the relative positions of the saddle-node and external crisis bifurcations in the low-input and shifted modes. Positive values of both predict in-phase synchronization (i.e. the electrical coupling is strong enough to overcome the inhibitory effect); negative values, anti-phase (electrical coupling too weak). Differing signs indicate that burst initiation and termination have opposite effects on synchrony, which might allow controlling synchrony by means of external signals. Figure 6*a* illustrates this point: with weak electrical coupling (*g*_{e}=0.025), cross-correlation between the two neurons is negative for every value of external excitation *σ*. With strong coupling (*g*_{e}=0.065), the opposite is true. But for an intermediate value (*g*_{e}=0.045), synchronization is in-phase at low excitation values and anti-phase with high excitation.

The simple phase-plane analysis in terms of the relative positions of saddle-node and external crisis bifurcations can be used to predict the values of *g*_{e} that make synchrony controllable. Figure 6*b* shows, in grey, the region in the parameter plane *g*_{c}–*g*_{e} where the signs of *γ*_{sn}(*g*_{e},*g*_{c},*ν*)−*γ*′_{sn}(*g*_{e},*g*_{c},*ν*) and of *γ*_{cr}(*g*_{e},*g*_{c},*ν*)−*γ*′_{cr}(*g*_{e},*g*_{c},*ν*) are opposite, for two different values of inhibitory reversal potentials *ν*. At the same time, the parameter plane has been explored via numerical simulation and the points where the system synchrony is controllable have been marked; the criterion for controllability is that the average cross-correlation coefficient between *x*_{n,1} and *x*_{n,2} must change from positive to negative values beyond the 99 per cent significance thresholds as the external excitation sweeps across the bursting region (as in the middle trace of figure 6*a*). The controllable configurations lie inside the predicted region. It is always the case that non-controllable configurations with *g*_{e} above the controllable values yield in-phase synchronization independently of *σ*, while weaker *g*_{e} values produce anti-phase synchrony.

## 5. Generalization to networks of identical neurons coupled by chemical and electrical synapses

So far we have described how simple conditions on the phase plane can help us predict the synchronization modes between two neurons. Tanaka *et al.* (2006) generalized this result using a slightly different coupling scheme. The map-based network of neurons is still represented in equations (2.3), but the chemical interaction term takes a form different from that of equations (2.4), namely
*β*^{c}_{n,i} = *g*_{c} Σ_{j∈C_{i}} *x*_{n,j}.  (5.1)

Notice that now the chemical synaptic current is directly proportional to the *x* value of the presynaptic neurons, and therefore there is no discontinuous transition between ‘on’ and ‘off’ states of the chemical coupling. This makes it possible to discuss the stability of the invariant synchronized manifold of equations (2.3) with the help of master stability functions (Pecora & Carroll 1998). The goal is to explore the relationship between network topology and dynamics.

The variational equation of equations (2.3) restricted to the synchronization manifold *Π*={(*x*_{1},…,*x*_{N}),(*y*_{1},…,*y*_{N})|*x*_{1}=⋯=*x*_{N},*y*_{1}=⋯=*y*_{N}} can be written as
*ξ*_{n+1} = (*I*_{N} ⊗ D*F*(*s*_{n}) + *G* ⊗ D*H*(*s*_{n})) *ξ*_{n}.
Here *N* is the number of neurons in the network, *I*_{N} is the *N*×*N* identity matrix, ⊗ is the Kronecker product, D*F* and D*H* are the 2×2 Jacobians of the single-neuron map and of the coupling function, evaluated along the synchronous orbit *s*_{n}, and *G*=−*g*_{c}*G*_{c}+*g*_{e}*G*_{e}, where *G*_{c} and *G*_{e} are the matrices associated to chemical and electrical coupling, respectively; *G*_{c} will have zeros along the diagonal (in the absence of self-coupling), while *G*_{e} will be Laplacian.

After diagonalization of the coupling matrix *G*, the full system Jacobian is transformed into a block diagonal matrix, with *N* 2×2 blocks. All blocks are identical except for a term depending on each of the eigenvalues of *G*. Thus, the eigenvalues of the full system are available as a function (the master stability function) of neuron parameters and of the eigenvalues of *G*, and the influence of network topology on stability can be fully understood. A detailed analysis can be found in Tanaka *et al.* (2006), where the implications for emergent bursting and the appearance of irregular patterns of activity in electrically coupled networks are also discussed.
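The block-diagonalization step can be verified numerically: the spectrum of the full variational Jacobian *I*_{N}⊗*A* + *G*⊗*B* coincides with the union of the spectra of the 2×2 blocks *A* + *λ*_{k}*B* over the eigenvalues *λ*_{k} of *G*. The sketch below uses random stand-in matrices for the single-neuron and coupling Jacobians; the specific matrices are illustrative, not taken from the model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
# A random symmetric coupling matrix stands in for G = -g_c*Gc + g_e*Ge;
# symmetry guarantees diagonalizability with real eigenvalues.
S = rng.standard_normal((N, N))
G = 0.5 * (S + S.T)

A = rng.standard_normal((2, 2))   # stand-in for the 2x2 single-neuron Jacobian
B = rng.standard_normal((2, 2))   # stand-in for the 2x2 coupling Jacobian

# Full 2N x 2N variational Jacobian versus its N two-dimensional blocks.
full = np.kron(np.eye(N), A) + np.kron(G, B)
spec_full = np.linalg.eigvals(full)
spec_blocks = np.concatenate(
    [np.linalg.eigvals(A + lam * B) for lam in np.linalg.eigvals(G)])
```

Since *G* = *P*Λ*P*⁻¹ implies *I*⊗*A* + *G*⊗*B* = (*P*⊗*I*)(*I*⊗*A* + Λ⊗*B*)(*P*⁻¹⊗*I*), the two spectra agree, which is what allows the stability of the full network to be read off from the master stability function of a single 2×2 block.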

Interestingly, when inhibitory chemical synapses and electrical connections are combined in a network, the value of *g*_{e} that produces synchronization for a given *g*_{c} can be roughly predicted from the two-neuron network analysis of the previous section, as long as chemical and electrical synapses are tied, that is, as long as, whenever two neurons share an electrical gap junction, they also share reciprocal chemical synapses. Figure 7*a* shows the activity of a 32-neuron ring, where each neuron sends inhibitory chemical synapses to its six nearest neighbours (three to each side). During the first half of the raster, only the chemical connections are active; after *n*=5000, tied electrical synapses are activated and the network synchronizes in-phase. The minimum ratio *g*_{e}/*g*_{c} necessary for this is close to 0.5, not far from the two-neuron case depicted in figure 6*b*. However, if the electrical synapses are distributed at random, they are unable to bring about synchrony. Figure 7*c* represents the minimum *g*_{e} that will synchronize the ring as a function of *g*_{c}, for two different electrical connection topologies; the two-neuron case is also included for comparison. The dependence is approximately linear, and the slope of the tied case (electrical ring) is similar to the two-neuron case, while random electrical synapses, although better at synchronizing the network in the absence of chemical synapses (note the crossing of lines near *g*_{c}=0), require larger and larger *g*_{e} with increasing *g*_{c}.
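The connection matrices used in this kind of experiment are straightforward to build. The sketch below constructs the ring chemical adjacency (six nearest neighbours, three to each side), the tied electrical graph (a gap junction wherever chemical synapses are reciprocal) and a random electrical graph with a chosen number of junctions; the construction details are illustrative.

```python
import numpy as np

def ring_chemical(N=32, k=3):
    """Adjacency matrix of a ring where each neuron sends chemical
    synapses to its k nearest neighbours on each side (column j lists
    the targets of neuron j)."""
    Gc = np.zeros((N, N))
    for i in range(N):
        for d in range(1, k + 1):
            Gc[(i + d) % N, i] = 1.0
            Gc[(i - d) % N, i] = 1.0
    return Gc

def random_electrical(N=32, n_edges=32, seed=1):
    """Symmetric adjacency matrix with n_edges gap junctions placed at
    random among distinct pairs (no self-connections)."""
    rng = np.random.default_rng(seed)
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    Ge = np.zeros((N, N))
    for idx in rng.choice(len(pairs), size=n_edges, replace=False):
        i, j = pairs[idx]
        Ge[i, j] = Ge[j, i] = 1.0
    return Ge

Gc = ring_chemical()
# Tied electrical graph: gap junction wherever synapses are reciprocal.
Ge_tied = (Gc * Gc.T > 0).astype(float)
Ge_rand = random_electrical()
```

Because the ring synapses are all reciprocal, the tied electrical graph coincides with the chemical one, which is the configuration activated in the second half of figure 7*a*.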

## 6. Conclusions

In this paper, map-based neuron models and networks have been examined as hybrid systems, and some useful techniques, such as the approximation of discontinuities by sharp continuous transitions, phase-plane analysis, fast–slow decomposition and master stability functions, have been discussed. We have applied them to show how electrical synapses and inhibitory chemical synapses interact to produce controllability of synchrony, and how connection topology affects this interaction. With the growing interest in hybrid map-based neuron networks, a coherent perspective will gradually emerge and possibilities of application will multiply in the future.

## Acknowledgements

This work is supported by a grant from the Chinese Natural Science Foundation under Project No. 10771012 (H.C.) and by the Fundación Ramón Areces (B.I.). H.C. is grateful to Prof. K. Aihara for the invitation to a fruitful two-week research visit to his laboratory, where H.C. had the opportunity to discuss hybrid dynamical systems with him, Dr G. Tanaka and many promising young researchers.

## Footnotes

One contribution of 12 to a Theme Issue ‘Theory of hybrid dynamical systems and its applications to biological and medical systems’.

- © 2010 The Royal Society