9.2 Dynamic patterns of neuronal activity

Up to now we have treated only a single sheet of neurons that were all of the same type. Excitatory and inhibitory couplings were lumped together in a single function w that gave the `average' coupling strength of two neurons as a function of their distance. `Real' neurons, however, are either excitatory or inhibitory, because they can use only one type of neurotransmitter (Dale's law). A coupling function that yields both positive and negative values for the synaptic couplings is therefore not realistic.

We can easily extend the previous model so as to account for different types of neuron or for several separate layers of neuronal tissue. To this end we equip the variable $u$ for the average membrane potential with an additional index $k$, $k = 1,\dots,n$, that denotes the type of the neuron or its layer. Furthermore, we introduce coupling functions $w_{kl}(x, x')$ that describe the coupling strength from a neuron of layer $l$ at position $x'$ to a neuron located in layer $k$ at position $x$. In analogy to Eq. (9.4), the field equations are defined as

$$\tau_k \, \frac{\partial u_k(x,t)}{\partial t} = - u_k(x,t) + \sum_{l=1}^{n} \int \!{\text{d}}y \; w_{kl}(|x-y|) \, g[u_l(y,t)] + I_k^{\text{ext}}(x,t) \,, \qquad (9.35)$$

with k = 1,..., n. We will be particularly interested in systems made up of two different layers (n = 2) where layer 1 comprises all excitatory neurons and layer 2 all inhibitory neurons. Accordingly, the signs of the coupling functions are as follows

$$w_{11} \ge 0 \,, \quad w_{21} \ge 0 \,, \quad w_{12} \le 0 \,, \quad \text{and} \quad w_{22} \le 0 \,. \qquad (9.36)$$

For the sake of simplicity we assume that all coupling functions are bell-shaped, e.g.,

$$w_{kl}(x) = \frac{\bar{w}_{kl}}{\sqrt{2 \pi \sigma_{kl}^2}} \, \exp\!\left[ -x^2 / (2 \sigma_{kl}^2) \right] \,, \qquad (9.37)$$

with mean coupling strength $\bar{w}_{kl}$ and spatial extension $\sigma_{kl}$.
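For concreteness, the coupling profile (9.37) and the sigmoidal gain function used below (cf. Section 9.2.1) can be written down in a few lines of code. The following is a minimal sketch in Python/NumPy; the function names are ours and the default values of $\beta$ and $\theta$ follow the text.

```python
import numpy as np

def coupling(x, w_bar_kl, sigma_kl):
    """Bell-shaped coupling w_kl(x) of Eq. (9.37)."""
    return (w_bar_kl / np.sqrt(2.0 * np.pi * sigma_kl ** 2)
            * np.exp(-x ** 2 / (2.0 * sigma_kl ** 2)))

def gain(u, beta=5.0, theta=1.0):
    """Sigmoidal gain g(u) = 1 / (1 + exp[-beta (u - theta)])."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))
```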


9.2.1 Oscillations

As before, we start our analysis of the field equations by looking for homogeneous solutions. Substitution of $u_k(x,t) = u_k(t)$ into Eq. (9.35) yields

$$\tau_k \, \frac{{\text{d}}u_k(t)}{{\text{d}}t} = - u_k(t) + \sum_{l=1}^{n} \bar{w}_{kl} \, g[u_l(t)] + I_k^{\text{ext}} \,, \qquad (9.38)$$

with $\bar{w}_{kl} = \int \!{\text{d}}x \; w_{kl}(|x|)$, as before.

We can gain an intuitive understanding of the underlying mechanism by means of phase-plane analysis, a tool which we have already encountered in Chapter 3. Figure 9.8 shows the flow field and null-clines of Eq. (9.38) with $\tau_1 = 1$, $\tau_2 = 5$, $\bar{w}_{11} = \bar{w}_{21} = 2$, $\bar{w}_{12} = -1$, and $\bar{w}_{22} = 0$. The gain function has a standard sigmoidal form, i.e., $g(u) = \{1 + \exp[-\beta\,(u - \theta)]\}^{-1}$ with $\beta = 5$ and $\theta = 1$.
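The phase-plane picture of Fig. 9.8 can be reproduced numerically. Below is a minimal sketch (Python with NumPy and Matplotlib; parameter values as quoted above, plotting range and grid resolution are our choice) that draws the flow field of Eq. (9.38) and obtains the null-clines as zero-contours of its right-hand side.

```python
import numpy as np
import matplotlib.pyplot as plt

tau   = np.array([1.0, 5.0])
w_bar = np.array([[2.0, -1.0],
                  [2.0,  0.0]])
I_ext = np.array([0.3, 0.0])          # panel B; use 0.0 or 0.5 for panels A and C

def gain(u, beta=5.0, theta=1.0):
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

def rhs(u1, u2):
    """Right-hand side of the homogeneous equations (9.38), divided by tau_k."""
    du1 = (-u1 + w_bar[0, 0] * gain(u1) + w_bar[0, 1] * gain(u2) + I_ext[0]) / tau[0]
    du2 = (-u2 + w_bar[1, 0] * gain(u1) + w_bar[1, 1] * gain(u2) + I_ext[1]) / tau[1]
    return du1, du2

u1, u2 = np.meshgrid(np.linspace(-0.5, 2.0, 200), np.linspace(-0.5, 2.0, 200))
du1, du2 = rhs(u1, u2)

plt.streamplot(u1, u2, du1, du2, color='0.7')          # flow field
plt.contour(u1, u2, du1, levels=[0.0], colors='C0')   # u1-null-cline
plt.contour(u1, u2, du2, levels=[0.0], colors='C1')   # u2-null-cline
plt.xlabel('$u_1$'); plt.ylabel('$u_2$')
plt.show()
```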

For zero external input, Eq. (9.38) has only a single stable fixed point close to $(u_1, u_2) = (0, 0)$. This fixed point is attractive, so that the system returns immediately to its resting position after a small perturbation; cf. Fig. 9.8A. If, for example, the external input to the excitatory layer is gradually increased, the behavior of the system may change rather dramatically. Figure 9.8B shows that for $I_1^{\text{ext}} = 0.3$ the system does not return immediately to its resting state after an initial perturbation but takes a large detour through phase space. In doing so, the activity of the network transiently increases before it finally settles down again at its resting point; cf. Fig. 9.8B. This behavior is qualitatively similar to the triggering of an action potential in a two-dimensional neuron model (cf. Chapter 3), though the interpretation in the present case is different. We will refer to this state of the network as an excitable state.

Figure 9.8: Phase-space diagrams for homogeneous solutions of the field equation. The arrows indicate the flow field of the differential equation (9.38), the thin lines are the null-clines for $u_1$ and $u_2$, and the thick lines give a sample trajectory with starting point $(u_1, u_2) = (0.9, 0)$. Existence and stability of the fixed points depend on the amount of external input $I_1^{\text{ext}}$ to the layer of excitatory neurons. For $I_1^{\text{ext}} < 0.35$ there is an attractive fixed point close to $(0, 0)$ (A and B). For $I_1^{\text{ext}} = 0.5$ the fixed point near $(0, 0)$ is replaced by an unstable fixed point close to $(1, 1)$, which is surrounded by a stable limit cycle (C). In A the sample trajectory reaches the fixed point by the shortest possible route, whereas in B it takes a large detour that corresponds to a spike-like overshoot of neuronal activity.
[Three panels: A ($I_1^{\text{ext}} = 0$), B ($I_1^{\text{ext}} = 0.3$), and C ($I_1^{\text{ext}} = 0.5$); graphics not shown.]

If the strength of the input is increased further, the system undergoes a series of bifurcations, so that the attractive fixed point near $(0, 0)$ is finally replaced by an unstable fixed point near $(1, 1)$ surrounded by a stable limit cycle; cf. Fig. 9.8C. This corresponds to an oscillatory state in which excitatory and inhibitory neurons are activated in alternation. Provided that the homogeneous solution is stable with respect to inhomogeneous perturbations, global network oscillations can be observed; cf. Fig. 9.9.
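These oscillations can also be checked by integrating the full field equations (9.35) directly. The sketch below discretizes the two layers on a ring and uses a simple Euler scheme; grid size, time step, and the periodic boundary conditions are our choice, while the remaining parameters follow the text. With the strong tonic drive $I_1^{\text{ext}} = 0.5$, the spatially averaged potentials should settle onto a global oscillation like the one in Fig. 9.9, provided the homogeneous solution is stable.

```python
import numpy as np

L, N, dt, T = 100.0, 256, 0.05, 100.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

tau   = np.array([1.0, 5.0])
w_bar = np.array([[2.0, -1.0],
                  [2.0,  0.0]])
sigma = np.ones((2, 2))
I_ext = np.array([0.5, 0.0])          # strong tonic drive to the excitatory layer

def gain(u, beta=5.0, theta=1.0):
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

# Coupling matrices W[k][l][i, j] = w_kl(|x_i - x_j|), with periodic distances.
dist = np.abs(x[:, None] - x[None, :])
dist = np.minimum(dist, L - dist)
W = [[w_bar[k, l] / np.sqrt(2 * np.pi * sigma[k, l] ** 2)
      * np.exp(-dist ** 2 / (2 * sigma[k, l] ** 2)) for l in range(2)] for k in range(2)]

u = 0.1 * np.random.rand(2, N)        # small random initial condition
trace = []                            # spatially averaged potentials over time
for step in range(int(T / dt)):
    g_u = gain(u)
    for k in range(2):
        drive = dx * (W[k][0] @ g_u[0] + W[k][1] @ g_u[1])
        u[k] = u[k] + dt / tau[k] * (-u[k] + drive + I_ext[k])
    trace.append(u.mean(axis=1))
trace = np.array(trace)               # columns: u_1(t), u_2(t) as in Fig. 9.9B
```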

Figure 9.9: Depending on the amount of external input, homogeneous network oscillations can be observed. A. Average membrane potential $u_1(x, t)$ of the layer of excitatory neurons. B. Time course of the average membrane potential of the excitatory (solid line) and the inhibitory (dashed line) layer.


9.2.2 Traveling waves

Traveling waves are a well-known phenomenon that occurs in a broad class of systems collectively termed excitable media. Many examples are provided by reaction-diffusion systems, where the interplay of a chemical reaction with the diffusion of its reactants results in an often surprisingly rich variety of dynamical behavior. All these systems share a common property, namely `excitability': in the absence of external input the behavior of the system is characterized by a stable fixed point, its resting state, but additional input can evoke a spike-like rise in the activation of the system. Due to lateral interactions within the system, such a pulse of activity can propagate through the medium without changing its form, thus forming a traveling wave.

In the previous section we have seen that the present system, consisting of two separate layers of excitatory and inhibitory neurons, can indeed exhibit an excitable state; cf. Fig. 9.8B. It is thus natural to look for a special solution of the field equations (9.35) in the form of a traveling wave. To this end we make the ansatz

$$u_k(x,t) = \hat{u}_k(x - v t) \,, \qquad (9.39)$$

with an as yet unknown function $\hat{u}_k$ that describes the form of the traveling wave. We substitute this ansatz into Eq. (9.35) and, after a transformation into the moving frame of reference, i.e., with $z \equiv x - v t$, we find

$$- \tau_k \, v \, \frac{{\text{d}}\hat{u}_k(z)}{{\text{d}}z} = - \hat{u}_k(z) + \sum_{l=1}^{n} \int \!{\text{d}}\zeta \; w_{kl}(|z - \zeta|) \, g[\hat{u}_l(\zeta)] + I_k^{\text{ext}} \,. \qquad (9.40)$$

This is a nonlinear integro-differential equation for the form of the traveling wave. In order to obtain a uniquely determined solution we have to specify appropriate boundary conditions. Neurons cannot `feel' each other over a distance larger than the length scale of the coupling function. The average membrane potential far away from the center of the traveling wave will therefore remain at the low-activity fixed point $\bar{u}_k$, i.e.,

$$\lim_{z \to \pm\infty} \hat{u}_k(z) = \bar{u}_k \,, \qquad (9.41)$$

with

$$0 = - \bar{u}_k + \sum_{l=1}^{n} \bar{w}_{kl} \, g[\bar{u}_l] + I_k^{\text{ext}} \,. \qquad (9.42)$$

This condition, however, still does not determine the solution uniquely, because Eq. (9.40) is invariant with respect to translations. That is to say, if $\hat{u}_k(z)$ is a solution of Eq. (9.40), then $\hat{u}_k(z + \Delta z)$ is a solution as well, for every $\Delta z \in \mathbb{R}$.
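Before turning to the shape of the pulse itself, note that the asymptotic value $\bar{u}_k$ defined by Eq. (9.42) is easily obtained numerically. Below is a minimal sketch (Python/SciPy; the initial guess and the choice of root finder are ours) for the parameters of the excitable state of Fig. 9.8B.

```python
import numpy as np
from scipy.optimize import fsolve

w_bar = np.array([[2.0, -1.0],
                  [2.0,  0.0]])
I_ext = np.array([0.3, 0.0])          # tonic input of the excitable state (Fig. 9.8B)

def gain(u, beta=5.0, theta=1.0):
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

def resting_state(u_bar):
    # Eq. (9.42): 0 = -u_bar_k + sum_l w_bar_kl g(u_bar_l) + I_ext_k
    return -u_bar + w_bar @ gain(u_bar) + I_ext

u_bar = fsolve(resting_state, x0=np.zeros(2))
print(u_bar)                          # low-activity fixed point close to (0, 0)
```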

Finding an analytical solution of the integro-differential equation (9.40) is obviously a hard problem unless a particularly simple form of the gain function $g$ is employed. One possibility is to use a step function such as

$$g(u) = \begin{cases} 0 \,, & u < \vartheta \\ 1 \,, & u \ge \vartheta \end{cases} \qquad (9.43)$$

with $\vartheta \in \mathbb{R}$ the threshold of the activation function. In this case we can exploit the translation invariance and look for solutions of Eq. (9.40) that contain a single pulse of activation exceeding the threshold on a certain finite interval. Since $g(u_k)$ equals unity inside this interval and vanishes outside, the integral in Eq. (9.40) can be carried out and we are left with a system of ordinary differential equations. These differential equations are subject to boundary conditions at $z = \pm\infty$ [cf. Eq. (9.41)] and, for the sake of self-consistency, to $\hat{u}_k(z) = \vartheta$ at the boundaries of the above-mentioned interval. In fact, there are more boundary conditions than can be satisfied for arbitrary values of the parameters. The differential equations together with their boundary conditions thus form an eigenvalue problem for the remaining parameters, such as the propagation velocity, the width of the pulses of activity, and the time lag between the excitatory and the inhibitory pulse. We will not go into further detail but refer the reader to the work of, e.g., Amari (1977b).
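The reason why the step gain (9.43) makes the problem tractable is that the convolution in Eq. (9.40) then only collects contributions from the interval on which $\hat{u}_l$ exceeds the threshold, and for the Gaussian coupling (9.37) this contribution has a closed form in terms of the error function, $\int_{z_1}^{z_2} \!{\text{d}}\zeta \, w_{kl}(|z - \zeta|) = \tfrac{1}{2}\bar{w}_{kl}\,[\,{\text{erf}}\!\big(\tfrac{z - z_1}{\sqrt{2}\,\sigma_{kl}}\big) - {\text{erf}}\!\big(\tfrac{z - z_2}{\sqrt{2}\,\sigma_{kl}}\big)]$. The helper below (Python/SciPy; the function name and the interval end points $z_1$, $z_2$ are illustrative labels of ours) evaluates this expression.

```python
import numpy as np
from scipy.special import erf

def kernel_integral(z, z1, z2, w_bar_kl, sigma_kl):
    """Integral of the Gaussian coupling (9.37) over the supra-threshold
    interval [z1, z2] of the presynaptic layer (step gain, Eq. (9.43))."""
    a = np.sqrt(2.0) * sigma_kl
    return 0.5 * w_bar_kl * (erf((z - z1) / a) - erf((z - z2) / a))
```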

Figure 9.10: Traveling wave in a network consisting of two separate layers of excitatory and inhibitory neurons. A. Average membrane potential of the excitatory neurons. An additional pulse of external input at $t = 10$ and $x = 0$ triggers two pulses of activity that propagate symmetrically to the left and to the right. B. Snapshot of the spatial distribution of the average membrane potential at time $t = 50$. The solid line corresponds to the excitatory neurons, whereas the dashed line corresponds to the inhibitory ones. Note that the activation of the inhibitory neurons lags somewhat behind.

Figure 9.10 shows an example of a traveling wave in a network with excitatory (layer 1, $\tau_1 = 1$) and inhibitory (layer 2, $\tau_2 = 5$) neurons. The coupling functions are bell-shaped [cf. Eq. (9.37)] with $\sigma_{11} = \sigma_{12} = \sigma_{21} = 1$ and $\bar{w}_{11} = \bar{w}_{21} = 2$, $\bar{w}_{12} = -1$, and $\bar{w}_{22} = 0$, as before. The excitatory neurons receive tonic input $I_1^{\text{ext}} = 0.3$ so that the network is in the excitable state (cf. Fig. 9.8B). A short pulse of additional excitatory input suffices to trigger a pair of activity pulses that travel in opposite directions through the medium.
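A setting like that of Fig. 9.10 differs from the field simulation sketched in Section 9.2.1 only in the external input: instead of a strong uniform drive, the excitatory layer receives the tonic input $I_1^{\text{ext}} = 0.3$ plus a brief localized pulse. A minimal sketch of such a stimulus (pulse amplitude, width, and duration are illustrative choices of ours):

```python
import numpy as np

def external_input(t, x):
    """Tonic drive plus a brief pulse around t = 10, localized at x = 0."""
    I1 = 0.3 * np.ones_like(x)                  # excitable state (cf. Fig. 9.8B)
    if 10.0 <= t < 11.0:
        I1 += 1.0 * np.exp(-x ** 2 / 2.0)       # transient Gaussian pulse
    return np.stack([I1, np.zeros_like(x)])

# In the Euler loop of the earlier field simulation, replace the constant
# I_ext[k] by external_input(step * dt, x)[k].
```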


Gerstner and Kistler
Spiking Neuron Models. Single Neurons, Populations, Plasticity
Cambridge University Press, 2002
