8.3 Oscillations in reverberating loops

In many areas of the brain, synaptic projections form so-called reverberating loops. Neurons from one cortical area innervate an anatomically distinct nucleus that in turn projects back to the cortex in a topographically organized fashion. A prominent example is the olivo-cerebellar system. The inferior olive (IO) is a nucleus in the brain stem that is part of a reverberating loop formed by the cerebellar cortex and the deep cerebellar nuclei. A single round-trip from the IO to the cerebellar cortex, the deep cerebellar nuclei, and back to the olive takes about 100 ms - a rather long delay that is the result of slow synaptic processes, in particular of post-inhibitory rebound firing; cf. Chapter 2.3.3. It is known that IO neurons tend to fire synchronously at about 10 Hz, which is due to sub-threshold oscillations of the membrane potential (Bell and Kawasaki, 1972; Sotelo et al., 1974; Llinás and Yarom, 1986; De Zeeuw et al., 1998) and an exceptionally high density of gap junctions. The delayed feedback can thus give rise to oscillations of the population activity in the olive. Analogously organized projections together with 10 Hz oscillations (the so-called theta rhythm) can also be observed in other areas of the brain, including the olfactory system, hippocampus, and cortico-thalamic loops.

In the previous sections of this chapter we have dealt with networks that exhibit regular oscillations of the neuronal activity. On the other hand, experiments show that though oscillations are a common phenomenon, spike trains of individual neurons are often highly irregular. Here we investigate the question of whether these observations can be reconciled: Is it possible to have a periodic large-amplitude oscillation of the population activity and at the same time irregular spike trains? The answer is positive, provided that individual neurons fire with an average frequency that is significantly lower than the frequency of the population activity. Similar to the cluster states discussed above, each neuron fires on average only in, say, one out of ten cycles of the population activity - the composition of the clusters of synchronously firing neurons, however, changes from cycle to cycle, resulting in a broad distribution of inter-spike intervals; cf. Section 8.1. This is exactly what has been observed in the inferior olive. Individual neurons have a low firing rate of one spike per second; the population activity, however, oscillates at about 10 Hz; cf. Fig. 8.11.

Figure 8.11: Synchronous oscillation with irregular spike trains. Neurons tend to fire synchronously but with an average rate that is significantly lower than the oscillation frequency of the population activity (bottom). Each neuron is thus firing only in one out of, say, ten cycles, giving rise to highly irregular spike trains. Short vertical lines indicate the spikes of a set of 6 neurons (schematic figure).

We are particularly interested in the effect of feedback projections on the generated spike patterns. In keeping with experimental findings, we assume that the feedback projections are sparse, i.e., that spikes from a given neuron in one cycle affect only a small portion of the whole population during the next cycle. Hence, we drop the assumption of all-to-all connectivity and use randomly connected networks instead. It turns out that irregular spike trains can indeed be generated by the ``frozen noise'' of the network connectivity; cf. Chapter 6.4.3. Since the connectivity is random but fixed, the spike patterns of noiseless neurons are fully deterministic, though they look irregular. Strong oscillations with irregular spike trains have interesting implications for short-term memory and timing tasks (Billock, 1997; Kistler and De Zeeuw, 2002; Nützel et al., 1994).

This section is dedicated to an investigation of the dynamical properties of neuronal networks that are part of a reverberating loop. We assume that the feedback is in resonance with a $T$-periodic oscillation of the population activity and that the neurons stay synchronized, i.e., fire only during narrow time windows every $T$ milliseconds. We furthermore assume that the set of neurons that is active in each cycle depends only on the synaptic input due to the reverberating loop, and thus only on the activity of the previous cycle. With these assumptions it is natural to employ a time-discrete description based on McCulloch-Pitts neurons. Each time step corresponds to one cycle of length $T$. The wiring of the reverberating loop is represented by a random coupling matrix. The statistical properties of the coupling matrix reflect the level of divergence and convergence within the reverberating network.


8.3.1 From oscillations with spiking neurons to binary neurons

We have seen that - depending on the noise level - a network can reach a state where all neurons are firing in lockstep. Such a large-amplitude oscillation implies that neurons fire only during short time windows around $t \approx n\,T$. Whether or not a neuron fires within the `allowed' time window depends on the input it receives from other neurons in the population.

The membrane potential for SRM0 neurons is given by

$$ u_i(t) = \eta(t - \hat{t}_i) + \sum_j w_{ij} \sum_f \epsilon\bigl(t - t_j^{(f)} - \Delta\bigr) \,, \qquad (8.31) $$

where $\eta(t - \hat{t}_i)$ is the refractory effect of the last output spike of neuron $i$ and $\epsilon(t - t_j^{(f)} - \Delta)$ is the postsynaptic potential caused by the firing of other neurons $j$ with transmission delay $\Delta$. A spike is triggered as soon as the threshold is reached. Here we assume that the network is in an oscillatory state so that spikes are fired only if $t \approx n\,T$. Due to refractoriness each neuron can fire at most one spike per cycle. Furthermore, we assume that the transmission delay $\Delta$ (and hence the period $T$) is long compared to the characteristic time scales of $\epsilon$ and $\eta$. Therefore, $\epsilon(s)$ and $\eta(s)$ are negligible for $s \ge T$. Finally, we adjust the voltage scale so that $\epsilon(T - \Delta) = 1$.

With these assumptions, the dynamics of the spiking neuron model (8.31) reduces to a binary model in discrete time (McCulloch and Pitts, 1943). Let us set $t_n = n\,T$ and introduce binary variables $S_i \in \{0, 1\}$ for each neuron indicating whether neuron $i$ is firing a spike at $t_i^{(f)} \approx t_n$ or not. Equation (8.31) can thus be rewritten as

$$ u_i(t_{n+1}) = \sum_j w_{ij}\, S_j(t_n) \,. \qquad (8.32) $$

The threshold condition $u_i(t_{n+1}) \ge \vartheta$ determines the state of the neuron in the next time step,

$$ S_i(t_{n+1}) = \Theta\bigl[u_i(t_{n+1}) - \vartheta\bigr] \,, \qquad (8.33) $$

where $\Theta$ is the Heaviside step function with $\Theta(x) = 1$ for $x \ge 0$ and $\Theta(x) = 0$ for $x < 0$. The simple recursion defined by Eqs. (8.32) and (8.33) fully determines the sequence of spike patterns that is generated by the network, given its coupling matrix $w_{ij}$ and the initial firing pattern $S_i(0)$.
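To make the recursion concrete, the following minimal sketch (Python/NumPy; the language and all parameter values are illustrative choices, not taken from the text) iterates Eqs. (8.32) and (8.33) for a sparse random coupling matrix of the kind introduced in the next section:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # number of neurons (illustrative)
lam = 3.0      # mean number of connections per neuron (illustrative)
theta = 2.0    # firing threshold vartheta (illustrative)
n_cycles = 50  # number of oscillation cycles to simulate

# Random binary coupling matrix with prob{w_ij = 1} = lam/N (cf. Section 8.3.2.1)
w = (rng.random((N, N)) < lam / N).astype(int)

# Random initial firing pattern S_i(t_0)
S = (rng.random(N) < 0.5).astype(int)

activity = [S.mean()]
for n in range(n_cycles):
    u = w @ S                      # Eq. (8.32): u_i(t_{n+1}) = sum_j w_ij S_j(t_n)
    S = (u >= theta).astype(int)   # Eq. (8.33): S_i(t_{n+1}) = Theta[u_i(t_{n+1}) - theta]
    activity.append(S.mean())

print(np.round(activity, 2))       # population activity a_n, one value per cycle
```

Each pass through the loop corresponds to one cycle of length $T$ of the underlying oscillation; the raster of stored $S$ vectors is the discrete-time analogue of the spike patterns discussed above.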

8.3.2 Mean field dynamics

The reduction of the spiking neuron model to discrete time and binary neurons allows us to study oscillations with irregular spike trains in a transparent manner. In a first step we derive mean field equations and discuss their macroscopic behavior. In a second step we look more closely into the microscopic dynamics. It will turn out that subtle changes in the density of excitatory and inhibitory projections can have dramatic effects on the microscopic dynamics that do not show up in a mean field description. Binary discrete-time models with irregular spike trains have been studied in various contexts by Kirkpatrick and Sherrington (1978), Derrida et al. (1987), Crisanti and Sompolinsky (1988), Nützel (1991), Kree and Zippelius (1991), and van Vreeswijk and Sompolinsky (1996), to mention only a few. As we have seen above, strong oscillations of the population activity provide a neuronal clocking mechanism and hence a justification of the discretization of time.


8.3.2.1 Purely excitatory projections

We consider a population of $N$ McCulloch-Pitts neurons (McCulloch and Pitts, 1943) that is described by a state vector $\vec{S} \in \{0, 1\}^N$. In each time step $t_n$ any given neuron $i$ is either active [$S_i(t_n) = 1$] or inactive [$S_i(t_n) = 0$]. Due to the reverberating loop, neurons receive (excitatory) synaptic input that depends on the wiring of the loop - described by a coupling matrix $w_{ij}$ - and on the activity during the previous cycle, i.e.,

$$ u_i(t_n) = \sum_{j=1}^{N} w_{ij}\, S_j(t_{n-1}) \,. \qquad (8.34) $$

Since the wiring of the reverberating loop at the neuronal level is unknown, we adopt a random coupling matrix with binary entries. More precisely, we take all entries $w_{ij}$ to be identically and independently distributed (i.i.d.) with

$$ \text{prob}\{w_{ij} = 1\} = \lambda/N \,. $$

We thus neglect possible differences in the synaptic coupling strength and content ourselves with a description that accounts only for the presence or absence of a projection. In that sense, $\lambda$ is the convergence and divergence ratio of the network, i.e., the average number of synapses that each neuron receives from and connects to other neurons, respectively.

The neurons are modeled as deterministic threshold elements. The dynamics is given by

$$ S_i(t_n) = \Theta\bigl[u_i(t_n) - \vartheta\bigr] \,, \qquad (8.35) $$

with $\vartheta$ being the firing threshold and $\Theta$ the Heaviside step function with $\Theta(x) = 1$ for $x \ge 0$ and $\Theta(x) = 0$ for $x < 0$.

Starting with a random initial firing pattern,

$$ S_i(t_0) \in \{0, 1\} \quad \text{i.i.d. with} \quad \text{prob}\{S_i(t_0) = 1\} = a_0 \,, $$

we can easily calculate the expectation value of the activity $a_1 = N^{-1} \sum_{i=1}^{N} S_i(t_1)$ in the next time step. According to Eqs. (8.34) and (8.35), a neuron is active if it receives input from at least $\vartheta$ neurons that have been active during the last cycle. The initial firing pattern $\vec{S}(t_0)$ and the coupling matrix $w_{ij}$ are independent, so that the synaptic input in Eq. (8.34) follows a binomial distribution. The probability $a_1$ of any given neuron to be active in the next cycle is thus

$$ a_1 = \sum_{k=\vartheta}^{N} \binom{N}{k} \bigl(a_0\,\lambda\,N^{-1}\bigr)^k \bigl(1 - a_0\,\lambda\,N^{-1}\bigr)^{N-k} \,. \qquad (8.36) $$

This equation gives the network activity $a_1$ as a function of the activity $a_0$ in the previous cycle. It is tempting to generalize this expression so as to relate the activity $a_{n+1}$ in cycle $n+1$ recursively to the activity $a_n$ in the previous cycle,

$$ a_{n+1} = 1 - \sum_{k=0}^{\vartheta-1} \binom{N}{k} \bigl(a_n\,\lambda\,N^{-1}\bigr)^k \bigl(1 - a_n\,\lambda\,N^{-1}\bigr)^{N-k} \,. \qquad (8.37) $$

Unfortunately, this is in general not possible because the activity pattern $\vec{S}$ in cycle $n \ge 1$ and the coupling matrix $w_{ij}$ are no longer independent, and correlations in the firing patterns may occur. For sparse networks with $\lambda \ll N$, however, these correlations can be neglected and Eq. (8.37) can be used as an approximation [see Kree and Zippelius (1991) for a precise definition of ``$\lambda \ll N$'']. Fortunately, the case with $\lambda \ll N$ is the more interesting one anyway, because otherwise $a_1$ is a steep sigmoidal function of $a_0$ and the network activity either saturates ($a_1 \approx 1$) or dies out ($a_1 \approx 0$) after only one iteration. Furthermore, $\lambda \ll N$ may be a realistic assumption for certain biological reverberating loops such as the olivo-cerebellar system. In the following we thus assume that $\lambda \ll N$ so that the network activity is given by Eq. (8.37), or - if we approximate the binomial distribution by the corresponding Poisson distribution - by the recursion

$$ a_{n+1} = 1 - \sum_{k=0}^{\vartheta-1} \frac{(a_n\,\lambda)^k}{k!}\, e^{-a_n \lambda} \,. \qquad (8.38) $$

The dynamics of the population activity is completely characterized by the mean field equation (8.38). For instance, it can easily be shown that $a_n = 0$ is a stable fixed point except if $\vartheta = 1$ and $\lambda > 1$. Furthermore, $a_{n+1}$ is a monotonically increasing function of $a_n$. Therefore, no macroscopic oscillations can be expected. In summary, three different constellations can be discerned; cf. Fig. 8.12. First, for $\vartheta = 1$ and $\lambda > 1$ there is a stable fixed point at a high level of $a_n$; the fixed point at $a_n = 0$ is unstable. Second, if the firing threshold $\vartheta$ is large compared to the convergence $\lambda$, only $a_n = 0$ is stable. Finally, if $\vartheta > 1$ and $\lambda$ is sufficiently large, bistability of $a_n = 0$ and $a_n > 0$ can be observed.
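The three regimes can be explored numerically with a short sketch (Python with SciPy's Poisson distribution; the initial activity value and the number of iterations are arbitrary choices of this sketch). The parameter pairs below are those of the three panels of Fig. 8.12:

```python
import numpy as np
from scipy.stats import poisson

def mean_field_step(a, lam, theta):
    """One iteration of Eq. (8.38): the probability that a Poisson(a*lam) input
    reaches the firing threshold, i.e. 1 - sum_{k<theta} (a lam)^k / k! * exp(-a lam)."""
    return poisson.sf(theta - 1, a * lam)   # P[K >= theta] for K ~ Poisson(a*lam)

# Parameter pairs of the three panels of Fig. 8.12
for lam, theta in [(2, 1), (3, 2), (8, 4)]:
    a = 0.5                                 # initial activity (arbitrary choice)
    for _ in range(200):
        a = mean_field_step(a, lam, theta)
    print(f"lambda = {lam}, theta = {theta}:  a_n -> {a:.3f}")
```

Starting from $a_0 = 0.5$ the iteration approaches $a \approx 0.8$ for $(\lambda, \vartheta) = (2, 1)$, decays toward $0$ for $(3, 2)$, and settles near $0.95$ for $(8, 4)$; in the bistable case, starting from a sufficiently small $a_0$ would instead reveal the stable fixed point at $a_n = 0$.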

Figure 8.12: Dynamics of a reverberating loop with purely excitatory projections. The upper row shows the mean field approximation of the population activity $a_{n+1}$ as a function of the activity $a_n$ in the previous cycle; cf. Eq. (8.38). The raster diagrams in the lower row give examples of the underlying microscopic dynamics in a simulation of $N = 100$ neurons. A horizontal bar indicates that the neuron is active. A, $\lambda = 2$, $\vartheta = 1$: Stable fixed point at $a \approx 0.8$. B, $\lambda = 3$, $\vartheta = 2$: Only $a_n = 0$ is stable. Note the long transient until the fixed point is reached. C, $\lambda = 8$, $\vartheta = 4$: Bistability of $a_n = 0$ and $a_n \approx 0.95$.


8.3.2.2 Balanced excitation and inhibition

In a network with purely excitatory interactions the non-trivial fixed point corresponds to a microscopic state where some neurons are active and others inactive. Since the active neurons fire at practically every cycle of the oscillation, we do not find the desired broad distribution of interspike intervals; cf. Fig. 8.12A. As we have already seen in Chapter 6.4.3, a random network with balanced excitation and inhibition is a good candidate for generating broad interval distributions. Reverberating projections are, in fact, not necessarily excitatory. Instead, they are often paralleled by an inhibitory pathway that may either involve another brain region or just inhibitory interneurons. Our previous model can easily be extended so as to account for both excitatory and inhibitory projections. The wiring of the excitatory loop is characterized, as before, by a random matrix $w_{ij}^{\text{exc}} \in \{0, 1\}$ with

$$ \text{prob}\{w_{ij}^{\text{exc}} = 1\} = \lambda_{\text{exc}}/N \qquad \text{i.i.d.} $$

Similarly, the wiring of the inhibitory loop is given by a random matrix $w_{ij}^{\text{inh}} \in \{0, 1\}$ with

$$ \text{prob}\{w_{ij}^{\text{inh}} = 1\} = \lambda_{\text{inh}}/N \qquad \text{i.i.d.} $$

The parameters $\lambda_{\text{exc}}$ and $\lambda_{\text{inh}}$ describe the divergence or convergence of excitatory and inhibitory projections, respectively.

Let us assume that a neuron is activated if the difference between excitatory and inhibitory input exceeds its firing threshold $\vartheta$. The dynamics is thus given by

$$ S_i(t_n) = \Theta\Bigl[\sum_{j=1}^{N} w_{ij}^{\text{exc}}\, S_j(t_{n-1}) - \sum_{j=1}^{N} w_{ij}^{\text{inh}}\, S_j(t_{n-1}) - \vartheta\Bigr] \,. \qquad (8.39) $$

As in the previous section, we can calculate the mean-field activity in cycle $n+1$ as a function of the activity in the previous cycle. We obtain

$$ a_{n+1} = \sum_{k=\vartheta}^{N} \sum_{l=0}^{k-\vartheta} \frac{a_n^{k+l}\,\lambda_{\text{exc}}^{k}\,\lambda_{\text{inh}}^{l}}{k!\,l!}\, e^{-a_n (\lambda_{\text{exc}} + \lambda_{\text{inh}})} \,. \qquad (8.40) $$

The mean-field approximation is valid for sparse networks, i.e., if $\lambda_{\text{exc}} \ll N$ and $\lambda_{\text{inh}} \ll N$.

Compared to the situation with purely excitatory feedback, Eq. (8.40) does not produce new modes of behavior. The only difference is that $a_{n+1}$ is no longer a monotonic function of $a_n$; cf. Fig. 8.13.
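The map of Eq. (8.40) can be iterated in the same way as before. The sketch below (Python; the truncation of the sums, the initial activity, and the number of iterations are illustrative choices of this sketch) evaluates the double sum directly for the parameter sets of Fig. 8.13:

```python
from math import exp, factorial

def mean_field_step_ei(a, lam_exc, lam_inh, theta, k_max=60):
    """One iteration of Eq. (8.40): the probability that the excitatory Poisson input
    exceeds the inhibitory Poisson input by at least theta (sums truncated at k_max)."""
    if a == 0.0:
        return 0.0
    total = 0.0
    for k in range(theta, k_max):              # number of excitatory inputs
        for l in range(0, k - theta + 1):      # number of inhibitory inputs
            total += (a ** (k + l) * lam_exc ** k * lam_inh ** l
                      / (factorial(k) * factorial(l)))
    return total * exp(-a * (lam_exc + lam_inh))

# Parameter sets of the three panels of Fig. 8.13
for lam_exc, lam_inh, theta in [(6, 4, 1), (4, 10, 1), (10, 4, 3)]:
    a = 0.5                                    # initial activity (arbitrary choice)
    for _ in range(200):
        a = mean_field_step_ei(a, lam_exc, lam_inh, theta)
    print(f"exc = {lam_exc}, inh = {lam_inh}, theta = {theta}:  a_n -> {a:.3f}")
```

Starting from $a_0 = 0.5$, the iteration approaches the non-trivial fixed points quoted in the caption of Fig. 8.13; as in the purely excitatory case, the second stable fixed point of the bistable parameter set C is reached from sufficiently small initial activities.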

Figure 8.13: Dynamics of a reverberating loop with excitatory and inhibitory projections (similar plots as in Fig. 8.12). A, $\lambda_{\text{exc}} = 6$, $\lambda_{\text{inh}} = 4$, $\vartheta = 1$: Stable fixed point at $a_n \approx 0.6$. B, $\lambda_{\text{exc}} = 4$, $\lambda_{\text{inh}} = 10$, $\vartheta = 1$: Stable fixed point at $a_n \approx 0.15$. C, $\lambda_{\text{exc}} = 10$, $\lambda_{\text{inh}} = 4$, $\vartheta = 3$: Bistability between $a_n = 0$ and $a_n \approx 0.73$. Note the high level of irregularity in the raster diagrams. Although the mean field dynamics is characterized by a simple fixed point, the corresponding limit cycle of the microscopic dynamics can have an extremely long period.


8.3.3 Microscopic dynamics

As is already apparent from the examples shown in Figs. 8.12 and 8.13, the irregularity of the spike trains produced by different reverberating loops can differ considerably. Numerical experiments show that in the case of purely excitatory projections fixed points of the mean field dynamics almost always correspond to a fixed point of the microscopic dynamics, or at least to a limit cycle with a short period. As soon as inhibitory projections are introduced this situation changes dramatically. Fixed points in the mean field dynamics still correspond to limit cycles in the microscopic dynamics; the length of the periods, however, is substantially larger and grows rapidly with the network size; cf. Fig. 8.14 (Nützel, 1991; Kirkpatrick and Sherrington, 1978). The long limit cycles induce irregular spike trains reminiscent of those found in the asynchronous firing state of randomly connected integrate-and-fire networks; cf. Chapter 6.4.3.
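A direct way to observe these long limit cycles is to iterate the deterministic dynamics of Eq. (8.39) until a previously visited state recurs. The following sketch (Python/NumPy; the function name and the particular values of $N$ and $\lambda_{\text{inh}}$ are illustrative choices, while $\lambda_{\text{exc}} = 3$ and $\vartheta = 1$ follow the caption of Fig. 8.14) implements this idea:

```python
import numpy as np

def attractor_length(N, lam_exc, lam_inh, theta, seed, max_steps=100_000):
    """Iterate the deterministic dynamics of Eq. (8.39) from a random initial pattern
    and return the period of the limit cycle that is eventually reached."""
    rng = np.random.default_rng(seed)
    w_exc = (rng.random((N, N)) < lam_exc / N).astype(int)
    w_inh = (rng.random((N, N)) < lam_inh / N).astype(int)
    S = (rng.random(N) < 0.5).astype(int)      # random initial pattern (illustrative)

    visited = {}                                # state -> time of first visit
    for t in range(max_steps):
        key = S.tobytes()
        if key in visited:
            return t - visited[key]             # period of the limit cycle
        visited[key] = t
        u = w_exc @ S - w_inh @ S               # net input of Eq. (8.39)
        S = (u >= theta).astype(int)
    return None                                 # no recurrence found within max_steps

# Cycle lengths for a few random realizations (lam_exc = 3, theta = 1 as in Fig. 8.14A;
# N and lam_inh are illustrative sample points of the figure's axes)
lengths = [attractor_length(N=50, lam_exc=3, lam_inh=4, theta=1, seed=s) for s in range(10)]
print(lengths)
```

Averaging such cycle lengths over many realizations of the coupling matrices and initial patterns, as a function of $\lambda_{\text{inh}}$ and $N$, is the kind of measurement summarized in Fig. 8.14.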

Figure 8.14: Attractor length as a function of the inhibitory projection density $\lambda_{\text{inh}}$ and the network size $N = 10, 20, 50, 100$. A, length of the attractor averaged over 100 realizations of the coupling matrices and the initial pattern. The density of excitatory projections is kept constant at $\lambda_{\text{exc}} = 3$; the firing threshold is $\vartheta = 1$. The dynamics is given by Eq. (8.39). B, maximal length of the attractor over 100 randomly chosen realizations of coupling matrices and initial patterns. Comparison of A and B shows that there is a large variability in the actual attractor length.

With respect to potential applications it is particularly interesting to see how information about the initial firing pattern is preserved in the sequence of patterns generated by the reverberating network. Figure 8.15A shows numerical results for the amount of information that is left after $n$ iterations. At $t = 0$ firing is triggered in a subset of neurons. After $n$ iterations, the patterns of active neurons may be completely different. The measure $I_n/I_0$ is the normalized transinformation between the initial pattern and the pattern after $n$ iterations. $I_n/I_0 = 1$ means that the initial pattern can be completely reconstructed from the activity pattern at iteration $n$; $I_n/I_0 = 0$ means that all the information is lost.

Once the state of the network has reached a limit cycle it will stay there forever due to the purely deterministic dynamics given by Eq. (8.35) or (8.39). In reality, however, the presence of noise leads to mixing in phase space so that the information about the initial state will finally be lost. There are several sources of noise in a biological network - the most prominent are uncorrelated ``noisy'' synaptic input from other neurons and noise caused by synaptic transmission failures.
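One simple way to include transmission failures in the binary model - an assumption of this sketch rather than a prescription from the text - is to let each existing synapse transmit the presynaptic spike only with probability $1 - p_{\text{fail}}$ in every cycle:

```python
import numpy as np

def noisy_step(S, w_exc, w_inh, theta, p_fail, rng):
    """One cycle of the dynamics of Eq. (8.39) with unreliable synapses: every existing
    synapse transmits the presynaptic spike only with probability 1 - p_fail
    (a simple model of transmission failures, assumed for this sketch)."""
    transmit_exc = w_exc * (rng.random(w_exc.shape) >= p_fail)
    transmit_inh = w_inh * (rng.random(w_inh.shape) >= p_fail)
    u = transmit_exc @ S - transmit_inh @ S
    return (u >= theta).astype(int)

# Usage: replace the deterministic update S = Theta[...] in the iteration loop by
#   S = noisy_step(S, w_exc, w_inh, theta, p_fail=0.05, rng=rng)
```

With $p_{\text{fail}} = 0$ the update reduces to the deterministic rule of Eq. (8.39); increasing $p_{\text{fail}}$ mixes trajectories in phase space and accelerates the loss of information about the initial state, as illustrated in Fig. 8.15B.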

Figure 8.15B shows the amount of information about the initial pattern that is left after $n$ iterations in the presence of synaptic noise in a small network with $N = 16$ neurons. As expected, unreliable synapses lead to a faster decay of the initial information. A failure probability of 5 percent already leads to a significantly reduced capacity. Nevertheless, after 5 iterations a failure rate of 5 percent still leaves more than 10 percent of the information about the initial pattern; cf. Fig. 8.15B. This means that 10 neurons are enough to discern two different events half a second - given a 10 Hz oscillation - after they actually occurred. Note that this is a form of ``dynamic short-term memory'' that does not require any form of synaptic plasticity. Information about the past is implicitly stored in the neuronal activity pattern. Superordinate neurons can use this information to react with a certain temporal relation to external events (Billock, 1997; Kistler and De Zeeuw, 2002; Kistler et al., 2000).

Figure 8.15: A. Preservation of information about the initial firing pattern in a reverberating loop with $N = 16$. The transinformation $I(\vec{S}_0, \vec{S}_n)$ between the initial pattern and the pattern after $n$ iterations is normalized by the maximum $I(\vec{S}_0, \vec{S}_0)$. Error bars give the standard deviation over 10 different realizations of the coupling matrices ($\lambda_{\text{exc}} = 5$, $\lambda_{\text{inh}} = 5$, $\vartheta = 1$; cf. Eq. (8.39)). B. Similar plots as in A but with synaptic noise. The solid line is the noise-free reference ($N = 16$, $p_{\text{fail}} = 0$), the dashed lines correspond to $p_{\text{fail}} = 0.001$, $p_{\text{fail}} = 0.01$, and $p_{\text{fail}} = 0.05$ (from top to bottom).


8.3.3.1 Quantifying the information content (*)

Information theory (Cover and Thomas, 1991; Shannon, 1948; Ash, 1990) provides us with valuable tools to quantify the amount of ``uncertainty'' contained in a random variable and the amount of ``information'' that can be gained by measuring such a variable. Consider a random variable $X$ that takes values $x_i$ with probability $p(x_i)$. The entropy $H(X)$,

$$ H(X) = -\sum_i p(x_i) \log_2 p(x_i) \,, \qquad (8.41) $$

is a measure of the ``uncertainty'' of the outcome of the corresponding random experiment. If $X$ takes only a single value $x_1$ with $p(x_1) = 1$ then the ``uncertainty'' $H(X)$ is zero since $\log_2 1 = 0$. On the other hand, if $X$ takes two different values $x_1$ and $x_2$ with equal probability $p(x_1) = p(x_2) = 0.5$ (e.g., tossing a coin) then the entropy $H(X)$ equals unity (``one bit'').

If we have two random variables $X$ and $Y$ with joint probability $p(x_i, y_j)$ then we can define the conditional entropy $H(Y|X)$ that gives the (remaining) uncertainty of $Y$ given $X$,

$$ H(Y|X) = -\sum_i \sum_j p(x_i, y_j) \log_2 \frac{p(x_i, y_j)}{p(x_i)} \,. \qquad (8.42) $$

For example, if $Y$ gives the number obtained by throwing a fair die while $X$ is 0 if this number is odd and 1 if it is even, then the conditional entropy yields $H(Y|X) \approx 1.58$, which is just 1 (bit) less than the full uncertainty of the die experiment, $H(Y) \approx 2.58$. The difference between the full uncertainty and the conditional uncertainty is the amount of information that we have ``gained'' through the observation of one of the variables. It is thus natural to define the transinformation $I(X, Y)$ between the random variables $X$ and $Y$ as

$$ I(X, Y) = H(X) - H(X|Y) \,. \qquad (8.43) $$

Note that $I(X, Y)$ is symmetric, i.e., $I(X, Y) = I(Y, X)$.
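As a check of the numbers quoted above, the die/parity example can be evaluated directly from the definitions (8.41)-(8.43). The sketch below (Python/NumPy) uses the standard identity $H(Y|X) = H(X,Y) - H(X)$, which is equivalent to Eq. (8.42):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, Eq. (8.41); zero-probability entries contribute nothing."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distribution of the example: Y = number obtained with a fair die (1..6),
# X = 0 if Y is odd and 1 if Y is even; each occurring (x, y) pair has probability 1/6.
p_xy = np.zeros((2, 6))
for y in range(1, 7):
    p_xy[(y + 1) % 2, y - 1] = 1.0 / 6.0   # row 0: odd numbers, row 1: even numbers

H_Y = entropy(p_xy.sum(axis=0))            # H(Y) = log2(6) ~ 2.58 bits
H_X = entropy(p_xy.sum(axis=1))            # H(X) = 1 bit (parity is a fair coin)
H_Y_given_X = entropy(p_xy) - H_X          # H(Y|X) = H(X,Y) - H(X) ~ 1.58 bits
I_XY = H_Y - H_Y_given_X                   # transinformation: exactly 1 bit (cf. Eq. (8.43))
print(H_Y, H_Y_given_X, I_XY)
```

Because $I(X, Y)$ is symmetric, computing $H(X) - H(X|Y)$ instead gives the same value of one bit.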

In order to produce Fig. 8.15 we have generated random initial patterns $\vec{S}_0$ together with the result of the iteration, $\vec{S}_n$, and incremented the corresponding counters in a large ($2^{16} \times 2^{16}$) table so as to estimate the joint probability distribution of $\vec{S}_0$ and $\vec{S}_n$. Application of Eqs. (8.41)-(8.43) then yields Fig. 8.15.
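A scaled-down version of this estimation procedure might look as follows (a minimal sketch: Python/NumPy, a toy network of $N = 8$ neurons instead of 16 so that the count table stays small, and an arbitrary number of samples and iterations; note that the raw plug-in estimate is biased upward for small sample sizes):

```python
import numpy as np

def transinformation(counts):
    """Plug-in estimate of I(S_0, S_n) from a joint count table,
    using I = H(S_0) + H(S_n) - H(S_0, S_n); cf. Eqs. (8.41)-(8.43)."""
    p = counts / counts.sum()
    def H(q):
        q = q[q > 0]
        return -np.sum(q * np.log2(q))
    return H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p.ravel())

rng = np.random.default_rng(1)
N, lam_exc, lam_inh, theta = 8, 5, 5, 1        # toy size; lambda and theta as in Fig. 8.15
n_iter, n_samples = 3, 200_000                 # arbitrary choices for this sketch

w_exc = (rng.random((N, N)) < lam_exc / N).astype(int)
w_inh = (rng.random((N, N)) < lam_inh / N).astype(int)

counts = np.zeros((2 ** N, 2 ** N))
powers = 2 ** np.arange(N)                     # encode a binary pattern as a table index
for _ in range(n_samples):
    S0 = (rng.random(N) < 0.5).astype(int)
    S = S0.copy()
    for _ in range(n_iter):                    # iterate the dynamics of Eq. (8.39)
        S = (w_exc @ S - w_inh @ S >= theta).astype(int)
    counts[S0 @ powers, S @ powers] += 1

print(transinformation(counts))                # estimate of I(S_0, S_n) in bits
```

Normalizing this estimate by the value obtained for $n = 0$, i.e., by $I(\vec{S}_0, \vec{S}_0)$, gives the quantity $I_n/I_0$ plotted in Fig. 8.15.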

