Next: 9.5 Summary Up: 9. Spatially Structured Networks Previous: 9.3 Patterns of spike

9.4 Robust transmission of temporal information

Any information processing scheme that relies on the precise timing of action potentials obviously requires a means to transmit spikes without destroying their temporal structure. A critical question is thus whether a packet of initially synchronous action potentials can be transmitted from one brain area to the next without losing the information. In this section we show that packets of (almost) synchronous spikes can propagate in a feed-forward structure from one layer to the next in such a way that their degree of synchrony is preserved, despite the presence of noise in the spike generating mechanism. Moreover, the temporal dispersion within such a packet can even be reduced during the transmission. The result is a stable waveform of the spike packet that propagates through the network very much like a soliton (Kistler and Gerstner, 2001; Diesmann et al., 1999; Gewaltig, 2000).

The phenomenon of stable propagation of synchronous spikes was proposed by M. Abeles as an explanation for precisely timed spike events in multi-electrode recordings, which seem to occur with a frequency that is incompatible with purely random (Poisson) spike trains (but see Oram et al., 1999). He suggested that neurons that participate in the transmission of these spikes form a so-called `synfire chain' (Abeles, 1991). More generally, the propagation of (partially) synchronous spikes is expected to play a role whenever information about a new stimulus has to be reliably transmitted from one set of neurons to the next. The initial response of neurons to stimulus onset appears to have a similar form in different brain areas, with a remarkably low jitter (Maršálek et al., 1997).

The mechanism that produces the low jitter in neuronal firing times during transmission from one `layer' of neurons to the next is readily understood. Noise and broad postsynaptic potentials tend to smear out an initially sharp spike packet. If, however, the synaptic coupling is strong enough, postsynaptic neurons start firing during the rising phase of their membrane potential. If, in addition, these neurons show pronounced refractory behavior, firing ceases before the postsynaptic potentials have reached their maximum, so that a sharp pulse of spike activity is generated. Refractoriness thus counteracts the dispersive effects of noise and synaptic transmission and helps to maintain precise timing.

In the following we show how the theory of population dynamics developed in Chapter 6 can be used to provide a quantitative description of the transmission of spike packets. We consider $M$ pools containing $N$ neurons each that are connected in a purely feed-forward manner, i.e., neurons from pool $n$ project only to pool $n+1$ and there are no synapses between neurons from the same pool. We assume all-to-all connectivity between two successive pools with uniform synaptic weights $\omega_{ij} = \omega/N$; cf. Fig. 9.15. In the framework of the Spike Response Model the membrane potential of a neuron $i \in \Gamma(n+1)$ from pool $n+1$ that has fired its last spike at $\hat{t}_i$ is given by

$$\begin{aligned}
u_i(t,\hat{t}_i) &= \frac{\omega}{N} \sum_{j\in\Gamma(n)} \int_0^{\infty} \epsilon(t')\, S_j(t-t')\, \mathrm{d}t' + \eta(t-\hat{t}_i) \\
&= \omega \int_0^{\infty} \epsilon(t')\, A_n(t-t')\, \mathrm{d}t' + \eta(t-\hat{t}_i)\,.
\end{aligned} \qquad (9.58)$$

As usual, $S_j$ denotes the spike train of neuron $j$; $\epsilon$ and $\eta$ are response kernels describing the postsynaptic potential and the afterpotential, respectively. $\Gamma(n)$ is the index set of all neurons that belong to pool $n$, and $A_n(t) = N^{-1} \sum_{j\in\Gamma(n)} S_j(t)$ is the population activity of pool $n$; cf. Eq. (6.1).

Figure 9.15: Schematic representation of the network architecture. We investigate the transmission of spike packets in a linear chain of pools of neurons that are connected in a strictly feed-forward manner.

In contrast to the previous section, we explicitly take noise into account. To this end we adopt the `escape noise model' (Section 5.3) and replace the sharp firing threshold by a firing probability that is a function of the membrane potential. The probability of finding an action potential in the infinitesimal interval $[t, t+\mathrm{d}t)$, given that the last spike occurred at $\hat{t}$, is

$$\operatorname{prob}\{\text{spike in } [t, t+\mathrm{d}t) \mid \text{last spike at } \hat{t}\} = f[u(t,\hat{t})]\, \mathrm{d}t\,. \qquad (9.59)$$

For the sake of simplicity we choose a semi-linear hazard function f, i.e.,

$$f(u) = \begin{cases} 0\,, & u \le 0\,, \\ u\,, & u > 0\,. \end{cases} \qquad (9.60)$$

With this probabilistic criterion for triggering spikes, both spike train and membrane potential become random variables. However, each pool is supposed to contain a large number of neurons ($N \gg 1$), so that we can replace the population activity $A_n$ in Eq. (9.58) by its expectation value, which is given by a normalization condition,

$$\int_{-\infty}^{t} S_n(t\,|\,\hat{t})\, A_n(\hat{t})\, \mathrm{d}\hat{t} = 1 - s_n(t)\,, \qquad (9.61)$$

cf. Eq. (6.73). The survivor function $S_n(t\,|\,\hat{t})$ for neurons of pool $n$ is the probability that a neuron that has fired at $\hat{t}$ survives without firing until $t$. Here, $s_n(t) = S_n(t\,|\,-\infty)$ accounts for those neurons that have been quiescent in the past, i.e., have not fired at all up to time $t$. We have seen in Section 5.2 that

$$S_i(t\,|\,\hat{t}_i) = \exp\left\{ -\int_{\hat{t}_i}^{t} f[u_i(t',\hat{t}_i)]\, \mathrm{d}t' \right\}\,. \qquad (9.62)$$
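For a given time course of the membrane potential, the survivor function (9.62) with the semi-linear hazard (9.60) is easy to evaluate numerically. The following Python sketch is our own illustration (the function names and the midpoint-rule discretization are choices made here, not taken from the text); it checks the numerical result against the closed form $S(t\,|\,0) = \mathrm{e}^{-u_0 t}$ that holds for a constant suprathreshold potential $u_0$.

```python
import math

def hazard(u):
    """Semi-linear escape rate f(u), Eq. (9.60): f(u) = u for u > 0, else 0."""
    return u if u > 0.0 else 0.0

def survivor(u_of_t, t_hat, t, dt=1e-3):
    """S(t | t_hat) = exp(-int_{t_hat}^t f[u(t')] dt'), Eq. (9.62),
    evaluated with a midpoint rule for an arbitrary callable u_of_t."""
    steps = int(round((t - t_hat) / dt))
    integral = sum(hazard(u_of_t(t_hat + (k + 0.5) * dt)) for k in range(steps)) * dt
    return math.exp(-integral)

# For a constant suprathreshold potential u0, the survivor function is
# exactly exp(-u0 * (t - t_hat)).
u0 = 2.0
s_num = survivor(lambda t: u0, 0.0, 1.5)
s_exact = math.exp(-u0 * 1.5)
```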

Simulation studies suggest that pronounced refractory behavior is required in order to obtain stable propagation of a spike packet from one layer to the next (Diesmann et al., 1999; Gewaltig, 2000). If neurons were allowed to fire more than once within one spike packet, the number of spikes per packet, and hence the width of the packet, would grow in each step. We therefore use a strong and long-lasting afterpotential $\eta$ so that each neuron can fire only once during each pulse. The survivor function thus equals unity for the duration $\tau_{\text{AP}}$ of the afterpotential, i.e., $S_n(t\,|\,\hat{t}) = 1$ for $0 < t - \hat{t} < \tau_{\text{AP}}$, with $\tau_{\text{AP}}$ large compared to the typical pulse width. Let us denote by $T_n$ the moment when a pulse packet arrives at pool $n$. We assume that for $t < T_n$ all neurons in layer $n$ have been inactive, i.e., $A_n(t) = 0$ for $t < T_n$. Differentiation of Eq. (9.61) with respect to $t$ leads to

$$A_n(t) = -\frac{\partial}{\partial t}\, s_n(t) = f[u_n(t)]\, \exp\left\{ -\int_{-\infty}^{t} f[u_n(t')]\, \mathrm{d}t' \right\}\,, \qquad (9.63)$$

with

$$u_n(t) = \omega \int_0^{\infty} \epsilon(t')\, A_{n-1}(t-t')\, \mathrm{d}t'\,. \qquad (9.64)$$

Equation (9.63) provides an explicit expression for the firing-time distribution $A_n(t)$ in layer $n$ as a function of the time course of the membrane potential. The membrane potential $u_n(t)$, in turn, depends on the time course of the activity $A_{n-1}(t)$ in the previous layer, as shown in Eq. (9.64). Both Eq. (9.63) and Eq. (9.64) can easily be integrated numerically; an analytic treatment, however, is difficult even if a particularly simple form of the response kernel $\epsilon$ is chosen. Following Diesmann et al. (1999), we therefore concentrate on the first few moments of the firing-time distribution in order to characterize the transmission properties. More precisely, we approximate the firing-time distribution $A_{n-1}(t)$ by a gamma distribution and calculate, in step (i), the zeroth, first, and second moment of the resulting membrane potential in the following layer $n$. In step (ii), we use these results to approximate the time course of the membrane potential by a gamma distribution and calculate the moments of the corresponding firing-time distribution in layer $n$. We thus obtain an analytic expression for the amplitude and the variance of the spike packet in layer $n$ as a function of amplitude and variance of the spike packet in the previous layer.
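The numerical integration of Eqs. (9.63) and (9.64) can be sketched in a few lines. The following Python illustration is our own (the time grid, the Gaussian initial packet, and all parameter values are choices made here, not taken from the text); it propagates a discretized spike packet through one transmission step. With $\omega = 4$ and a packet of unit amplitude, the resulting amplitude should come out close to $1 - \mathrm{e}^{-4}$, in agreement with the amplitude iteration derived below.

```python
import math

# Illustrative discretization: step dt, horizon T, membrane time constant
# tau = 1 (time measured in units of tau), coupling strength w.
dt, T, tau, w = 0.01, 20.0, 1.0, 4.0
n_t = int(T / dt)

def eps(t):
    """Alpha-shaped response kernel: eps(t) = (t / tau^2) exp(-t / tau), t > 0."""
    return t / tau**2 * math.exp(-t / tau) if t > 0.0 else 0.0

kernel = [eps(k * dt) * dt for k in range(int(10.0 * tau / dt))]  # truncated at 10 tau

def next_layer(A_prev):
    """One transmission step, Eqs. (9.63) and (9.64)."""
    # Eq. (9.64): u_n(t) = w * int_0^inf eps(t') A_{n-1}(t - t') dt'
    u = [w * sum(kernel[k] * A_prev[i - k] for k in range(min(i + 1, len(kernel))))
         for i in range(n_t)]
    # Eq. (9.63): A_n(t) = f[u_n(t)] exp(-int_{-inf}^t f[u_n(t')] dt')
    A, cum = [], 0.0
    for ui in u:
        rate = ui if ui > 0.0 else 0.0  # semi-linear hazard, Eq. (9.60)
        A.append(rate * math.exp(-cum))
        cum += rate * dt
    return A

# Sharp initial packet of unit amplitude centered at t = 1 (illustrative).
A0 = [math.exp(-((i * dt - 1.0) / 0.1) ** 2) for i in range(n_t)]
norm = sum(A0) * dt
A0 = [a / norm for a in A0]

A1 = next_layer(A0)
a1 = sum(A1) * dt  # amplitude of the packet in layer 1; ~ 1 - exp(-4)
```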

Particularly interesting is the iteration that describes the amplitude of the spike packet. We will see below that the amplitude an in layer n as a function of the amplitude an-1 in the previous layer is independent of the shape of the spike packet, viz.,

$$a_n = 1 - \mathrm{e}^{-\omega\, a_{n-1}}\,. \qquad (9.65)$$

If $\omega \le 1$, the mapping $a_{n-1} \to a_n$ has a single (globally attractive) fixed point at $a = 0$. In this case no stable propagation of spike packets is possible, since any packet will eventually die out. For $\omega > 1$ a second fixed point at $a_\infty \in (0, 1)$ emerges through a transcritical bifurcation. The new fixed point is stable and its basin of attraction contains the open interval $(0, 1)$. This fixed point determines the waveform of a spike packet that propagates from one layer to the next without changing its form; cf. Fig. 9.16. The fact that the all-off state at $a = 0$ is unstable for $\omega > 1$ is related to the fact that there is no strict firing threshold in our model.
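The fixed-point structure of the amplitude iteration (9.65) can be verified directly. In this short sketch (our own illustration) we use $\omega = 0.8$ for the weak-coupling regime, since for $\omega = 1$ exactly the decay toward $a = 0$ is only algebraic, and $\omega = 4$ for the strong-coupling regime, where the iteration converges to $a_\infty \approx 0.98$ as in Fig. 9.16B.

```python
import math

def amplitude_map(a, w):
    """One layer of the amplitude iteration, Eq. (9.65): a -> 1 - exp(-w a)."""
    return 1.0 - math.exp(-w * a)

def iterate(a0, w, n=200):
    """Apply the amplitude map n times, starting from a0."""
    a = a0
    for _ in range(n):
        a = amplitude_map(a, w)
    return a

weak = iterate(1.0, w=0.8)     # w <= 1: any packet dies out
strong = iterate(0.01, w=4.0)  # w > 1: even a tiny packet grows toward a_inf
# a_inf solves a = 1 - exp(-4 a), i.e. a_inf ~ 0.980; cf. Fig. 9.16B.
```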

Figure 9.16: Propagation of spike packets through a feed-forward network. A. Evolution of the firing-time distribution of a spike packet as it propagates from one layer to the next ($n = 0, 1, \dots, 4$). Neurons between layers are only weakly coupled ($\omega = 1$) so that the packet fades out. The neurons in layer $n = 0$ are driven by an external input that creates a sharp initial spike packet ($\alpha_0 = 10$, $\lambda_0 = 0.1$, $a_0 = 1$). The bars (bin width 0.2) represent the results of a simulation with $N = 1000$ neurons per layer; the solid line is the firing-time distribution as predicted by the theory; cf. Eqs. (9.70) and (9.72). The ``flow field'' to the right characterizes the transmission function for spike packets in terms of their amplitude $a_n$ and width $\sigma_n = \sqrt{\alpha_n}\,\lambda_n$. Open symbols connected by a dashed line represent the simulations shown to the left; filled symbols connected by solid lines represent the corresponding theoretical trajectories. Time is given in units of the membrane time constant $\tau$. B. Same as in A but with increased coupling strength, $\omega = 4$. There is an attractive fixed point of the flow field at $a = 0.98$ and $\sigma = 1.5$ that corresponds to the stable waveform of the spike packet. [Taken from Kistler and Gerstner (2001)].

9.4.0.1 Derivation of the spike packet transfer function (*)

In the following we calculate the form of the spike packet in layer $n$ as a function of the form of the packet in layer $n-1$. To this end we describe the spike packet in terms of its first few moments, as outlined above. In step (i) we assume that the activity $A_{n-1}(t)$ in layer $n-1$ is given by a gamma distribution with parameters $\alpha_{n-1}$ and $\lambda_{n-1}$, i.e.,

$$A_{n-1}(t) = a_{n-1}\, \gamma_{\alpha_{n-1},\lambda_{n-1}}(t)\,. \qquad (9.66)$$

Here, $a_{n-1}$ is the portion of neurons of layer $n-1$ that contribute to the spike packet, $\gamma_{\alpha,\lambda}(t) = t^{\alpha-1}\, \mathrm{e}^{-t/\lambda}\, \Theta(t) / [\Gamma(\alpha)\, \lambda^{\alpha}]$ the density function of the gamma distribution, $\Gamma$ the complete gamma function, and $\Theta$ the Heaviside step function with $\Theta(t) = 1$ for $t > 0$ and $\Theta(t) = 0$ else. The mean $\mu$ and the variance $\sigma^2$ of a gamma distribution with parameters $\alpha$ and $\lambda$ are $\mu = \alpha\,\lambda$ and $\sigma^2 = \alpha\,\lambda^2$, respectively.
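For reference, a direct stdlib implementation of the gamma density together with a numerical check of the stated mean and variance (the parameter values below are arbitrary illustrative choices):

```python
import math

def gamma_density(t, alpha, lam):
    """gamma_{alpha,lambda}(t) = t^(alpha-1) e^(-t/lam) / (Gamma(alpha) lam^alpha), t > 0."""
    if t <= 0.0:
        return 0.0
    return t**(alpha - 1) * math.exp(-t / lam) / (math.gamma(alpha) * lam**alpha)

# Numerical check that mean = alpha*lam and variance = alpha*lam^2
# (alpha = 3, lam = 0.5 chosen arbitrarily; midpoint rule on [0, 30]).
alpha, lam, dt = 3.0, 0.5, 1e-3
m0 = m1 = m2 = 0.0
t = dt / 2
while t < 30.0:
    g = gamma_density(t, alpha, lam) * dt
    m0 += g
    m1 += t * g
    m2 += t * t * g
    t += dt
mean = m1 / m0            # should be ~1.5 = alpha*lam
var = m2 / m0 - mean**2   # should be ~0.75 = alpha*lam^2
```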

The membrane potential $u_n(t)$ in the next layer results from a convolution of $A_{n-1}$ with the response kernel $\epsilon$. This is the only point where we have to refer explicitly to the shape of the $\epsilon$ kernel. For the sake of simplicity we use a normalized $\alpha$ function,

$$\epsilon(t) = \frac{t}{\tau^2}\, \mathrm{e}^{-t/\tau}\, \Theta(t) \equiv \gamma_{2,\tau}(t)\,, \qquad (9.67)$$

with time constant $ \tau$. The precise form of $ \epsilon$ is not important and similar results hold for a different choice of $ \epsilon$.

We want to approximate the time course of the membrane potential by a gamma distribution $\gamma_{\tilde\alpha_n,\tilde\lambda_n}$. The parameters $\tilde\alpha_n$ and $\tilde\lambda_n$ are chosen so that the first few moments of the distribution are identical to those of the membrane potential, i.e.,

$$u_n(t) \approx \tilde{a}_n\, \gamma_{\tilde\alpha_n,\tilde\lambda_n}(t)\,, \qquad (9.68)$$

with

$$\int_0^{\infty} t^k\, u_n(t)\, \mathrm{d}t \overset{!}{=} \int_0^{\infty} t^k\, \tilde{a}_n\, \gamma_{\tilde\alpha_n,\tilde\lambda_n}(t)\, \mathrm{d}t\,, \qquad k \in \{0, 1, 2\}\,. \qquad (9.69)$$

As far as the first two moments are concerned, a convolution of two distributions amounts to a summation of their means and variances. The convolution of $A_{n-1}$ with $\epsilon$ therefore shifts the center of mass by $2\tau$ and increases the variance by $2\tau^2$. Altogether, amplitude, center of mass, and variance of the time course of the membrane potential in layer $n$ are

$$\left.\begin{aligned} \tilde{a}_n &= \omega\, a_{n-1}\,, \\ \tilde{\mu}_n &= \mu_{n-1} + 2\tau\,, \\ \tilde{\sigma}^2_n &= \sigma^2_{n-1} + 2\tau^2\,, \end{aligned}\;\right\} \qquad (9.70)$$

respectively. The parameters $\tilde\alpha_n$ and $\tilde\lambda_n$ of the gamma distribution are directly related to mean and variance, viz., $\tilde\alpha_n = \tilde\mu_n^2/\tilde\sigma_n^2$ and $\tilde\lambda_n = \tilde\sigma_n^2/\tilde\mu_n$.
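Step (i) thus reduces to a three-parameter update followed by moment matching. The following sketch (our own function names and example values, not taken from the text) implements Eq. (9.70) together with the relations $\tilde\alpha_n = \tilde\mu_n^2/\tilde\sigma_n^2$ and $\tilde\lambda_n = \tilde\sigma_n^2/\tilde\mu_n$.

```python
def potential_moments(a_prev, mu_prev, var_prev, w, tau=1.0):
    """Step (i), Eq. (9.70): amplitude, mean, and variance of the membrane
    potential after convolving the gamma packet of layer n-1 with the
    alpha kernel gamma_{2,tau} (which has mean 2*tau and variance 2*tau^2)."""
    return w * a_prev, mu_prev + 2.0 * tau, var_prev + 2.0 * tau**2

def gamma_params(mu, var):
    """Match a gamma density to a given mean and variance:
    alpha = mu^2 / var, lam = var / mu."""
    return mu * mu / var, var / mu

# Illustrative input packet: a = 1, mu = 1, sigma^2 = 0.01, coupling w = 4.
a_t, mu_t, var_t = potential_moments(1.0, 1.0, 0.01, w=4.0)
alpha_t, lam_t = gamma_params(mu_t, var_t)
# The matched gamma reproduces the moments: mu = alpha*lam, var = alpha*lam^2.
```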

In step (ii) we calculate the firing-time distribution that results from a membrane potential with a time course given by a gamma distribution as in Eq. (9.68). We use the same strategy as in step (i), that is, we calculate the first few moments of the firing-time distribution and approximate it by the corresponding gamma distribution,

$$A_n(t) \approx a_n\, \gamma_{\alpha_n,\lambda_n}(t)\,. \qquad (9.71)$$

The zeroth moment of An(t) (i.e., the portion of neurons in layer n that participate in the activity pulse) can be cast in a particularly simple form; the expressions for higher order moments, however, contain integrals that have to be evaluated numerically. For amplitude, center of mass, and variance of An(t) we find

$$\left.\begin{aligned} a_n &= 1 - \mathrm{e}^{-\tilde{a}_n}\,, \\ \mu_n &= m_n^{(1)}\,, \\ \sigma^2_n &= m_n^{(2)} - \left[ m_n^{(1)} \right]^2\,, \end{aligned}\;\right\} \qquad (9.72)$$

with

$$\begin{aligned}
m_n^{(k)} &= \left(1 - \mathrm{e}^{-\tilde{a}_n}\right)^{-1} \int_0^{\infty} u_n(t)\, \exp\left[ -\int_{-\infty}^{t} u_n(t')\, \mathrm{d}t' \right] t^k\, \mathrm{d}t \\
&= \frac{\tilde{a}_n\, \tilde\lambda_n^k}{\left(1 - \mathrm{e}^{-\tilde{a}_n}\right) \Gamma(\tilde\alpha_n)} \int_0^{\infty} \exp\left[ -t - \tilde{a}_n\, \Gamma(\tilde\alpha_n, 0, t)/\Gamma(\tilde\alpha_n) \right] t^{k-1+\tilde\alpha_n}\, \mathrm{d}t
\end{aligned} \qquad (9.73)$$

being the $k$th moment of the firing-time distribution (9.63) that results from a gamma-shaped time course of the membrane potential. $\Gamma(z, t_1, t_2) = \int_{t_1}^{t_2} t^{z-1}\, \mathrm{e}^{-t}\, \mathrm{d}t$ is the generalized incomplete gamma function. The last equality in Eq. (9.73) has been obtained by substituting $\tilde{a}_n\, \gamma_{\tilde\alpha_n,\tilde\lambda_n}(t)$ for $u_n(t)$.
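Since the higher-order moments in Eq. (9.73) have to be evaluated numerically, a simple quadrature suffices. The following Python sketch is our own illustration (the parameter values are arbitrary); it accumulates the regularized incomplete gamma function on the fly with a midpoint rule. A useful consistency check is the zeroth moment, which equals unity by construction, since the integrand is a total derivative of $-\Gamma(\tilde\alpha_n)\,\mathrm{e}^{-\tilde{a}_n P}/\tilde{a}_n$ with $P = \Gamma(\tilde\alpha_n,0,t)/\Gamma(\tilde\alpha_n)$.

```python
import math

def firing_moments(a_t, alpha_t, lam_t, k_max=2, dt=1e-3, t_max=60.0):
    """Evaluate m_n^(k), Eq. (9.73), for k = 0..k_max, given a gamma-shaped
    potential u_n(t) = a_t * gamma_{alpha_t, lam_t}(t). The regularized
    incomplete gamma function P(alpha, t) = Gamma(alpha, 0, t)/Gamma(alpha)
    is accumulated on the fly with a midpoint rule."""
    g = math.gamma(alpha_t)
    m = [0.0] * (k_max + 1)
    P = 0.0  # running value of P(alpha_t, t)
    t = dt / 2
    while t < t_max:
        weight = math.exp(-t - a_t * P) * t**(alpha_t - 1.0) * dt
        for k in range(k_max + 1):
            m[k] += t**k * weight
        P += t**(alpha_t - 1.0) * math.exp(-t) / g * dt
        t += dt
    pref = a_t / ((1.0 - math.exp(-a_t)) * g)
    return [pref * lam_t**k * mk for k, mk in enumerate(m)]

# Illustrative parameters for the potential in layer n.
m0, m1, m2 = firing_moments(a_t=4.0, alpha_t=4.5, lam_t=0.5)
# m0 = 1 by construction; m1 and m2 yield mu_n and sigma_n^2 via Eq. (9.72).
```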

A combination of Eqs. (9.70) and (9.72) yields explicit expressions for the parameters $(a_n, \mu_n, \sigma_n)$ of the firing-time distribution in layer $n$ as a function of the parameters in the previous layer. The mapping $(a_{n-1}, \mu_{n-1}, \sigma_{n-1}) \to (a_n, \mu_n, \sigma_n)$ is closely related to the neural transmission function for pulse-packet input as discussed by Diesmann et al. (1999).


Gerstner and Kistler
Spiking Neuron Models. Single Neurons, Populations, Plasticity
Cambridge University Press, 2002
