5.9 Stochastic firing and rate models

All neuron models considered up to now emit spikes, either explicit action potentials that are generated by ionic processes as in Chapter 2, or formal spikes that are generated by a threshold process as in Chapter 4. On the other hand, if we take the point of view of rate coding, single spikes of individual neurons do not play an important role; cf. Chapter 1.4. The essential quantity to be transmitted from one group of neurons to the next is the firing rate, defined either as a temporal or as a population average. If this is true, models formulated on the level of firing rates would be sufficient.

As we have seen in Chapter 1.4, there are several ways to define the firing rate of a neuron. Consequently, rate-based models differ with respect to their notion of `firing rate'. Here we focus on three different rate models, viz., analog neurons (averaging over time), stochastic rate models (averaging over a stochastic ensemble), and population rate models (averaging over a population of neurons).


5.9.1 Analog neurons

If rate coding is understood in the sense of a spike count, then the essential information is carried by the mean firing rate, defined as the number $n_{\rm sp}(T)$ of spikes that occur in a given time interval $T$ divided by $T$,

$\nu = \frac{n_{\rm sp}(T)}{T}$   (5.125)

In the limit of a large interval $T$, many spikes occur within $T$ and we can approximate the empirical rate by a continuous variable $\nu$.

We have seen in the previous chapters that a neuron driven by a constant intracellular current $I_0$ emits a regular spike train. The rate $\nu$ is then simply the inverse of the constant interspike interval $s$. If the driving current $I_0$ is increased, the mean firing rate increases as well until it saturates at a maximum rate $\nu^{\rm max}$. The relation $g$ between the output rate and the input,

$\nu = g(I_0)$ ,   (5.126)

is called the gain function of the neuron. Examples of gain functions of detailed neuron models have been given in Fig. 2.5B and Fig. 2.11A. Simplified gain functions used in formal neuron models are given in Fig. 5.24. In fact, for stationary input any regularly firing (i.e., non-bursting) neuron is fully characterized by its gain function.
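As an illustration of Eqs. (5.125) and (5.126), the gain function of a simple model neuron can be measured numerically by driving it with a constant current and counting spikes. The following Python sketch uses a leaky integrate-and-fire neuron (cf. Chapter 4); the membrane time constant, resistance, threshold, and input values are arbitrary choices made only for this illustration.

# Gain function of a leaky integrate-and-fire neuron, measured as the
# spike count divided by the trial length T, cf. Eq. (5.125).
tau_m = 0.01       # membrane time constant (s); arbitrary
R = 1.0            # membrane resistance; arbitrary
theta = 1.0        # firing threshold; arbitrary
u_reset = 0.0      # reset potential
dt, T = 1e-4, 5.0  # integration step and trial length (s)

def firing_rate(I0):
    u, n_sp = u_reset, 0
    for _ in range(int(T / dt)):
        u += dt * (-u + R * I0) / tau_m   # leaky integration of the membrane potential
        if u >= theta:                    # formal spike and reset
            u = u_reset
            n_sp += 1
    return n_sp / T                       # Eq. (5.125)

for I0 in (0.8, 1.2, 1.6, 2.0, 3.0):
    print(I0, firing_rate(I0))            # nu = g(I0), cf. Eq. (5.126)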

Figure 5.24: Frequently used gain functions for rate models. The normalized output rate $x = \nu/\nu^{\rm max}$ is plotted as a function of the total input $I_0$. A. Sigmoidal function; cf. Eq. (5.129) with $\beta = 2$ and $\vartheta = 1$. B. Step function. C. Piecewise linear function.

In a network of neurons described on the level of rates, the input $I_i$ to a neuron $i$ is generated by the rates $\nu_j$ of other neurons $j$. Typically it is assumed that $I_i$ is just a weighted sum,

$I_i = \sum_{j \in \Gamma_i} w_{ij}\, \nu_j$ ,   (5.127)

where the weighting factor $w_{ij}$ is the synaptic efficacy. This implies that dendritic integration is a linear operation. A combination of Eqs. (5.127) and (5.126) yields

$\nu_i = g\!\left( \sum_j w_{ij}\, \nu_j \right)$ ,   (5.128)

which gives the output rate $\nu_i$ of neuron $i$ as a function of its inputs $\nu_j$. This equation plays a central role in the theory of neural networks (Haykin, 1994; Hertz et al., 1991).

We refer to the variable $\nu_i$ as the firing rate or activation of neuron $i$. The interpretation of the input $I_i$ is somewhat ambiguous. Some modelers think of it as a current, consistent with our notation in Eq. (5.127). Other researchers take $I_i$ as a voltage and call it the postsynaptic potential. In the case of constant input, the interpretation is irrelevant, since Eq. (5.128) is only used as a phenomenological model of certain aspects of neural information processing. The neuron itself is essentially treated as a black box that transforms a set of input rates into an output rate.

5.9.1.1 Example: Gain functions of formal neurons

In formal models the transfer function is often described by a hyperbolic tangent,

$g(I_0) = \frac{\nu^{\rm max}}{2} \left\{ 1 + \tanh\!\big[\beta\,(I_0 - \vartheta)\big] \right\}$ ,   (5.129)

with parameters $\nu^{\rm max}$, $\beta$, and $\vartheta$. The gain function has slope $\nu^{\rm max}\,\beta/2$ at its inflection point $I_0 = \vartheta$ and saturates at $\nu^{\rm max}$ as $I_0 \to \infty$; cf. Fig. 5.24A.
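The stated slope follows directly by differentiating Eq. (5.129),

$g'(I_0) = \frac{\nu^{\rm max}\,\beta}{2}\, \left[ 1 - \tanh^2\!\big(\beta\,(I_0 - \vartheta)\big) \right]$ ,

so that $g'(\vartheta) = \nu^{\rm max}\,\beta/2$.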

For $\beta \to \infty$, the gain function (5.129) approaches a step function

$g(I_0) = \nu^{\rm max}\, \Theta(I_0 - \vartheta)$ ;   (5.130)

cf. Fig. 5.24B. For the sake of simplicity, the sigmoidal transfer function (5.129) is often replaced by a piecewise linear transfer function

$g(I_0) = \begin{cases} 0 & \text{for } I_0 \le \vartheta \\ \nu^{\rm max}\,(I_0 - \vartheta) & \text{for } \vartheta < I_0 \le \vartheta + 1 \\ \nu^{\rm max} & \text{for } \vartheta + 1 < I_0 \end{cases}$   (5.131)

which is particularly convenient for a mathematical analysis; see, e.g., Sections 9.1.3 and 11.1.2.
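To make the three gain functions and the rate equation (5.128) concrete, here is a short Python sketch. It is an illustration only: the network size, the Gaussian weight statistics, and the parameter values nu_max, beta, and vartheta are arbitrary choices, and the fixed-point iteration is not guaranteed to converge for an arbitrary weight matrix.

import numpy as np

# Gain functions of Eqs. (5.129)-(5.131); nu_max, beta, vartheta are free parameters.
def g_sigmoid(I0, nu_max=1.0, beta=2.0, vartheta=1.0):
    return 0.5 * nu_max * (1.0 + np.tanh(beta * (I0 - vartheta)))

def g_step(I0, nu_max=1.0, vartheta=1.0):
    return nu_max * (I0 > vartheta)

def g_piecewise_linear(I0, nu_max=1.0, vartheta=1.0):
    return nu_max * np.clip(I0 - vartheta, 0.0, 1.0)

# Fixed-point iteration of the rate equation (5.128), nu_i = g(sum_j w_ij nu_j),
# for a small random network with the sigmoidal gain.
rng = np.random.default_rng(0)
N = 10
w = rng.normal(0.0, 0.5, size=(N, N))    # synaptic efficacies w_ij (arbitrary)
nu = np.zeros(N)                         # initial rates
for _ in range(100):
    nu = g_sigmoid(w @ nu)               # Eqs. (5.127) and (5.126) combined
print(nu)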

5.9.2 Stochastic rate model

If we consider spike firing as a stochastic process, we can think of the firing rate $\nu$ also as the probability density of finding a spike at a certain instant of time. In this picture, $\nu$ is the rate of the underlying Poisson process that generates the spikes; cf. Section 5.2.3. Stochastic rate models are therefore on the borderline between analog rate models and noisy spiking neuron models. The main difference is that stochastic spiking neuron models such as the Spike Response Model with escape noise (cf. Section 5.3) allow us to include refractoriness whereas a Poisson model does not (Kistler and van Hemmen, 2000a).


5.9.2.1 Example: Inhomogeneous Poisson model

A stochastic rate model in continuous time is defined by an inhomogeneous Poisson process. Spikes are formal events characterized by their firing times $t_j^{(f)}$, where $j$ is the index of the neuron and $f$ counts the spikes. At each moment of time spikes are generated with a rate $\nu_i(t)$ which depends on the input. It is no longer possible to calculate the input from a rate equation as in Eq. (5.127), since the input now consists of spikes, which are point events in time. We set

$\nu_i(t) = g\big(h_i(t)\big)$   (5.132)

where g(.) is the gain function of the neuron and

$h_i(t) = \sum_j \sum_f w_{ij}\, \epsilon_0\big(t - t_j^{(f)}\big)$   (5.133)

is the total input potential caused by presynaptic spike arrival. As in the model SRM$_0$, each presynaptic spike generates a postsynaptic potential with time course $\epsilon_0$. The synaptic efficacy $w_{ij}$ scales the amplitude of the response function. The postsynaptic potentials of all presynaptic spikes are added linearly. In contrast to the SRM$_0$, the stochastic rate model does not take into account refractoriness.
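A minimal simulation of the inhomogeneous Poisson model of Eqs. (5.132) and (5.133) can be sketched as follows. The exponential form of the kernel $\epsilon_0$, the sigmoidal gain, the 20 Hz presynaptic Poisson input, and all parameter values are assumptions made only for this illustration.

import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3                       # time step for simulating the point process (s)
T = 1.0                         # total simulated time (s)
tau_eps = 0.01                  # time constant of the assumed exponential kernel eps_0 (s)
time = np.arange(0.0, T, dt)

# Presynaptic input: Poisson spike trains of J neurons at 20 Hz (arbitrary choice).
J = 5
pre_spikes = rng.random((J, time.size)) < 20.0 * dt
w = rng.normal(0.5, 0.1, size=J)          # synaptic efficacies w_ij (arbitrary)

def eps0(s):
    # Assumed postsynaptic potential time course: exponential decay for s >= 0.
    return np.where(s >= 0.0, np.exp(-s / tau_eps), 0.0)

def gain(h, nu_max=100.0, beta=5.0, theta=0.5):
    # Sigmoidal gain function as in Eq. (5.129).
    return 0.5 * nu_max * (1.0 + np.tanh(beta * (h - theta)))

# Total input potential h_i(t), Eq. (5.133): weighted kernels summed over all
# presynaptic firing times t_j^(f).
h = np.zeros_like(time)
for j in range(J):
    for t_f in time[pre_spikes[j]]:
        h += w[j] * eps0(time - t_f)

# Inhomogeneous Poisson output: a spike occurs in [t, t+dt) with probability nu_i(t)*dt.
nu = gain(h)                              # Eq. (5.132)
out_spikes = rng.random(time.size) < nu * dt
print("output spike times:", time[out_spikes])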

5.9.2.2 Example: Stochastic model in discrete time

In order to illustrate the relation with the deterministic rate model of Eq. (5.128), we discretize time in steps of length $\Delta t = 1/\nu^{\rm max}$, where $\nu^{\rm max}$ is the maximum firing rate. In each time step the stochastic neuron is either active ($S_i = +1$) or quiescent ($S_i = 0$). The two states are taken stochastically with a probability which depends continuously upon the input $h_i$. The probability that a neuron is active at time $t + \Delta t$, given an input $h_i$ at time $t$, is

${\rm Prob}\left\{ S_i(t + \Delta t) = +1 \,\big|\, h_i(t) \right\} = \Delta t\; g\big(h_i(t)\big)$ ,   (5.134)

where $g(\cdot)$ is the gain function. If we take $\epsilon_0(s) = 1/\Delta t$ for $0 < s < \Delta t$ and zero otherwise, we find

$h_i(t) = \sum_j w_{ij}\, S_j(t)$ .   (5.135)
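In code, the discrete-time model of Eqs. (5.134) and (5.135) amounts to updating a vector of binary states. The sketch below uses an arbitrary random weight matrix and a sigmoidal gain chosen so that $g(h_i)\,\Delta t$ never exceeds one; all parameter values are assumptions for the illustration.

import numpy as np

rng = np.random.default_rng(2)
nu_max = 100.0                 # maximum firing rate (Hz); arbitrary choice
dt = 1.0 / nu_max              # time step Delta t = 1 / nu_max
N = 20                         # number of neurons
w = rng.normal(0.0, 0.3, size=(N, N))    # synaptic weights (arbitrary)

def gain(h, beta=2.0, theta=0.5):
    # Sigmoidal gain; g(h) <= nu_max, so g(h) * dt is a valid probability.
    return 0.5 * nu_max * (1.0 + np.tanh(beta * (h - theta)))

S = np.zeros(N)                          # binary states S_i in {0, 1}
for step in range(1000):
    h = w @ S                            # input h_i(t), Eq. (5.135)
    p = gain(h) * dt                     # Prob{S_i(t + dt) = 1 | h_i(t)}, Eq. (5.134)
    S = (rng.random(N) < p).astype(float)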

5.9.3 Population rate model

Closely related to the stochastic point of view is the notion of the rate as the average activity of a population of equivalent neurons. `Equivalent' means that all neurons have identical connectivity and receive the same type of input. Noise, however, is considered to be independent for each pair of neurons so that their response to the input can be different. We have seen in Section 1.5 that we can define a `rate' if we take a short time window $\Delta t$, count the number of spikes (summed over all neurons in the group) that occur in an interval $[t, t + \Delta t]$, and divide by the number of neurons and $\Delta t$. In the limit of $N \to \infty$ and $\Delta t \to 0$ (in this order), the activity $A$ is an analog variable which varies in continuous time,

$A(t) = \lim_{\Delta t \to 0} \lim_{N \to \infty} \frac{1}{\Delta t}\, \frac{n_{\rm act}(t; t + \Delta t)}{N}$ .   (5.136)
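For finite $N$ and $\Delta t$, Eq. (5.136) is simply a normalized spike count per time bin. The following sketch estimates $A(t)$ from a spike raster; the raster itself is generated randomly at 10 Hz purely for illustration.

import numpy as np

rng = np.random.default_rng(3)
N = 1000                         # number of neurons in the population
dt = 1e-3                        # width of the time bins Delta t (s)
T = 1.0                          # total observation time (s)

# Spike raster: spikes[i, k] is True if neuron i fires in bin k.
spikes = rng.random((N, int(T / dt))) < 10.0 * dt

# Eq. (5.136) with finite N and Delta t: spikes per bin, summed over the
# population, divided by N and by Delta t.
A = spikes.sum(axis=0) / (N * dt)   # population activity in Hz, one value per bin
print(A.mean())                      # close to the underlying 10 Hz rate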

Let us assume that we have several groups of neurons. Each group $l$ contains a large number of neurons and can be described by its activity $A_l$. A simple phenomenological model for the interaction between different groups is

$A_k = g\!\left( \sum_l J_{kl}\, A_l \right)$ ,   (5.137)

where $A_k$ is the population activity of group $k$, which receives input from other groups $l$. Equation (5.137) is formally equivalent to Eq. (5.128), but the parameters $J_{kl}$ are no longer the weights of synapses between two neurons but effective interaction strengths between groups of neurons.

We will see later, in Chapter 6, that Eq. (5.137) is indeed a correct description of the fixed point of interacting populations of neurons, that is, if all activity values $A_k$ are, apart from fluctuations, constant. As mentioned in Chapter 1.4, the interpretation of the rate as a population activity is not without problems. There are hardly any ensembles that would be large enough to allow sensible averaging and, at the same time, consist of neurons that are strictly equivalent in the sense that the internal parameters and the input are identical for all neurons belonging to the same ensemble. On the other hand, neurons in the cortex are often arranged in groups (columns) that roughly deal with the same type of signal and have similar response properties. We will come back to the interpretation of Eq. (5.137) as a population activity in Chapter 6.

5.9.3.1 Example: Dynamic rate models

The population rate does not require temporal averaging and can, in principle, change on a rapid time scale. A time-dependent version of the population rate equation (5.137) is the so-called Wilson-Cowan equation (Wilson and Cowan, 1972)

$\tau\, \frac{{\rm d}A_k(t)}{{\rm d}t} = -A_k(t) + g\!\left( \sum_l J_{kl} \int_0^\infty \alpha(s)\, A_l(t - s)\, {\rm d}s \right)$ .   (5.138)

Here, $A_k$ is the activity of population $k$ and the sum in the brackets runs over all other populations $l$ which send signals to $k$. The signals cause postsynaptic currents with time course $\alpha(s)$ and are scaled by the coupling $J_{kl}$.

In order to derive Eq. (5.138), Wilson and Cowan had to make a couple of strong assumptions and we may wonder whether (5.138) can be considered a realistic description of the population dynamics. More specifically, what determines the time constant $\tau$ which limits the response time of the system? Is it given by the membrane time constant of a neuron? Is $\tau$ really constant or does it depend on the input or the activity of the system? We will see in Chapter 6 that the population activity of a group of spiking neurons can, in some cases, react instantaneously to changes in the input. This suggests that the `time constant' $\tau$ in (5.138) is, at least in some cases, extremely short. The theory of population dynamics developed in Chapter 6 does not make use of the differential equation (5.138), but uses a slightly different mathematical framework.
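As a numerical illustration of Eq. (5.138), the sketch below integrates the Wilson-Cowan equation for two coupled populations with a simple Euler scheme. The coupling matrix, time constants, gain parameters, and the normalized exponential form assumed for the kernel $\alpha(s)$ are arbitrary choices made for the example, not values from the text.

import numpy as np

dt = 1e-4            # Euler integration step (s)
T = 0.5              # total simulated time (s)
tau = 0.01           # population time constant tau (s); arbitrary
tau_alpha = 0.005    # time constant of the synaptic kernel alpha(s) (s); arbitrary
J = np.array([[1.2, -1.0],
              [1.0,  0.0]])     # effective couplings J_kl (arbitrary)

def gain(x, nu_max=1.0, beta=4.0, theta=0.5):
    # Sigmoidal gain function as in Eq. (5.129).
    return 0.5 * nu_max * (1.0 + np.tanh(beta * (x - theta)))

steps = int(T / dt)
A = np.zeros((steps, 2))         # activities A_k(t) of the two populations
I_syn = np.zeros(2)              # running value of int_0^inf alpha(s) A_l(t - s) ds
A[0] = [0.1, 0.0]
for n in range(1, steps):
    # normalized exponential kernel: dI/dt = (A - I) / tau_alpha
    I_syn += dt * (A[n - 1] - I_syn) / tau_alpha
    dA = (-A[n - 1] + gain(J @ I_syn)) / tau     # Eq. (5.138)
    A[n] = A[n - 1] + dt * dA
print(A[-1])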

