6.2 Density Equations

In a population of neurons, each neuron may be in a different internal state. In this section we derive partial differential equations that describe how the distribution of internal states evolves as a function of time. We start in Section 6.2.1 with a population of integrate-and-fire neurons. Since the state of an integrate-and-fire neuron is characterized by its membrane potential, we describe the dynamics of the population as the evolution of membrane potential densities. In Section 6.2.2 we turn to neurons described by the Spike Response Model. SRM neurons are characterized by their state of refractoriness so that we have to introduce refractory densities. We will see that the solution of the dynamics of refractory densities leads to a macroscopic population activity equation for escape noise models. In Section 6.2.3 we compare the two approaches (i.e. membrane potential densities and refractory densities) and show that they are equivalent in the low-noise limit. The formulation of the dynamics of a population of integrate-and-fire neurons on the level of membrane potential densities has been developed by Abbott and van Vreeswijk (1993), Brunel and Hakim (1999), Fusi and Mattia (1999), Nykamp and Tranchina (2000), and Omurtag et al. (2000). The closely related formulation in terms of refractory densities has been studied by Wilson and Cowan (1972), Gerstner and van Hemmen (1992), Bauer and Pawelzik (1993), and Gerstner and van Hemmen (1994). Generalized density equations have been discussed by Knight (2000).


6.2.1 Integrate-and-Fire Neurons with Stochastic Spike Arrival

We study a homogeneous population of integrate-and-fire neurons. The internal state of a neuron i is determined by its membrane potential which changes according to

\[
\tau_m \frac{{\rm d}}{{\rm d}t} u_i = - u_i + R\, I_i(t) \,. \tag{6.10}
\]

Here $R$ is the input resistance, $\tau_m = R\,C$ the membrane time constant, and $I_i(t)$ the total input (external driving current and synaptic input). At $u_i = \vartheta$ the membrane potential is reset to $u_i = u_r < \vartheta$.

In a population of $N$ integrate-and-fire neurons, we may ask how many of the neurons have at time $t$ a given membrane potential. For $N \to \infty$ the fraction of neurons $i$ with membrane potential $u_0 < u_i(t) \le u_0 + \Delta u$ is

\[
\lim_{N\to\infty} \frac{\left\{\text{neurons with } u_0 < u_i(t) \le u_0 + \Delta u\right\}}{N}
 = \int_{u_0}^{u_0+\Delta u} p(u,t)\,{\rm d}u \,, \tag{6.11}
\]

where p(u, t) is the membrane potential density; cf. Chapter 5.5. The aim of this section is to describe the evolution of the density p(u, t) as a function of time. As we will see, the equation that describes the dynamics of p(u, t) is nearly identical to that of a single integrate-and-fire neuron with diffusive noise; cf. Eqs. (5.88) and (5.89).

There are three subtle differences though. First, while p(u, t) was introduced in Chapter 5.5 as the probability density for the membrane potential of a single neuron, it is now interpreted as the density of membrane potentials in a large population of neurons.

Second, the normalization is different. In Chapter 5.5 the integrated density $\int_{-\infty}^{\vartheta} p(u,t)\,{\rm d}u \le 1$ was interpreted as the probability that the neuron under consideration has not yet fired since its last spike at $\hat{t}$. The value of the integral therefore decreases over time. On the other hand, if a neuron in the population fires, it remains part of the population. Apart from a reset of the membrane potential nothing changes. Thus the integral over the density remains constant over time, i.e.,

\[
\int_{-\infty}^{\vartheta} p(u,t)\,{\rm d}u = 1 \,. \tag{6.12}
\]

The normalization to unity expresses the fact that all neurons have a membrane potential below or equal to threshold.

Third, the fraction of neurons that `flow' across threshold per unit of time is the expected value of the population activity $A(t)$. If we denote the flux across threshold by $J(\vartheta, t)$, we have

\[
A(t) = J(\vartheta, t) \,. \tag{6.13}
\]

Due to the reset, the neurons that `disappear' across threshold `reenter' at the reset potential $u_r$. Hence the membrane potential density at $u = u_r$ increases at a rate proportional to $A(t)$. More specifically, we have a `source' term $A(t)\,\delta(u - u_r)$ at the reset potential $u_r$ that balances the loss due to the movement across the threshold. The value of $A(t)$ is given by Eq. (6.13).

We assume that all neurons in the population receive the same driving current $I^{\rm ext}$. In addition, each neuron receives stochastic background input. We allow for different types of synapse. An input spike at a synapse of type $k$ causes a jump of the membrane potential by an amount $w_k$. The effective spike arrival rate (summed over all synapses of type $k$) is denoted by $\nu_k$. While the mean spike arrival rates $\nu_k(t)$ are identical for all neurons, we assume that the actual input spike trains at different neurons and different synapses are independent. With these assumptions, the dynamics for $u \le \vartheta$ is, in analogy to Eq. (5.88),

\[
\frac{\partial}{\partial t} p(u,t) = \frac{1}{\tau_m}\, p(u,t)
 - \frac{1}{\tau_m}\left[ -u + R\,I^{\rm ext}(t)\right] \frac{\partial}{\partial u} p(u,t)
 + \sum_k \nu_k(t)\left[ p(u-w_k,t) - p(u,t)\right]
 + A(t)\,\delta(u-u_r) \,. \tag{6.14}
\]

The first two terms on the right-hand side describe the continuous drift, the third term the jumps caused by stochastic spike arrival, and the last term describes the reset. Because of the firing condition, we have $p(u,t) = 0$ for $u > \vartheta$.

In order to calculate the population activity A(t), we need to determine the flux across threshold. To keep the argument slightly more general, we will consider the flux J(u0, t) across an arbitrary reference potential u0,

\[
J(u_0, t) = J_{\rm drift}(u_0, t) + J_{\rm jump}(u_0, t) \,, \tag{6.15}
\]

where Jdrift accounts for the continuous drift of the membrane potential during the time when no input spike arrives. Jjump is due to excitatory and inhibitory spike arrival.

Figure 6.2: A. All trajectories that are less than $w_k$ below $u_0$ cross $u_0$ upon spike arrival. B. The drift $J_{\rm drift}(u_0, t)$ depends on the density of trajectories and on the slope with which the trajectories cross the boundary $u_0$.

To evaluate $J_{\rm jump}$, let us consider excitatory input $w_k > 0$ first. All neurons that have a membrane potential $u_i$ with $u_0 - w_k < u_i \le u_0$ will jump across the reference potential $u_0$ upon spike arrival at synapse $k$; cf. Fig. 6.2A. Since the rate of spike arrival at synapse $k$ is $\nu_k$, the total flux caused by input spikes at all synapses is

\[
J_{\rm jump}(u_0, t) = \sum_k \nu_k \int_{u_0 - w_k}^{u_0} p(u,t)\,{\rm d}u \,. \tag{6.16}
\]

The drift $J_{\rm drift}(u_0, t)$ through the reference potential $u_0$ is given by the density $p(u_0, t)$ at the potential $u_0$ times the momentary `velocity' ${\rm d}u/{\rm d}t$; cf. Fig. 6.2B. With ${\rm d}u/{\rm d}t = [-u + R\,I^{\rm ext}(t)]/\tau_m$ we have

\[
J_{\rm drift}(u_0, t) = \frac{1}{\tau_m}\left[ -u_0 + R\,I^{\rm ext}(t)\right] p(u_0, t) \,. \tag{6.17}
\]

The total flux at the threshold $u_0 = \vartheta$ yields the population activity

\[
A(t) = \frac{1}{\tau_m}\left[ -\vartheta + R\,I^{\rm ext}(t)\right] p(\vartheta, t)
 + \sum_k \nu_k \int_{\vartheta - w_k}^{\vartheta} p(u,t)\,{\rm d}u \,. \tag{6.18}
\]

Since the probability density vanishes for $u > \vartheta$, the sum over the synapses $k$ can be restricted to all excitatory synapses. Eqs. (6.14) and (6.18) define the dynamics in a population of integrate-and-fire neurons with stochastic background input.
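For readers who want to experiment with Eqs. (6.14) and (6.18), the following sketch (in Python) integrates the density dynamics in flux form, using the decomposition of Eqs. (6.15)-(6.17) and the continuity equation (6.20) introduced below. The grid, the time step, the initial density, and the input parameters are illustrative choices rather than values taken from the text, and a first-order upwind discretization of the drift is used for simplicity.

\begin{verbatim}
import numpy as np

# Illustrative parameters (not taken from the text)
tau_m, R = 10.0, 1.0                    # membrane time constant (ms), resistance
theta, u_r = 1.0, 0.0                   # firing threshold and reset potential
jumps = [(+0.05, 0.8), (-0.05, 0.8)]    # (w_k, nu_k in kHz)

u_edges = np.linspace(-1.0, theta, 401)          # bin edges; last edge = threshold
du = u_edges[1] - u_edges[0]
u_mid = 0.5 * (u_edges[:-1] + u_edges[1:])
p = np.exp(-(u_mid - 0.5)**2 / (2 * 0.1**2))     # some initial density
p /= p.sum() * du                                # normalization, Eq. (6.12)
k_reset = np.argmin(np.abs(u_mid - u_r))
dt = 0.01                                        # time step (ms)

def flux(p, I_ext):
    """Total flux J(u,t) at every bin edge, Eqs. (6.15)-(6.17)."""
    v = (-u_edges + R * I_ext) / tau_m           # drift velocity du/dt
    p_pad = np.concatenate(([0.0], p, [0.0]))    # density vanishes outside the grid
    J = v * np.where(v > 0, p_pad[:-1], p_pad[1:])   # first-order upwind drift flux
    P = np.concatenate(([0.0], np.cumsum(p) * du))   # cumulative density at the edges
    for w, nu in jumps:                          # jump flux, Eq. (6.16)
        J += nu * (P - np.interp(u_edges - w, u_edges, P))
    return J

def step(p, I_ext):
    J = flux(p, I_ext)
    A = J[-1]                                    # flux across threshold, Eq. (6.18)
    dp = -(J[1:] - J[:-1]) / du                  # continuity equation, Eq. (6.20) below
    dp[k_reset] += A / du                        # reset term A(t) delta(u - u_r)
    return p + dt * dp, A

for _ in range(2000):                            # 20 ms of constant external drive
    p, A = step(p, I_ext=0.8)
print(p.sum() * du, A)                           # integral stays ~1; A is in kHz
\end{verbatim}

A direct Monte-Carlo simulation of many integrate-and-fire neurons with independent Poisson input provides a useful cross-check of such a discretization.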


6.2.1.1 Continuity equation

In this subsection we aim at an interpretation of Eqs. (6.14) - (6.18). Let us consider the portion of neurons with a membrane potential between u0 and u1; cf. Fig. 6.3. The fraction of neurons with u0 < u < u1 increases if neurons enter from below through the boundary u0 or from above through the boundary u1. A positive flux J(u, t) > 0 is defined as a flux towards increasing values of u. Since trajectories cannot simply end, we have the conservation law

\[
\frac{\partial}{\partial t} \int_{u_0}^{u_1} p(u',t)\,{\rm d}u' = J(u_0, t) - J(u_1, t) \,. \tag{6.19}
\]

Taking the derivative with respect to the upper boundary u1 and changing the name of the variable from u1 to u yields the continuity equation,

\[
\frac{\partial}{\partial t} p(u,t) = -\frac{\partial}{\partial u} J(u,t)
 \quad \text{for } u \ne u_r \text{ and } u \ne \vartheta \,, \tag{6.20}
\]

which expresses the conservation of the number of trajectories. At $u = u_r$ and $u = \vartheta$ special care has to be taken because of the reset. For $u > \vartheta$ the flux vanishes because neurons that pass the threshold are reset. Since neurons that have fired start a new trajectory at $u_r$, we have a `source of new trajectories' at $u = u_r$, i.e., new trajectories appear in the interval $[u_r - \epsilon, u_r + \epsilon]$ that have not entered the interval through one of the borders. Adding a term $A(t)\,\delta(u - u_r)$ on the right-hand side of (6.20) accounts for this source of trajectories. If we insert the explicit form of the flux that we derived in Eqs. (6.16) and (6.17) into the continuity equation (6.20), we arrive once again at Eq. (6.14). For a numerical implementation of Eq. (6.20) we refer the reader to the literature (Nykamp and Tranchina, 2000; Omurtag et al., 2000).

Figure 6.3: The number of trajectories in the interval [u0, u1] changes if one of the trajectories crosses the boundary u0 or u1. For a large number of neurons this fact is described by the continuity equation; cf. Eq. (6.20). Schematic figure where only three trajectories are shown.


6.2.1.2 Diffusion approximation

In the limit of small jump amplitudes wk, the density dynamics (6.14) can be approximated by a diffusion equation. To show this we expand the right-hand side of Eq. (6.14) into a Taylor series up to second order in wk. The result is the Fokker-Planck equation,

\[
\tau_m \frac{\partial}{\partial t} p(u,t) =
 - \frac{\partial}{\partial u}\left\{\left[ -u + R\,I^{\rm ext}(t) + \tau_m \sum_k \nu_k(t)\,w_k \right] p(u,t)\right\}
 + \frac{1}{2}\left[ \tau_m \sum_k \nu_k(t)\,w_k^2 \right] \frac{\partial^2}{\partial u^2} p(u,t)
 + \tau_m\,A(t)\,\delta(u-u_r) + \mathcal{O}(w_k^3) \,. \tag{6.21}
\]

The term with the second derivative describes a `diffusion' in terms of the membrane potential. The firing threshold acts as an absorbing boundary so that the density at threshold vanishes,

\[
p(\vartheta, t) = 0 \,. \tag{6.22}
\]

In order to calculate the flux through the threshold we expand Eq. (6.18) in $w_k$ about $u = \vartheta$ and obtain

\[
A(t) = -\left.\frac{\sigma^2(t)}{2\,\tau_m}\,\frac{\partial p(u,t)}{\partial u}\right|_{u=\vartheta} \,, \tag{6.23}
\]

where we have defined

\[
\sigma^2(t) = \tau_m \sum_k \nu_k(t)\,w_k^2 \,. \tag{6.24}
\]

Eqs. (6.21) - (6.23) together with the normalization (6.12) define the dynamics of a homogeneous population of integrate-and-fire units with `diffusive' noise. For a more detailed discussion of the diffusion limit see Chapter 5.5, in particular Eqs. (5.89) and (5.91).
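As a quick plausibility check of the drift and diffusion coefficients, the short snippet below (Python) evaluates $h_0 = R\,I_0 + \tau_m \sum_k \nu_k\,w_k$ and Eq. (6.24) for the stochastic-input parameters quoted in the caption of Fig. 6.4A; the numbers appear here only for the purpose of this check.

\begin{verbatim}
# Parameters quoted in the caption of Fig. 6.4A
tau_m, R_I0 = 10.0, 0.8                 # ms, constant drive R*I_0
rates = [(+0.05, 0.8), (-0.05, 0.8)]    # (w_k, nu_k in kHz)

h0 = R_I0 + tau_m * sum(nu * w for w, nu in rates)
sigma2 = tau_m * sum(nu * w**2 for w, nu in rates)   # Eq. (6.24)
print(h0, sigma2**0.5)                  # -> 0.8 and 0.2, as stated in the caption
\end{verbatim}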


6.2.1.3 Example: Stationary solution (*)

In this example, we derive the stationary solution $p(u, t) \equiv p(u)$ of the Fokker-Planck equation (6.21). The stationary distribution $p(u)$ of the membrane potential is of particular interest, since it is experimentally accessible (Calvin and Stevens, 1968; Destexhe and Pare, 1999; Ho and Destexhe, 2000).

We assume that the total input $h_0 = R\,I^{\rm ext} + \tau_m \sum_k \nu_k\,w_k$ is constant. In the stationary state, the temporal derivative on the left-hand side of Eq. (6.21) vanishes. The terms on the right-hand side can be transformed so that the stationary Fokker-Planck equation reads

\[
0 = -\frac{\partial}{\partial u} J(u) + A_0\,\delta(u - u_r) \,, \tag{6.25}
\]

where A0 is the population activity (or mean firing rate) in the stationary state and

\[
J(u) = \frac{-u + h_0}{\tau_m}\, p(u) - \frac{1}{2}\,\frac{\sigma^2}{\tau_m}\,\frac{\partial}{\partial u} p(u) \tag{6.26}
\]

is the total flux; cf. Eq. (6.20). The meaning of Eq. (6.25) is that the flux is constant except at $u = u_r$, where it jumps by an amount $A_0$. Similarly, the boundary condition $p(\vartheta, t) = 0$ implies a second discontinuity of the flux at $u = \vartheta$.

With the results from Chapter 5.5 in mind, we expect that the stationary solution approaches a Gaussian distribution for $u \to -\infty$. In fact, we can check easily that for any constant $c_1$

\[
p(u) = \frac{c_1}{\sigma}\,\exp\!\left[ -\frac{(u-h_0)^2}{\sigma^2}\right]
 \quad \text{for } u \le u_r \tag{6.27}
\]

is a solution of Eq. (6.25) with flux $J(u) = 0$. However, for $u > u_r$ a simple Gaussian distribution cannot be a solution since it does not respect the boundary condition $p(\vartheta) = 0$. Nevertheless, we can make an educated guess and try a modified Gaussian (Giorno et al., 1992; Brunel and Hakim, 1999; Tanabe et al., 1998),

\[
p(u) = \frac{c_2}{\sigma^2}\,\exp\!\left[ -\frac{(u-h_0)^2}{\sigma^2}\right]\,
 \int_u^{\vartheta} \exp\!\left[ \frac{(x-h_0)^2}{\sigma^2}\right]{\rm d}x
 \quad \text{for } u_r < u \le \vartheta \,, \tag{6.28}
\]

with some constant $c_2$. We have written the above expression as a product of two factors. The first factor on the right-hand side is a standard Gaussian while the second factor guarantees that $p(u) \to 0$ for $u \to \vartheta$. If we insert Eq. (6.28) into (6.25) we can check that it is indeed a solution. The constant $c_2$ is proportional to the flux,

\[
c_2 = 2\,\tau_m\, J(u) \quad \text{for } u_r < u \le \vartheta \,. \tag{6.29}
\]

The solution defined by Eqs. (6.27) and (6.28) must be continuous at u = ur. Hence

\[
c_1 = \frac{c_2}{\sigma}\int_{u_r}^{\vartheta} \exp\!\left[ \frac{(x-h_0)^2}{\sigma^2}\right]{\rm d}x \,. \tag{6.30}
\]

Finally, the constant c2 is determined by the normalization condition (6.12). We use Eqs. (6.27), (6.28), and (6.30) in (6.12) and find

\[
\frac{1}{c_2} = \int_{-\infty}^{u_r}\!\int_{u_r}^{\vartheta} f(x,u)\,{\rm d}x\,{\rm d}u
 + \int_{u_r}^{\vartheta}\!\int_{u}^{\vartheta} f(x,u)\,{\rm d}x\,{\rm d}u
 = \int_{u_r}^{\vartheta}\!\int_{-\infty}^{x} f(x,u)\,{\rm d}u\,{\rm d}x \,, \tag{6.31}
\]

with

\[
f(x,u) = \frac{1}{\sigma^2}\,\exp\!\left[ -\frac{(u-h_0)^2}{\sigma^2}\right]
 \exp\!\left[ \frac{(x-h_0)^2}{\sigma^2}\right] \,. \tag{6.32}
\]

Figure 6.4B shows the stationary density p(u) for different amplitudes of the noise.
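The constants $c_1$ and $c_2$, and hence the stationary density, can be evaluated numerically. The sketch below (Python) does so for illustrative parameters ($h_0 = 0.8$, $\sigma = 0.2$, $u_r = 0$, $\vartheta = 1$, as in Fig. 6.4B); replacing the lower limit $-\infty$ by $h_0 - 10\,\sigma$ is an assumption that is harmless here because the Gaussian factor is negligible further out.

\begin{verbatim}
import numpy as np

h0, sigma, u_r, theta = 0.8, 0.2, 0.0, 1.0

def trapz(y, x):                                  # simple trapezoidal rule
    y = np.asarray(y, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def f(x, u):                                      # Eq. (6.32)
    return np.exp(-(u - h0)**2 / sigma**2 + (x - h0)**2 / sigma**2) / sigma**2

def inner(x):                                     # integral over u from "-infinity" to x
    u = np.linspace(h0 - 10 * sigma, x, 2000)
    return trapz(f(x, u), u)

x = np.linspace(u_r, theta, 400)
c2 = 1.0 / trapz([inner(xi) for xi in x], x)      # Eq. (6.31)
c1 = c2 / sigma * trapz(np.exp((x - h0)**2 / sigma**2), x)   # Eq. (6.30)

def p(u):                                         # Eqs. (6.27) and (6.28)
    if u <= u_r:
        return c1 / sigma * np.exp(-(u - h0)**2 / sigma**2)
    xs = np.linspace(u, theta, 400)
    return (c2 / sigma**2 * np.exp(-(u - h0)**2 / sigma**2)
            * trapz(np.exp((xs - h0)**2 / sigma**2), xs))

print(p(0.8), p(theta))                           # density near h_0; vanishes at threshold
\end{verbatim}

By Eq. (6.29), the stationary activity obtained from this normalization is $A_0 = c_2/(2\,\tau_m)$, which can be compared with the closed-form expression (6.33) below.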

The activity $A_0$ is identical to the flux $J(u)$ between $u_r$ and $\vartheta$ and therefore proportional to the constant $c_2$; cf. Eq. (6.29). If we express the integral over $u$ in Eq. (6.31) in terms of the error function, ${\rm erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x \exp(-u^2)\,{\rm d}u$, we obtain

\[
A_0^{-1} = \tau_m\,\sqrt{\pi}\int_{(u_r - h_0)/\sigma}^{(\vartheta - h_0)/\sigma}
 \exp\!\left( x^2 \right)\left[ 1 + {\rm erf}(x)\right]{\rm d}x \,, \tag{6.33}
\]

which is identical to expression (5.104) obtained in Chapter 5.5.
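Expression (6.33) is easy to evaluate numerically. The following snippet (Python) does so for the same illustrative parameters as above; the result can also be checked against $A_0 = c_2/(2\,\tau_m)$ from Eq. (6.29).

\begin{verbatim}
import numpy as np
from scipy.special import erf

tau_m = 10.0                                      # ms
h0, sigma, u_r, theta = 0.8, 0.2, 0.0, 1.0

x = np.linspace((u_r - h0) / sigma, (theta - h0) / sigma, 10000)
dx = x[1] - x[0]
integrand = np.exp(x**2) * (1.0 + erf(x))
A0 = 1.0 / (tau_m * np.sqrt(np.pi) * integrand.sum() * dx)   # Eq. (6.33)
print(A0)                                         # population activity in kHz
\end{verbatim}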

Figure 6.4: A. Membrane potential trajectories of 5 neurons ($R = 1$ and $\tau_m = 10$ ms) driven by a constant background current $I_0 = 0.8$ and stochastic background input with $\nu_+ = \nu_- = 0.8$ kHz and $w_{\pm} = \pm 0.05$. These parameters correspond to $h_0 = 0.8$ and $\sigma = 0.2$ in the diffusive noise model. B. Stationary membrane potential distribution in the diffusion limit for $\sigma = 0.2$ (solid line), $\sigma = 0.1$ (short-dashed line), and $\sigma = 0.5$ (long-dashed line). (Threshold $\vartheta = 1$.) C. Mean activity of a population of integrate-and-fire neurons with diffusive noise as a function of $h_0$ for five different noise levels, viz. (from top to bottom) $\sigma = 1.0$, $\sigma = 0.5$, $\sigma = 0.2$ (solid line), $\sigma = 0.1$, and $\sigma = 0.0$.


6.2.2 Spike Response Model Neurons with Escape Noise

In this section we develop a density formalism for Spike Response Model neurons, similar to the membrane potential density approach for integrate-and-fire neurons that we have discussed in the preceding section. The main difference is that we replace the membrane potential density $p(u, t)$ by a refractory density $q(r, t)$, to be introduced below.

We study a homogeneous population of SRM neurons with escape noise. The membrane potential of the neurons,

\[
u(t) = \eta(t - \hat{t}) + h_{\rm PSP}(t\,|\,\hat{t}) \,, \tag{6.34}
\]

depends on their refractory state,

\[
r = t - \hat{t} \ge 0 \,, \tag{6.35}
\]

i.e., on the time that has passed since the last spike. If we know r and the total input current in the past, we can calculate the membrane potential,

\[
u(t) = \eta(r) + h_{\rm PSP}(t\,|\,t-r) \,. \tag{6.36}
\]

Given the importance of the refractory variable $r$, we may wonder how many neurons in the population have a refractory state between $r_0$ and $r_0 + \Delta r$. For a large population ($N \to \infty$) the fraction of neurons with a momentary value of $r$ in the interval $[r_0, r_0 + \Delta r]$ is given by

\[
\lim_{N\to\infty} \frac{\left\{\text{neurons with } r_0 < r(t) \le r_0 + \Delta r\right\}}{N}
 = \int_{r_0}^{r_0+\Delta r} q(r,t)\,{\rm d}r \,, \tag{6.37}
\]

where q(r, t) is the refractory density. The aim of this section is to describe the dynamics of a population of SRM neurons by the evolution of q(r, t).

We start from the continuity equation,

\[
\frac{\partial}{\partial t} q(r,t) = -\frac{\partial}{\partial r} J_{\rm refr}(r,t) \,, \tag{6.38}
\]

where we have introduced the flux $J_{\rm refr}$ along the axis of the refractory variable $r$. As long as the neuron does not fire, the variable $r = t - \hat{t}$ increases at a speed of ${\rm d}r/{\rm d}t = 1$. The flux is the density $q$ times the velocity, hence

\[
J_{\rm refr}(r,t) = q(r,t)\,\frac{{\rm d}r}{{\rm d}t} = q(r,t) \,. \tag{6.39}
\]

The continuity equation (6.38) expresses the fact that, as long as a neuron does not fire, its trajectory $r(t)$ can neither start nor end. On the other hand, if a neuron fires, its trajectory stops at the current value of $r$ and `reappears' at $r = 0$. In the escape rate formalism, the instantaneous firing rate of a neuron with refractory variable $r$ is given by the hazard

\[
\rho(t\,|\,t-r) = f\!\left[ \eta(r) + h_{\rm PSP}(t\,|\,t-r)\right] \,. \tag{6.40}
\]

If we multiply the hazard (6.40) with the density q(r, t), we get the loss per unit of time,

\[
J_{\rm loss} = -\rho(t\,|\,t-r)\,q(r,t) \,. \tag{6.41}
\]

The total number of trajectories that disappear at time t due to firing is equal to the population activity, i.e.,

\[
A(t) = \int_0^{\infty} \rho(t\,|\,t-r)\,q(r,t)\,{\rm d}r \,. \tag{6.42}
\]

The loss (6.41) has to be added as a `sink' term on the right-hand side of the continuity equation, while the activity A(t) acts as a source at r = 0. The full dynamics is

\[
\frac{\partial}{\partial t} q(r,t) = -\frac{\partial}{\partial r} q(r,t)
 - \rho(t\,|\,t-r)\,q(r,t) + \delta(r)\,A(t) \,. \tag{6.43}
\]

This partial differential equation is the analog of the Fokker-Planck equation (6.21) for the membrane potential density of integrate-and-fire neurons. The relation between the two equations will be discussed in Section 6.2.3.

Equation (6.43) can be rewritten in the form of an integral equation for the population activity. The mathematical details of the integration will be presented below. The final result is

\[
A(t) = \int_{-\infty}^{t} P_I(t\,|\,\hat{t})\,A(\hat{t})\,{\rm d}\hat{t} \,, \tag{6.44}
\]

where

\[
P_I(t\,|\,\hat{t}) = \rho(t\,|\,\hat{t})\,\exp\!\left[ -\int_{\hat{t}}^{t} \rho(t'\,|\,\hat{t})\,{\rm d}t'\right] \tag{6.45}
\]

is the interval distribution of neurons with escape noise; cf. Eq. (5.9). Thus, neurons that have fired their last spike at time $\hat{t}$ contribute with weight $P_I(t\,|\,\hat{t})$ to the activity at time $t$. Integral equations of the form (6.44) are the starting point for a formal theory of population activity; cf. Section 6.3. For a numerical implementation of population dynamics, it is more convenient to work directly on the level of the density equations (6.43). A simple discretization scheme for numerical implementations is discussed below.
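As an illustration of Eq. (6.45), the sketch below (Python) evaluates the interval distribution for constant input, where the hazard depends only on the time $s = t - \hat{t}$ since the last spike. The exponential escape rate, the refractory kernel $\eta$, and all parameter values are illustrative assumptions and are not meant to reproduce Eq. (5.116).

\begin{verbatim}
import numpy as np

tau_m, h0, theta = 10.0, 0.8, 1.0                 # ms, constant input, threshold
eta = lambda s: -1.0 * np.exp(-s / tau_m)         # assumed spike after-potential
f = lambda u: 0.1 * np.exp((u - theta) / 0.2)     # assumed escape rate (kHz)

s = np.linspace(0.0, 200.0, 20001)                # time since the last spike (ms)
ds = s[1] - s[0]
rho = f(eta(s) + h0)                              # hazard, cf. Eq. (6.40)
survivor = np.exp(-np.cumsum(rho) * ds)           # exp[-int_0^s rho(s') ds']
P_I = rho * survivor                              # interval distribution, Eq. (6.45)

print(P_I.sum() * ds)                             # ~1: each neuron eventually fires again
print(1.0 / (survivor.sum() * ds))                # stationary A_0 = 1/<interval> (kHz)
\end{verbatim}

In the stationary state the normalization condition derived below, Eq. (6.53), reduces to $A_0 \int_0^{\infty} \exp[-\int_0^s \rho(s')\,{\rm d}s']\,{\rm d}s = 1$, which is what the last line evaluates.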

6.2.2.1 Integrating the partial differential equation (*)

All neurons that have fired together at time $\hat{t}$ form a group that moves along the $r$-axis at constant speed. To solve Eq. (6.43) we turn to a frame of reference that moves along with the group. We replace the variable $r$ by $t - r \equiv \hat{t}$ and define a new density

\[
Q(\hat{t}, t) = q(t - \hat{t}, t) \,, \tag{6.46}
\]

with $\hat{t} \le t$. The total derivative of $Q$ with respect to $t$ is

\[
\frac{{\rm d}}{{\rm d}t} Q(\hat{t}, t) =
 \left.\frac{\partial}{\partial r} q(r,t)\right|_{r=t-\hat{t}} \frac{{\rm d}r}{{\rm d}t}
 + \left.\frac{\partial}{\partial t} q(r,t)\right|_{r=t-\hat{t}} \tag{6.47}
\]

with dr/dt = 1. We insert Eq. (6.43) on the right-hand side of (6.47) and obtain

\[
\frac{{\rm d}}{{\rm d}t} Q(\hat{t}, t) = -\rho(t\,|\,\hat{t})\,Q(\hat{t}, t) \,. \tag{6.48}
\]

The partial differential equation (6.43) has thus been transformed into an ordinary differential equation that is solved by

\[
Q(\hat{t}, t) = Q(\hat{t}, t_0)\,\exp\!\left[ -\int_{t_0}^{t} \rho(t'\,|\,\hat{t})\,{\rm d}t'\right] \,, \tag{6.49}
\]

where $Q(\hat{t}, t_0)$ is the initial condition, which is still to be fixed.

From the definition of the refractory density $q(r, t)$ we conclude that $q(0, t)$ is the proportion of neurons at time $t$ that have just fired, i.e., $q(0, t) = A(t)$ or, in terms of the new refractory density, $Q(t, t) = A(t)$. We can thus fix the initial condition in Eq. (6.49) at $t_0 = \hat{t}$ and find

\[
Q(\hat{t}, t) = A(\hat{t})\,\exp\!\left[ -\int_{\hat{t}}^{t} \rho(t'\,|\,\hat{t})\,{\rm d}t'\right] \,. \tag{6.50}
\]

On the other hand, from (6.42) we have

\[
A(t) = \int_{-\infty}^{t} \rho(t\,|\,\hat{t})\,Q(\hat{t}, t)\,{\rm d}\hat{t} \,. \tag{6.51}
\]

If we insert (6.50) into (6.51), we find

\[
A(t) = \int_{-\infty}^{t} \rho(t\,|\,\hat{t})\,\exp\!\left[ -\int_{\hat{t}}^{t} \rho(t'\,|\,\hat{t})\,{\rm d}t'\right] A(\hat{t})\,{\rm d}\hat{t} \,, \tag{6.52}
\]

which is the population equation (6.44) mentioned above (Gerstner and van Hemmen, 1992). See also the papers by Wilson and Cowan (1972) and Knight (1972a) for related approaches. If we insert Eq. (6.50) into the normalization condition $1 = \int_{-\infty}^{t} Q(\hat{t}, t)\,{\rm d}\hat{t}$ we arrive at

\[
1 = \int_{-\infty}^{t} \exp\!\left[ -\int_{\hat{t}}^{t} \rho(t'\,|\,\hat{t})\,{\rm d}t'\right] A(\hat{t})\,{\rm d}\hat{t} \,. \tag{6.53}
\]

Both the population equation (6.52) and the normalization condition (6.53) will play an important role below in Section 6.3.

6.2.2.2 Numerical implementation (*)

For a numerical implementation of Eq. (6.43) we discretize the refractory variable $r$ in steps of $\Delta t$ and define $n_k(t) = \int_0^{\Delta t} q(k\,\Delta t + s, t)\,{\rm d}s$. The normalization is $\sum_k n_k(t) = 1$ for all $t$. The probability that a neuron with refractory state $r = k\,\Delta t$ fires in a time step $\Delta t$ is

\[
P_F(k) = 1 - \exp\!\left\{ -f\!\left[ \eta(k\,\Delta t) + h_{\rm PSP}(t\,|\,t - k\,\Delta t)\right]\Delta t \right\} \,. \tag{6.54}
\]

Discretization of Eq. (6.43) yields

\[
n_k(t + \Delta t) = n_{k-1}(t)\left[ 1 - P_F(k-1)\right]
 + \delta_{k,0}\sum_{k'=1}^{\infty} P_F(k')\,n_{k'}(t) \,. \tag{6.55}
\]

If refractory effects are negligible for $r \ge \Delta^{\rm refr}$, then we can truncate the summation at $k_{\rm max} \ge \Delta^{\rm refr}/\Delta t$. All neurons with $k \ge k_{\rm max}$ have the same firing probability $P_F(k) \equiv P_{\rm free}$. If we introduce the normalization $\sum_{k > k_{\rm max}} n_k(t) = 1 - \sum_{k=1}^{k_{\rm max}} n_k(t)$ we arrive at the update rule

\[
n_0(t + \Delta t) = P_{\rm free} - \sum_{k=1}^{k_{\rm max}} \left[ P_{\rm free} - P_F(k)\right] n_k(t) \tag{6.56}
\]

and, for $1 \le k \le k_{\rm max}$,

\[
n_k(t + \Delta t) = \left[ 1 - P_F(k-1)\right] n_{k-1}(t) \,. \tag{6.57}
\]

Note that $n_0$ is the fraction of neurons that fire in the interval $[t, t + \Delta t]$. Hence we set $A(t + \Delta t) = n_0(t + \Delta t)/\Delta t$. The above algorithm allows for a rapid integration of the population density equation (6.43).
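A minimal Python version of this scheme is sketched below. The escape rate $f$, the kernel $\eta$, and the constant input $h_{\rm PSP}$ are illustrative assumptions; moreover, the just-fired bin $n_0$ is obtained here from the normalization $\sum_k n_k = 1$, a conservative variant of Eq. (6.56).

\begin{verbatim}
import numpy as np

dt, k_max = 0.5, 100                             # time step (ms), truncation index
tau_m, theta = 10.0, 1.0
eta = lambda r: -1.0 * np.exp(-r / tau_m)        # assumed spike after-potential
f = lambda u: 0.1 * np.exp((u - theta) / 0.2)    # assumed escape rate (kHz)

r = np.arange(k_max + 1) * dt                    # refractory variable r = k*dt
n = np.zeros(k_max + 1)
n[-1] = 1.0                                      # start with all neurons 'free'

for _ in range(400):                             # 200 ms of constant input
    h_psp = 0.8
    P = 1.0 - np.exp(-f(eta(r) + h_psp) * dt)    # firing probability, Eq. (6.54)
    n_new = np.zeros_like(n)
    n_new[1:] = n[:-1] * (1.0 - P[:-1])          # survive and age by one bin, Eq. (6.57)
    n_new[-1] += n[-1] * (1.0 - P[-1])           # neurons beyond k_max stay 'free'
    n_new[0] = 1.0 - n_new.sum()                 # just-fired bin; cf. Eq. (6.56)
    n = n_new
    A = n[0] / dt                                # population activity (kHz)

print(A)                                         # stationary activity
\end{verbatim}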

6.2.2.3 Example: Time-dependent input

We simulate a population of integrate-and-fire neurons with escape rate given by Eq. (5.116). At t = 100ms a time-dependent input is switched on. The population activity A(t) calculated numerically by iterating Eqs. (6.56) and (6.57) responds to the input in a non-linear fashion; cf. Fig. 6.5. The population activity of the model with escape noise can be compared to that of a model with diffusive noise. The results are strikingly similar. We note that even the rapid components of the input current are, at least partially, visible in the population activity.


6.2.3 Relation between the Approaches

Density methods have proven to be a useful tool for the simulation and analysis of the behavior of large populations of neurons. The two approaches that we have discussed in this section, viz., membrane potential densities and refractory densities, are closely related. For noise-free neurons driven by a constant super-threshold stimulus, the two mathematical formulations are, in fact, equivalent and related by a simple change of variables. But even for noisy neurons with sub-threshold stimulation the two approaches are comparable; cf. Fig. 6.5. Both methods are linear in the densities and amenable to efficient numerical implementations. The formal mathematical relation between the two approaches is shown at the end of this section.

Figure 6.5: A. Activity of a population of integrate-and-fire neurons with diffusive noise (solid line) or escape noise (dashed line). At $t = 150$ ms a time-dependent current is switched on. B. Time course of the input current. The current contains 5 randomly chosen frequency components between 5 and 500 Hz. Parameters: $\vartheta = 1$, $R = 1$, $\tau_m = 10$ ms, and $u_r = 0$; diffusive noise model with $\sigma = 0.2$; escape rate given by Eq. (5.116) with $\sigma = 0.2$. The diffusive noise model has been simulated using membrane potential densities while the escape noise model has been simulated using refractory densities.

What are the main differences between the two approaches? Both are single-variable density methods. In the refractory density method the relevant variable is the refractory variable r. The refractory method is therefore not compatible with diffusive noise since diffusive noise generates a distribution of membrane potentials so that we would need a description by two-dimensional densities. Diffusive noise can, to a certain degree, be replaced by a suitably chosen escape function f in a noisy threshold model. But while escape rate models can mimic the consequence of stochastic spike arrival for the population activity, escape rates cannot serve as a model for membrane potential fluctuations that are commonly seen in intracellular recordings.

On the other hand, refractory density methods combined with escape noise are well suited to describe neuronal refractoriness. It is, for example, straightforward to implement a spiking neuron model with two or more time scales, a slow one for the spike after-potential $\eta$ and a fast one for the postsynaptic potential $\epsilon$, as is typical for Hodgkin-Huxley type neurons; cf. Chapters 2.2 and 4.3. A finite rise time of the postsynaptic potential can be included in the analysis without any additional effort. While it is difficult to include reversal potentials and adaptation in the analysis of refractory density models, both effects can be incorporated phenomenologically in numerical implementations by a few additional macroscopic variables. A further advantage of refractory densities combined with escape noise is that the density equations can be integrated formally. While the integral representation is not useful for numerical solutions, it is useful for an interpretation of the dynamics in terms of interspike interval distributions.

Membrane potential density methods work best for integrate-and-fire neurons. Since the membrane potential of an integrate-and-fire neuron is described by a single differential equation, the derivation of the density equation is straightforward. A numerical integration of the density equations causes no major problems if the threshold and reset mechanism as well as the `drift' and `jump' terms are carefully implemented (Nykamp and Tranchina, 2000). Reversal potentials can be included at no extra cost, both in simulations and in the analysis (Abbott and van Vreeswijk, 1993; Nykamp and Tranchina, 2000; Omurtag et al., 2000). Stochastic spike arrival can be described explicitly, and both the amplitude and the frequency spectrum of the membrane potential fluctuations can be predicted. While the formal approach can be extended to detailed neuron models, the treatment of high-dimensional density equations requires efficient numerical methods (Knight, 2000). Most implementations so far have been restricted to integrate-and-fire neurons with or without reversal potentials.


6.2.3.1 From membrane potential densities to phase densities (*)

For constant input current I0 it is possible to transform membrane potential densities into phase densities. A description by phase variables is interesting in itself; at the same time this section will introduce the methods that we will use below for the transformation from membrane potential densities to refractory densities.

We consider a population of noise-free integrate-and-fire neurons stimulated by a constant super-threshold current. To keep the arguments general, we consider a non-linear integrate-and-fire neuron (cf. Chapter 4.1) defined by

\[
\frac{{\rm d}}{{\rm d}t} u = F(u) = -\frac{u - h}{\tau(u)} + \frac{R(u)}{\tau(u)}\, I_0 \,. \tag{6.58}
\]

The voltage dependence of $\tau$ and $R$ can account for voltage-dependent conductances and reversal potentials. For $\tau(u) = \tau_m$ and $R(u) = R$ we have the standard equation of an integrate-and-fire neuron.

For $I_0$ sufficiently large, the neuron will reach the firing threshold $\vartheta = 1$. After firing, the membrane potential is reset to $u_r = 0$. Integration of the differential equation (6.58) yields the membrane trajectory,

\[
u(t) = \psi(t - \hat{t}) \,, \tag{6.59}
\]

with ${\rm d}\psi/{\rm d}t = F[\psi(t - \hat{t})]$. Here $\hat{t}$ denotes the last firing time of the neuron; cf. Fig. 6.6. For constant super-threshold input, the process of integration and reset repeats with a period $T$, i.e., $u(t + T) = u(t)$. We may therefore introduce a phase variable $\phi$ that increases from zero to $T$ and is reset to zero after each firing, i.e., $\phi = t \bmod T$. This allows us to rewrite the membrane potential trajectory as $u(t) = \psi(\phi)$.

Figure 6.6: A. With constant input, a noise-free (non-linear) integrate-and-fire neuron fires regularly with period $T$. We define a phase variable by $\phi = t - \hat{t}$ and write the trajectory as $u = \psi(\phi)$. B. The membrane potential density $p(u, t)$ is related to the phase density $q(\phi, t)$ multiplied by ${\rm d}\psi/{\rm d}\phi$. The number of neurons with phase variable between $\phi_0$ and $\phi_1$ (shaded region) is equal to the number of neurons with membrane potential between $u_0 = \psi(\phi_0)$ and $u_1 = \psi(\phi_1)$.
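As a small consistency check, the snippet below (Python) integrates Eq. (6.58) for the standard linear integrate-and-fire case ($\tau(u) = \tau_m$, $R(u) = R$, $h = 0$) with a constant super-threshold current and compares the numerically obtained period $T$ with the closed-form value $T = \tau_m \ln[(R I_0 - u_r)/(R I_0 - \vartheta)]$. Parameter values are illustrative.

\begin{verbatim}
import numpy as np

tau_m, R, I0 = 10.0, 1.0, 1.5           # ms; constant super-threshold current
theta, u_r = 1.0, 0.0

def F(u):                               # Eq. (6.58) with tau(u) = tau_m, R(u) = R, h = 0
    return (-u + R * I0) / tau_m

dt, u, t = 0.001, u_r, 0.0              # forward-Euler integration from the reset value
while u < theta:
    u += dt * F(u)
    t += dt

T_closed = tau_m * np.log((R * I0 - u_r) / (R * I0 - theta))
print(t, T_closed)                      # both close to tau_m*ln(3), i.e. about 11 ms
\end{verbatim}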

We can introduce a phase density $q(\phi, t)$ so that $\int_{\phi_0}^{\phi_1} q(\phi, t)\,{\rm d}\phi$ is the fraction of neurons with phase variable in the interval $[\phi_0, \phi_1]$. The phase density $q$ is related to the membrane potential density $p$ by $q(\phi, t)\,{\rm d}\phi = p(u, t)\,{\rm d}u$ (see Fig. 6.6B), so that

\[
q(\phi, t) = \left. p(u,t)\right|_{u=\psi(\phi)}\,
 \left.\frac{{\rm d}u}{{\rm d}t}\right|_{u=\psi(\phi)}\,\frac{{\rm d}t}{{\rm d}\phi}
 = p[\psi(\phi), t]\, F[\psi(\phi)] \,, \tag{6.60}
\]

where we have used ${\rm d}\phi/{\rm d}t = 1$. The normalization is therefore

\[
\int_0^T q(\phi, t)\,{\rm d}\phi = \int_0^T p(\psi(\phi), t)\, F[\psi(\phi)]\,{\rm d}\phi
 = \int_0^1 p(u, t)\,{\rm d}u = 1 \,, \tag{6.61}
\]

as expected.

We now want to derive the continuity equation for the phase density $q(\phi, t)$. We start from the continuity equation (6.20) for the membrane potential densities, which in the absence of noise reduces to

\[
\frac{\partial}{\partial t} p(u,t) = -\frac{\partial}{\partial u}\left[ p(u,t)\,F(u)\right]
 \quad \text{for } 0 < u < 1 \,. \tag{6.62}
\]

The term in square brackets is the drift current with F(u) = du/dt. We use the product rule to evaluate the derivative on the right-hand side and multiply by F(u). The result is

\[
F(u)\,\frac{\partial}{\partial t} p(u,t) = -F^2(u)\,\frac{\partial}{\partial u} p(u,t)
 - F(u)\,p(u,t)\,\frac{\partial}{\partial u} F(u)
 \quad \text{for } 0 < u < 1 \,. \tag{6.63}
\]

The left-hand side is identical to $\frac{\partial}{\partial t} q(\phi, t)$; cf. Eq. (6.60). Taking the partial derivative of Eq. (6.60) with respect to $\phi$ yields, up to a sign, the right-hand side of (6.63). Thus Eq. (6.63) can be rewritten as

\[
\frac{\partial}{\partial t} q(\phi, t) = -\frac{\partial}{\partial \phi} q(\phi, t)
 \quad \text{for } 0 < \phi < T \,; \tag{6.64}
\]

cf. (Abbott and van Vreeswijk, 1993). The phase variable $\phi$ plays a role similar to that of the refractory variable $r$. In fact, Eq. (6.64) is identical to the noise-free refractory density equation (6.43). We emphasize, however, that phase variables are restricted to periodic behavior and therefore require constant super-threshold input, in contrast to refractory densities.

6.2.3.2 From membrane potential densities to refractory densities (*)

In this paragraph we want to show the formal relation between the dynamics of $p(u, t)$ and the evolution of the refractory densities $q(r, t)$. We focus on a population of standard integrate-and-fire neurons with escape noise. The equation for the membrane potential,

\[
u(t\,|\,\hat{t}) = \eta_0\,\exp\!\left( -\frac{t - \hat{t}}{\tau_m}\right)
 + \frac{R}{\tau_m}\int_{\hat{t}}^{t} \exp\!\left( -\frac{t - t'}{\tau_m}\right) I(t')\,{\rm d}t' \,, \tag{6.65}
\]

can be used to define a transformation from voltage to refractory variables, $u \longrightarrow r = t - \hat{t}$. It turns out that the final equations are even simpler if we take $\hat{t}$ instead of $r$ as our new variable. We therefore consider the transformation $u \longrightarrow \hat{t}$.

Before we start, we calculate the derivatives of Eq. (6.65). The derivative with respect to $t$ yields $\partial u/\partial t = [-u + R\,I(t)]/\tau_m$, as expected for integrate-and-fire neurons. The derivative with respect to $\hat{t}$ is

\[
\frac{\partial u}{\partial \hat{t}} = \frac{\eta_0 - R\,I(\hat{t})}{\tau_m}\,
 \exp\!\left( -\frac{t - \hat{t}}{\tau_m}\right) = F(t, \hat{t}) \,, \tag{6.66}
\]

where the function F is defined by Eq. (6.66).

The densities in the variable $\hat{t}$ are denoted as $Q(\hat{t}, t)$. From $Q(\hat{t}, t)\,{\rm d}\hat{t} = p(u, t)\,{\rm d}u$ we have

\[
Q(\hat{t}, t) = p[u(t\,|\,\hat{t}), t]\,F(t, \hat{t}) \,. \tag{6.67}
\]

We now want to show that the differential equation for the density $Q(\hat{t}, t)$ that we derived in (6.48),

\[
\frac{\partial}{\partial t} Q(\hat{t}, t) = -\rho(t\,|\,\hat{t})\,Q(\hat{t}, t)
 \quad \text{for } \hat{t} < t \,, \tag{6.68}
\]

is equivalent to the partial differential equation for the membrane potential densities. If we insert Eq. (6.67) into Eq. (6.68) we find

\[
\frac{\partial p}{\partial u}\,\frac{\partial u}{\partial t}\,F
 + \frac{\partial p}{\partial t}\,F + p\,\frac{\partial F}{\partial t}
 = -\rho\,p\,F \,. \tag{6.69}
\]

For the linear integrate-and-fire neuron, the time $t$ enters $F(t, \hat{t})$ only through the exponential factor, so that $\partial F/\partial t = -F/\tau_m$. Furthermore, for $R\,I(t) > \eta_0$ (for all $t$) we have $F \ne 0$. Thus we can divide (6.69) by $F$ and rewrite Eq. (6.69) in the form

\[
\frac{\partial p(u,t)}{\partial t} = -\frac{\partial}{\partial u}\left[ \frac{-u + R\,I(t)}{\tau_m}\,p(u,t)\right]
 - f(u - \vartheta)\,p(u,t) \quad \text{for } u_r < u < \vartheta \,, \tag{6.70}
\]

where we have used the definition of the hazard via the escape function, $\rho(t\,|\,\hat{t}) = f[u(t\,|\,\hat{t}) - \vartheta]$, and the definition of the reset potential $u_r = \eta_0$. If we compare Eq. (6.70) with the Fokker-Planck equation (6.21), we see that the main difference is the treatment of the noise. For noise-free integrate-and-fire neurons (i.e., $\rho(t\,|\,\hat{t}) = 0$ for $u \ne \vartheta$), the equation (6.21) for the membrane potential densities is therefore equivalent to the density equation $\partial Q(\hat{t}, t)/\partial t = 0$; cf. Eq. (6.68).

