6.4 Asynchronous firing

We define asynchronous firing of a neuronal population as a macroscopic firing state with constant activity A(t) = A0. In this section we use the population activity equations (6.73) and (6.75) to study the existence of asynchronous firing states in a homogeneous population of spiking neurons. We will see that the neuronal gain function plays an important role. More specifically, we will show that the knowledge of the single-neuron gain function g(I0) and the coupling parameter J0 is sufficient to determine the activity A0 during asynchronous firing.

Figure 6.9: Asynchronous firing. For a sufficient amount of noise, the population activity in a network of independent spiking neurons with constant external input approaches a stationary value A0. A. The population activity of 1000 neurons, filtered with a time window of 1 ms duration. B. Same parameters as before, but the size of the population has been increased to N = 4000. The fluctuations decrease with increasing N, and the activity approaches the value A0 = 50 Hz predicted by theory.

6.4.1 Stationary Activity and Mean Firing Rate

In this section we will show that during asynchronous firing the population activity A0 is equal to the mean firing rate of a single neuron in the population. To do so, we search for a stationary solution A(t) = A0 of the population equation (6.73). Given constant activity A0 and constant external input Iext0, the total input I0 to each neuron is constant. In this case, the state of each neuron depends only on $t - \hat{t}$, i.e., the time since its last output spike. We are thus in the situation of stationary renewal theory.

In the stationary state, the survivor function and the interval distribution cannot depend explicitly upon the absolute time, but only on the time difference $s = t - \hat{t}$. Hence we set

$S_I(\hat{t} + s \,|\, \hat{t}) \;\longrightarrow\; S_0(s)$  (6.92)
$P_I(\hat{t} + s \,|\, \hat{t}) \;\longrightarrow\; P_0(s)$  (6.93)

The value of the stationary activity A0 follows now directly from the normalization condition (6.73),

$1 = A_0 \int_0^\infty S_0(s)\,{\rm d}s \,.$  (6.94)

We use dS0(s)/ds = - P0(s) and integrate by parts,

$1 = A_0 \int_0^\infty S_0(s)\,{\rm d}s = A_0 \int_0^\infty s\, P_0(s)\,{\rm d}s \,,$  (6.95)

where we have exploited the fact that $s\, S_0(s)$ vanishes for $s = 0$ and for $s \to \infty$. We recall from Chapter 5 that

$\int_0^\infty s\, P_0(s)\,{\rm d}s = \langle T \rangle$  (6.96)

is the mean interspike interval. Hence

$A_0 = \frac{1}{\langle T \rangle} \,.$  (6.97)

This equation has an intuitive interpretation: If everything is constant, then averaging over time (for a single neuron) is the same as averaging over a population of identical neurons.
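
For a quick numerical check of Eqs. (6.94)-(6.97), the following sketch uses an assumed survivor function, namely that of a Poisson neuron with absolute refractory period and constant escape rate; the parameter values are illustrative and not taken from the text. The activity obtained from the normalization condition coincides with the inverse mean interval.

    import numpy as np

    # Assumed stationary renewal process for illustration: a Poisson neuron with
    # absolute refractory period delta and constant escape rate rho thereafter.
    delta, rho = 5e-3, 100.0                    # refractory period [s], rate [1/s]

    s = np.linspace(0.0, 0.5, 200001)           # time since the last spike [s]
    S0 = np.where(s < delta, 1.0, np.exp(-rho * (s - delta)))   # survivor function

    A0 = 1.0 / np.trapz(S0, s)                  # Eq. (6.94): 1 = A0 * int S0(s) ds
    T_mean = delta + 1.0 / rho                  # mean interval <T> of this process

    print(f"A0 from normalization: {A0:.2f} Hz")
    print(f"1/<T>                : {1.0 / T_mean:.2f} Hz")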

6.4.1.1 Example: Comparison with simulations

Figure 6.10: Spike trains (black dots) of selected neurons as a function of time. A. Eight neurons out of the 1000 neurons in the simulation of Fig. 6.9A have been chosen randomly. If we sum vertically over the spikes of all 1000 neurons within time bins of 1 ms, we retrieve the plot of Fig. 6.9A. Note that the intervals vary considerably, since the noise level is high. The mean interval is $\langle T \rangle$ = 20 ms. B. Noise-free model network with the same mean firing rate. All neurons fire regularly at 50 Hz, but the firing times of different neurons are shifted with respect to each other. Neuron indices have been ordered so as to make the temporal structure visible.

How can we compare the population activity A0 calculated in Eq. (6.97) with simulation results? In a simulation of a population containing a finite number N of spiking neurons, the observed activity fluctuates. Formally, the (observable) activity A(t) has been defined in Eq. (6.1) as a sum over $\delta$ functions. The activity A0 predicted by the theory is the expectation value of the observed activity. Mathematically speaking, the observed activity A converges for $N \to \infty$ in the weak topology to its expectation value. More practically, this implies that we should convolve the observed activity with a continuous test function $\gamma(s)$ before comparing it with A0. We take a function $\gamma$ with the normalization $\int_0^{s^{\rm max}} \gamma(s)\,{\rm d}s = 1$. For the sake of simplicity we assume furthermore that $\gamma$ has finite support, so that $\gamma(s) = 0$ for $s < 0$ or $s > s^{\rm max}$. We define

$\overline{A}(t) = \int_0^{s^{\rm max}} \gamma(s)\, A(t - s)\,{\rm d}s \,.$  (6.98)

The firing is asynchronous if the averaged fluctuations $\langle | \overline{A}(t) - A_0 |^2 \rangle$ decrease with increasing N; cf. Fig. 6.9.

For the purpose of illustration, we have plotted in Fig. 6.10A the spikes of eight neurons of the network simulation shown in Fig. 6.9. The mean interspike interval of a single neuron is $\langle T \rangle$ = 20 ms, which corresponds to a population activity of A0 = 50 Hz.
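
A minimal simulation sketch of this comparison is given below. It replaces the coupled network of Fig. 6.9 by N independent gamma-renewal neurons with mean interval 20 ms (an assumption made only to keep the sketch self-contained), bins the spikes as in Eq. (6.1), and smooths the result with a box-shaped test function $\gamma(s)$ as in Eq. (6.98); the fluctuations around A0 = 50 Hz shrink as N grows.

    import numpy as np

    rng = np.random.default_rng(0)

    def population_activity(N, T_mean=0.020, cv=0.5, duration=2.0, dt=1e-3):
        """Binned activity A(t) [Hz] of N independent gamma-renewal neurons."""
        shape = 1.0 / cv**2                     # gamma shape from the interval CV
        scale = T_mean / shape
        counts = np.zeros(int(duration / dt))
        n_iv = int(3 * duration / T_mean)       # enough intervals to cover the run
        for _ in range(N):
            t = np.cumsum(rng.gamma(shape, scale, size=n_iv))
            idx = (t[t < duration] / dt).astype(int)
            np.add.at(counts, idx, 1)
        return counts / (N * dt)                # spikes per neuron and unit time, Eq. (6.1)

    for N in (1000, 4000):
        A = population_activity(N)
        A_bar = np.convolve(A, np.ones(5) / 5.0, mode="same")    # Eq. (6.98), 5 ms box kernel
        print(N, round(A_bar.mean(), 1), round(A_bar.std(), 1))  # mean -> 50 Hz, std shrinks with N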


6.4.2 Gain Function and Fixed Points of the Activity

The gain function of a neuron is the firing rate $\langle T \rangle^{-1}$ as a function of its input current $I_0$. In the previous subsection, we have seen that during asynchronous firing the firing rate equals the population activity A0. We thus have

$A_0 = g(I_0) \,.$  (6.99)

Recall that the total input I0 to a neuron consists of the external input Iext(t) and a component that is due to the interaction of the neurons within the population. In the case of the simple Spike Response Model (SRM0), the input potential is constant for stationary activity A(t) = A0 and constant external input $I^{\rm ext}(t) = I_0^{\rm ext}$,

$h(t) = J_0\, A_0 \int_0^\infty \epsilon_0(s)\,{\rm d}s + I_0^{\rm ext} \int_0^\infty \kappa_0(s)\,{\rm d}s \;\equiv\; h_0 \,.$  (6.100)

The constant factor $\int_0^\infty \epsilon_0(s)\,{\rm d}s$ can be absorbed in the definition of $J_0$ and will be dropped in the following. The coupling to the external current is given by the input resistance, $\int_0^\infty \kappa_0(s)\,{\rm d}s = R$, so that

$h_0 = J_0\, A_0 + R\, I_0^{\rm ext} \,.$  (6.101)

This, however, is an input potential rather than an input current. To be compatible with the definition of the gain function, we should divide the above expression by R so as to obtain the total input current; for the sake of simplicity we set R = 1 in the following. Together with Eq. (6.99) we thus find the following equation for the population activity A0,

$A_0 = g\!\left( J_0\, A_0 + I_0^{\rm ext} \right) .$  (6.102)

This is the central result of this section, which is not only valid for SRM0 neurons, but also holds for other spiking neuron models.

Figure 6.11 shows a graphical solution of Eq. (6.102): the gain function $A_0 = g(I_0)$, i.e., the firing rate $\langle T \rangle^{-1}$ as a function of the input I0, is plotted together with the straight line that gives the total input I0 as a function of the activity A0. The intersections of the two curves yield fixed points of the activity A0.

Figure 6.11: Graphical solution for the fixed point A0 of the activity in a population of SRM0 neurons. The intersection of the gain function A0 = g(I0) (solid line) with the straight line A0 = [I0 - Iext0]/J0 (dotted) gives the value of the activity A0. Depending on the parameters, several solutions may coexist (dashed line).

As an aside we note that the graphical construction is identical to that of the Curie-Weiss theory of ferromagnetism, which can be found in any physics textbook. More generally, the structure of the equations corresponds to the mean-field solution of a system with feedback. As shown in Fig. 6.11, several solutions may coexist. We cannot conclude from the figure whether one or several of these solutions are stable. In fact, it is possible that all solutions are unstable. In the latter case, the network leaves the state of asynchronous firing and evolves towards an oscillatory or quiescent state. The stability analysis of the asynchronous state is deferred to Chapter 8.
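
Numerically, the fixed points of Eq. (6.102) can be found by locating the zero crossings of F(A) = A - g(J0 A + I0ext). The sketch below uses an assumed sigmoidal gain function with illustrative parameters (the values of beta, A_max, J0, and I0ext are not taken from the text); for this particular choice three fixed points coexist, as in Fig. 6.11.

    import numpy as np
    from scipy.optimize import brentq

    def g(I, beta=8.0, I_theta=1.0, A_max=200.0):
        """Assumed sigmoidal gain function [Hz] of the input current I."""
        return A_max / (1.0 + np.exp(-beta * (I - I_theta)))

    J0, I_ext = 0.01, 0.3                       # illustrative coupling and external current

    F = lambda A: A - g(J0 * A + I_ext)         # zeros of F are the fixed points A0

    A_grid = np.linspace(0.0, 250.0, 2501)      # scan for sign changes, then refine
    sign = np.sign(F(A_grid))
    fixed_points = [brentq(F, A_grid[i], A_grid[i + 1])
                    for i in np.where(np.diff(sign) != 0)[0]]
    print("fixed points A0 [Hz]:", [round(A, 2) for A in fixed_points])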

6.4.2.1 Example: SRM0 neurons with escape noise

Consider a population of (noisy) SRM0 neurons with escape rate f, e.g., $f(u - \vartheta) = \exp[\beta\, (u - \vartheta)]$; cf. Section 5.3. The stationary activity A0 in the presence of a constant input potential $h_0 = R\, I_0$ is given by

$A_0 = \left[ \int_0^\infty s\, P_0(s)\,{\rm d}s \right]^{-1} = \left[ \int_0^\infty s\, f[u(s) - \vartheta]\, \exp\!\left\{ - \int_0^s f[u(s') - \vartheta]\,{\rm d}s' \right\} {\rm d}s \right]^{-1} ,$  (6.103)

where $u(s) = \eta(s) + h_0$. Figure 6.12A shows the activity as a function of the total input current I0. Note that the shape of the gain function depends on the noise level $\beta$. The stationary activity A0 in a population with lateral coupling $J_0 \ne 0$ is given by the intersections of the gain function g(I0) with the straight line that gives the total input I0 as a function of the activity A0; cf. Fig. 6.12A.
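
A sketch of how Eq. (6.103) can be evaluated numerically is given below. The exponential refractory kernel $\eta(s) = -\eta_0 \exp(-s/\tau)$ and the escape-rate prefactor $\nu_0$ are assumptions introduced only to make the example self-contained; they are not specified in the text.

    import numpy as np

    # beta is the noise parameter; the refractory kernel eta(s) = -eta0*exp(-s/tau)
    # and the escape-rate prefactor nu0 are assumptions made for this sketch only.
    beta, theta = 5.0, 1.0
    tau, eta0, nu0 = 0.010, 1.0, 100.0          # [s], reset amplitude, rate scale [Hz]

    def A0(h0, dt=1e-5, s_max=1.0):
        """Stationary activity (inverse mean interval), Eq. (6.103)."""
        s = np.arange(dt, s_max, dt)
        u = h0 - eta0 * np.exp(-s / tau)        # u(s) = eta(s) + h0
        rho = nu0 * np.exp(beta * (u - theta))  # escape rate f[u(s) - theta]
        survivor = np.exp(-np.cumsum(rho) * dt) # exp(-int_0^s rho(s') ds')
        P0 = rho * survivor                     # interval distribution P0(s)
        return 1.0 / np.sum(s * P0 * dt)        # [int s P0(s) ds]^(-1)

    for h0 in (0.8, 1.0, 1.2):
        print(f"h0 = {h0:.1f}  ->  A0 = {A0(h0):.1f} Hz")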

6.4.2.2 Example: Integrate-and-fire model with diffusive noise

In the limit of diffusive noise the stationary activity is

$A_0 = \left\{ \tau_m \sqrt{\pi} \int_{(u_r - h_0)/\sigma}^{(\vartheta - h_0)/\sigma} \exp\!\left(u^2\right) \left[ 1 + {\rm erf}(u) \right] {\rm d}u \right\}^{-1} ,$  (6.104)

where $\sigma^2$ is the variance of the noise; cf. Eq. (6.33). In an asynchronously firing population of N integrate-and-fire neurons coupled by synapses with efficacy $w_{ij} = J_0/N$ and normalized postsynaptic currents ($\int_0^\infty \alpha(s)\,{\rm d}s = 1$), the total input current is

$I_0 = I_0^{\rm ext} + J_0\, A_0 \,;$  (6.105)

cf. Eq. (6.5). The fixed points for the population activity are once more determined by the intersections of these two functions; cf. Fig. 6.12B.
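
The sketch below evaluates the single-neuron rate of Eq. (6.104) with a standard quadrature routine and then solves the self-consistency condition A0 = g(I0ext + J0 A0) by root finding. The parameters $\vartheta$ = 1, R = 1, $\tau_m$ = 10 ms follow Fig. 6.12B; the coupling J0 and the external current are illustrative assumptions.

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq
    from scipy.special import erf

    tau_m, theta, u_r, R = 0.010, 1.0, 0.0, 1.0     # as in Fig. 6.12B

    def rate(I0, sigma):
        """Single-neuron firing rate [Hz] for constant current I0, Eq. (6.104)."""
        h0 = R * I0
        integrand = lambda u: np.exp(u**2) * (1.0 + erf(u))
        val, _ = quad(integrand, (u_r - h0) / sigma, (theta - h0) / sigma)
        return 1.0 / (tau_m * np.sqrt(np.pi) * val)

    # gain function for several noise levels (compare the solid lines of Fig. 6.12B)
    for sigma in (1.0, 0.5, 0.2):
        print(f"sigma = {sigma}: g(I0 = 0.9) = {rate(0.9, sigma):.1f} Hz")

    # self-consistent activity, Eq. (6.105): A0 = g(I0_ext + J0*A0)
    J0, I_ext, sigma = 0.002, 0.9, 0.2              # illustrative coupling and drive
    A0 = brentq(lambda A: A - rate(I_ext + J0 * A, sigma), 0.0, 200.0)
    print(f"fixed point: A0 = {A0:.1f} Hz")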

Figure 6.12: A. Determination of the population activity A0 for noisy SRM0 neurons with exponential escape rate $f(u) = \exp[\beta\, (u - \vartheta)]$. Depending on the noise level, there are one or several intersections between the gain functions (solid lines) and the dashed line. Noise parameters are $\beta$ = 2, 5, and 10. B. Similar construction for integrate-and-fire neurons with diffusive noise. The solid lines show the single-neuron firing rate as a function of the constant input current I0 for four different noise levels, viz. $\sigma$ = 1.0, 0.5, 0.1, and 0.0 (from top to bottom). The intersections with the dashed line of slope 1/J0 give potential solutions for the stationary activity A0 in a population with excitatory coupling J0. Other parameters: $\vartheta$ = 1, R = 1, $\tau$ = 10 ms.


6.4.3 Low-Connectivity Networks

In the preceding subsections we have studied the stationary state of a population of neurons for a given noise level. The noise was modeled either as diffusive noise mimicking stochastic spike arrival or as escape noise mimicking a noisy threshold. In both cases noise was added explicitly to the model. In this section we discuss how a network of deterministic neurons with fixed random connectivity can generate its own noise. In particular, we will focus on spontaneous activity and argue that there exist stationary states of asynchronous firing at low firing rates which have broad distributions of interspike intervals even though individual neurons are deterministic. This point has been emphasized by van Vreeswijk and Sompolinsky (1996, 1998), who used a network of binary neurons to demonstrate broad interval distributions in deterministic networks. Amit and Brunel (1997a,b) were the first to analyze a network of integrate-and-fire neurons with fixed random connectivity. While they allowed for an additional fluctuating input current, the major part of the fluctuations was in fact generated by the network itself. The theory of randomly connected integrate-and-fire neurons has been further developed by Brunel and Hakim (1999). In a recent study, Brunel (2000) confirmed that asynchronous highly irregular firing can be a stable solution of the network dynamics in a completely deterministic network consisting of excitatory and inhibitory integrate-and-fire neurons. The analysis of randomly connected networks of integrate-and-fire neurons is closely related to earlier theories for random nets of formal analog or binary neurons (Nützel, 1991; Kree and Zippelius, 1991; Amari, 1972, 1974, 1977b; Crisanti and Sompolinsky, 1988; Cessac et al., 1994).

The network structure plays a central role in the arguments. While we assume that all neurons in the population are of the same type, the connectivity between the neurons in the population is not homogeneous. Rather it is random, but fixed. Each neuron in the population of N neurons receives input from C randomly selected neurons in the population. Sparse connectivity means that the ratio

$\delta = \frac{C}{N} \ll 1$  (6.106)

is a small number. Is this realistic? A typical pyramidal neuron in the cortex receives several thousand synapses from presynaptic neurons, while the total number of neurons in the cortex is much higher. Thus, globally, the cortical connectivity C/N is low. On the other hand, we may concentrate on a single column in visual cortex and define, e.g., all excitatory neurons in that column as one population. We estimate that the number N of neurons in one column is below ten thousand. Each neuron receives a large number of synapses from neurons within the same column. For a connectivity ratio of 0.1, each neuron would have to be connected to about a thousand other neurons in the same column.

As a consequence of the sparse random connectivity, two neurons i and j share only a small number of common inputs. In the limit $C/N \to 0$ the probability that neurons i and j have a common presynaptic neuron vanishes. Thus, if the presynaptic neurons fire stochastically, the input spike trains arriving at neurons i and j are independent (Kree and Zippelius, 1991; Derrida et al., 1987). In that case, the input to neurons i and j can be treated as stochastic spike arrival which, as we have seen, can be described by a diffusive noise model.
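
A small sanity check of this argument: if each neuron draws its C presynaptic partners at random from the N neurons of the population, the expected number of inputs shared by two neurons is C^2/N, which indeed vanishes in the limit C/N -> 0. The sketch below (with arbitrary illustrative values of N and C) verifies this by direct sampling.

    import numpy as np

    rng = np.random.default_rng(1)

    def shared_inputs(N, C, trials=1000):
        """Average number of common presynaptic partners of two neurons."""
        return np.mean([len(np.intersect1d(rng.choice(N, C, replace=False),
                                           rng.choice(N, C, replace=False)))
                        for _ in range(trials)])

    for N, C in ((10_000, 100), (50_000, 100)):
        print(f"N = {N}, C = {C}: shared = {shared_inputs(N, C):.2f}, "
              f"expected C^2/N = {C**2 / N:.2f}")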

The above reasoning, however, is based on the assumption that the presynaptic neurons (that are part of the population) fire stochastically. To make the argument self-consistent, we have to show that the firing of the postsynaptic neuron is, to a good approximation, also stochastic. The self-consistent argument will be outlined in the following.

We have seen in Chapter 5 that integrate-and-fire neurons with diffusive noise generate spike trains with a broad distribution of interspike intervals when they are driven in the sub-threshold regime. We will use this observation to construct a self-consistent solution for the stationary states of asynchronous firing.

We consider two populations, an excitatory population with NE neurons and an inhibitory population with NI neurons. We assume that excitatory and inhibitory neurons have the same parameters $\vartheta$, $\tau_m$, R, and ur. In addition, all neurons are driven by a common external current Iext. Each neuron in the population receives CE synapses from excitatory neurons with weight wE > 0 and CI synapses from inhibitory neurons with weight wI < 0. If an input spike from a presynaptic neuron j arrives at a synapse of neuron i, the membrane potential of neuron i changes by an amount $\Delta u_i = w_j$, where wj = wE if j is excitatory and wj = wI if j is inhibitory. We set

$\gamma = \frac{C_I}{C_E} \qquad {\rm and} \qquad g = - \frac{w_I}{w_E} \,.$  (6.107)

Since excitatory and inhibitory neurons receive the same number of inputs in our model, we assume that they fire with a common firing rate $ \nu$. The total input potential generated by the external current and by the lateral couplings is

$h_0 = R\, I^{\rm ext} + \tau_m \sum_j \nu_j\, w_j = h_0^{\rm ext} + \tau_m\, \nu\, w_E\, C_E\, [1 - \gamma\, g] \,.$  (6.108)

The variance of the input is given by Eq. (6.24), i.e.,
$\sigma^2 = \tau_m \sum_j \nu_j\, w_j^2 = \tau_m\, \nu\, w_E^2\, C_E\, [1 + \gamma\, g^2] \,.$  (6.109)

The stationary firing rate A0 of the population with mean input potential h0 and variance $\sigma^2$ is given by Eq. (6.33), which is repeated here for convenience,

$A_0 = \frac{1}{\tau_m} \left\{ \sqrt{\pi} \int_{(u_r - h_0)/\sigma}^{(\vartheta - h_0)/\sigma} \exp\!\left(x^2\right) \left[ 1 + {\rm erf}(x) \right] {\rm d}x \right\}^{-1} .$  (6.110)

In a stationary state we must have A0 = $\nu$. To obtain the value of A0 we must therefore solve Eqs. (6.108)-(6.110) simultaneously for $\nu$ and $\sigma$. Since the gain function, i.e., the firing rate as a function of the input potential h0, depends on the noise level $\sigma$, a simple graphical solution as in Section 6.4.2 is no longer possible. In the following paragraphs we give some examples of how to construct self-consistent solutions. Numerical solutions of Eqs. (6.108)-(6.110) have been obtained by Amit and Brunel (1997a,b). For a mixed graphical-numerical approach see Mascaro and Amit (1999).
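
One possible numerical scheme is a damped fixed-point iteration on $\nu$: given a guess for $\nu$, compute h0 and $\sigma$ from Eqs. (6.108) and (6.109), evaluate the rate from Eq. (6.110), and mix it with the previous guess. The sketch below uses the parameters of the balanced example in Section 6.4.3.1; the iteration scheme itself is an assumption made for illustration and not the procedure used by Amit and Brunel (1997a,b).

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import erf

    tau_m, theta, u_r = 0.010, 1.0, 0.0
    w_E, C_E, gamma_, g_ = 0.025, 200, 1.0, 1.0     # balanced example of Section 6.4.3.1
    h_ext = 0.8                                     # external drive R*I_ext

    def siegert(h0, sigma):
        """Single-neuron rate [Hz], Eq. (6.110)."""
        f = lambda x: np.exp(x**2) * (1.0 + erf(x))
        val, _ = quad(f, (u_r - h0) / sigma, (theta - h0) / sigma)
        return 1.0 / (tau_m * np.sqrt(np.pi) * val)

    nu = 10.0                                        # initial guess [Hz]
    for _ in range(50):
        h0 = h_ext + tau_m * nu * w_E * C_E * (1.0 - gamma_ * g_)            # Eq. (6.108)
        sigma = np.sqrt(tau_m * nu * w_E**2 * C_E * (1.0 + gamma_ * g_**2))  # Eq. (6.109)
        nu = 0.5 * nu + 0.5 * siegert(h0, sigma)     # damped update, A0 = nu
    # compare with the values quoted in Section 6.4.3.1 (nu ~ 16 Hz, sigma = 0.2)
    print(f"self-consistent rate nu = {nu:.1f} Hz, sigma = {sigma:.2f}")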

The arguments that have been developed above for low-connectivity networks can be generalized to fully connected networks with asymmetric random connectivity (Sompolinsky et al., 1988; van Vreeswijk and Sompolinsky, 1996; Ben Arous and Guionnet, 1995; Amari, 1972; Cessac et al., 1994).


6.4.3.1 Example: Balanced excitation and inhibition

In the preceding sections, we have often considered neurons driven by a mean input potential h0 = 0.8 and a noise level $\sigma$ = 0.2. Let us find connectivity parameters of our network so that $\sigma$ = 0.2 is the result of stochastic spike arrivals from presynaptic neurons within the network. As always we set R = $\vartheta$ = 1 and $\tau_m$ = 10 ms.

Figure 6.13A shows that h0 = 0.8 and $\sigma$ = 0.2 correspond to a firing rate of A0 = $\nu$ $\approx$ 16 Hz. We set wE = 0.025, i.e., 40 simultaneous spikes are necessary to make a neuron fire. Inhibition has the same strength, wI = -wE, so that g = 1. We constrain our search to solutions with CE = CI, so that $\gamma$ = 1. Thus, on average, excitation and inhibition balance each other. To obtain an average input potential of h0 = 0.8 we therefore need a constant driving current Iext = 0.8.

To arrive at $ \sigma$ = 0.2 we solve Eq. (6.109) for CE and find CE = CI = 200. Thus for this choice of the parameters the network generates enough noise to allow a stationary solution of asynchronous firing at 16Hz.
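
The value CE = 200 follows directly from solving Eq. (6.109) for CE with the numbers quoted above:

    # Solving Eq. (6.109) for C_E with sigma = 0.2, nu = 16 Hz, w_E = 0.025,
    # gamma = g = 1 and tau_m = 10 ms (the values used in this example).
    tau_m, nu, w_E, gamma_, g_, sigma = 0.010, 16.0, 0.025, 1.0, 1.0, 0.2
    C_E = sigma**2 / (tau_m * nu * w_E**2 * (1.0 + gamma_ * g_**2))
    print(C_E)   # -> 200.0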

Figure 6.13: A. Mean activity of a population of integrate-and-fire neurons with diffusive noise of $ \sigma$ = 0.2 as a function of h0 = R I0. For h0 = 0.8 the population rate is $ \nu$ $ \approx$ 16Hz (dotted line). B. Mean activity of a population of integrate-and-fire neurons with diffusive noise of $ \sigma$ = 0.54 as a function of h0 = R I0. For h0 = 0.2 the population rate is $ \nu$ = 8Hz (dotted line). The long-dashed line shows A0 = [h0 - h0ext]/Jeff with an effective coupling Jeff < 0.

Note that, for the same parameters, the inactive state in which all neurons are silent is also a solution. Using the methods discussed in this section we cannot say anything about the stability of these states. For the stability analysis see Brunel (2000) and Chapter 7.


6.4.3.2 Example: Spontaneous cortical activity

About eighty percent of the neurons in the cerebral cortex are excitatory and twenty percent inhibitory. Let us suppose that we have NE = 8000 excitatory and NI = 2000 inhibitory neurons in a cortical column. We assume random connectivity and take CE = 800, CI = 200, so that $\gamma$ = 1/4. As before, excitatory synapses have a weight wE = 0.025, i.e., an action potential can be triggered by the simultaneous arrival of 40 presynaptic spikes. If neurons are driven in the regime close to threshold, inhibition is rather strong, and we take wI = -0.125 so that g = 5. Even though we have fewer inhibitory than excitatory neurons, the mean feedback is then dominated by inhibition since $\gamma$ g > 1. We search for a consistent solution of Eqs. (6.108)-(6.110) with a spontaneous activity of $\nu$ = 8 Hz.

Given the above parameters, Eq. (6.109) yields a noise level of $\sigma$ $\approx$ 0.54. For $\nu$ = 8 Hz, the gain function of integrate-and-fire neurons gives a corresponding total input potential of h0 $\approx$ 0.2; cf. Fig. 6.13B. To attain h0 we have to apply an external stimulus h0ext = R Iext which is slightly larger than h0, since the net effect of the lateral coupling is inhibitory. Let us introduce the effective coupling Jeff = $\tau$ CE wE (1 - $\gamma$ g). Using the above parameters we find from Eq. (6.108) h0ext = h0 - Jeff A0 $\approx$ 0.6.
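
The numbers quoted in this paragraph can be reproduced directly from Eqs. (6.108) and (6.109):

    # Reproducing the numbers of this example from Eqs. (6.108) and (6.109).
    tau_m, nu = 0.010, 8.0
    w_E, C_E, gamma_, g_ = 0.025, 800, 0.25, 5.0
    h0 = 0.2                                          # read off the gain function, Fig. 6.13B

    sigma = (tau_m * nu * w_E**2 * C_E * (1 + gamma_ * g_**2)) ** 0.5   # Eq. (6.109) -> ~0.54
    J_eff = tau_m * C_E * w_E * (1 - gamma_ * g_)                       # effective coupling
    h_ext = h0 - J_eff * nu                                             # Eq. (6.108) -> ~0.6
    print(f"sigma ~ {sigma:.2f}, J_eff = {J_eff:.3f}, h0_ext ~ {h_ext:.2f}")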

The external input could, of course, be provided by (stochastic) spike arrival from other columns in the same or other areas of the brain. In this case Eq. (6.108) is to be replaced by

$h_0 = \tau_m\, \nu\, w_E\, C_E\, [1 - \gamma\, g] + \tau_m\, \nu_{\rm ext}\, w_{\rm ext}\, C_{\rm ext} \,,$  (6.111)

with Cext the number of connections that a neuron receives from neurons outside the population, wext their typical coupling strength, and $\nu_{\rm ext}$ their spike arrival rate (Amit and Brunel, 1997a,b). Due to the extra stochasticity in the input, the variance $\sigma^2$ is larger; the total variance is

$\sigma^2 = \tau_m\, \nu\, w_E^2\, C_E\, [1 + \gamma\, g^2] + \tau_m\, \nu_{\rm ext}\, w_{\rm ext}^2\, C_{\rm ext} \,.$  (6.112)

Equations (6.110), (6.111), and (6.112) can be solved numerically (Amit and Brunel, 1997a,b). The stability analysis of the solution is slightly more involved, but can be done (Brunel, 2000; Brunel and Hakim, 1999).

