12.3 Sequence Learning

It has been recognized for a long time that asymmetric Hebbian plasticity is well suited to store spatio-temporal patterns in a neuronal network (Gerstner et al., 1993b; Minai and Levy, 1993; Herz et al., 1989; Hertz and Prugel-Bennet, 1996; Sompolinsky and Kanter, 1986). In standard sequence learning models, where groups of neurons are trained to fire one after the other, external input is used during an initial training period to induce the neurons to fire in the desired spatio-temporal spike pattern. Let us suppose that neuron j fires shortly before neuron i. Hebbian synaptic plasticity with an asymmetric learning window will thus strengthen the synapse from neuron j to neuron i. After successful training, the weights are kept fixed and the network is able to reproduce the learnt spatio-temporal spike pattern even in the absence of the external input, because spikes of neuron j will stimulate neuron i and hence help to `recall' the sequence of firing. The resulting network architecture is equivalent to a synfire chain (Abeles, 1991) with feedforward connectivity; cf. Section 9.4.
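To make the idea concrete, the following sketch implements a pair-based update with a purely causal, exponentially decaying window: the synapse from j to i is potentiated whenever j fires shortly before i. The function name, parameter values, and exponential window shape are illustrative assumptions, not the specific rule used in the references above.

```python
import numpy as np

def asymmetric_hebbian_update(weights, spike_times, A_plus=0.01, tau=5.0, t_max=20.0):
    """Potentiate w[i, j] whenever presynaptic neuron j fires shortly before neuron i.

    Pair-based, purely causal update: pre-before-post pairs within t_max ms
    strengthen the synapse j -> i with an exponentially decaying amplitude.
    """
    for i, post_spikes in enumerate(spike_times):
        for j, pre_spikes in enumerate(spike_times):
            if i == j:
                continue
            for t_post in post_spikes:
                for t_pre in pre_spikes:
                    dt = t_post - t_pre              # > 0 means "pre before post"
                    if 0.0 < dt < t_max:
                        weights[i, j] += A_plus * np.exp(-dt / tau)
    return weights

# Toy sequence: four groups of two neurons, group k firing at t = 5*k ms.
spike_times = [[5.0 * (n // 2)] for n in range(8)]
w = asymmetric_hebbian_update(np.zeros((8, 8)), spike_times)
# Connections from group k to group k+1 end up strongest, so the stored
# sequence can be recalled by stimulating the first group alone.
```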

Figure 12.8: Sequence learning. A. In a sequence, different groups of neurons (1, 2, 3, 4) are activated one after the other. During learning the sequence is imposed by external stimuli. B. A presynaptic neuron j in one of the groups is connected to the postsynaptic neuron i in another group via several synapses with different axonal transmission delays $\Delta_1$, $\Delta_2$, $\Delta_3$. Initially, external stimulation by short current pulses (left) causes pre- and postsynaptic spikes (short vertical bars) at times tj(f) and ti(f), respectively. The presynaptic spike generates three EPSPs with different delays (middle). If a Hebbian learning rule strengthens the connection with transmission delay $\Delta_2$, a presynaptic spike at tj(f) can later cause a postsynaptic spike at ti(f) with approximately the same time difference ti(f) - tj(f) as imposed during the initial stimulation. The bottom graph shows a hypothetical sharply peaked learning window (dashed line) and a more realistic window with two phases (solid line). The maximum of the learning window is located such that the connection with transmission delay $\Delta_2$ is maximally reinforced. tpre indicates the presynaptic spike arrival time. (Schematic figure.)

The network should be able to recall the learnt sequence at the correct speed. This is most easily achieved if each pair of neurons has several connections with a broad distribution of delays; cf. Fig. 12.8B. We assume that neuron i receives input from neuron j via three connections with different axonal transmission delays $\Delta_1$, $\Delta_2$, and $\Delta_3$. A single presynaptic action potential fired at time tj(f) therefore evokes three EPSPs, which start after the presynaptic spike arrival times tpre = tj(f) + $\Delta_1$, tj(f) + $\Delta_2$, and tj(f) + $\Delta_3$, respectively. In order to preserve the timing, Hebbian learning should maximally reinforce the connection that could have been causal for the postsynaptic spike at time ti(f). In the low-noise limit a postsynaptic action potential is most likely triggered during the rising phase of the EPSP, while in the high-noise limit it occurs most likely at the time when the EPSP reaches its maximum; cf. Section 7.4.1, in particular Fig. 7.12. If we denote the rise time of the EPSP by $\delta^{\rm rise}$ and the time difference between presynaptic spike arrival and postsynaptic firing by s = tpre - ti(f), then the learning window W(s) should have its maximum s* in the range $0.5\,\delta^{\rm rise} < -s^* < \delta^{\rm rise}$ (Gerstner et al., 1993b; Herz et al., 1989; Gerstner et al., 1996a; Senn et al., 2001a). We call this the causality condition of Hebbian learning.
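A minimal sketch of how the causality condition singles out one delay line, assuming for illustration a window maximum at s* = -0.75 $\delta^{\rm rise}$ and pre/post spike times imposed by the stimulus; the function name and numerical values are hypothetical.

```python
import numpy as np

def reinforced_delay_line(t_j, t_i, delays, delta_rise=2.0):
    """Return the axonal delay whose EPSP best satisfies the causality condition.

    For each delay the presynaptic spike arrives at t_pre = t_j + delay, giving
    s = t_pre - t_i.  The learning window is assumed to peak at
    s* = -0.75 * delta_rise, i.e. within 0.5*delta_rise < -s* < delta_rise.
    """
    s_star = -0.75 * delta_rise
    s = np.array([t_j + d - t_i for d in delays])
    return delays[int(np.argmin(np.abs(s - s_star)))]

# Pre spike at 0 ms, post spike at 4 ms (both imposed by the external stimulus),
# delay lines of 1, 2.5 and 6 ms: the 2.5 ms line is closest to "causal" and
# would therefore be reinforced most strongly.
best_delay = reinforced_delay_line(t_j=0.0, t_i=4.0, delays=[1.0, 2.5, 6.0])
```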

In early papers on sequence learning, it was assumed that the learning window is sharply peaked at s*, so that only connections with the optimal delay are strengthened (Gerstner et al., 1993b; Herz et al., 1989). It is, however, also possible to achieve selective reinforcement of the optimal delay lines with a broader learning window, if a competition mechanism between different synapses leading onto the same postsynaptic neuron is implemented (Gerstner et al., 1996a; Senn et al., 2001a). As we have seen in Section 11.2.3, synaptic competition in a stochastically firing network of spiking neurons can be induced by a stabilization of the postsynaptic firing rate.
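The following sketch illustrates the competitive ingredient in its simplest possible form: the incoming weights of each postsynaptic neuron are rescaled to a fixed sum, so that one synapse can only grow at the expense of the others. This explicit normalization is a crude stand-in for the rate-stabilization mechanism of Section 11.2.3, not the mechanism itself.

```python
import numpy as np

def normalize_incoming(weights, target_sum=1.0):
    """Rescale the incoming weights of every postsynaptic neuron to a fixed sum.

    A synapse can then only grow at the expense of the other synapses converging
    onto the same neuron, which is the essence of synaptic competition.  Explicit
    rescaling is only a stand-in here; the mechanism discussed in Section 11.2.3
    achieves competition through stabilization of the postsynaptic firing rate.
    """
    sums = weights.sum(axis=1, keepdims=True)   # total input weight per postsynaptic neuron
    sums[sums == 0.0] = 1.0                     # avoid division by zero for silent rows
    return weights * (target_sum / sums)
```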


12.3.0.1 Example: Coding by spatio-temporal spike patterns

A network of N = 1000 spiking neurons has been trained on three spatio-temporal patterns that are defined with a temporal resolution of one millisecond. Each pattern consists of a sequence of spikes from different neurons during a time window of T = 40 time steps, i.e., 40 ms. The sequence is then repeated. A spike pattern $\mu$ ($1 \le \mu \le 3$) is defined here by exactly one firing time ti(f)($\mu$) for each single neuron $1 \le i \le N$. The firing time is drawn from a random distribution with uniform probability p = 0.025 for all discrete time steps ti(f) $\in$ {1, 2, ..., 40}. Thus, in an ideal and noiseless pattern all neurons fire regularly with a rate of 25 Hz, but the firing of different neurons is randomly correlated.
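Generating such patterns is straightforward; the sketch below draws one firing time per neuron and pattern, exactly as described above. The variable names and the random seed are of course arbitrary choices for this illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N, T, P = 1000, 40, 3     # neurons, pattern length in time steps (1 ms each), patterns

# firing_time[mu, i] is the single firing time of neuron i in pattern mu, drawn
# uniformly from the discrete time steps 1..40.  With one spike per 40 ms cycle,
# every neuron fires at 25 Hz when the pattern is repeated.
firing_time = rng.integers(1, T + 1, size=(P, N))
```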

During the training session all spike patterns $1 \le \mu \le 3$ are impressed on the neurons and the synaptic efficacies are adjusted according to a Hebbian learning rule with a suitable time window. In order to check whether the patterns are now stable attractors of the neuronal dynamics, retrieval of the patterns has to be studied. A retrieval session is started by a short external stimulus of duration tinit = 5 ms. It consists of a spatio-temporal sequence of short pulses that initializes the network during 5 ms in a state consistent with one of the learnt patterns. The pattern $\mu$ that is matched should then be completed and cyclically retrieved.
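As a rough illustration of the training step, the sketch below accumulates, for every ordered pair of neurons, a weight change given by a learning window evaluated at the cyclic delay between the presynaptic and postsynaptic spike of each pattern. The window shape, the cut-off `t_max`, and the function name are assumptions for this sketch, not the rule actually used for Fig. 12.9.

```python
import numpy as np

def train_on_patterns(firing_time, window, T=40, t_max=10):
    """Hebbian training on cyclic spatio-temporal patterns (illustrative sketch).

    For every ordered pair (presynaptic j, postsynaptic i) and every pattern, the
    weight w[i, j] is increased by window(dt), where dt is the cyclic delay from
    the spike of j to the next spike of i within the 40 ms cycle.
    """
    P, N = firing_time.shape
    w = np.zeros((N, N))
    for mu in range(P):
        t = firing_time[mu].astype(float)
        dt = (t[:, None] - t[None, :]) % T       # dt[i, j]: delay from j's spike to i's spike
        mask = (dt > 0) & (dt <= t_max)
        w += np.where(mask, window(dt), 0.0)
    np.fill_diagonal(w, 0.0)                     # no self-connections
    return w

rng = np.random.default_rng(seed=42)
firing_time = rng.integers(1, 41, size=(3, 1000))
# Window peaking about 2 ms after presynaptic spike arrival (illustrative shape only).
w = train_on_patterns(firing_time, window=lambda dt: np.exp(-(dt - 2.0) ** 2))
```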

Figure 12.9: Retrieval of spatio-temporal spike patterns. See text for details; taken from Gerstner et al. (1993b).

The results of three retrieval sessions with different stimuli are shown in Fig. 12.9A-C. For each pattern, the ensemble activity (spatial average) during retrieval is plotted in (i), the spike pattern of a few selected neurons during the retrieval session is shown in (ii), and the mean firing rates in (iii).

Let us first turn to the spatio-temporal spike patterns, Fig. 12.9A(ii)-C(ii). We have selected 30 neurons whose labels are plotted along the y-axis. Time is plotted along the x-axis. The origin t = 0 marks the end of the stimulus; for all t $\ge$ 0 the external input vanishes. All spikes of a given neuron i appear as black dots along a line parallel to the x-axis. For ease of visualization of the different spike patterns, we have used a little trick. Neurons with index i = 1,..., 10 did not learn random patterns but `meaningful' objects such as diagonal stripes, so that the different spike patterns can be easily recognized.

If we analyze the series of Fig. 12.9A-C, a number of conclusions regarding potential neuronal coding schemes can be drawn. First of all, it is indeed possible to store and retrieve spatio-temporal spike patterns with a time resolution of 1 ms in a network of spiking neurons. This may seem remarkable in view of the typical duration of an EPSP (approximately 5-15 ms), which is much longer, but it can easily be explained since (i) firing occurs, at least in the low-noise limit, during the rise time of the EPSP, which is typically much shorter (1-5 ms) than its total duration, and (ii) after firing a neuron becomes refractory, so that it cannot emit further spikes that would degrade the temporal precision; cf. Section 9.4.

Second, we see from Fig. 12.9 that several patterns can be stored in the same network. These patterns are defined by their spatio-temporal correlations and cannot be distinguished by mean firing rates or ensemble activity. To illustrate this point, let us count the number of spikes along a horizontal line and divide by the total recording time (T = 200 ms). This procedure yields the mean firing rate of each neuron, plotted in subgraphs (iii) to the right of the spike patterns. We see that all neurons have approximately the same firing rate of 25 Hz. Thus, if we consider the mean firing rate only, we cannot detect any significant structure in the firing behavior of the neurons. Instead of averaging over time we could also average over space. If we count the number of spikes in every millisecond (along a vertical line in the spike raster) and divide by the total number of neurons, we obtain the ensemble activity plotted in (i). We see that immediately after the stimulus the ensemble activity is high ($\approx$ 5%), but it then settles rapidly to an average of 2.5% with no significant structure left. Nevertheless, if we look at the spike raster (ii), we see that the network remains in a regular firing state. The specific spike pattern has been induced by the stimulus and is different for Figs. 12.9A, B, and C. Data analysis methods that are based on mean firing rates or ensemble activities would miss the information contained in the time-resolved spike raster. Indeed, the above examples clearly show that single spikes can carry important information.
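The two averages just described are easy to compute from a spike raster; the sketch below does exactly that, assuming (as an illustration) that the raster is stored as a binary neuron-by-time array.

```python
import numpy as np

def rate_and_activity(raster, dt_ms=1.0):
    """Time-averaged firing rates and ensemble activity from a binary spike raster.

    raster[i, t] = 1 if neuron i spiked in time bin t (bins of dt_ms milliseconds).
    The mean rate corresponds to counting spikes along a horizontal line and
    dividing by the recording time; the ensemble activity is the fraction of
    neurons that spike in each time bin (a vertical line in the raster).
    """
    recording_time_s = raster.shape[1] * dt_ms / 1000.0
    mean_rate_hz = raster.sum(axis=1) / recording_time_s   # one value per neuron
    ensemble_activity = raster.mean(axis=0)                # one value per time bin
    return mean_rate_hz, ensemble_activity

# A neuron that spikes 5 times during T = 200 ms has a mean rate of 25 Hz, and a
# bin in which 25 of 1000 neurons fire has an ensemble activity of 0.025.
```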

Does this imply that cortical spike activity is actually a huge spatio-temporal pattern that is stored in the synaptic connectivity? Do neurons use a temporal code at a millisecond time scale? Do different brain states correspond to different spatio-temporal patterns that are recalled from storage? The answer is most likely negative - for a number of reasons. Simulations, for example, show that the spatio-temporal patterns presented during learning do become stable attractors of the network dynamics. However, if we try to store more than, say, 20 patterns in a network of 1000 neurons, the dynamics rapidly breaks down. Since a single spatio-temporal pattern of 40 ms duration contains 40 different spatial patterns that are retrieved one after the other, a lot of information needs to be stored in the synaptic connections to cover even a short interval of time.

On the other hand, specialized structures such as delayed reverberating loops in the olivo-cerebellar system that operate intrinsically at a time scale of 100 ms instead of 1 ms may actually rely on spatio-temporal spike patterns as a neuronal code; cf. Section 8.3.3. Furthermore, transient spatio-temporal spike activity without stable attractors could play a role for dynamical short-term memory and information processing (Kistler and De Zeeuw, 2002; Maass et al., 2002).

The final decision whether the brain uses spatio-temporal spike patterns as a code has to come from experiments. The simulations shown in Fig. 12.9 suggest that during retrieval of a spatio-temporal spike pattern neuronal firing times are strongly correlated, a result which should be clearly visible in experimental data. On the other hand, recent analyses of experimental spike trains show that correlations between the firing times of different neurons are rather weak and can be explained by stochastic models (Oram et al., 1999). Thus, spatio-temporal spike patterns with millisecond resolution are probably not a widespread coding scheme in the brain.

