11.3 Summary

The synaptic weight dynamics can be studied analytically if the weights change slowly compared to the time scale of the neuronal activity. We have seen that weight changes are driven by correlations between pre- and postsynaptic activity. More specifically, simple Hebbian learning rules can find the first principal component of a normalized input data set. If non-Hebbian terms are included, then both spike-based and rate-based learning rules can be constructed that have a stable fixed point for the sum of the synaptic weights. This fixed point leads to an intrinsic normalization of the output firing rate.
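To make the principal-component statement concrete, here is a minimal numerical sketch in Python (not taken from the text; the data set, learning rate, and variable names are illustrative). It implements Oja's rule, a Hebbian rule with a non-Hebbian decay term, for a linear neuron and compares the learned weight vector with the first principal component of the input:

import numpy as np

rng = np.random.default_rng(0)

# Zero-mean input data with anisotropic covariance (illustrative choice).
C = np.array([[2.0, 1.0],
              [1.0, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=5000)

w = rng.normal(size=2)   # initial synaptic weight vector
eta = 0.005              # learning rate: slow compared to input presentations

for x in X:
    nu = w @ x                     # output rate of a linear neuron
    w += eta * nu * (x - nu * w)   # Hebbian term nu*x plus Oja's decay term

# The learned direction should match the principal eigenvector (up to sign).
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
print("learned w :", w / np.linalg.norm(w))
print("first PC  :", eigvecs[:, -1])

The decay term $-\nu^2 w$ is the non-Hebbian ingredient that keeps the weight vector bounded, in the spirit of the intrinsic normalization mentioned above.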

The interesting aspect of spike-time dependent plasticity is that it naturally accounts for temporal correlations in the input by means of a learning window. Explicit expressions for temporal spike-spike correlations can be obtained for certain simple neuron models, such as the linear Poisson model. In this case, correlations between pre- and postsynaptic neurons can be expressed in terms of the correlations in the input. It can be shown that, under certain circumstances, the weight vector evolves in the direction of the principal component of the input pattern set, even if the input is not normalized.
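As an illustration of how a learning window weights spike timing, the following sketch evaluates a two-phase exponential window $W(s)$, a common generic choice rather than a specific model from this chapter, over all pairs of pre- and postsynaptic firing times (all amplitudes and time constants are arbitrary):

import numpy as np

# Illustrative two-phase exponential learning window; s = t_post - t_pre (ms).
A_plus, A_minus = 1.0, -1.0        # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants (ms)

def W(s):
    return np.where(s >= 0,
                    A_plus * np.exp(-s / tau_plus),    # pre before post
                    A_minus * np.exp(s / tau_minus))   # post before pre

def total_weight_change(t_pre, t_post):
    """Sum W(s) over all pre/post spike pairs (all-to-all pairing)."""
    s = t_post[:, None] - t_pre[None, :]
    return W(s).sum()

t_pre = np.array([10.0, 50.0, 90.0])
t_post = np.array([15.0, 55.0, 95.0])      # post consistently lags pre
print(total_weight_change(t_pre, t_post))  # positive: net potentiation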

Spike-based and rate-based rules of plasticity are equivalent as long as temporal correlations are disregarded. The integral over the learning window, $\int_{-\infty}^{\infty} W(s)\,\mathrm{d}s$, plays the role of the Hebbian correlation term $c_2^{\mathrm{corr}}$. If rates vary rapidly, i.e., on the time scale of the learning window, then spike-time dependent plasticity is distinct from a rate-based formulation.
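Written out in the chapter's notation (schematically, omitting the non-Hebbian terms), the correspondence reads: for stationary pre- and postsynaptic rates $\nu_j$ and $\nu_i$ without spike-spike correlations,

\[
\left\langle \frac{\mathrm{d}w_{ij}}{\mathrm{d}t} \right\rangle
\propto \nu_i \, \nu_j \int_{-\infty}^{\infty} W(s)\,\mathrm{d}s \,,
\]

which has the same form as the rate-based Hebbian rule $\dot{w}_{ij} = c_2^{\mathrm{corr}}\,\nu_i\,\nu_j$ with the identification $c_2^{\mathrm{corr}} = \int_{-\infty}^{\infty} W(s)\,\mathrm{d}s$.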

In addition to an analysis of the expectation value of the synaptic weight vector, the distribution of weights can be described by means of a Fokker-Planck equation. The stationary distribution depends on the details of the learning rule.
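For reference, the Fokker-Planck equation for the weight distribution $P(w,t)$ has the generic drift-diffusion form

\[
\frac{\partial}{\partial t} P(w,t)
= -\frac{\partial}{\partial w}\bigl[A(w)\,P(w,t)\bigr]
+ \frac{1}{2}\,\frac{\partial^{2}}{\partial w^{2}}\bigl[B(w)\,P(w,t)\bigr] \,,
\]

where the drift $A(w)$ is the mean weight change per unit time, the diffusion coefficient $B(w)$ is the corresponding variance, and both are determined by the learning rule and the input statistics; the stationary distribution follows from setting $\partial P/\partial t = 0$.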

References

More on the theory of unsupervised learning and principal component analysis can be found in the classic book by Hertz et al. (1991). Models of the development of receptive fields and cortical maps have a long tradition in computational neuroscience; see, e.g., von der Malsburg (1973); Sejnowski (1977); Kohonen (1984); Linsker (1986c); Miller et al. (1989); Sejnowski and Tesauro (1989); MacKay and Miller (1990); Miller (1994); Shouval and Perrone (1995); for reviews see, e.g., Erwin et al. (1995); Wiskott and Sejnowski (1998). The linear rate model discussed in Section 11.1 is reviewed in Miller (1995). The essential aspects of the weight dynamics in linear networks are discussed in Oja (1982); MacKay and Miller (1990); Miller and MacKay (1994).

The theory of spike-based Hebbian learning has been developed by Rubin et al. (2001); Roberts and Bell (2000); Eurich et al. (1999); Roberts (1999, 2000); Senn et al. (2001b); Song et al. (2000); Häfliger et al. (1997); van Rossum et al. (2000); Gerstner et al. (1996a); Ruf and Schmitt (1997); Kistler and van Hemmen (2000b); and others. Spike-based learning rules are closely related to rules for sequence learning (Gerstner et al., 1993b; Herz et al., 1988; Gerstner and Abbott, 1997; Minai and Levy, 1993; Herz et al., 1989; van Hemmen et al., 1990; Abbott and Blum, 1996), where the idea of an asymmetric learning window is exploited.

