

11. Learning Equations

Neurons in the central nervous system form a complex network with a high degree of plasticity. In the previous chapter we discussed synaptic plasticity from a phenomenological point of view. We now ask: "What are the consequences for the connectivity between neurons if synapses are plastic?" To do so we consider a scenario known as unsupervised learning. We assume that some of the neurons in the network are stimulated by input with certain statistical properties, and that synaptic plasticity generates changes in the connectivity pattern that reflect the statistical structure of this input. The relation between the input statistics and the synaptic weights that evolve under Hebbian plasticity is the topic of this chapter. We start in Section 11.1 with a review of unsupervised learning in a rate-coding paradigm. The extension of the analysis to spike-time-dependent synaptic plasticity is made in Section 11.2. We will see that spike-based learning naturally accounts for spatial and temporal correlations in the input and can overcome some of the problems of a simple rate-based learning rule.
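The central claim, that Hebbian plasticity makes the synaptic weights reflect the statistical structure of the input, can be previewed with a minimal numerical sketch. The example below assumes a linear rate neuron and an Oja-style normalized Hebbian rule; both choices anticipate the rate-based analysis of Section 11.1 but are illustrative assumptions, not the book's specific model. Driven by inputs with correlation matrix C, the weight vector converges toward the principal eigenvector of C.

import numpy as np

rng = np.random.default_rng(0)

# Two presynaptic channels with correlated activity (illustrative statistics).
C = np.array([[1.0, 0.8],
              [0.8, 1.0]])          # input correlation matrix <nu_i nu_j>
L = np.linalg.cholesky(C)           # used to draw inputs with correlations C

w = rng.normal(size=2)              # initial synaptic weights
eta = 0.005                         # learning rate

for _ in range(20000):
    # Zero-mean Gaussian "rates" are an idealization of the linear analysis;
    # negative values stand in for deviations from a background rate.
    nu_pre = L @ rng.normal(size=2)
    nu_post = w @ nu_pre            # linear rate neuron: output = weighted input
    # Oja's rule: Hebbian growth nu_post * nu_pre plus a normalizing decay term.
    w += eta * nu_post * (nu_pre - nu_post * w)

# The learned weight vector should align (up to sign) with the principal
# eigenvector of the input correlation matrix.
eigvals, eigvecs = np.linalg.eigh(C)
pc1 = eigvecs[:, np.argmax(eigvals)]
print("learned w:", w / np.linalg.norm(w))
print("principal eigenvector of C:", pc1)

Running the sketch prints a weight vector aligned, up to sign, with the leading eigenvector of C: the numerical counterpart of the result derived analytically for rate-based Hebbian learning in Section 11.1.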

