11.1 Learning in Rate Models

We would like to understand how activity-dependent learning rules influence the formation of connections between neurons in the brain. We will see that plasticity is controlled by the statistical properties of the presynaptic input that is impinging on the postsynaptic neuron. Before we delve into the analysis of the elementary Hebb rule we therefore need to recapitulate a few results from statistics and linear algebra.


11.1.1 Correlation Matrix and Principal Components

A principal component analysis (PCA) is a standard technique to describe statistical properties of a set of high-dimensional data points and is usually performed in order to find those components of the data that show the highest variability within the set. If we think of the input data set as a cloud of points in a high-dimensional vector space centered around the origin, then the first principal component is the direction of the longest axis of the ellipsoid that encompasses the cloud; cf. Fig. 11.1. If the data points consisted of, say, two separate clouds, then the first principal component would give the direction of a line that connects the center points of the two clouds. A PCA can thus be used to break a large data set into separate clusters. In the following, we will quickly explain the basic idea and show that the first principal component gives the direction where the variance of the data is maximal.

Figure 11.1: Ellipsoid approximating the shape of a cloud of data points. The first principal component $\vec{e}_1$ corresponds to the principal axis of the ellipsoid.

Let us consider an ensemble of data points $\{\vec{\xi}^{\,1},\ldots,\vec{\xi}^{\,p}\}$ drawn from a (high-dimensional) vector space, for example $\vec{\xi}^{\,\mu} \in \mathbb{R}^N$. For this set of data points we define the correlation matrix $C_{ij}$ as

$C_{ij} = \frac{1}{p}\sum_{\mu=1}^{p} \xi_i^\mu \, \xi_j^\mu = \bigl\langle \xi_i^\mu \, \xi_j^\mu \bigr\rangle_\mu \,.$   (11.1)

Angular brackets $\langle \cdot \rangle_\mu$ denote an average over the whole set of data points. In analogy to the variance of a single random variable we can also define the covariance matrix $V_{ij}$ of our data set,

$V_{ij} = \bigl\langle (\xi_i^\mu - \langle \xi_i^\mu \rangle_\mu)\,(\xi_j^\mu - \langle \xi_j^\mu \rangle_\mu) \bigr\rangle_\mu \,.$   (11.2)

In the following we will assume that the coordinate system is chosen so that the center of mass of the set of data points is located at the origin, i.e., $\langle \xi_i \rangle_\mu = 0$ for all $i$. In this case, correlation matrix and covariance matrix are identical.

The principal components of the set $\{\vec{\xi}^{\,1},\ldots,\vec{\xi}^{\,p}\}$ are defined as the eigenvectors of the covariance matrix $V$. Note that $V$ is symmetric, i.e., $V_{ij} = V_{ji}$. Its eigenvalues are thus real-valued, and eigenvectors corresponding to different eigenvalues are orthogonal (Horn and Johnson, 1985). Furthermore, $V$ is positive semi-definite, since

$\vec{y}^{\,\text{T}} V \vec{y} = \sum_{ij} y_i \, \bigl\langle \xi_i^\mu \, \xi_j^\mu \bigr\rangle_\mu \, y_j = \Bigl\langle \bigl[ \sum_i y_i \, \xi_i^\mu \bigr]^2 \Bigr\rangle_\mu \ge 0$   (11.3)

for any vector $\vec{y} \in \mathbb{R}^N$. Therefore, all eigenvalues of $V$ are non-negative.

We can sort the eigenvectors $\vec{e}_i$ according to the size of the corresponding eigenvalues, $\lambda_1 \ge \lambda_2 \ge \ldots \ge 0$. The eigenvector with the largest eigenvalue is called the first principal component. It points in the direction where the variance of the data is maximal. To see this, we calculate the variance of the projection of $\vec{\xi}^{\,\mu}$ onto an arbitrary direction $\vec{y}$ that we write as $\vec{y} = \sum_i a_i \, \vec{e}_i$ with $\sum_i a_i^2 = 1$ so that $|\vec{y}| = 1$. The variance $\sigma_{\vec{y}}^2$ along $\vec{y}$ is

$\sigma_{\vec{y}}^2 = \Bigl\langle \bigl[ \vec{\xi}^{\,\mu} \cdot \vec{y} \bigr]^2 \Bigr\rangle_\mu = \vec{y}^{\,\text{T}} V \vec{y} = \sum_i \lambda_i \, a_i^2 \,.$   (11.4)

The right-hand side is maximal under the constraint $\sum_i a_i^2 = 1$ if $a_1 = 1$ and $a_i = 0$ for $i = 2, 3, \ldots, N$, that is, if $\vec{y} = \vec{e}_1$.
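
The computation above is easy to reproduce numerically. The following sketch (our own illustration in Python/NumPy; the data cloud and all parameter values are invented for the example) centers a two-dimensional set of data points, builds the correlation matrix of Eq. (11.1), and checks that the variance of Eq. (11.4) is maximal along the eigenvector with the largest eigenvalue:

    import numpy as np

    rng = np.random.default_rng(0)

    # Anisotropic cloud of p data points in R^2 (cf. Fig. 11.1)
    p = 1000
    xi = rng.normal(size=(p, 2)) @ np.array([[3.0, 1.0], [0.0, 1.0]])
    xi -= xi.mean(axis=0)            # center of mass at the origin, so C = V

    C = xi.T @ xi / p                # correlation matrix, Eq. (11.1)

    lam, e = np.linalg.eigh(C)       # eigenvalues in ascending order
    e1 = e[:, -1]                    # first principal component (largest eigenvalue)

    def variance_along(y):
        """Variance of the projection onto the unit vector y, Eq. (11.4)."""
        y = y / np.linalg.norm(y)
        return np.mean((xi @ y) ** 2)

    print("lambda_1              :", lam[-1])
    print("variance along e_1    :", variance_along(e1))
    print("variance along random :", variance_along(rng.normal(size=2)))  # never exceeds lambda_1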


11.1.2 Evolution of Synaptic Weights

In the following we analyze the evolution of synaptic weights using the Hebbian learning rules that have been described in Chapter 10. To do so, we consider a highly simplified scenario consisting of an analog neuron that receives input from $N$ presynaptic neurons with firing rates $\nu_i^{\text{pre}}$ via synapses with weights $w_i$; cf. Fig. 11.2A. We think of the presynaptic neurons as `input neurons', which, however, do not have to be sensory neurons. The input layer could, for example, consist of neurons in the lateral geniculate nucleus (LGN) that project to neurons in the visual cortex. We will see that the statistical properties of the input control the evolution of the synaptic weights.

For the sake of simplicity, we model the presynaptic input as a set of static patterns. Let us suppose that we have a total of $p$ patterns $\{\vec{\xi}^{\,\mu};\ 1 \le \mu \le p\}$. At each time step one of the patterns $\vec{\xi}^{\,\mu}$ is selected at random and presented to the network by fixing the presynaptic rates at $\nu_i^{\text{pre}} = \xi_i^\mu$. We call this the static-pattern scenario. The presynaptic activity drives the postsynaptic neuron and the joint activity of pre- and postsynaptic neurons triggers changes of the synaptic weights. The synaptic weights are modified according to a Hebbian learning rule, i.e., according to the correlation of pre- and postsynaptic activity; cf. Eq. (10.3). Before the next input pattern is chosen, the weights are changed by an amount

$\Delta w_i = \gamma \, \nu^{\text{post}} \, \nu_i^{\text{pre}} \,.$   (11.5)

Here, $0 < \gamma \ll 1$ is a small constant called the `learning rate'. The learning rate in the static-pattern scenario is closely linked to the correlation coefficient $c_2^{\text{corr}}$ of the continuous-time Hebb rule introduced in Eq. (10.3). In order to highlight the relation, let us assume that each pattern $\vec{\xi}^{\,\mu}$ is applied during an interval $\Delta t$. For $\Delta t$ sufficiently small, we have $\gamma = c_2^{\text{corr}} \, \Delta t$.

Figure 11.2: Elementary model. A. Patterns $\vec{\xi}^{\,\mu}$ are applied as a set of presynaptic firing rates, i.e., $\xi_j^\mu = \nu_j^{\text{pre}}$ for $1 \le j \le N$. B. The gain function of the postsynaptic neuron is taken as linear, i.e., $\nu^{\text{post}} = h$. It can be seen as a linearization of the sigmoidal gain function $g(h)$.

In a general rate model, the firing rate $\nu^{\text{post}}$ of the postsynaptic neuron is given by a nonlinear function of the total input,

$\nu^{\text{post}} = g\Bigl( \sum_i w_i \, \nu_i^{\text{pre}} \Bigr) \,;$   (11.6)

cf. Fig. 11.2B. For the sake of simplicity, we restrict our discussion in the following to a linear rate model with

$\nu^{\text{post}} = \sum_i w_i \, \nu_i^{\text{pre}} \,.$   (11.7)

Obviously, this is a highly simplified neuron model, but it will serve our purpose of gaining some insight into the evolution of synaptic weights.

If we combine the learning rule (11.5) with the linear rate model of Eq. (11.7), we find after the presentation of pattern $\vec{\xi}^{\,\mu}$

$\Delta w_i = \gamma \sum_j w_j \, \nu_j^{\text{pre}} \, \nu_i^{\text{pre}} = \gamma \sum_j w_j \, \xi_j^\mu \, \xi_i^\mu \,.$   (11.8)

The evolution of the weight vector $\vec{w} = (w_1,\ldots,w_N)$ is thus determined by the iteration

$w_i(n+1) = w_i(n) + \gamma \sum_j w_j(n) \, \xi_j^{\mu_n} \, \xi_i^{\mu_n} \,,$   (11.9)

where $\mu_n$ denotes the pattern that is presented during the $n$th time step.

We are interested in the long-term behavior of the synaptic weights. To this end we assume that the weight vector evolves along a more or less deterministic trajectory with only small stochastic deviations that result from the randomness with which new input patterns are chosen. This is, for example, the case if the learning rate is small so that a large number of patterns has to be presented in order to induce a substantial weight change. In such a situation it is sensible to consider the expectation value of the weight vector, i.e., the weight vector $\langle \vec{w}(n) \rangle$ averaged over the sequence $(\vec{\xi}^{\,\mu_1}, \vec{\xi}^{\,\mu_2}, \ldots, \vec{\xi}^{\,\mu_n})$ of all patterns that have been presented to the network so far. From Eq. (11.9) we find

$\langle w_i(n+1) \rangle = \langle w_i(n) \rangle + \gamma \sum_j \bigl\langle w_j(n) \, \xi_j^{\mu_{n+1}} \, \xi_i^{\mu_{n+1}} \bigr\rangle$
$\qquad = \langle w_i(n) \rangle + \gamma \sum_j \langle w_j(n) \rangle \, \bigl\langle \xi_j^{\mu_{n+1}} \, \xi_i^{\mu_{n+1}} \bigr\rangle$
$\qquad = \langle w_i(n) \rangle + \gamma \sum_j C_{ij} \, \langle w_j(n) \rangle \,.$   (11.10)

The angular brackets denote an ensemble average over the whole sequence of input patterns $(\vec{\xi}^{\,\mu_1}, \vec{\xi}^{\,\mu_2}, \ldots)$. The second equality is due to the fact that input patterns are chosen independently in each time step, so that the average over $w_j(n)$ and $\xi_j^{\mu_{n+1}} \, \xi_i^{\mu_{n+1}}$ can be factorized. In the final expression we have introduced the correlation matrix $C_{ij}$,

$C_{ij} = \frac{1}{p}\sum_{\mu=1}^{p} \xi_i^\mu \, \xi_j^\mu = \bigl\langle \xi_i^\mu \, \xi_j^\mu \bigr\rangle_\mu \,.$   (11.11)

Expression (11.10) can be written in a more compact form using matrix notation,

$\langle \vec{w}(n+1) \rangle = (\mathbf{1} + \gamma\, C) \, \langle \vec{w}(n) \rangle = (\mathbf{1} + \gamma\, C)^{n+1} \, \langle \vec{w}(0) \rangle \,,$   (11.12)

where $\vec{w}(n) = \bigl( w_1(n),\ldots,w_N(n) \bigr)$ is the weight vector and $\mathbf{1}$ is the identity matrix.

If we express the weight vector in terms of the eigenvectors $\vec{e}_k$ of $C$,

$\langle \vec{w}(n) \rangle = \sum_k a_k(n) \, \vec{e}_k \,,$   (11.13)

we obtain an explicit expression for $\langle \vec{w}(n) \rangle$ for any given initial condition $a_k(0)$, viz.,

$\langle \vec{w}(n) \rangle = \sum_k (1 + \gamma\,\lambda_k)^n \, a_k(0) \, \vec{e}_k \,.$   (11.14)

Since the correlation matrix is positive semi-definite, all eigenvalues $\lambda_k$ are real and non-negative. The weight vector therefore grows exponentially, but the growth will soon be dominated by the eigenvector with the largest eigenvalue, i.e., the first principal component,

$\langle \vec{w}(n) \rangle \xrightarrow{\,n \to \infty\,} (1 + \gamma\,\lambda_1)^n \, a_1(0) \, \vec{e}_1 \,;$   (11.15)

cf. Section 11.1.1. Recall that the output of the linear neuron model (11.7) is proportional to the projection of the current input pattern $\vec{\xi}^{\,\mu}$ onto the direction of $\vec{w}$. For $\vec{w} \propto \vec{e}_1$, the output is therefore proportional to the projection onto the first principal component of the input distribution. A Hebbian learning rule such as Eq. (11.8) is thus able to extract the first principal component of the input data.
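
As a numerical illustration (our own sketch, not taken from the text; all parameter values are arbitrary), the iteration of Eq. (11.9) can be simulated directly with the linear neuron of Eq. (11.7). For a small learning rate the weight vector aligns with the first principal component of the input distribution while its norm keeps growing:

    import numpy as np

    rng = np.random.default_rng(1)

    # Static-pattern scenario: p centered input patterns in R^N
    N, p = 2, 500
    xi = rng.normal(size=(p, N)) * np.array([3.0, 1.0])   # larger variance along axis 0
    xi -= xi.mean(axis=0)

    C = xi.T @ xi / p
    e1 = np.linalg.eigh(C)[1][:, -1]                      # first principal component

    gamma = 1e-3
    w = 0.1 * rng.normal(size=N)
    for n in range(5000):
        mu = rng.integers(p)                              # pick a pattern at random
        nu_post = w @ xi[mu]                              # linear neuron, Eq. (11.7)
        w += gamma * nu_post * xi[mu]                     # plain Hebb rule, Eq. (11.8)

    print("norm of w     :", np.linalg.norm(w))           # grows without bounds
    print("|cos(w, e_1)| :", abs(w @ e1) / np.linalg.norm(w))  # close to one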

From a data-processing point of view, the extraction of the first principal component of the input data set by a biologically inspired learning rule seems to be very compelling. There are, however, a few drawbacks and pitfalls. First, the above statement about the Hebbian learning rule is limited to the expectation value of the weight vector. We will see below, however, that if the learning rate is sufficiently low, the actual weight vector stays in fact very close to the expected one.

Second, while the weight vector turns toward the direction of the first principal component, its norm grows without bounds. We will see below in Section 11.1.3 that suitable variants of Hebbian learning allow us to control the length of the weight vector without changing its direction.

Third, principal components are only meaningful if the input data is normalized, i.e., distributed around the origin. This requirement is not consistent with a rate interpretation because rates are usually positive. This problem, however, can be overcome by learning rules such as the covariance rule of Eq. (10.10) that are based on the deviation of the rates from a certain mean firing rate. We will see in Section 11.2.4 that a spike-based learning rule can be devised that is sensitive only to deviations from the mean firing rate and can thus find the first principal component even if the input is not properly normalized.

Figure 11.3: Weight changes induced by the standard Hebb rule. Input patterns $\vec{\xi}^{\,\mu} \in \mathbb{R}^2$ are marked as circles. The sequence of weight vectors $\vec{w}(1), \vec{w}(2), \ldots$ is indicated by crosses connected by a solid line. A. The weight vector evolves in the direction of the dominant eigenvector (arrow) of the correlation matrix. B. If the input patterns are normalized so that their center of mass is at the origin, then the dominant eigenvector of the correlation matrix coincides with the first principal component $\vec{e}_1$ of the data set.


11.1.2.1 Self-averaging (*)

So far, we have derived the behavior of the expected weight vector, $\langle \vec{w} \rangle$. Here we show that explicit averaging is not necessary provided that learning is slow enough. In this case, the weight vector is the sum of a large number of small changes. The weight dynamics is thus `self-averaging' and the weight vector $\vec{w}$ can be well approximated by its expectation value $\langle \vec{w} \rangle$.

We start from the formulation of Hebbian plasticity in continuous time,

$\frac{\text{d}}{\text{d}t} w_i = c_2^{\text{corr}} \, \nu^{\text{post}} \, \nu_i^{\text{pre}} \,;$   (11.16)

cf. Eq. (10.3). Each pattern $\vec{\xi}^{\,\mu}$ is presented for a short period of duration $\Delta t$. We assume that the weights change during the presentation by a small amount only, i.e., $\int_t^{t+\Delta t} [\text{d}w_j(t')/\text{d}t'] \, \text{d}t' \ll w_j(t)$. This condition can be met either by a short presentation time $\Delta t$ or by a small learning coefficient $c_2^{\text{corr}}$. Under this condition, we can take the postsynaptic firing rate $\nu^{\text{post}}(t) = \sum_j w_j(t) \, \nu_j^{\text{pre}}$ as constant for the duration of one presentation. The total weight change induced by the presentation of pattern $\vec{\xi}^{\,\mu}$ is thus, to first order in $\Delta t$,

$\Delta w_i(t) = w_i(t + \Delta t) - w_i(t) = \gamma \sum_j w_j(t) \, \xi_j^\mu \, \xi_i^\mu + \mathcal{O}(\Delta t^2) \,,$   (11.17)

with $\gamma = c_2^{\text{corr}} \, \Delta t$; cf. Eq. (11.8).

In the next time step a new pattern $\vec{\xi}^{\,\nu}$ is presented, so that the weight is changed to

$w_i(t + 2\Delta t) = w_i(t + \Delta t) + c_2^{\text{corr}} \, \Delta t \sum_j w_j(t + \Delta t) \, \xi_j^\nu \, \xi_i^\nu + \mathcal{O}(\Delta t^2) \,.$   (11.18)

Since we keep only terms to first order in $\Delta t$, we may set $w_j(t + \Delta t) = w_j(t)$ in the sum on the right-hand side of Eq. (11.18). Let us suppose that in the interval $[t, t + p \, \Delta t]$ each of the $p$ patterns has been applied exactly once. Then, to first order in $\Delta t$,

$w_i(t + p \, \Delta t) - w_i(t) = c_2^{\text{corr}} \, \Delta t \sum_j w_j(t) \sum_{\mu=1}^{p} \xi_i^\mu \, \xi_j^\mu + \mathcal{O}(\Delta t^2) \,.$   (11.19)

For $c_2^{\text{corr}} \, \Delta t \ll 1$, all higher-order terms can be neglected. Division by $p \, \Delta t$ yields

$\frac{w_i(t + p \, \Delta t) - w_i(t)}{p \, \Delta t} = c_2^{\text{corr}} \sum_j w_j(t) \, C_{ij} \,.$   (11.20)

For small $\Delta t$, the left-hand side can be approximated by the derivative $\text{d}w_i/\text{d}t$, so that

$\frac{\text{d}}{\text{d}t} w_i(t) = c_2^{\text{corr}} \sum_j w_j(t) \, C_{ij} \,.$   (11.21)

We thus recover our previous result that the weights are driven by the correlations in the input, but with the additional advantage that no explicit averaging step is necessary (Sanders and Verhulst, 1985).
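
The self-averaging property is easily checked numerically. In the sketch below (our own illustration with arbitrary parameters), the pattern-by-pattern updates of Eq. (11.9) are compared with an Euler integration of the averaged dynamics of Eq. (11.21); for small $c_2^{\text{corr}} \, \Delta t$ the two trajectories nearly coincide:

    import numpy as np

    rng = np.random.default_rng(2)

    N, p = 3, 200
    xi = rng.normal(size=(p, N)) * np.array([2.0, 1.0, 0.5])
    xi -= xi.mean(axis=0)
    C = xi.T @ xi / p

    c2corr, dt = 1.0, 1e-4        # learning coefficient and presentation time Delta t
    gamma = c2corr * dt           # gamma = c2^corr * Delta t, cf. Eq. (11.17)

    w_stoch = np.full(N, 0.1)     # pattern-by-pattern Hebbian updates, Eq. (11.9)
    w_avg = np.full(N, 0.1)       # averaged dynamics, Eq. (11.21)

    for n in range(20000):
        mu = rng.integers(p)
        w_stoch += gamma * (w_stoch @ xi[mu]) * xi[mu]
        w_avg += dt * c2corr * (C @ w_avg)      # Euler step of dw/dt = c2^corr C w

    print("direction (stochastic):", w_stoch / np.linalg.norm(w_stoch))
    print("direction (averaged)  :", w_avg / np.linalg.norm(w_avg))
    print("relative norm mismatch:", np.linalg.norm(w_stoch) / np.linalg.norm(w_avg) - 1.0)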


11.1.3 Weight Normalization

We have seen in Section 11.1.2 that the simple learning rule (10.3) leads to exponentially growing weights. Since this is biologically not plausible, we must use a modified Hebbian learning rule that includes weight decrease and saturation; cf. Chapter 10.2. Particularly interesting are learning rules that lead to a normalized weight vector. Normalization is a desirable property since it leads to competition between the synaptic weights $w_{ij}$ that converge onto the same postsynaptic neuron $i$. Competition means that if one synaptic efficacy increases, it does so at the expense of other synapses, which must decrease.

For a discussion of weight vector normalization two aspects are important, namely what is normalized and how the normalization is achieved. Learning rules can be designed to normalize either the sum of the weights, $\sum_j w_{ij}$, or the quadratic norm, $\|\vec{w}\|^2 = \sum_j w_{ij}^2$ (or any other norm on $\mathbb{R}^N$). In the first case, the weight vector is constrained to a plane perpendicular to the diagonal vector $\vec{n} = (1,\ldots,1)$; in the second case it is constrained to a hypersphere; cf. Fig. 11.4.

Figure 11.4: Weight vector normalization. A. Normalization of the summed weights, $\sum_j w_{ij} = 1$, constrains the weight vector $\vec{w}$ to a hyperplane perpendicular to the diagonal vector $\vec{n} = (1, 1, \ldots, 1)^{\text{T}}$. Hard bounds $0 \le w_{ij} \le 1$ force the weight vector to stay inside the shaded region. B. Normalization of the quadratic norm, $\|\vec{w}\|^2 = 1$. The weight change $\Delta\vec{w}(n)$ is perpendicular to the current weight vector $\vec{w}(n)$ so that the length of $\vec{w}$ remains constant (Oja's learning rule).

Second, the normalization of the weight vector can be either multiplicative or subtractive. In the former case all weights are multiplied by a common factor, so that large weights $w_{ij}$ are corrected by a larger amount than small ones. In the latter case a common constant is subtracted from each weight. Usually, subtractive normalization is combined with hard bounds $0 \le w_{ij} \le w^{\max}$ in order to avoid runaway of individual weights. Finally, learning rules may or may not fall into the class of local learning rules that we have considered in Chapter 10.2.

A systematic classification of various learning rules according to the above three criteria has been proposed by Miller and MacKay (1994). Here we restrict ourselves to two instances of learning rules with normalization properties, which we illustrate in the examples below. We start with the subtractive normalization of the summed weights $\sum_j w_{ij}$ and then turn to a discussion of Oja's rule as an instance of a multiplicative normalization of $\sum_j w_{ij}^2$.


11.1.3.1 Example: Subtractive Normalization of $\sum_i w_i$

In a subtractive normalization scheme the sum over all weights, $\sum_i w_i$, can be kept constant by subtracting the average total weight change, $N^{-1} \sum_i \Delta\tilde{w}_i$, from each synapse after the weights have been updated according to a Hebbian learning rule with $\Delta\tilde{w}_i = \gamma \sum_j w_j \, \xi_j^\mu \, \xi_i^\mu$. Altogether, the learning rule is of the form

$\Delta w_i = \Delta\tilde{w}_i - N^{-1} \sum_j \Delta\tilde{w}_j$
$\qquad = \gamma \Bigl( \sum_j w_j \, \xi_j^\mu \, \xi_i^\mu - N^{-1} \sum_k \sum_j w_j \, \xi_j^\mu \, \xi_k^\mu \Bigr) \,,$   (11.22)

where $\Delta\tilde{w}_i$ denotes the weight change that is due to the pure Hebbian learning rule without the normalization. It can easily be verified that $\sum_i \Delta w_i = 0$, so that $\sum_i w_i = \text{const}$. The temporal evolution of the weight vector $\vec{w}$ is thus restricted to a hyperplane perpendicular to $(1,\ldots,1) \in \mathbb{R}^N$. Note that this learning rule is non-local, because the change of each weight depends on the activity of all presynaptic neurons.

In a similar way as in the previous section, we calculate the expectation of the weight vector, $\langle \vec{w}(n) \rangle$, averaged over the sequence of input patterns $(\vec{\xi}^{\,\mu_1}, \vec{\xi}^{\,\mu_2}, \ldots)$,

$\langle w_i(n+1) \rangle = \langle w_i(n) \rangle + \gamma \Bigl( \sum_j C_{ij} \, \langle w_j(n) \rangle - N^{-1} \sum_k \sum_j C_{kj} \, \langle w_j(n) \rangle \Bigr) \,,$   (11.23)

or explicitly, using matrix notation

$\langle \vec{w}(n) \rangle = \bigl[ \mathbf{1} + \gamma \, (C - \bar{C}) \bigr]^n \, \langle \vec{w}(0) \rangle \,,$   (11.24)

with $\bar{C}_{ij} = N^{-1} \sum_k C_{kj}$. The evolution of the weight vector is thus determined by the eigenvectors of the matrix $(C - \bar{C})$, which are in general different from those of the correlation matrix $C$. Hebbian learning with subtractive normalization is driven by the correlations of the input in the subspace orthogonal to the diagonal vector $(1,\ldots,1)$. Though the sum of the weights stays constant, individual weights keep growing. It is thus necessary to adopt an additional criterion to stop the learning process and to prevent individual components of the weight vector from growing beyond all bounds. Subtractive weight normalization is therefore usually combined with hard bounds for the weights; cf. Section [*]. With these constraints, the weight vector converges to a final state where (almost) all weights are saturated at the upper or lower bound (Miller and MacKay, 1994); cf. Fig. 11.5A.
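
The following sketch (our own illustration; the input statistics and all parameters are invented for the demonstration) implements the subtractively normalized rule of Eq. (11.22) together with hard bounds $0 \le w_i \le w^{\max}$. The rule itself conserves the sum of the weights (up to the effect of the clipping at the bounds), while the individual weights drift towards the bounds:

    import numpy as np

    rng = np.random.default_rng(3)

    # Two groups of inputs: positively correlated within a group,
    # negatively correlated between groups (an arbitrary choice).
    N, p = 10, 400
    shared = rng.normal(size=(p, 1))
    xi = 0.5 * rng.normal(size=(p, N))
    xi[:, :5] += shared
    xi[:, 5:] -= shared
    xi -= xi.mean(axis=0)

    gamma, w_max = 1e-3, 1.0
    w = np.full(N, 0.5)                     # start in the middle of the allowed range

    for n in range(10000):
        mu = rng.integers(p)
        dw = gamma * (w @ xi[mu]) * xi[mu]  # plain Hebbian change, Eq. (11.8)
        dw -= dw.mean()                     # subtractive normalization, Eq. (11.22)
        w = np.clip(w + dw, 0.0, w_max)     # hard bounds 0 <= w_i <= w_max

    print("sum of weights:", w.sum())         # stays close to the initial value N/2
    print("weights       :", np.round(w, 2))  # one group near w_max, the other near 0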

Figure 11.5: Similar plots as in Fig. 11.3 but with weight vector normalization. A. With subtractive normalization, the weight vector evolves along a line that is perpendicular to the diagonal vector (1, 1). Without additional constraints, the length of the weight vector grows without bounds. B. Oja's learning rule results in a quick convergence of the weight vector to the first principal component (arrow) of the data set.


11.1.3.2 Example: Multiplicative Normalization of $\|\vec{w}\|$

Normalization of the sum of the weights, $\sum_i w_i$, needs an additional criterion to prevent individual weights from perpetual growth. A more elegant way is to require that the sum of the squared weights, $\sum_i w_i^2$, i.e., the squared length of the weight vector, remains constant. This restricts the evolution of the weight vector to a sphere in the $N$-dimensional weight space. In addition, we can employ a multiplicative normalization scheme in which all weights are multiplied by a common factor instead of subtracting a common constant. The advantage of multiplicative as compared to subtractive normalization is that small weights do not change their sign during the normalization step.

In order to formalize the above idea we first calculate the `naïve' weight change $\Delta\tilde{\vec{w}}(n)$ in time step $n$ according to the common Hebbian learning rule,

$\Delta\tilde{\vec{w}}(n) = \gamma \, \bigl[ \vec{w}(n) \cdot \vec{\xi}^{\,\mu} \bigr] \, \vec{\xi}^{\,\mu} \,.$   (11.25)

The update of the weights is accompanied by a normalization of the norm of the weight vector to unity, i.e.,

$\vec{w}(n+1) = \frac{\vec{w}(n) + \Delta\tilde{\vec{w}}(n)}{\|\vec{w}(n) + \Delta\tilde{\vec{w}}(n)\|} \,.$   (11.26)

If we assume that the weights change only by a very small amount during each step ($\gamma \ll 1$), we can calculate the new weights $\vec{w}(n+1)$ to first order in $\gamma$,

$\vec{w}(n+1) = \vec{w}(n) + \Delta\tilde{\vec{w}}(n) - \vec{w}(n) \, \bigl[ \vec{w}(n) \cdot \Delta\tilde{\vec{w}}(n) \bigr] + \mathcal{O}(\gamma^2) \,.$   (11.27)

The `effective' weight change $\Delta\vec{w}(n)$, including the normalization, is thus to leading order in $\gamma$

$\Delta\vec{w}(n) = \Delta\tilde{\vec{w}}(n) - \vec{w}(n) \, \bigl[ \vec{w}(n) \cdot \Delta\tilde{\vec{w}}(n) \bigr] \,,$   (11.28)

which corresponds to the vector component of $\Delta\tilde{\vec{w}}$ that is orthogonal to the current weight vector $\vec{w}$. This is exactly what we would have expected, because the length of the weight vector must stay constant; cf. Fig. 11.4B.

We may wonder whether Eq. (11.28) is a local learning rule. In order to answer this question, we recall that the `naïve' weight change $\Delta\tilde{w}_j = \gamma \, \nu^{\text{post}} \, \nu_j^{\text{pre}}$ uses only pre- and postsynaptic information. Hence, we can rewrite Eq. (11.28) in terms of the firing rates,

$\Delta w_j = \gamma \, \nu^{\text{post}} \, \nu_j^{\text{pre}} - \gamma \, w_j(n) \, \bigl( \nu^{\text{post}} \bigr)^2 \,.$   (11.29)

In the second term on the right-hand side we have made use of the linear neuron model, i.e., $\nu^{\text{post}} = \sum_k w_k \, \nu_k^{\text{pre}}$. Since the weight change depends only on pre- and postsynaptic rates, Eq. (11.29), which is known as Oja's learning rule (Oja, 1982), is indeed local; cf. Eq. (10.11).

In order to see that Oja's learning rule selects the first principal component, we show that the eigenvectors $\{\vec{e}_1,\ldots,\vec{e}_N\}$ of $C$ are fixed points of the dynamics, but that only the eigenvector $\vec{e}_1$ with the largest eigenvalue is stable. For any fixed weight vector $\vec{w}$ we can calculate the expectation of the weight change in the next time step by averaging over the whole ensemble of input patterns $\{\vec{\xi}^{\,1}, \vec{\xi}^{\,2}, \ldots\}$. With $\langle \Delta\tilde{\vec{w}}(n) \rangle = \gamma \, C \, \vec{w}$ we find from Eq. (11.28)

$\langle \Delta\vec{w} \rangle = \gamma \, C \, \vec{w} - \gamma \, \vec{w} \, \bigl[ \vec{w} \cdot C \, \vec{w} \bigr] \,.$   (11.30)

We claim that any eigenvector $\vec{e}_i$ of the correlation matrix $C$ is a fixed point of Eq. (11.30). Indeed, if we substitute $\vec{w} = \vec{e}_i$ in the above equation we find $\langle \Delta\vec{w} \rangle = 0$. In order to investigate the stability of this fixed point we consider a small perturbation $\vec{w} = \vec{e}_i + c \, \vec{e}_j$ in the direction of $\vec{e}_j$. Here, $|c| \ll 1$ is the amplitude of the perturbation. If we substitute $\vec{w} = \vec{e}_i + c \, \vec{e}_j$ into Eq. (11.30) we find

$\langle \Delta\vec{w} \rangle = c \, \gamma \, (\lambda_j - \lambda_i) \, \vec{e}_j + \mathcal{O}(c^2) \,.$   (11.31)

The weight vector will thus evolve in the direction of the perturbation $\vec{e}_j$ if $\lambda_j > \lambda_i$, so that the initial perturbation increases; in this case, $\vec{e}_i$ is unstable. If, on the other hand, $\lambda_j < \lambda_i$, the averaged weight change tends to decrease the perturbation and $\vec{e}_i$ is stable. Consequently, the eigenvector of $C$ with the largest eigenvalue, viz., the first principal component, is the sole stable fixed point of the dynamics generated by the learning rule of Eq. (11.26). Figure 11.5B shows a simple example.
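
Oja's rule of Eq. (11.29) is readily simulated. In the sketch below (again our own illustration with arbitrary parameters) the weight vector converges to the first principal component with unit norm, in contrast to the unbounded growth produced by the plain Hebb rule:

    import numpy as np

    rng = np.random.default_rng(4)

    N, p = 2, 500
    xi = rng.normal(size=(p, N)) * np.array([3.0, 1.0])
    xi -= xi.mean(axis=0)
    C = xi.T @ xi / p
    e1 = np.linalg.eigh(C)[1][:, -1]        # first principal component of the input

    gamma = 1e-3
    w = 0.1 * rng.normal(size=N)
    for n in range(30000):
        mu = rng.integers(p)
        nu_post = w @ xi[mu]                               # linear neuron, Eq. (11.7)
        w += gamma * (nu_post * xi[mu] - nu_post**2 * w)   # Oja's rule, Eq. (11.29)

    print("norm of w     :", np.linalg.norm(w))            # converges to one
    print("|cos(w, e_1)| :", abs(w @ e1) / np.linalg.norm(w))  # close to one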


11.1.4 Receptive Field Development

Most neurons of the visual system respond only to stimulation from a narrow region within the visual field. This region is called the receptive field of that neuron. Depending on the precise position of a narrow bright spot within the receptive field, the corresponding neuron shows either an increase or a decrease of its firing rate relative to its spontaneous activity at rest. The receptive field is subdivided accordingly into `ON' and `OFF' regions in order to further characterize neuronal response properties. Bright spots in an ON region increase the firing rate, whereas bright spots in an OFF region inhibit the neuron.

Different neurons have different receptive fields, but as a general rule, neighboring neurons have receptive fields that `look' at about the same region of the visual field. This is what is usually called the retinotopic organization of the neuronal projections - neighboring points in the visual field are mapped to neighboring neurons of the visual system.

The visual system forms a complicated hierarchy of interconnected cortical areas in which neurons show increasingly complex response properties from one layer to the next. Neurons of the lateral geniculate nucleus (LGN), which is the first neuronal relay of visual information after the retina, are characterized by so-called center-surround receptive fields. These are receptive fields that consist of two concentric parts, an ON region and an OFF region. LGN neurons come in two flavors, as ON-center and OFF-center cells. ON-center cells have an ON region in the center of their receptive field that is surrounded by a circular OFF region. In OFF-center cells the arrangement is the other way round: a central OFF region is surrounded by an ON region; cf. Fig. 11.6.

Neurons from the LGN project to the primary visual cortex (V1), which is the first cortical area involved in the processing of visual information. In this area neurons can be divided into `simple cells' and `complex cells'. In contrast to LGN neurons, simple cells have asymmetric receptive fields, which results in a selectivity with respect to the orientation of a visual stimulus. The optimal stimulus for a neuron with a receptive field such as that shown in Fig. 11.6D, for example, is a light bar tilted by about 45 degrees. Any other orientation would also stimulate the OFF region of the receptive field, leading to a reduction of the neuronal response. Complex cells have even more intriguing properties and show responses that are, for example, selective for movements with a certain velocity and direction (Hubel, 1995).

Figure 11.6: Receptive fields (schematic). A, B. Circularly symmetric receptive fields as typical for neurons in the LGN. ON-center cells (A) are excited by light spots (gray) falling into the center of the receptive field. In OFF-center cells (B) the arrangement of excitatory and inhibitory regions in the receptive field is reversed. C, D. Two examples of asymmetric receptive fields of simple cells in the primary visual cortex. The cells are best stimulated by a light bar oriented as indicated by the gray rectangle.

It is still a matter of debate how the response properties of simple cells arise. The original proposal by Hubel and Wiesel (1962) was that orientation selectivity is a consequence of the specific wiring between LGN and V1. Several center-surround cells with slightly shifted receptive fields should converge on a single V1 neuron so as to produce the asymmetric receptive field of simple cells. Alternatively (or additionally), the intra-cortical dynamics can generate orientation selectivity by enhancing small asymmetries in neuronal responses; cf. Section 9.1.3. In the following, we pursue the first possibility and try to understand how activity-dependent processes during development can lead to the required fine-tuning of the synaptic organization of projections from the LGN to the primary visual cortex (Miller, 1995,1994; Miller et al., 1989; Linsker, 1986c,b,a; Wimbauer et al., 1997a,b; MacKay and Miller, 1990).

11.1.4.1 Model architecture

We study a model that consists of a two-dimensional layer of cortical neurons (V1 cells) and two layers of LGN neurons, namely one layer of ON-center cells and one layer of OFF-center cells; cf. Fig. 11.7A. In each layer, neurons are labeled by their position, and projections between neurons are given as a function of these positions. Intra-cortical projections, i.e., projections between cortical neurons, are denoted by $w_{\text{V1,V1}}(\vec{x}_1, \vec{x}_2)$, where $\vec{x}_1$ and $\vec{x}_2$ are the positions of the post- and the presynaptic neuron, respectively. Projections from ON-center and OFF-center LGN neurons to the cortex are denoted by $w_{\text{V1,ON}}(\vec{x}_1, \vec{x}_2)$ and $w_{\text{V1,OFF}}(\vec{x}_1, \vec{x}_2)$, respectively.

Figure 11.7: A. Wiring diagram between LGN and cortex (schematic). B. Axons from LGN cells project only to a small region of the cortex. Synaptic contacts are therefore limited to a localized cluster of cortical neurons.

In the following we are interested in the evolution of the weight distribution of the projections from the LGN to the primary visual cortex. We thus take $w_{\text{V1,ON}}(\vec{x}, \vec{x}')$ and $w_{\text{V1,OFF}}(\vec{x}, \vec{x}')$ as the dynamic variables of the model. Intra-cortical projections are supposed to be constant and dominated by short-range excitation, e.g.,

$w_{\text{V1,V1}}(\vec{x}_1, \vec{x}_2) \propto \exp\Bigl( -\frac{\|\vec{x}_1 - \vec{x}_2\|}{\sigma_{\text{V1,V1}}^2} \Bigr) \,.$   (11.32)

As in the previous section we consider, for the sake of simplicity, neurons with a linear gain function. The firing rate $\nu_{\text{V1}}(\vec{x})$ of a cortical neuron at position $\vec{x}$ is thus given by

$\nu_{\text{V1}}(\vec{x}) = \sum_{\vec{x}'} w_{\text{V1,ON}}(\vec{x}, \vec{x}') \, \nu_{\text{ON}}(\vec{x}') + \sum_{\vec{x}'} w_{\text{V1,OFF}}(\vec{x}, \vec{x}') \, \nu_{\text{OFF}}(\vec{x}') + \sum_{\vec{x}'} w_{\text{V1,V1}}(\vec{x}, \vec{x}') \, \nu_{\text{V1}}(\vec{x}') \,,$   (11.33)

where $\nu_{\text{ON/OFF}}(\vec{x}')$ is the firing rate of a neuron in the ON/OFF layer of the LGN.

Due to the intra-cortical interaction the cortical activity $\nu_{\text{V1}}$ shows up on both sides of the equation. Since this is a linear equation it can easily be solved for $\nu_{\text{V1}}$. To do so we write $\nu_{\text{V1}}(\vec{x}) = \sum_{\vec{x}'} \delta_{\vec{x},\vec{x}'} \, \nu_{\text{V1}}(\vec{x}')$, where $\delta_{\vec{x},\vec{x}'}$ is the Kronecker $\delta$ that is one for $\vec{x} = \vec{x}'$ and vanishes otherwise. Equation (11.33) can thus be rewritten as

$\sum_{\vec{x}'} \bigl[ \delta_{\vec{x},\vec{x}'} - w_{\text{V1,V1}}(\vec{x}, \vec{x}') \bigr] \, \nu_{\text{V1}}(\vec{x}') = \sum_{\vec{x}'} w_{\text{V1,ON}}(\vec{x}, \vec{x}') \, \nu_{\text{ON}}(\vec{x}') + \sum_{\vec{x}'} w_{\text{V1,OFF}}(\vec{x}, \vec{x}') \, \nu_{\text{OFF}}(\vec{x}') \,.$   (11.34)

If we read the left-hand side as a multiplication of the matrix $M(\vec{x}, \vec{x}') \equiv \bigl[ \delta_{\vec{x},\vec{x}'} - w_{\text{V1,V1}}(\vec{x}, \vec{x}') \bigr]$ and the vector $\nu_{\text{V1}}(\vec{x}')$, we can define the inverse $I$ of $M$ by

$\sum_{\vec{x}} I(\vec{x}'', \vec{x}) \, M(\vec{x}, \vec{x}') = \delta_{\vec{x}'',\vec{x}'}$   (11.35)

and solve Eq. (11.34) for $\nu_{\text{V1}}(\vec{x}')$. We find

$\nu_{\text{V1}}(\vec{x}'') = \sum_{\vec{x}'} \bar{w}_{\text{V1,ON}}(\vec{x}'', \vec{x}') \, \nu_{\text{ON}}(\vec{x}') + \sum_{\vec{x}'} \bar{w}_{\text{V1,OFF}}(\vec{x}'', \vec{x}') \, \nu_{\text{OFF}}(\vec{x}') \,,$   (11.36)

which relates the input $\nu_{\text{ON/OFF}}$ to the output via the `effective' weights

$\bar{w}_{\text{V1,ON/OFF}}(\vec{x}'', \vec{x}') \equiv \sum_{\vec{x}} I(\vec{x}'', \vec{x}) \, w_{\text{V1,ON/OFF}}(\vec{x}, \vec{x}') \,.$   (11.37)
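
In a computer implementation the inverse $I$ need not be constructed explicitly; solving a linear system is sufficient. The following minimal sketch (our own, for a one-dimensional toy arrangement of positions and with hypothetical parameter values) evaluates Eqs. (11.35)-(11.37) for the ON pathway:

    import numpy as np

    rng = np.random.default_rng(5)

    # One-dimensional toy arrangement: n_v1 cortical positions, n_lgn LGN positions
    n_v1, n_lgn = 20, 30
    x_v1 = np.arange(n_v1, dtype=float)

    # Fixed intra-cortical couplings, cf. Eq. (11.32); the prefactor 0.05 keeps the
    # recurrence stable (spectral radius of w_v1v1 well below one) -- our own choice.
    sigma_v1v1 = 2.0
    w_v1v1 = 0.05 * np.exp(-np.abs(x_v1[:, None] - x_v1[None, :]) / sigma_v1v1**2)

    # Some feedforward weights from the ON layer (random here, just for illustration)
    w_v1on = 0.1 * rng.random((n_v1, n_lgn))

    # M = (1 - w_v1v1); the effective weights are w_bar = M^{-1} w_v1on, Eq. (11.37)
    M = np.eye(n_v1) - w_v1v1
    w_bar_on = np.linalg.solve(M, w_v1on)

    # Given LGN rates, the cortical rates follow from Eq. (11.36)
    # (the OFF pathway would be treated in exactly the same way)
    nu_on = rng.random(n_lgn)
    nu_v1 = w_bar_on @ nu_on
    print(np.round(nu_v1[:5], 3))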

11.1.4.2 Plasticity

We expect that the formation of synapses between LGN and V1 is driven by correlations in the input. In the present case, these correlations are due to the retinotopic organization of the projections from the retina to the LGN. Neighboring LGN neurons receive stimulation from similar regions of the visual field and are thus correlated to a higher degree than neurons that are more separated. If we assume that the activity of individual photoreceptors on the retina is uncorrelated and that each LGN neuron integrates the input from many of these receptors, then the correlation of two LGN neurons can be calculated from the form of their receptive fields. For center-surround cells the correlation is a Mexican hat-shaped function of their distance (Miller, 1994; Wimbauer et al., 1997a), e.g.,

$C_{\text{ON,ON}}(\vec{x}, \vec{x}') = C_{\text{ON,ON}}(\|\vec{x} - \vec{x}'\|) \propto \exp\Bigl( -\frac{\|\vec{x} - \vec{x}'\|^2}{\sigma^2} \Bigr) - \frac{1}{c^2} \, \exp\Bigl( -\frac{\|\vec{x} - \vec{x}'\|^2}{c^2 \, \sigma^2} \Bigr) \,,$   (11.38)

where $c$ is a form factor that describes the depth of the modulation. $C_{\text{ON,ON}}$ is the correlation between two ON-center type LGN neurons. For the sake of simplicity we assume that OFF-center cells have the same correlation, $C_{\text{OFF,OFF}} = C_{\text{ON,ON}}$. Correlations between ON-center and OFF-center cells, however, have the opposite sign, $C_{\text{ON,OFF}} = C_{\text{OFF,ON}} = -C_{\text{ON,ON}}$.

In the present formulation of the model each LGN cell can contact every neuron in the primary visual cortex. In reality, each LGN cell sends one axon to the cortex. Though this axon may split into several branches, its synaptic contacts are restricted to a small region of the cortex; cf. Fig. 11.7B. We take this limitation into account by defining an arborization function $A(\vec{x}, \vec{x}')$ that gives the a priori probability that a connection is formed between a cortical cell at location $\vec{x}$ and an LGN cell at $\vec{x}'$ (Miller et al., 1989). The arborization is a rapidly decaying function of the distance, e.g.,

$A(\vec{x}, \vec{x}') = \exp\Bigl( -\frac{\|\vec{x} - \vec{x}'\|^2}{\sigma_{\text{V1,LGN}}^2} \Bigr) \,.$   (11.39)

To describe the dynamics of the weight distribution we adopt a modified form of Hebb's learning rule that is complemented by the arborization function,

$\frac{\text{d}}{\text{d}t} \, w_{\text{V1,ON/OFF}}(\vec{x}, \vec{x}') = \gamma \, A(\vec{x}, \vec{x}') \, \nu_{\text{V1}}(\vec{x}) \, \nu_{\text{ON/OFF}}(\vec{x}') \,.$   (11.40)

If we use Eq. (11.36) and assume that learning is slow enough so that we can rely on the correlation functions to describe the evolution of the weights, we find

$\frac{\text{d}}{\text{d}t} \, w_{\text{V1,ON}}(\vec{x}_1, \vec{x}_2) = \gamma \, A(\vec{x}_1, \vec{x}_2) \sum_{\vec{x}'} \sum_{\vec{x}''} I(\vec{x}_1, \vec{x}') \, \bigl[ \, w_{\text{V1,ON}}(\vec{x}', \vec{x}'') - w_{\text{V1,OFF}}(\vec{x}', \vec{x}'') \, \bigr] \, C_{\text{ON,ON}}(\vec{x}'', \vec{x}_2)$   (11.41)

and a similar equation for $w_{\text{V1,OFF}}$.

Expression (11.41) is still a linear equation for the weights and nothing exciting can be expected. A prerequisite for pattern formation is competition between the synaptic weights. Therefore, the above learning rule is extended by a term $-\, w_{\text{V1,ON/OFF}}(\vec{x}, \vec{x}') \, \nu_{\text{V1}}(\vec{x})^2$ that leads to weight vector normalization and competition; cf. Oja's rule, Eq. (10.11).
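
To give a flavor of the resulting dynamics, the following one-dimensional toy version of the model (our own sketch, not the simulation of Wimbauer et al. (1998); the parameter values, the Gaussian coupling profiles, and the use of hard bounds in place of the Oja-type term are our own simplifications) integrates the averaged dynamics of Eq. (11.41) for the difference $w_{\text{V1,ON}} - w_{\text{V1,OFF}}$. The Mexican hat correlations produce segregated ON and OFF subregions within the arbor of each cortical cell:

    import numpy as np

    rng = np.random.default_rng(6)

    n_lgn, n_v1 = 40, 10                     # one-dimensional toy geometry
    x_lgn = np.arange(n_lgn, dtype=float)
    x_v1 = np.linspace(0.0, n_lgn - 1.0, n_v1)

    def gauss(d2, s2):
        return np.exp(-d2 / s2)

    d_lgn = (x_lgn[:, None] - x_lgn[None, :]) ** 2
    C_onon = gauss(d_lgn, 4.0) - 0.25 * gauss(d_lgn, 16.0)   # Mexican hat, cf. Eq. (11.38), c = 2
    A = gauss((x_v1[:, None] - x_lgn[None, :]) ** 2, 25.0)   # arborization, cf. Eq. (11.39)

    d_v1 = (x_v1[:, None] - x_v1[None, :]) ** 2
    w_v1v1 = 0.05 * gauss(d_v1, 4.0)                         # fixed short-range cortical coupling
    I = np.linalg.inv(np.eye(n_v1) - w_v1v1)                 # effective-coupling inverse, Eq. (11.35)

    gamma = 0.01
    w_on = 0.5 + 0.01 * rng.standard_normal((n_v1, n_lgn))
    w_off = 0.5 + 0.01 * rng.standard_normal((n_v1, n_lgn))

    for step in range(400):
        s = w_on - w_off
        dw = gamma * A * (I @ s @ C_onon)    # averaged dynamics, cf. Eq. (11.41)
        w_on += dw                           # the OFF weights obey the mirror equation
        w_off -= dw
        w_on = np.clip(w_on, 0.0, 1.0)       # crude competition: hard bounds instead of
        w_off = np.clip(w_off, 0.0, 1.0)     # the Oja-type normalization term

    # Receptive field of the central cortical cell: sign of w_ON - w_OFF along the LGN axis
    print(np.sign(np.round(w_on[n_v1 // 2] - w_off[n_v1 // 2], 2)))

In the full two-dimensional model the same mechanism, combined with the competition term discussed above, produces the oriented receptive fields shown in Fig. 11.8.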

11.1.4.3 Simulation results

Many of the standard techniques for nonlinear systems that we have already encountered in the context of neuronal pattern formation in Chapter 9 can also be applied to the present model (Wimbauer et al., 1998; MacKay and Miller, 1990). Here, however, we will just summarize some results from a computer simulation consisting of an array of 8×8 cortical neurons and two layers of 20×20 LGN neurons each. Figure 11.8 shows a typical outcome of such a simulation. Each of the small rectangles shows the receptive field of the corresponding cortical neuron. A bright color means that the neuron responds with an increased firing rate to a bright spot at that particular position within its receptive field; dark colors indicate inhibition.

There are two interesting aspects. First, the evolution of the synaptic weights has led to asymmetric receptive fields, which give rise to orientation selectivity. Second, the receptive fields of neighboring cortical neurons are similar; neuronal response properties thus vary continuously across the cortex. The neurons are said to form a map for, e.g., orientation.

The first observation, the breaking of the symmetry of the LGN receptive fields, is characteristic of all pattern-formation phenomena. It results from the instability of the homogeneous initial state and from the competition between individual synaptic weights. The second observation, the smooth variation of the receptive fields across the cortex, is a consequence of the excitatory intra-cortical couplings. During development, neighboring cortical neurons tend to be either simultaneously active or quiescent, and due to the activity-dependent learning rule similar receptive fields are formed.

Figure 11.8: Receptive fields (small squares) of 64 cortical neurons (large grid). Each small square shows the distribution of weights $w_{\text{V1,ON}}(\vec{x}, \vec{x} + \Delta\vec{x}) - w_{\text{V1,OFF}}(\vec{x}, \vec{x} + \Delta\vec{x})$, where $\vec{x}$ is the position of the cortical neuron and $\Delta\vec{x}$ the position of the white or black spot within the small square [adapted from Wimbauer et al. (1998)].

