The learning rate used to update the weights of a self-organizing feature map is determined by a process that injects a perturbation into the value, so that the rate does not simply decrease monotonically with each training epoch. For example, the learning rate may be generated according to a pseudorandom process. The result is faster convergence of the synaptic weights.
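A minimal sketch of this idea in Python, assuming a standard self-organizing map with a Gaussian neighborhood; the function name, the exponential base schedule, and the uniform multiplicative perturbation drawn from [0.5, 1.5] are all illustrative assumptions, not details from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid_shape=(10, 10), epochs=50, base_lr=0.5):
    """Train a self-organizing map whose learning rate is perturbed
    pseudorandomly, so it is not a strictly monotone function of the
    epoch. The perturbation scheme here is an illustrative assumption."""
    n_features = data.shape[1]
    weights = rng.random((*grid_shape, n_features))
    rows, cols = np.indices(grid_shape)
    for epoch in range(epochs):
        # Base schedule decays with the epoch, but a pseudorandom factor
        # perturbs it, so the effective rate is not monotonically decreasing.
        lr = base_lr * np.exp(-epoch / epochs) * rng.uniform(0.5, 1.5)
        # Neighborhood radius shrinks over training as usual.
        sigma = max(grid_shape) / 2 * np.exp(-epoch / epochs)
        for x in data:
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bi, bj = np.unravel_index(np.argmin(dists), grid_shape)
            # Gaussian neighborhood centered on the winning node.
            h = np.exp(-((rows - bi) ** 2 + (cols - bj) ** 2)
                       / (2 * sigma ** 2))
            # Move each node's weights toward the input, scaled by the
            # perturbed learning rate and the neighborhood function.
            weights += lr * h[..., None] * (x - weights)
    return weights
```

The same structure accommodates any perturbation source; the key point is only that the effective rate at epoch t is not required to be smaller than at epoch t-1.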