Abstract:
Slow feature analysis is an efficient algorithm for learning input-output functions that extract the most slowly varying features from a quickly varying input. It has been successfully applied to the unsupervised learning of translation, rotation, and other invariances in a model of the visual system, to the learning of complex-cell receptive fields, and, combined with a sparseness objective, to the learning of place cells in a model of the hippocampus.
To arrive at a biologically more realistic implementation of this learning paradigm, we consider how slow feature analysis could be realized with linear Poisson neurons. Surprisingly, we find analytically that the appropriate learning rule reproduces the typical spike-timing-dependent plasticity (STDP) learning window. Both its shape and its timescale are in good agreement with experimental measurements. This offers a new candidate explanation for the peculiar shape of the physiological STDP learning window.
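To make the slowness objective concrete, the following is a minimal sketch of linear slow feature analysis on a toy two-dimensional signal: whiten the input, then take the direction along which the time derivative has minimal variance. The toy mixing matrix and signal are illustrative assumptions, not the paper's spiking-neuron model.

```python
import numpy as np

# Toy input: a slow latent feature mixed with a fast distractor.
t = np.linspace(0, 2 * np.pi, 1000)
slow = np.sin(t)        # slowly varying latent feature
fast = np.sin(20 * t)   # quickly varying distractor
# Illustrative 2-D mixture observed by the learner.
x = np.stack([slow + 0.5 * fast, 0.5 * slow - fast], axis=1)

# 1) Center and whiten the input so it has unit covariance.
x = x - x.mean(axis=0)
cov = x.T @ x / len(x)
eigval, eigvec = np.linalg.eigh(cov)
white = (x @ eigvec) / np.sqrt(eigval)

# 2) Slowness objective: minimize the variance of the time derivative.
#    On whitened data this is solved by the eigenvector of the
#    derivative covariance with the SMALLEST eigenvalue.
dx = np.diff(white, axis=0)
dcov = dx.T @ dx / len(dx)
dval, dvec = np.linalg.eigh(dcov)     # eigh sorts eigenvalues ascending
slowest = white @ dvec[:, 0]          # extracted slowest feature

# The extracted feature should track the slow latent closely.
corr = abs(np.corrcoef(slowest, slow)[0, 1])
print(f"correlation with slow latent: {corr:.2f}")
```

The correlation between the extracted feature and the slow latent comes out close to 1, illustrating that the slowest whitened direction recovers the slow source from the mixture.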