Need a better way to extract invariant features from data? This research introduces Slow Feature Analysis (SFA), a new unsupervised learning method for learning invariant or slowly varying features from vectorial input signals. It is based on a nonlinear expansion of the input signal followed by principal component analysis applied to the expanded signal and its time derivative. SFA is guaranteed to find the optimal solution within a family of functions and can extract a large number of decorrelated features, ordered by their degree of invariance. High-dimensional signals can be processed hierarchically, and more complicated input-output functions can be learned by applying SFA repeatedly. The method is first applied to complex-cell tuning properties such as disparity and motion. Results show that a hierarchical network of SFA modules can learn translation, size, rotation, and contrast invariance for one-dimensional objects, depending on the training stimulus. Only a few training objects suffice for good generalization, although performance degrades if the network must learn several invariances simultaneously. The representation generated in this way is well suited as input to a subsequent object recognition stage.
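To make the core pipeline concrete, here is a minimal NumPy sketch of the steps described above: a nonlinear (here quadratic) expansion of the input, whitening, and principal component analysis on the time-differentiated signal, where the directions of smallest variance give the slowest outputs. The choice of quadratic expansion and all function names are illustrative assumptions, not taken from the article.

```python
import numpy as np

def quadratic_expand(x):
    """Expand each input vector with all monomials of degree 1 and 2.

    x: array of shape (T, n). Returns array of shape (T, n + n*(n+1)/2).
    """
    T, n = x.shape
    quad = np.concatenate(
        [x[:, i:i + 1] * x[:, i:] for i in range(n)], axis=1
    )
    return np.hstack([x, quad])

def sfa(x, n_features):
    """Minimal Slow Feature Analysis sketch (illustrative, not the paper's code).

    x: input signal of shape (T, n). Returns the n_features slowest
    output signals, shape (T, n_features).
    """
    # 1. Nonlinear expansion of the input signal.
    h = quadratic_expand(x)

    # 2. Whiten: zero mean, unit covariance (sphering via eigendecomposition).
    h = h - h.mean(axis=0)
    cov = np.cov(h, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10                # drop near-degenerate directions
    whiten = eigvec[:, keep] / np.sqrt(eigval[keep])
    z = h @ whiten

    # 3. Approximate the time derivative by finite differences.
    zdot = np.diff(z, axis=0)

    # 4. PCA on the derivative: eigh returns eigenvalues in ascending
    #    order, so the first eigenvectors are the slowest directions.
    dcov = np.cov(zdot, rowvar=False)
    _, dvec = np.linalg.eigh(dcov)
    return z @ dvec[:, :n_features]

# Example: recover a slowly varying component mixed with a fast one.
t = np.linspace(0, 2 * np.pi, 1000)
x = np.column_stack([np.sin(t) + 0.1 * np.random.randn(1000),
                     np.cos(20 * t)])
slow = sfa(x, n_features=1)
```

Because the whitened signal has unit variance in every direction, minimizing the variance of the time derivative is equivalent to maximizing slowness, which is why the smallest-eigenvalue directions of the derivative covariance yield the most invariant features.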
Published in Neural Computation, this article fits the journal's focus on neural networks, machine learning, and computational neuroscience. It introduces a novel algorithm for feature extraction, aligning with the journal's emphasis on computational methods for understanding neural processing and pattern recognition.