Slow Feature Analysis: Unsupervised Learning of Invariances

Article Properties
  • Language
    English
  • Publication Date
    2002/04/01
  • Indian UGC (Journal)
  • References
    21
  • Citations
    561
  • Laurenz Wiskott
    Computational Neurobiology Laboratory, Salk Institute for Biological Studies, San Diego, CA 92168, U.S.A.; Institute for Advanced Studies, D-14193 Berlin, Germany; and Innovationskolleg Theoretische Biologie, Institute for Biology, Humboldt-University Berlin, D-10115 Berlin, Germany
  • Terrence J. Sejnowski
    Howard Hughes Medical Institute, The Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A., and Department of Biology, University of California at San Diego, La Jolla, CA 92037, U.S.A.
Cite
Wiskott, Laurenz, and Terrence J. Sejnowski. “Slow Feature Analysis: Unsupervised Learning of Invariances”. Neural Computation, vol. 14, no. 4, 2002, pp. 715-70, https://doi.org/10.1162/089976602317318938.
Wiskott, L., & Sejnowski, T. J. (2002). Slow Feature Analysis: Unsupervised Learning of Invariances. Neural Computation, 14(4), 715-770. https://doi.org/10.1162/089976602317318938
Wiskott L, Sejnowski TJ. Slow Feature Analysis: Unsupervised Learning of Invariances. Neural Computation. 2002;14(4):715-70.
Journal Categories
  • Medicine > Internal medicine > Neurosciences. Biological psychiatry. Neuropsychiatry
  • Science > Mathematics > Instruments and machines > Electronic computers. Computer science
  • Technology > Electrical engineering. Electronics. Nuclear engineering > Electronics
  • Technology > Mechanical engineering and machinery
Description

Need a better way to extract invariant features from data? This research introduces Slow Feature Analysis (SFA), an unsupervised learning method for extracting invariant or slowly varying features from vectorial input signals. SFA applies a nonlinear expansion to the input signal and then performs principal component analysis on the expanded signal and on its time derivative; after whitening, the directions along which the derivative has the smallest variance are the most slowly varying features. Within a given function family, SFA is guaranteed to find the optimal solution, and it can extract many decorrelated features, ordered by their degree of invariance. High-dimensional signals can be processed hierarchically, and more complicated input-output functions can be learned by repeated application of SFA. The method is used to model tuning properties of complex cells, such as disparity and motion. The results show that a hierarchical network of SFA modules can learn translation, size, rotation, and contrast invariance for one-dimensional objects, depending on the training stimulus. Only a few training objects suffice for good generalization, although performance degrades if the network must learn several invariances simultaneously. The resulting representation is suitable for object recognition, which makes the method directly relevant to machine learning.
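
To make the procedure concrete, here is a minimal NumPy sketch of the core SFA steps. It is not the authors' implementation: the quadratic expansion, the finite-difference derivative, and the rank tolerance are illustrative choices.

```python
import numpy as np

def quadratic_expansion(x):
    """Expand a (T, n) signal with all monomials of degree 1 and 2."""
    T, n = x.shape
    quad = np.stack([x[:, i] * x[:, j]
                     for i in range(n) for j in range(i, n)], axis=1)
    return np.concatenate([x, quad], axis=1)

def sfa(x, n_features=1):
    """Return the n_features most slowly varying features of the signal x."""
    # 1. Nonlinear expansion of the input signal.
    z = quadratic_expansion(x)
    # 2. Center and whiten (sphere) the expanded signal.
    z = z - z.mean(axis=0)
    d, U = np.linalg.eigh(z.T @ z / len(z))
    keep = d > 1e-10                             # drop near-singular directions
    zw = z @ (U[:, keep] / np.sqrt(d[keep]))
    # 3. PCA on the time derivative: after whitening, the directions with
    #    the smallest derivative variance vary most slowly.
    dz = np.diff(zw, axis=0)
    _, V = np.linalg.eigh(dz.T @ dz / len(dz))   # eigenvalues in ascending order
    return zw @ V[:, :n_features]                # slowest features first

# Toy usage: a slow component, sin(t), hidden inside fast oscillations,
# is recovered (up to sign and scale) as the slowest feature.
t = np.linspace(0, 4 * np.pi, 4000)
x = np.stack([np.sin(t) + np.cos(11 * t) ** 2, np.cos(11 * t)], axis=1)
y = sfa(x, n_features=1)                         # roughly proportional to sin(t)
```

In this toy signal the slow component sin(t) equals x1 - x2^2, so it lies inside the quadratic function space and SFA can recover it; this illustrates the kind of optimality guarantee the paper states for solutions within a fixed function family.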

Published in Neural Computation, the article fits squarely within the journal's focus on neural networks, machine learning, and computational neuroscience: it introduces a novel unsupervised algorithm for feature extraction and connects it to computational models of neural processing and pattern recognition.

Citations Analysis
The first work to cite this article, "Multi-modal estimation of collinearity and parallelism in natural image sequences," was published in 2002; the most recent citation comes from a 2024 study. Citations peaked in 2022, with 71 citations that year. The article has been cited in 215 different journals, 17% of which are open access. Among related journals, Neural Computation cited this research the most, with 35 citations. The chart below illustrates the annual citation trends for this article.
Citations of this article by year