Learning Overcomplete Representations

Article Properties
  • Language
    English
  • Publication Date
    2000/02/01
  • Indian UGC (Journal)
  • References
    21
  • Citations
    461
  • Authors
    Michael S. Lewicki, Computer Science Dept. and Center for the Neural Basis of Cognition, Carnegie Mellon Univ., 115 Mellon Inst., 4400 Fifth Ave., Pittsburgh, PA 15213
    Terrence J. Sejnowski, Howard Hughes Medical Institute, Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA 92037, U.S.A.
Cite
Lewicki, Michael S., and Terrence J. Sejnowski. “Learning Overcomplete Representations”. Neural Computation, vol. 12, no. 2, 2000, pp. 337-65, https://doi.org/10.1162/089976600300015826.
Lewicki, M. S., & Sejnowski, T. J. (2000). Learning Overcomplete Representations. Neural Computation, 12(2), 337-365. https://doi.org/10.1162/089976600300015826
Lewicki MS, Sejnowski TJ. Learning Overcomplete Representations. Neural Computation. 2000;12(2):337-65.
Journal Categories
  • Medicine: Internal medicine: Neurosciences. Biological psychiatry. Neuropsychiatry
  • Science: Mathematics: Instruments and machines: Electronic computers. Computer science
  • Technology: Electrical engineering. Electronics. Nuclear engineering: Electronics
  • Technology: Mechanical engineering and machinery
Description

Can redundancy improve coding efficiency? This research explores overcomplete representations, in which the number of basis vectors exceeds the dimensionality of the input signal. Previous work focused on finding the best representation of a signal with respect to a fixed overcomplete basis (or dictionary); this study instead investigates an algorithm for learning the overcomplete basis itself by framing it as a probabilistic model of the observed data. The authors show that overcomplete bases can better approximate the underlying statistical distribution of the data, leading to improved coding efficiency. The approach can be viewed as a generalization of independent component analysis, offering a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex, so the findings have implications for neural coding as well as signal processing, paving the way for more robust and flexible representation schemes in various applications.
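As a rough illustration of the inference step described above, the Python sketch below estimates the most probable coefficients for a 2-D signal under a fixed, known overcomplete basis of three vectors, using a Gaussian likelihood and a Laplacian (sparsity) prior. The basis A, noise level sigma, and sparsity weight lam are invented for the example, and the paper's actual contribution, learning the basis itself, is not shown; this is a minimal sketch of the general idea, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Overcomplete basis: 2-D inputs, 3 unit-norm basis vectors (columns).
# More basis vectors than input dimensions, so x = A @ s is underdetermined.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
A /= np.linalg.norm(A, axis=0)

# Observed signal: a noisy mixture generated by a single active source.
s_true = np.array([0.0, 0.0, 2.0])
x = A @ s_true + 0.01 * rng.standard_normal(2)

sigma, lam = 0.01, 1.0  # assumed noise level and sparsity weight

def neg_log_posterior(s):
    # -log P(s | x) up to a constant: Gaussian likelihood term plus
    # Laplacian prior term, which favors sparse coefficient vectors.
    residual = x - A @ s
    return residual @ residual / (2 * sigma**2) + lam * np.sum(np.abs(s))

# Derivative-free optimizer, since the L1 prior makes the objective non-smooth.
s_map = minimize(neg_log_posterior, np.zeros(3), method="Powell").x
print("MAP coefficients:", np.round(s_map, 2))  # sparse, roughly [0, 0, 2]
```

Of the many coefficient vectors that reconstruct x under the overcomplete basis, the sparsity prior selects the one with mostly zero entries, which is what makes separating more sources than mixtures possible in this framework.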

Published in Neural Computation, this research is well aligned with the journal’s focus on theoretical and computational neuroscience. By proposing an algorithm for learning overcomplete representations and demonstrating its potential for improved coding efficiency, the study contributes to the journal's scope by addressing fundamental questions in neural coding and information processing.

References
Citations
Citations Analysis
The first work to cite this article, "Blind source separation of more sources than mixtures using overcomplete representations", was published in 1999; the most recent citation comes from a 2024 study of the same title. Citations peaked in 2006, with 34 that year. The article has been cited in 206 different journals, 13% of which are open access. Among these, IEEE Transactions on Signal Processing cited it most often, with 27 citations. The chart below illustrates the annual citation trends for this article.
[Chart: citations of this article by year]