A Unifying Review of Linear Gaussian Models

Article Properties
  • Language
    English
  • Publication Date
    1999/02/01
  • Indian UGC (Journal)
  • References
    33
  • Citations
    304
  • Sam Roweis, Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, U.S.A.
  • Zoubin Ghahramani, Department of Computer Science, University of Toronto, Toronto, Canada
Cite
Roweis, Sam, and Zoubin Ghahramani. “A Unifying Review of Linear Gaussian Models.” Neural Computation, vol. 11, no. 2, 1999, pp. 305-345, https://doi.org/10.1162/089976699300016674.
Roweis, S., & Ghahramani, Z. (1999). A Unifying Review of Linear Gaussian Models. Neural Computation, 11(2), 305-345. https://doi.org/10.1162/089976699300016674
Roweis S, Ghahramani Z. A Unifying Review of Linear Gaussian Models. Neural Computation. 1999;11(2):305-345.
Journal Categories
Medicine
Internal medicine
Neurosciences
Biological psychiatry
Neuropsychiatry
Science
Mathematics
Instruments and machines
Electronic computers
Computer science
Technology
Electrical engineering
Electronics
Nuclear engineering
Mechanical engineering and machinery
Description

Discover the interconnectedness of various unsupervised learning techniques within a unified framework. This comprehensive review demonstrates how factor analysis, principal component analysis, mixtures of Gaussian clusters, Kalman filter models, and hidden Markov models can all be seen as variations of a single basic generative model. The review consolidates these previously disparate observations into a cohesive understanding of the models. By linking discrete and continuous state models through a simple nonlinearity, it achieves a unifying perspective; with additional nonlinearities, it shows how independent component analysis also fits within the framework. The review further describes how factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks, introduces a new model for static data known as sensible principal component analysis (SPCA), and presents the concept of spatially adaptive observation noise. The paper provides pseudocode for inference and learning for all basic models, making it a valuable resource for researchers and practitioners in machine learning.
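
For readers who want a concrete starting point, the sketch below samples from the basic generative model that the review builds on: a linear Gaussian state-space model with dynamics x[t+1] = A x[t] + w[t], w[t] ~ N(0, Q), and observations y[t] = C x[t] + v[t], v[t] ~ N(0, R). This is a minimal NumPy illustration, not the paper's pseudocode; the function name sample_lgm and all parameter values are chosen here purely for demonstration.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_lgm(A, C, Q, R, x0, T):
        # Basic linear Gaussian generative model:
        #   x[t+1] = A @ x[t] + w[t],  w[t] ~ N(0, Q)   (state dynamics)
        #   y[t]   = C @ x[t] + v[t],  v[t] ~ N(0, R)   (observations)
        k, p = A.shape[0], C.shape[0]
        xs, ys = np.zeros((T, k)), np.zeros((T, p))
        x = x0
        for t in range(T):
            xs[t] = x
            ys[t] = C @ x + rng.multivariate_normal(np.zeros(p), R)
            x = A @ x + rng.multivariate_normal(np.zeros(k), Q)
        return xs, ys

    # Static models drop the dynamics (A = 0), so each state is i.i.d. N(0, Q):
    #   factor analysis:  Q = I, R diagonal
    #   SPCA:             Q = I, R = eps * I (spherical noise)
    #   PCA:              Q = I, R -> 0
    # With nontrivial A, this is the Kalman filter (linear dynamical system) model.
    k, p = 2, 5
    C = rng.standard_normal((p, k))
    xs, ys = sample_lgm(A=np.zeros((k, k)), C=C,
                        Q=np.eye(k), R=0.1 * np.eye(p),
                        x0=rng.standard_normal(k), T=100)

Restricting A, Q, and R as in the comments is the sense in which the review treats these models as special cases of one basic model; the discrete-state models (mixtures of Gaussians, HMMs) then follow by passing the state through a simple nonlinearity.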

Published in Neural Computation, this review directly aligns with the journal's focus on theoretical and computational aspects of neural networks and machine learning. The paper contributes to the journal's coverage of unsupervised learning algorithms and their mathematical foundations. Its focus on linear Gaussian models and their unification is relevant to the computational neuroscience audience.

Citations Analysis
The first work to cite this article, “Variational Learning for Switching State-Space Models,” was published in 2000; the most recent citation comes from a 2024 study, also titled “Variational Learning for Switching State-Space Models.” The article reached its citation peak in 2012, with 19 citations. It has been cited in 193 different journals, 16% of which are open access. Among related journals, Neural Computation has cited this research the most, with 16 citations. The chart below illustrates the annual citation trends for this article.
[Chart: citations of this article by year]