Variational Learning for Switching State-Space Models

Article Properties
  • Language
    English
  • Publication Date
    2000/04/01
  • Indian UGC (Journal)
  • References
    29
  • Citations
    132
  • Zoubin Ghahramani Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, U.K.
  • Geoffrey E. Hinton Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, U.K.
Cite
Ghahramani, Zoubin, and Geoffrey E. Hinton. “Variational Learning for Switching State-Space Models”. Neural Computation, vol. 12, no. 4, 2000, pp. 831-64, https://doi.org/10.1162/089976600300015619.
Ghahramani, Z., & Hinton, G. E. (2000). Variational Learning for Switching State-Space Models. Neural Computation, 12(4), 831-864. https://doi.org/10.1162/089976600300015619
Ghahramani Z, Hinton GE. Variational Learning for Switching State-Space Models. Neural Computation. 2000;12(4):831-64.
Journal Categories
Medicine
Internal medicine
Neurosciences
Biological psychiatry
Neuropsychiatry
Science
Mathematics
Instruments and machines
Electronic computers
Computer science
Technology
Electrical engineering
Electronics
Nuclear engineering
Electronics
Technology
Mechanical engineering and machinery
Description

Can we create a better model for time series data? This paper introduces a new statistical model that segments time series data into regimes with approximately linear dynamics and learns the parameters of each regime. This model combines hidden Markov models and linear dynamical systems. The authors present a variational approximation that maximizes a lower bound on the log-likelihood, utilizing both forward and backward recursions. Tests on artificial and natural data suggest the viability of variational approximations for inference and learning in switching state-space models.
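To make the generative process described above concrete, here is a minimal sketch (not the authors' code) of sampling from a switching state-space model: a hidden Markov chain selects, at each time step, which of several linear dynamical systems produces the continuous state and the observation. The function name and parameter names (`pi`, `P`, `A`, `C`, `Q`, `R`) are illustrative assumptions, not from the paper.

```python
import numpy as np

def sample_switching_ssm(T, pi, P, A, C, Q, R, rng=None):
    """Sample T steps from a switching state-space model.

    pi : (M,) initial distribution over the M discrete regimes
    P  : (M, M) Markov transition matrix over regimes
    A  : list of M (d, d) state-dynamics matrices
    C  : list of M (p, d) observation matrices
    Q  : list of M (d, d) state-noise covariances
    R  : list of M (p, p) observation-noise covariances
    """
    rng = np.random.default_rng(rng)
    M = len(pi)                 # number of regimes
    d = A[0].shape[0]           # latent state dimension
    p = C[0].shape[0]           # observation dimension
    s = np.zeros(T, dtype=int)  # discrete switch sequence
    x = np.zeros((T, d))        # continuous latent states
    y = np.zeros((T, p))        # observations
    # Initial step: draw a regime, then state and observation under it.
    s[0] = rng.choice(M, p=pi)
    x[0] = rng.multivariate_normal(np.zeros(d), Q[s[0]])
    y[0] = C[s[0]] @ x[0] + rng.multivariate_normal(np.zeros(p), R[s[0]])
    for t in range(1, T):
        s[t] = rng.choice(M, p=P[s[t - 1]])  # Markov switch between regimes
        x[t] = A[s[t]] @ x[t - 1] + rng.multivariate_normal(np.zeros(d), Q[s[t]])
        y[t] = C[s[t]] @ x[t] + rng.multivariate_normal(np.zeros(p), R[s[t]])
    return s, x, y
```

Inference in this model (recovering `s` and `x` from `y`) is intractable exactly, which is what motivates the paper's variational lower bound on the log-likelihood.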

Published in Neural Computation, this paper aligns with the journal's emphasis on computational methods, machine learning, and neural networks. The research presents a novel statistical model and a variational approximation for time series analysis, contributing to the field of machine learning. Its relevance to neural computation and data analysis makes it suitable for the journal's readership.

References
Citations
Citations Analysis
The first work to cite this article, titled "An Introduction to Hidden Markov Models and Bayesian Networks," was published in 2001. The most recent citation comes from a 2024 study titled "An Introduction to Hidden Markov Models and Bayesian Networks." The article reached its citation peak in 2021, with 15 citations. It has been cited in 95 different journals, 21% of which are open access. Among these, IEEE Transactions on Signal Processing cited this research the most, with 8 citations. The chart below illustrates the annual citation trends for this article.
Citations of this article by year