Navigating hidden Markov models with Bayesian networks: This tutorial introduces hidden Markov models (HMMs) through the lens of Bayesian networks, providing a fresh perspective on learning and inference. Framing HMMs within the Bayesian network framework lets the authors consider novel generalizations, such as models with multiple hidden state variables, multiscale representations, and mixed discrete and continuous variables, all of which arise in applications such as speech recognition. While exact inference in these generalized HMMs is often intractable, the authors highlight the utility of approximate inference algorithms, including Markov chain sampling and variational methods, and provide guidance on applying them to these advanced HMM variants. The review concludes with a discussion of Bayesian methods for model selection in generalized HMMs, offering a comprehensive guide for researchers and practitioners seeking to leverage these probabilistic models in diverse applications.
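To make the contrast concrete, exact inference in a basic discrete HMM is tractable via the forward algorithm, which is where the intractability of the generalized variants departs. The following is a minimal illustrative sketch, not code from the tutorial; the two-state model and its probability tables are invented for this example.

```python
def forward(obs, init, trans, emit):
    """Return P(observation sequence) for a discrete HMM by the forward algorithm.

    Illustrative only: exact inference like this is tractable for a single
    discrete hidden chain, but becomes intractable for the generalized HMMs
    (multiple hidden chains, mixed variables) discussed in the tutorial.
    """
    n = len(init)
    # alpha[s] = P(obs[:t+1], state_t = s), initialized at t = 0
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        # Sum over the previous hidden state, then weight by the emission
        alpha = [
            sum(alpha[r] * trans[r][s] for r in range(n)) * emit[s][o]
            for s in range(n)
        ]
    return sum(alpha)  # marginalize over the final hidden state


# Hypothetical two-state model (numbers chosen arbitrarily for illustration)
init = [0.6, 0.4]                   # P(state_0)
trans = [[0.7, 0.3], [0.4, 0.6]]    # P(state_t | state_{t-1})
emit = [[0.9, 0.1], [0.2, 0.8]]     # P(obs_t | state_t)

likelihood = forward([0, 1, 0], init, trans, emit)
```

The cost is linear in sequence length and quadratic in the number of hidden states; with several coupled hidden chains, the effective state space grows multiplicatively, which is why the tutorial turns to Markov chain sampling and variational approximations.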
Published in the International Journal of Pattern Recognition and Artificial Intelligence, this article aligns closely with the journal's scope: by introducing hidden Markov models through the lens of Bayesian networks, it contributes to both pattern recognition and artificial intelligence. Its continued citations over time indicate that the tutorial remains relevant to the field.