Discover the interconnectedness of various unsupervised learning techniques within a unified framework. This comprehensive review demonstrates how factor analysis, principal component analysis, mixtures of Gaussian clusters, Kalman filter models, and hidden Markov models can all be seen as variations of a single basic generative model. The review consolidates previously disparate observations about these models into one coherent understanding. By linking discrete- and continuous-state models through a simple nonlinearity, it achieves a genuinely unifying perspective, and with additional nonlinearities it shows how independent component analysis fits into the same framework. The review also covers the implementation of factor analysis and mixtures of Gaussians in autoencoder neural networks. A new model for static data, sensible principal component analysis, is introduced, along with the concept of spatially adaptive observation noise. The paper supplies pseudocode for inference and learning in all the basic models, making it a valuable resource for researchers and practitioners in machine learning.
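The shared model underlying the static members of this family (factor analysis, PCA, mixtures of Gaussians) is the linear Gaussian generative model: a latent vector x drawn from a unit Gaussian is mapped through a loading matrix C and corrupted by Gaussian noise, y = Cx + v with v ~ N(0, R). The sketch below, a minimal illustration rather than the paper's pseudocode (the function name and dimensions are chosen for this example), samples from that model and checks that the observed covariance matches the model's implied covariance CCᵀ + R; choosing R diagonal corresponds to factor analysis, while R = εI yields the (sensible) PCA case.

```python
import numpy as np

def sample_linear_gaussian(C, R, n_samples, rng):
    """Draw samples from the static linear Gaussian model:
    x ~ N(0, I),  y = C x + v,  v ~ N(0, R) with R diagonal."""
    n_obs, n_latent = C.shape
    x = rng.standard_normal((n_samples, n_latent))        # latent factors
    v = rng.standard_normal((n_samples, n_obs)) * np.sqrt(np.diag(R))
    return x @ C.T + v                                    # observations

rng = np.random.default_rng(0)

# Hypothetical example: 2 latent factors generating 5-D observations.
C = rng.standard_normal((5, 2))
R = np.diag(np.full(5, 0.1))   # diagonal R -> factor analysis;
                               # R = eps * I -> the (S)PCA special case
Y = sample_linear_gaussian(C, R, 10_000, rng)

# The marginal covariance of y implied by the model is C C^T + R.
emp_cov = np.cov(Y, rowvar=False)
model_cov = C @ C.T + R
```

With enough samples, `emp_cov` converges to `model_cov`, which is the identity that inference and learning in these models exploit.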
Published in Neural Computation, this review aligns directly with the journal's focus on theoretical and computational aspects of neural networks and machine learning. The paper contributes to the journal's coverage of unsupervised learning algorithms and their mathematical foundations, and its unification of linear Gaussian models is of clear relevance to the computational neuroscience audience.