Nonlinear Component Analysis as a Kernel Eigenvalue Problem

Article Properties
  • Language
    English
  • Publication Date
    1998/07/01
  • Indian UGC (Journal)
  • References
    7
  • Citations
    3,359
  • Bernhard Schölkopf Max-Planck-Institut für biologische Kybernetik, 72076 Tübingen, Germany
  • Alexander Smola GMD First (Forschungszentrum Informationstechnik), 12489 Berlin, Germany
  • Klaus-Robert Müller GMD First (Forschungszentrum Informationstechnik), 12489 Berlin, Germany
Cite
Schölkopf, Bernhard, et al. “Nonlinear Component Analysis as a Kernel Eigenvalue Problem”. Neural Computation, vol. 10, no. 5, 1998, pp. 1299-1319, https://doi.org/10.1162/089976698300017467.
Schölkopf, B., Smola, A., & Müller, K.-R. (1998). Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation, 10(5), 1299-1319. https://doi.org/10.1162/089976698300017467
Schölkopf B, Smola A, Müller KR. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation. 1998;10(5):1299-1319.
Journal Categories
Medicine
Internal medicine
Neurosciences
Biological psychiatry
Neuropsychiatry
Science
Mathematics
Instruments and machines
Electronic computers
Computer science
Technology
Electrical engineering
Electronics
Nuclear engineering
Electronics
Technology
Mechanical engineering and machinery
Description

Can kernel methods unlock hidden patterns in complex data? This paper introduces a novel approach to nonlinear principal component analysis (PCA), a crucial technique in machine learning and pattern recognition. By using integral operator kernel functions, the method computes principal components in high-dimensional feature spaces while avoiding the computational burden of ever evaluating the nonlinear transformation explicitly. Because the kernel returns dot products in the feature space directly from the input data, the principal components can be found by solving an eigenvalue problem on the kernel matrix rather than in the feature space itself. The heart of this research lies in its ability to extract relevant features from data whose structure is inherently nonlinear. Kernel functions allow the exploration of complex relationships, such as the space of all possible five-pixel products in images, without explicitly calculating these high-dimensional features. The method is particularly suited to applications where the underlying data structure is complex and difficult to model linearly. Experimental results validate the effectiveness of the approach, demonstrating its potential in pattern recognition tasks. It offers a practical way to extract meaningful features from high-dimensional data, paving the way for improved performance in machine learning applications such as image analysis and data mining, and methodologies of this kind continue to push the boundaries of artificial intelligence.
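To illustrate the idea described above, the following is a minimal NumPy sketch of kernel PCA with a Gaussian (RBF) kernel. The function name kernel_pca, the kernel width gamma, and the two-circles toy data are illustrative assumptions, not taken from the paper; the centering and normalization steps follow the standard kernel PCA recipe.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    # Gaussian (RBF) kernel matrix from pairwise squared Euclidean distances
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * sq_dists)

    # Center the kernel matrix in feature space:
    # K_c = K - 1_n K - K 1_n + 1_n K 1_n, with 1_n the n x n matrix of 1/n
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Eigendecomposition of the centered kernel matrix (symmetric, so eigh)
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Normalize the expansion coefficients so each feature-space eigenvector
    # has unit norm, then project the training points onto the leading components
    alphas = eigvecs[:, :n_components] / np.sqrt(eigvals[:n_components])
    return Kc @ alphas

# Toy example: two concentric circles, a structure linear PCA cannot separate
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, 200)
radii = np.concatenate([np.ones(100), 3.0 * np.ones(100)])
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
X += 0.05 * rng.normal(size=X.shape)
Z = kernel_pca(X, n_components=2, gamma=2.0)
print(Z.shape)  # (200, 2)
```

On this toy data the leading kernel components separate the two rings, whereas the leading linear components do not; the projection only requires the 200 x 200 kernel matrix, never the high-dimensional feature map itself.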

Published in Neural Computation, a leading journal in the field, this paper directly addresses the journal’s focus on theoretical and experimental research in neural networks and related computational approaches to neuroscience. The work contributes a novel method for nonlinear dimensionality reduction, a fundamental problem in neural computation and pattern recognition.

References
Citations
Citations Analysis
The first work to cite this article, titled Support vector machines, was published in 1998. The most recent citation comes from a 2024 study, also titled Support vector machines. The article reached its citation peak in 2021, with 223 citations. It has been cited in 965 different journals, 16% of which are open access. Among related journals, Neurocomputing has cited this research the most, with 159 citations. The chart below illustrates the annual citation trends for this article.
Citations of this article by year