Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation

Article Properties
  • Language
    English
  • Publication Date
    1999/08/01
  • Indian UGC (Journal)
  • References
    8
  • Citations
    212
  • Michael Kearns AT&T Labs Research, Florham Park, NJ 07932, U.S.A.
  • Dana Ron Department of EE—Systems, Tel Aviv University, 69978 Ramat Aviv, Israel
Abstract
Cite
Kearns, Michael, and Dana Ron. “Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation”. Neural Computation, vol. 11, no. 6, 1999, pp. 1427-53, https://doi.org/10.1162/089976699300016304.
Kearns, M., & Ron, D. (1999). Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation. Neural Computation, 11(6), 1427-1453. https://doi.org/10.1162/089976699300016304
Kearns M, Ron D. Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation. Neural Computation. 1999;11(6):1427-53.
Journal Categories
Medicine
Internal medicine
Neurosciences
Biological psychiatry
Neuropsychiatry
Science
Mathematics
Instruments and machines
Electronic computers
Computer science
Technology
Electrical engineering
Electronics
Nuclear engineering
Electronics
Technology
Mechanical engineering and machinery
Description

Ensuring reliability in machine learning algorithms. This paper establishes sanity-check bounds on the error of leave-one-out cross-validation, a method for estimating an algorithm's generalization error, a crucial measure of performance. The authors show that the worst-case error of the leave-one-out estimate is not significantly worse than that of the training-error estimate. By introducing a new notion of error stability, they extend the applicability of sanity-check bounds to a wider class of learning algorithms. The paper also shows that error stability is necessary for proving such bounds, and that the bounds depend on the Vapnik-Chervonenkis dimension of the hypothesis class. This work is of particular relevance to *mathematics* and the theory of machine learning.
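To make the two quantities being compared concrete, here is a minimal sketch of the leave-one-out estimate alongside the training-error (resubstitution) estimate, using a toy 1-nearest-neighbor rule on made-up one-dimensional data; the classifier and data are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: leave-one-out vs. training-error estimates.
# The 1-NN rule and the toy data below are hypothetical examples.

def nn_predict(train, x):
    # 1-nearest-neighbor: return the label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def loo_error(data):
    # Leave-one-out estimate: for each point, train on the other n-1
    # points, test on the held-out point, and average the mistakes.
    errors = 0
    for i, (x, y) in enumerate(data):
        held_out = data[:i] + data[i + 1:]
        if nn_predict(held_out, x) != y:
            errors += 1
    return errors / len(data)

def training_error(data):
    # Resubstitution (training) error: test each point against a rule
    # trained on the full sample, including the point itself.
    return sum(nn_predict(data, x) != y for x, y in data) / len(data)

# Toy sample of (feature, label) pairs with two overlapping points.
data = [(0.0, 0), (0.2, 0), (0.55, 0), (0.6, 1), (1.0, 1), (1.2, 1)]
print(loo_error(data), training_error(data))
# → 0.3333333333333333 0.0
```

The toy run illustrates the gap the paper's bounds control: 1-NN has zero training error by construction (each point is its own nearest neighbor), while the leave-one-out estimate reveals the errors made near the class boundary.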

Published in Neural Computation, this article is aligned with the journal's emphasis on theoretical aspects of neural networks and machine learning. By establishing sanity-check bounds for leave-one-out cross-validation, the paper addresses a theoretical challenge related to algorithm evaluation. The study's focus on algorithmic stability and error bounds contributes to the journal’s core theme of advancing the mathematical foundations of neural computation.

References
Citations
Citations Analysis
The first work to cite this article (DOI: 10.1162/153244302760200704) was published in 2000, and the most recent citation comes from a 2024 study (DOI: 10.1162/153244302760200704). The article reached its peak citation count in 2021, with 23 citations. It has been cited in 172 different journals, 17% of which are open access. Among related journals, Remote Sensing cited this research the most, with 7 citations. The chart below illustrates the annual citation trends for this article.
Citations of this article by year