Completely Derandomized Self-Adaptation in Evolution Strategies

Article Properties
  • Language
    English
  • Publication Date
    2001/06/01
  • Indian UGC (Journal)
  • References
    11
  • Citations
    1,695
  • Nikolaus Hansen, Technische Universität Berlin, Fachgebiet für Bionik, Sekr. ACK 1, Ackerstr. 71-76, 13355 Berlin, Germany
  • Andreas Ostermeier, Technische Universität Berlin, Fachgebiet für Bionik, Sekr. ACK 1, Ackerstr. 71-76, 13355 Berlin, Germany
Abstract
Cite
Hansen, Nikolaus, and Andreas Ostermeier. “Completely Derandomized Self-Adaptation in Evolution Strategies”. Evolutionary Computation, vol. 9, no. 2, 2001, pp. 159-95, https://doi.org/10.1162/106365601750190398.
Hansen, N., & Ostermeier, A. (2001). Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation, 9(2), 159-195. https://doi.org/10.1162/106365601750190398
Hansen N, Ostermeier A. Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation. 2001;9(2):159-95.
Journal Categories
Science
Mathematics
Instruments and machines
Electronic computers
Computer science
Computer software
Technology
Electrical engineering
Electronics
Nuclear engineering
Computer engineering
Computer hardware
Description

Can evolution strategies be enhanced without randomness? This paper introduces two powerful methods for mutation distribution adaptation: derandomization and cumulation. It reviews the shortcomings of mutative strategy parameter control and explores two derandomization levels, establishing fundamental demands for self-adaptation in normal mutation distributions. The paper demonstrates that using arbitrary normal mutation distributions is equivalent to implementing a general linear problem encoding. This rigorously pursued approach results in a completely derandomized self-adaptation scheme, adapting arbitrary normal mutation distributions and meeting previously stated demands. Further improved by cumulation, utilizing an evolution path over single search steps, the developed Covariance Matrix Adaptation (CMA) evolution strategy exhibits local and global search properties, with performance comparable to existing methods on perfectly scaled functions. Simulations reveal significant speed improvements on badly scaled, non-separable functions, indicating that CMA dramatically enhances evolution strategies for complex optimization problems.
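The cumulation idea the abstract describes can be illustrated in isolation: an evolution path accumulates successive selected steps, and its length drives the step-size update. The sketch below is a minimal (μ/μ, λ)-ES with cumulative step-size adaptation only (no covariance matrix adaptation), written under simplifying assumptions — equal recombination weights, the classic cumulation rate 4/(n+4), and an ad-hoc damping of c_sigma/2 — so it is an illustrative toy, not the CMA-ES of the paper.

```python
import math
import random

def csa_es(f, x0, sigma0, iters=500, lam=10, seed=1):
    """Minimal (mu/mu, lambda)-ES with cumulative step-size adaptation.
    Illustrative sketch only; the full CMA-ES also adapts a covariance matrix."""
    rng = random.Random(seed)
    n = len(x0)
    mu = lam // 2
    x = list(x0)
    sigma = sigma0
    c_sigma = 4.0 / (n + 4)                 # cumulation rate for the path
    p = [0.0] * n                           # evolution path
    chi_n = math.sqrt(n) * (1 - 1/(4*n) + 1/(21*n*n))  # approx. E||N(0,I)||
    for _ in range(iters):
        # sample lambda offspring from an isotropic normal distribution
        offspring = []
        for _ in range(lam):
            z = [rng.gauss(0, 1) for _ in range(n)]
            y = [x[i] + sigma * z[i] for i in range(n)]
            offspring.append((f(y), z))
        offspring.sort(key=lambda t: t[0])
        # recombine the mu best steps (equal weights for simplicity)
        zw = [sum(o[1][i] for o in offspring[:mu]) / mu for i in range(n)]
        x = [x[i] + sigma * zw[i] for i in range(n)]
        # accumulate the evolution path and adapt sigma:
        # a longer-than-expected path means steps point in a consistent
        # direction, so sigma grows; a short path shrinks it
        norm = math.sqrt(c_sigma * (2 - c_sigma) * mu)
        p = [(1 - c_sigma) * p[i] + norm * zw[i] for i in range(n)]
        p_len = math.sqrt(sum(v * v for v in p))
        sigma *= math.exp((c_sigma / 2) * (p_len / chi_n - 1))
    return x, sigma

sphere = lambda v: sum(t * t for t in v)    # simple test function
xbest, sbest = csa_es(sphere, [3.0, -2.0, 1.0], 1.0)
```

On the well-scaled sphere function this sketch converges quickly; the paper's contribution is that adapting the full covariance matrix (with the same cumulation mechanism) extends such behavior to badly scaled, non-separable problems.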

Published in Evolutionary Computation, this paper fits squarely within the journal's focus on computational methods inspired by natural evolution. The research on derandomized self-adaptation and covariance matrix adaptation directly addresses the journal's key themes of algorithm optimization and adaptive search strategies. The number of recent citations points to a substantial impact on the field.

References
Citations
Citations Analysis
The first work to cite this article, "Optimization of dynamic neural fields", was published in 2001; the most recent citation comes from a 2024 study, also titled "Optimization of dynamic neural fields". The article reached its citation peak in 2022, with 168 citations. It has been cited in 632 different journals, 16% of which are open access. Among these, Applied Soft Computing cited this research most often, with 75 citations. The chart below illustrates the annual citation trend for this article.
Citations of this article by year