Can evolving neural network topologies lead to significant performance gains? This paper presents a method, NeuroEvolution of Augmenting Topologies (NEAT), that outperforms fixed-topology methods on challenging reinforcement learning tasks. NEAT achieves its efficiency through three components: a principled crossover of differing topologies based on historical markings, protection of structural innovation through speciation, and incremental growth from minimal structure. A series of ablation studies confirms that each component is necessary both to the system as a whole and to the others, and that together they yield significantly faster learning. NEAT also opens the possibility of evolving increasingly complex solutions over generations, strengthening the analogy with biological evolution. The method is a significant contribution to genetic algorithms (GAs), showing how evolution can simultaneously optimize and complexify solutions.
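To make the crossover idea concrete, the following is a minimal Python sketch of aligning two genomes by their innovation numbers (the historical markings NEAT assigns when a gene first appears). The class and function names here are illustrative assumptions, not the paper's or any library's actual API; the sketch shows only the alignment logic, with matching genes inherited at random and disjoint or excess genes taken from the fitter parent.

```python
import random

# Hypothetical minimal gene representation; field names are illustrative.
class ConnectionGene:
    def __init__(self, innovation, in_node, out_node, weight, enabled=True):
        self.innovation = innovation  # historical marking assigned at creation
        self.in_node = in_node
        self.out_node = out_node
        self.weight = weight
        self.enabled = enabled

def crossover(fitter_parent, other_parent):
    """Align two genomes by innovation number.

    Genes sharing an innovation number are 'matching' and inherited at
    random from either parent; genes present only in one genome (disjoint
    or excess) are inherited from the fitter parent, as in NEAT.
    """
    other_genes = {g.innovation: g for g in other_parent}
    child = []
    for gene in fitter_parent:
        match = other_genes.get(gene.innovation)
        chosen = random.choice([gene, match]) if match else gene
        child.append(ConnectionGene(chosen.innovation, chosen.in_node,
                                    chosen.out_node, chosen.weight,
                                    chosen.enabled))
    return child
```

Speciation then groups genomes by a compatibility distance computed over these same markings, which is what protects newly added structure long enough for its weights to be optimized.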
Published in Evolutionary Computation, this paper fits the journal's focus on computational methods inspired by biological evolution. NEAT contributes to the field by demonstrating how evolving neural network topologies, rather than fixing them in advance, can improve learning efficiency, a topic squarely within the journal's scope.