Can a game-theoretic approach improve machine learning? This paper explores the theoretical foundations of adaptive reweighting and combining (arcing) algorithms, such as Adaboost, by framing prediction as a game. This approach provides new insights into how these algorithms reduce generalization error and yields new bounds for existing algorithms. The study formulates prediction as a game between two players: one selects instances from a training set, and the other forms a convex combination of predictors. Existing arcing algorithms are shown to converge to good game strategies, with a minimax theorem serving as an essential ingredient in the proofs. Schapire, Freund, Bartlett, and Lee (1997) explained Adaboost's success in terms of its ability to produce high margins; however, an empirical comparison of Adaboost with the optimal arcing algorithm shows that this explanation is incomplete. This suggests the need for further research into the mechanisms driving the success of arcing algorithms. This research contributes to algorithm design and machine learning theory.
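To make the game interpretation concrete, the following is a minimal sketch of an Adaboost-style arcing loop, assuming binary labels in {-1, +1} and a hypothetical weak learner `fit_stump` (a decision stump chosen here only for illustration, not from the paper). The instance weights `w` play the role of the first player's mixed strategy over the training set, and the normalized coefficients `alphas` define the second player's convex combination of predictors.

```python
import numpy as np

def fit_stump(X, y, w):
    """Pick the (feature, threshold, sign) stump with lowest weighted error."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    err, j, t, s = best
    return err, lambda Z: s * np.where(Z[:, j] <= t, 1, -1)

def adaboost(X, y, rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)          # player 1: distribution over instances
    stumps, alphas = [], []
    for _ in range(rounds):
        err, h = fit_stump(X, y, w)
        err = max(err, 1e-12)        # guard against a perfect stump
        if err >= 0.5:
            break                    # weak-learning assumption violated
        a = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-a * y * h(X))   # upweight instances the stump misclassifies
        w /= w.sum()
        stumps.append(h)
        alphas.append(a)
    alphas = np.array(alphas) / np.sum(alphas)  # player 2: convex combination
    def predict(Z):
        return np.sign(sum(a * h(Z) for a, h in zip(alphas, stumps)))
    return predict, alphas

# Example usage on toy data:
# rng = np.random.default_rng(0)
# X = rng.normal(size=(200, 2))
# y = np.sign(X[:, 0] + X[:, 1])
# predict, alphas = adaboost(X, y)
```

Normalizing the coefficients so they sum to one leaves the sign of the prediction unchanged but makes explicit that the combined predictor is a convex combination, which is the form the game-theoretic analysis requires.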
Published in Neural Computation, this paper on arcing algorithms is directly relevant to the journal's focus on the theoretical and computational foundations of neural networks and machine learning. Its game-theoretic analysis and algorithmic results fit squarely within the journal's scope, which emphasizes original theoretical innovation and algorithmic advances.