Can support vector machines be improved by controlling the number of support vectors? This paper introduces a new class of support vector algorithms for both regression and classification in which a parameter ν controls the number of support vectors. This control is valuable in its own right, and it also allows the accuracy parameter ε (in regression) and the regularization constant C (in classification) to be eliminated, simplifying algorithm tuning. The paper presents theoretical results on the meaning and choice of ν, together with experimental results demonstrating the algorithms' efficacy. By making the solution's sparsity directly tunable, this work makes support vector machines more versatile and easier to adapt to a variety of machine learning applications.
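The role of ν can be sketched with scikit-learn's Nu-SVM implementation (an assumption here, not the paper's own code): in the ν formulation, ν acts as an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors, so raising ν tends to increase how many training points become support vectors.

```python
# Illustrative sketch of how nu influences the support-vector count,
# using scikit-learn's NuSVC (assumed installed); the dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

# Synthetic binary classification problem
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

for nu in (0.1, 0.3, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf").fit(X, y)
    # Fraction of training points retained as support vectors;
    # in theory this fraction is bounded below by nu.
    frac_sv = clf.support_.size / len(X)
    print(f"nu={nu:.1f}: support-vector fraction = {frac_sv:.2f}")
```

This replaces the hard-to-interpret constant C of the standard soft-margin formulation with a parameter whose value has a direct meaning in terms of the training set.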
Published in Neural Computation, this research aligns with the journal's focus on theoretical and practical advances in neural networks and machine learning. The introduction of a parameter that directly controls the number of support vectors contributes to the ongoing refinement of SVM algorithms, and the work's extensive citation record reflects its impact and continued relevance.