Need more data for your AI models? This paper introduces SSGAN, a semantic similarity-based Generative Adversarial Network (GAN) for image augmentation in small-sample scenarios. Image sample augmentation refers to strategies for increasing sample size by modifying existing data or synthesizing new data from it. SSGAN addresses the limitations of traditional GANs when they are trained on datasets with few samples. The method combines a shallow pyramid-structured GAN backbone, which enhances feature extraction, with a feature selection module based on high-dimensional semantics that optimizes the loss function. Evaluated on the "Flower" and "Animal" datasets, SSGAN outperforms classical GAN methods, improving FID by 18.6 and IS by 1.4. Datasets augmented with SSGAN also significantly boost downstream classifier performance, yielding a 2.2% accuracy gain over the best competing method. SSGAN demonstrates strong generalization and robustness, providing a valuable tool for improving downstream learning tasks in small-sample image recognition.
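To make the idea of a semantics-driven loss more concrete, the sketch below shows one way a high-dimensional semantic similarity term could be added to a standard GAN generator objective. This is a minimal illustration, not the paper's published implementation: the ResNet-18 feature extractor, the cosine-similarity formulation, and the `lambda_sem` weight are all assumptions made for the example.

```python
# Minimal sketch (PyTorch): adding a semantic-similarity term to a GAN
# generator loss, in the spirit of SSGAN's high-dimensional semantic
# comparison. Feature extractor, layer choice, and weighting are
# illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class SemanticSimilarityLoss(nn.Module):
    """Cosine-similarity loss between high-level features of real and generated images."""

    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)  # pretrained weights optional
        # Keep everything up to (and including) global average pooling as the extractor.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.features.parameters():
            p.requires_grad = False  # frozen semantic feature extractor

    def forward(self, fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
        f_fake = self.features(fake).flatten(1)  # (N, 512) semantic embeddings
        f_real = self.features(real).flatten(1)
        # Penalize low cosine similarity between generated and real semantics.
        return (1.0 - F.cosine_similarity(f_fake, f_real, dim=1)).mean()


# Hypothetical use inside a generator update step:
#   adv_loss = bce(discriminator(fake), torch.ones(batch_size, 1))
#   sem_loss = SemanticSimilarityLoss()(fake, real)
#   g_loss   = adv_loss + lambda_sem * sem_loss   # lambda_sem: assumed weighting
```

In such a setup, the semantic term pulls generated samples toward the real data's feature distribution, which is one plausible way a similarity-based loss could help in small-sample regimes.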
Appearing in Neural Processing Letters, this paper fits the journal's scope of publishing research on neural networks and related areas of computer science. SSGAN, a semantic similarity-based GAN, falls squarely within that remit, with image augmentation being a key application area in neural processing and machine learning.