[7] Deep InterBoost Networks for Small-sample Image Classification

Published in Neurocomputing, June 30, 2020

Recommended citation: Xiaoxu Li, Dongliang Chang, Zhanyu Ma\*, Zheng-Hua Tan, Jing-Hao Xue, Jie Cao, and Jun Guo. "Deep InterBoost Networks for Small-sample Image Classification." Neurocomputing, 2020.

Figure: overview of the InterBoost framework.

Abstract: Deep neural networks have recently shown excellent performance on numerous image classification tasks. These networks often need to estimate a large number of parameters and require a large amount of training data. When the amount of training data is small, however, a network with high flexibility quickly overfits the training data, resulting in a large model variance and poor generalization. To address this problem, we propose a new, simple yet effective ensemble method called InterBoost for small-sample image classification. In the training phase, InterBoost first randomly generates two sets of complementary weights for the training data, which are used to separately train two base networks of the same structure. The two sets of complementary weights are then updated to refine the training of the networks through interaction between the two previously trained base networks. This interactive training process continues iteratively until a stop criterion is met. In the testing phase, the outputs of the two networks are combined to obtain one final score for classification. Experimental results on four small-sample datasets, UIUC-Sports, LabelMe, 15Scenes and Caltech101, demonstrate that the proposed ensemble method outperforms existing ones. Moreover, results from Wilcoxon signed-rank tests show that our method is statistically significantly better than the compared methods. Detailed analysis is also provided for an in-depth understanding of the proposed method.

Keywords: Ensemble learning, Deep neural network, Small-sample image classification, Overfitting.
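To make the training procedure concrete, below is a minimal sketch of the InterBoost loop described in the abstract, written in PyTorch. The toy base network, the fixed number of boosting rounds, and the exact form of the complementary weight update are illustrative assumptions, not the paper's formulation; the paper defines its own update rule and stop criterion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of the InterBoost procedure from the abstract.
# The base network, round count, and weight-update rule are assumptions.

class BaseNet(nn.Module):
    """A small stand-in for the two same-structure base networks."""
    def __init__(self, in_dim=32, n_classes=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                nn.Linear(64, n_classes))

    def forward(self, x):
        return self.fc(x)

def train_weighted(net, x, y, w, epochs=20, lr=1e-3):
    """Train one base network with per-sample weights w."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = (w * F.cross_entropy(net(x), y, reduction="none")).mean()
        loss.backward()
        opt.step()

def interboost(x, y, rounds=5, n_classes=4):
    n = x.shape[0]
    # 1) Randomly generate complementary weights: w1 + w2 = 1 per sample.
    w1 = torch.rand(n)
    w2 = 1.0 - w1
    net1 = BaseNet(x.shape[1], n_classes)
    net2 = BaseNet(x.shape[1], n_classes)
    for _ in range(rounds):  # fixed rounds stand in for the stop criterion
        # 2) Train the two base networks separately with their weights.
        train_weighted(net1, x, y, w1)
        train_weighted(net2, x, y, w2)
        # 3) Interaction step (assumed form): each sample's weight for a
        #    network is that network's share of the two per-sample losses,
        #    so the pair stays complementary after the update.
        with torch.no_grad():
            l1 = F.cross_entropy(net1(x), y, reduction="none")
            l2 = F.cross_entropy(net2(x), y, reduction="none")
            w1 = l1 / (l1 + l2 + 1e-12)
            w2 = 1.0 - w1
    return net1, net2

def predict(net1, net2, x):
    """Testing phase: combine the two networks' outputs into one score."""
    with torch.no_grad():
        return (F.softmax(net1(x), -1) + F.softmax(net2(x), -1)) / 2

# Toy usage with random data (200 samples, 32-dim features, 4 classes):
x = torch.randn(200, 32)
y = torch.randint(0, 4, (200,))
net1, net2 = interboost(x, y)
probs = predict(net1, net2, x)
```

Note that in this assumed update, the two weight sets always sum to one for every sample, which is what keeps the two base networks diverse: each network is pushed toward the samples it currently fits worse, while its partner covers the rest.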

Download paper here
Download code here