Dual Cross-Entropy Loss for Small-Sample Fine-Grained Vehicle Classification

Published on January 28, 2019

Recommended citation: Xiaoxu Li, Liyun Yu, Dongliang Chang, Zhanyu Ma\*, and Jie Cao. Dual cross-entropy loss for small-sample fine-grained vehicle classification[J]. IEEE Transactions on Vehicular Technology, 2019, 68(5): 4204-4212.

Abstract: Fine-grained vehicle classification is a challenging topic in computer vision due to its high intraclass variance and low interclass variance. Recently, considerable progress has been made in fine-grained vehicle classification owing to the huge success of deep neural networks. Most neural-network-based studies of fine-grained vehicle classification focus on the network structure to improve classification performance. In contrast to existing work, we focus on the loss function of the network. We add a regularization term to the cross-entropy loss and propose a new loss function, the Dual Cross-Entropy Loss. The regularization term places a constraint on the probability that a data point is assigned to a class other than its ground-truth class, which can alleviate the vanishing of the gradient when the value of the cross-entropy loss is close to zero. To demonstrate the effectiveness of our loss function, we perform two sets of experiments. The first set is conducted on a small-sample fine-grained vehicle classification dataset, the Stanford Cars-196 dataset. The second set is conducted on two small-sample datasets, the LabelMe dataset and the UIUC-Sports dataset, as well as on one large-sample dataset, the CIFAR-10 dataset. The experimental results show that the proposed loss function improves fine-grained vehicle classification performance and also performs well on three other general image classification tasks.
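To make the idea concrete, here is a minimal numpy sketch of a loss of this shape: standard cross-entropy on the ground-truth class plus a weighted penalty on the probability mass assigned to every other class. The exact regularizer and its weighting in the paper may differ; the `-log(1 - p_k)` form and the `beta` hyperparameter below are illustrative assumptions, chosen because the penalty's gradient stays nonzero even when the cross-entropy term is near zero.

```python
import numpy as np

def dual_cross_entropy(logits, target, beta=0.1):
    """Illustrative sketch of a dual cross-entropy-style loss.

    logits: 1-D array of unnormalized class scores
    target: index of the ground-truth class
    beta:   hypothetical weight on the regularization term
    """
    # Numerically stable softmax (subtract the max logit first).
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()

    # Standard cross-entropy on the ground-truth class.
    ce = -np.log(p[target])

    # Assumed regularizer: penalize probability assigned to every
    # non-ground-truth class. Its gradient w.r.t. p_k is 1/(1 - p_k),
    # which does not vanish as the cross-entropy term approaches zero.
    mask = np.ones_like(p, dtype=bool)
    mask[target] = False
    penalty = -np.log(1.0 - p[mask]).sum()

    return ce + beta * penalty
```

With `beta = 0` this reduces to plain cross-entropy; a positive `beta` adds a strictly positive penalty whenever any probability mass lands on the wrong classes, which is the mechanism the abstract credits with keeping gradients alive late in training.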

Download paper here