Abstract: In neural networks trained with the softmax cross entropy loss, the output probability for each class is computed from the inner product between that class's parameter vector in the last layer and the hidden feature of the sample. The final output of the network is therefore affected by the L2-norm of each class's parameter vector. Taking binary classification as an example, if the parameter vector of one class has a large L2-norm, the decision boundary moves toward the class whose parameter vector has the smaller L2-norm, so that samples are more easily assigned to the class with the larger L2-norm. Based on this observation, this paper proposes a new softmax cross entropy loss that adjusts the position of the decision boundary so that it is not biased toward any class. Experimental results on the LabelMe dataset and the UIUC-Sports dataset show that the proposed loss outperforms the standard softmax cross entropy loss.
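The norm-bias effect described above can be illustrated with a minimal numeric sketch. The weight vectors, feature, and values below are hypothetical choices for illustration, not taken from the paper: two class parameter vectors point in orthogonal directions but have different L2-norms, and a feature that makes an equal angle with both directions is nevertheless assigned to the large-norm class.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical last-layer class parameter vectors (illustrative values):
# orthogonal directions, but w1 has the larger L2-norm.
w1 = np.array([3.0, 0.0])   # ||w1|| = 3
w2 = np.array([0.0, 1.0])   # ||w2|| = 1

# A hidden feature making a 45-degree angle with both parameter vectors,
# i.e. angularly neutral between the two classes.
h = np.array([1.0, 1.0])

logits = np.array([w1 @ h, w2 @ h])   # [3.0, 1.0]
probs = softmax(logits)

# The decision boundary w1·h = w2·h is the line y = 3x, which lies close
# to the direction of w2: the angularly neutral feature is pulled into
# the region of the large-norm class.
print(int(np.argmax(probs)))
```

Here the boundary between the two classes is the set of features where the logits tie; scaling up one weight vector rotates that boundary toward the smaller-norm class, enlarging the decision region of the larger-norm class, which is exactly the bias the proposed loss aims to remove.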