Y. Duan, J. Lu, and J. Zhou, UniformFace: Learning Deep Equidistributed Representation for Face Recognition, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 3415-3424, 2019.

Abstract:

In this paper, we propose a new supervision objective named uniform loss to learn deep equidistributed representations for face recognition. Most existing methods aim to learn discriminative face features, encouraging large inter-class distances and small intra-class variations. However, they ignore the distribution of faces in the holistic feature space, which may lead to severe locality and imbalance. With the prior that faces lie on a hypersphere manifold, we impose an equidistributed constraint by uniformly spreading the class centers on the manifold, so that the minimum distance between class centers can be maximized through complete exploitation of the feature space. To this end, we consider the class centers as like charges on the surface of a hypersphere with inter-class repulsion, and minimize the total electric potential energy as the uniform loss. Extensive experimental results on the MegaFace Challenge I, IARPA Janus Benchmark A (IJB-A), YouTube Faces (YTF) and Labeled Faces in the Wild (LFW) datasets show the effectiveness of the proposed uniform loss.
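
Note: the following is a minimal PyTorch sketch of the potential-energy idea described in the abstract, not the authors' released implementation. The function name uniform_loss, the eps term, and the explicit L2 normalization of the centers are assumptions made for illustration only.

import torch

def uniform_loss(centers, eps=1e-6):
    # Sketch: treat L2-normalized class centers as like charges on a unit
    # hypersphere and penalize the mean pairwise inverse distance, i.e. a
    # Coulomb-style potential energy that repels centers from one another.
    c = torch.nn.functional.normalize(centers, dim=1)   # project centers onto the unit hypersphere
    dist = torch.cdist(c, c, p=2)                       # pairwise Euclidean distances, shape (M, M)
    m = c.size(0)
    off_diag = ~torch.eye(m, dtype=torch.bool, device=c.device)
    # average potential energy over ordered pairs (i != j); eps avoids division by zero
    return (1.0 / (dist[off_diag] + eps)).mean()

# Usage example: 10 class centers in a 128-dimensional feature space
centers = torch.randn(10, 128, requires_grad=True)
loss = uniform_loss(centers)
loss.backward()

In practice such a term would be added, with a weighting coefficient, to a standard classification loss so that the centers spread out while the features stay discriminative.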

Bibtex:

@inproceedings{duan2019uniformface,
  title={UniformFace: Learning Deep Equidistributed Representation for Face Recognition},
  author={Duan, Yueqi and Lu, Jiwen and Zhou, Jie},
  booktitle={CVPR},
  pages={3415--3424},
  year={2019}
}