Research on Face Attribute Editing Method Based on Deep Neural Network
DOI: 10.23977/jfsst.2021.010802
Caoshuai Kang 1
1 School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China
Corresponding Author: Caoshuai Kang
With the development of computer vision technology, the application of face attribute editing has expanded into real-world scenarios. Face attribute editing aims to change one or more attributes of an image, such as hair color, skin color, or age, while keeping the other attributes unchanged. The key challenge of attribute editing is to maintain both the quality of the edited image and the accuracy of the target attributes. Most existing methods focus on toggling the presence or absence of a given attribute and cannot add or remove attributes according to a specific template, such as bangs of a particular shape. Moreover, when multiple attributes are edited simultaneously, existing methods have difficulty decoupling the attributes in feature space, so multi-attribute editing suffers from defects such as artifacts, irrelevant facial deformations, and background changes. To address these problems, this paper studies face attribute editing methods based on deep neural networks.
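The editing principle described above can be sketched as an encoder-decoder: the image is mapped to an attribute-independent latent code, and the edit is performed by decoding that same code with a different attribute vector. The tiny linear "networks", dimensions, and attribute labels below are illustrative assumptions, not the paper's actual architecture (real systems use deep convolutional GANs such as those in the cited works).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for the toy sketch (not from the paper).
IMG_DIM, LATENT_DIM, N_ATTRS = 16, 8, 3

# Fixed random linear maps stand in for the trained encoder/decoder.
W_enc = rng.standard_normal((LATENT_DIM, IMG_DIM)) * 0.1
W_dec = rng.standard_normal((IMG_DIM, LATENT_DIM + N_ATTRS)) * 0.1

def encode(image):
    """Map an image to an attribute-independent latent code."""
    return W_enc @ image

def decode(latent, attrs):
    """Reconstruct an image from a latent code plus a target attribute vector."""
    return W_dec @ np.concatenate([latent, attrs])

def edit_attributes(image, target_attrs):
    """Edit attributes by re-decoding the same latent code with new attributes,
    so non-edited content is carried by the latent code, not the attribute input."""
    return decode(encode(image), target_attrs)

image = rng.standard_normal(IMG_DIM)
src = np.array([1.0, 0.0, 0.0])  # e.g. [dark hair, no bangs, young] (illustrative)
tgt = np.array([0.0, 0.0, 0.0])  # flip the hair-color attribute, keep the rest

reconstructed = edit_attributes(image, src)   # decode with original attributes
edited = edit_attributes(image, tgt)          # decode with edited attributes
```

Because only the attribute vector changes between the two decodes, any difference between `reconstructed` and `edited` is driven purely by the attribute input; the decoupling problem the abstract describes is precisely that, in real models, the latent code and attribute vector are not this cleanly separated.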
KEYWORDS: Face attribute editing, Deep neural network, Deep learning, Transfer learning
CITE THIS PAPER
Caoshuai Kang. Research on Face Attribute Editing Method Based on Deep Neural Network. Journal of Frontiers of Society, Science and Technology (2021) Vol. 1: 12-15. DOI: http://dx.doi.org/10.23977/jfsst.2021.010802