Abstract

Face recognition systems generally store features (called embeddings) extracted from each face image during the enrollment stage and then compare newly extracted embeddings with the stored embeddings during the recognition stage. In this paper, we focus on blackbox face reconstruction from facial embeddings stored in a face recognition database. We use a convolutional neural network (CNN) to reconstruct face images and train our network with a multi-term loss function. In particular, we use a different feature extractor trained for face recognition (of which the adversary has whitebox knowledge) to minimize the distance between the embeddings extracted from the original and reconstructed face images. We evaluate our method in blackbox attacks against five state-of-the-art face recognition models on the MOBIO and LFW datasets. Our experimental results show that our proposed method outperforms previous face reconstruction methods in the literature. The source code of our experiments is publicly available to facilitate the reproducibility of our work.
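As a rough illustration of the kind of training objective described above, the following PyTorch sketch combines a pixel-level reconstruction term with an embedding-distance term computed by a frozen whitebox feature extractor. The class name, loss weights, and the specific choice of L1 and cosine terms are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch (not the paper's exact code): a multi-term loss combining a
# pixel-wise reconstruction term with an embedding-distance term computed by a
# frozen whitebox face feature extractor available to the adversary.
# `whitebox_extractor`, the loss weights, and the L1/cosine choices are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTermLoss(nn.Module):
    def __init__(self, whitebox_extractor: nn.Module,
                 w_pixel: float = 1.0, w_embed: float = 1.0):
        super().__init__()
        # Freeze the whitebox extractor: it only provides gradients to the
        # reconstruction network, its own weights are not updated.
        self.extractor = whitebox_extractor.eval()
        for p in self.extractor.parameters():
            p.requires_grad_(False)
        self.w_pixel = w_pixel
        self.w_embed = w_embed

    def forward(self, reconstructed: torch.Tensor,
                original: torch.Tensor) -> torch.Tensor:
        # Pixel-level term: keep the reconstructed image close to the original.
        pixel_loss = F.l1_loss(reconstructed, original)
        # Embedding term: minimize the (cosine) distance between embeddings of
        # the original and reconstructed images under the whitebox extractor.
        e_rec = F.normalize(self.extractor(reconstructed), dim=-1)
        e_org = F.normalize(self.extractor(original), dim=-1)
        embed_loss = (1.0 - (e_rec * e_org).sum(dim=-1)).mean()
        return self.w_pixel * pixel_loss + self.w_embed * embed_loss
```

In training, this loss would be applied to the output of the reconstruction CNN and the corresponding original face image, with the embedding term steering the reconstruction toward faces that the whitebox recognition model maps to similar embeddings.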