With the widespread deployment of face recognition, a key weakness has gradually emerged: it can be attacked. It is therefore important to study how face recognition networks are vulnerable to attack. In this paper, we focus on automatically generating adversarial images to demonstrate the vulnerability of face authentication systems to such images in practical scenarios. We propose AdvGen, an automated Generative Adversarial Network that simulates print and replay attacks and generates adversarial images capable of fooling state-of-the-art Presentation Attack Detection (PAD) systems, achieving an attack success rate of up to 82.01%. We extensively evaluate AdvGen on four datasets and against ten state-of-the-art PADs, and further validate the effectiveness of the attack through experiments in a realistic physical setting.