Model Inversion (MI) attacks aim to reconstruct private training data by
abusing access to machine learning models. Contemporary MI attacks have
achieved impressive attack performance, posing serious threats to privacy.
Meanwhile, all existing MI defense methods rely on regularization that is in
direct conflict with the training objective, resulting in noticeable degradation in model utility.