Attackers incur varying costs when manipulating the features of security
classifiers. These costs are asymmetric across features and across the
directions of change, which existing cost models based on $L_p$-norm
robustness cannot precisely capture. In this paper, we utilize such domain
knowledge to increase the attack cost of evading classifiers, spec