Models learned from real-world data often inherit the biases present in that
data. Such models can propagate existing systemic human biases and ultimately
lead to inequitable treatment of people, especially minorities. To characterize
bias in learned classifiers, e
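One common way to characterize bias in a learned classifier (a sketch, not necessarily the method this work uses) is the demographic parity difference: the gap in positive-prediction rates between demographic groups. The function and toy data below are hypothetical, for illustration only.

```python
def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rate between two groups (0 and 1).

    preds:  binary classifier outputs (1 = positive outcome)
    groups: group membership (0 or 1) for each prediction
    """
    rate = {}
    for g in (0, 1):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(members) / len(members)
    return abs(rate[0] - rate[1])

# Toy example: six individuals, two groups of three each.
preds  = [1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 1, 1, 1]
# Group 0 receives positive predictions at rate 2/3, group 1 at 1/3,
# so the demographic parity difference is 1/3.
print(demographic_parity_difference(preds, groups))
```

A difference of zero would indicate that both groups receive positive predictions at the same rate; larger values indicate greater disparity under this metric.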