Conformal prediction (CP) is a framework for quantifying the uncertainty of machine
learning classifiers, including deep neural networks. Given a test example
and a trained classifier, CP produces a prediction set of candidate labels with
a user-specified coverage guarantee (i.e., the true class label is