We study the problem of learning under arbitrary distribution shift, where
the learner is trained on a labeled set from one distribution but evaluated on
a different, potentially adversarially generated test distribution. We focus on
two frameworks: PQ learning [Goldwasser, A. Kalai, Y