Abstract

Using huge training datasets can be costly and inconvenient. This article explores various data distillation techniques that can reduce the amount of data required to successfully train deep networks. Inspired by recent ideas, we suggest new