Dataset distillation aims to minimize the time and memory needed to train deep networks on large datasets by creating a small set of synthetic images whose generalization performance is similar to that of the full dataset.
A dataset distillation technique that uses the learned prior of deep generative models, together with a new optimization algorithm, improves cross-architecture generalization while synthesizing only a few images from a large dataset.
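The sketch below illustrates the general idea under stated assumptions: a gradient-matching distillation objective (one common choice, not necessarily the paper's exact algorithm) is optimized over latent codes of a frozen generative model, so the synthetic images are constrained to the generator's learned prior. All names here (TinyGenerator, TinyConvNet, grad_match_loss) are hypothetical stand-ins; in practice the generator would be a pretrained GAN and the real batches would come from the actual dataset.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Hypothetical stand-in for a frozen, pretrained deep generative model."""
    def __init__(self, z_dim=64, img_ch=3, img_size=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, img_ch * img_size * img_size), nn.Tanh())
        self.shape = (img_ch, img_size, img_size)
    def forward(self, z):
        return self.net(z).view(z.size(0), *self.shape)

class TinyConvNet(nn.Module):
    """Hypothetical stand-in for the downstream classifier being trained."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.fc = nn.Linear(16, num_classes)
    def forward(self, x):
        h = F.relu(self.conv(x)).mean(dim=(2, 3))  # global average pooling
        return self.fc(h)

def grad_match_loss(net, real_x, real_y, syn_x, syn_y):
    """Distance between classifier gradients on real vs. synthetic batches."""
    params = [p for p in net.parameters() if p.requires_grad]
    g_real = torch.autograd.grad(F.cross_entropy(net(real_x), real_y), params)
    # create_graph=True lets the loss backpropagate into the latent codes z
    g_syn = torch.autograd.grad(F.cross_entropy(net(syn_x), syn_y), params, create_graph=True)
    return sum(F.mse_loss(gs, gr.detach()) for gs, gr in zip(g_syn, g_real))

G = TinyGenerator()
for p in G.parameters():  # freeze the generative prior; only latents are optimized
    p.requires_grad_(False)

ipc, num_classes, z_dim = 1, 10, 64  # "images per class" budget
z = torch.randn(ipc * num_classes, z_dim, requires_grad=True)
syn_y = torch.arange(num_classes).repeat(ipc)
opt = torch.optim.Adam([z], lr=1e-2)

for step in range(100):
    net = TinyConvNet(num_classes)  # fresh random network each step, as in gradient matching
    real_x = torch.randn(128, 3, 32, 32)  # placeholder for a batch of real images
    real_y = torch.randint(0, num_classes, (128,))
    loss = grad_match_loss(net, real_x, real_y, G(z), syn_y)
    opt.zero_grad()
    loss.backward()  # gradients flow through the frozen generator into z
    opt.step()

distilled_images = G(z).detach()  # the small synthetic training set
```

Optimizing in latent space rather than raw pixel space is what injects the generative prior: every candidate synthetic image stays on the generator's learned image manifold, which is one plausible reason such distilled sets transfer better across unseen architectures.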