Dataset distillation (DD) offers a compelling approach in computer vision: it aims to condense extensive datasets into much smaller synthetic versions without sacrificing much model performance. In this paper, we continue the study of DD methods by revisiting its core concept.
We propose a dataset distillation technique that leverages the learned prior of deep generative models together with a new optimization algorithm; by synthesizing only a few images from a large dataset, it improves cross-architecture generalization.
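To make the core idea of dataset distillation concrete, the following is a minimal toy sketch (not the paper's generative-prior method): for a linear regression model, a handful of synthetic input-label pairs is constructed so that training on them recovers the same weights as training on the full dataset. All names and the setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Large" dataset: 1000 samples, 3 features, linear ground truth + noise.
n, d = 1000, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Teacher: model trained on the full data (ordinary least squares).
w_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# Distil into d synthetic samples: pick inputs that span the feature
# space and label them with the teacher's predictions, so that least
# squares on the tiny synthetic set yields the same weights.
Xs = np.eye(d)        # 3 synthetic inputs standing in for 1000 real ones
ys = Xs @ w_full      # synthetic labels produced by the teacher

w_distilled, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

print(np.allclose(w_full, w_distilled))
```

In real DD the synthetic images themselves are optimized (e.g. by matching gradients or training trajectories) rather than constructed in closed form, but the goal is the same: a tiny synthetic set whose trained model matches the full-data model.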