Data-free knowledge distillation (DFKD) has recently attracted increasing attention from research communities, owing to its capability to compress a model using only synthetic data. Despite the encouraging results achieved, state-of-the-art DFKD methods still suffer from inefficient data synthesis, making the data-free training process extremely time-consuming and thus inapplicable for large...