An ultimate objective in continual learning is to preserve the knowledge learned on preceding tasks while learning new tasks. To mitigate forgetting of prior knowledge, we propose a novel distillation technique that takes into account the manifold structure of the latent/output space of a neural network. To achieve this, we approximate the data manifold up to its first order, hence benefiting from linear subspaces to model the structure and maintain concepts...
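As a rough illustration of the idea (a minimal sketch, not the paper's exact formulation), the code below approximates the latent manifold to first order by taking the top-k singular vectors of a batch of features, and penalizes the drift of the resulting linear subspace between a frozen previous-task model and the current model. All names here (`subspace`, `manifold_distillation_loss`, the rank `k`) are hypothetical, introduced only for this sketch.

```python
import torch

def subspace(features: torch.Tensor, k: int) -> torch.Tensor:
    """Top-k left singular vectors of an (n_samples x dim) feature batch,
    i.e. a first-order (linear) approximation of the latent manifold."""
    # SVD of the transposed batch: columns of U span the feature subspace.
    U, _, _ = torch.linalg.svd(features.T, full_matrices=False)
    return U[:, :k]  # (dim x k) orthonormal basis; requires k <= min(dim, n)

def manifold_distillation_loss(feats_old: torch.Tensor,
                               feats_new: torch.Tensor,
                               k: int = 8) -> torch.Tensor:
    """Penalize drift between the subspaces spanned by the old (frozen)
    and current models' latent features on the same mini-batch."""
    U_old = subspace(feats_old.detach(), k)  # old model gives no gradients
    U_new = subspace(feats_new, k)
    # Compare orthogonal projectors P = U U^T: the Frobenius distance
    # between projectors vanishes iff the two subspaces coincide.
    P_old = U_old @ U_old.T
    P_new = U_new @ U_new.T
    return (P_old - P_new).pow(2).sum()
```

In use, one would presumably add this term to the new-task objective, e.g. `loss = task_loss + lam * manifold_distillation_loss(feats_old, feats_new)`, with both feature batches computed on the same inputs. Note that the squared Frobenius distance between projectors is a standard subspace-drift measure (it equals twice the sum of squared sines of the principal angles); whether the paper uses this exact distance or another subspace metric is not stated in this excerpt.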