Abstract Neural networks have emerged as powerful and versatile tools in the field of deep learning. As task complexity increases, so does the size of the network architecture, causing compression techniques to become a focus of current research. Parameter truncation can provide a significant reduction in memory and computational complexity. Originating from the model order reduction framework, the Discrete Empirical Interpolation Meth...