This paper discusses the estimation of the generalization gap, the difference between generalization performance and training performance, for overparameterized models including neural networks. We first show that the functional variance, a key concept in defining the widely applicable information criterion (WAIC), characterizes the generalization gap even in overparameterized settings where conventional theory cannot be applied. As the computational cost of the functional variance is expensive...
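For intuition, a minimal sketch of how the functional variance underlying WAIC can be estimated from posterior samples is given below. Everything here is an illustrative assumption rather than the paper's implementation: the toy model, the stand-in "posterior" sampler, and the function name `functional_variance` are all hypothetical; the paper's actual contribution (the Langevin approximation) is not reproduced here.

```python
import numpy as np

# Hypothetical illustration: Monte Carlo estimate of the functional
# variance V = sum_i Var_w[ log p(x_i | w) ], where the variance is
# taken over posterior samples w_1, ..., w_S. In WAIC, V corrects the
# training loss toward an estimate of the generalization gap.

def functional_variance(log_lik):
    """log_lik: array of shape (S, n) holding log p(x_i | w_s)
    for S posterior samples and n data points."""
    # Per-point posterior variance of the log-likelihood, summed over data.
    return log_lik.var(axis=0, ddof=1).sum()

# Toy example: linear regression with a known-variance Gaussian likelihood.
# The "posterior" samples are drawn as Gaussian perturbations of the
# least-squares fit, purely for illustration (not a real sampler).
rng = np.random.default_rng(0)
n, d, S, sigma = 50, 5, 1000, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + sigma * rng.normal(size=n)

w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
samples = w_hat + 0.05 * rng.normal(size=(S, d))  # stand-in "posterior"

resid = y - samples @ X.T                          # shape (S, n)
log_lik = -0.5 * (resid / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
print("functional variance estimate:", functional_variance(log_lik))
```

Note that this direct Monte Carlo estimator requires explicit posterior samples, which is exactly what becomes costly for overparameterized models; the truncated sentence above begins to motivate a cheaper approximation.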