We analyze multilayer neural networks in the asymptotic regime of simultaneously (a) large network sizes and (b) large numbers of stochastic gradient descent training iterations. We rigorously establish the limiting behavior of the network output. The limit procedure is valid for any number of hidden layers, and it naturally also describes the limiting behavior of the training loss. The ideas that we explore are to take limits of each layer sequentially and to characterize the evolution...
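As a concrete illustration of the setting described above, the following is a minimal sketch (not the paper's exact scaling or notation) of a network with two hidden layers of width `N` trained by single-sample stochastic gradient descent; the asymptotic analysis concerns the regime where both `N` and the number of SGD iterations grow large. The task (`sin(3x)` regression), widths, initialization, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # hidden-layer width; the analysis concerns the limit N -> infinity

# Two-hidden-layer network with 1/sqrt(N) initialization (illustrative choice,
# not necessarily the normalization used in the paper's mean-field limit).
W1 = rng.normal(0.0, 1.0, (N, 1))
W2 = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W3 = rng.normal(0.0, 1.0 / np.sqrt(N), (1, N))

lr = 0.02      # SGD step size (assumed)
losses = []    # per-iteration training loss, whose large-N evolution is studied

for k in range(5000):
    x = rng.uniform(-1.0, 1.0, (1, 1))     # fresh sample each SGD step
    target = np.sin(3.0 * x)               # toy regression target (assumed)

    # forward pass through both hidden layers
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    y = W3 @ h2
    err = (y - target).item()
    losses.append(0.5 * err ** 2)

    # manual backpropagation of the squared loss
    g3 = err * h2.T
    gh2 = (W3.T * err) * (1.0 - h2 ** 2)   # gradient at second hidden layer
    g2 = gh2 @ h1.T
    gh1 = (W2.T @ gh2) * (1.0 - h1 ** 2)   # gradient at first hidden layer
    g1 = gh1 @ x.T

    # SGD parameter updates
    W3 -= lr * g3
    W2 -= lr * g2
    W1 -= lr * g1

print(np.mean(losses[:100]), np.mean(losses[-100:]))
```

The quantity of interest in the analysis is the joint behavior of the network output and the training loss as the width and the iteration count above are taken to infinity, one layer at a time.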