In this note we study lower bounds on the empirical minimization algorithm. To explain the basic setup, let (Ω, μ) be a probability space and let X be a random variable taking values in Ω, distributed according to μ. We are interested in the noiseless function learning problem, in which one observes n independent random variables X1, . . . , Xn distributed according to μ...
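As an illustration of the setup, the following sketch implements empirical minimization over a finite class of candidate functions: given a sample X1, . . . , Xn and noiseless observations of an unknown target, the algorithm returns the class member with the smallest empirical squared loss. The target T, the squared loss, and the particular finite class are assumptions made for the example only; they are not specified at this point in the note.

```python
import random

def empirical_minimizer(function_class, sample, target):
    """Return the function in `function_class` with the smallest
    empirical squared loss on the sample (ties broken arbitrarily)."""
    def empirical_loss(f):
        return sum((f(x) - target(x)) ** 2 for x in sample) / len(sample)
    return min(function_class, key=empirical_loss)

# Toy illustration (all choices below are hypothetical): mu is the
# uniform distribution on [0, 1], the target is T(x) = x, and the
# class consists of a few linear candidates a * x.
random.seed(0)
n = 100
sample = [random.random() for _ in range(n)]  # X1, ..., Xn i.i.d. ~ mu
T = lambda x: x
candidates = [lambda x, a=a: a * x for a in (0.0, 0.5, 1.0, 1.5)]
best = empirical_minimizer(candidates, sample, T)
```

Since the class here contains the target itself (a = 1), empirical minimization recovers it exactly; the lower bounds studied in the note concern how slowly the empirical minimizer can converge in less favorable situations.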