This lecture concerns an approach to statistical learning problems in the nonparametric setting. Suppose we are given n i.i.d. copies of a random pair (X, Y), where X is an instance and Y is a label taking the values -1 or 1. A classifier h is a function taking values -1 and 1, and H denotes a class of classifiers. When X is one-dimensional and H is a parametric class, such as the class of classifiers defined by K thresholds, we estimate the parameters by minimizing the classification error on the sample. We derive the asymptotic distribution and the rate of convergence of this empirical risk minimizer; when K is fixed, the rate is cube-root n. When K is not fixed, we place a penalty on K and minimize the penalized empirical risk, and we then derive the rate of convergence of the resulting minimizer.
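As an illustration of the estimation step above, here is a minimal sketch of empirical risk minimization over single-threshold classifiers (the case K = 1) in one dimension. The function name `erm_threshold` and the choice of candidate thresholds (midpoints between sorted sample points) are assumptions made for this sketch, not part of the lecture; the lecture's general K-threshold case would search over K cut points jointly.

```python
import numpy as np

def erm_threshold(x, y):
    """Minimize the empirical classification error over classifiers of the
    form h_{t,s}(x) = s if x > t, else -s, with s in {+1, -1}.
    Returns (empirical risk, threshold t, orientation s)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    # Candidate thresholds: one below all points, midpoints between
    # consecutive sorted points, and one above all points.  The empirical
    # risk is piecewise constant in t, so this grid suffices.
    candidates = np.concatenate(
        ([xs[0] - 1.0], (xs[:-1] + xs[1:]) / 2.0, [xs[-1] + 1.0])
    )
    best = None
    for t in candidates:
        for s in (1, -1):
            pred = np.where(xs > t, s, -s)
            risk = np.mean(pred != ys)  # fraction of misclassified points
            if best is None or risk < best[0]:
                best = (risk, t, s)
    return best
```

Because the 0-1 loss is piecewise constant in the threshold, exhaustive search over the n + 1 candidate cells is exact; it is this non-smoothness of the empirical risk that underlies the nonstandard cube-root-n asymptotics.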