Elisseeff, A., Pontil, M., Evgeniou, T. (2005). Stability of Randomized Learning Algorithms. Journal of Machine Learning Research, 6, pp. 55-79.
The authors extend the existing theory on stability (how much changes in the training data influence the estimated models) and on the generalization performance of deterministic learning algorithms to the case of randomized algorithms. They give formal definitions of stability for randomized algorithms and prove non-asymptotic bounds, depending on this random stability, on the difference between the empirical and the expected error, as well as between the leave-one-out and the expected error, of such algorithms. The framework they develop for this purpose can also be used to study randomized learning algorithms more generally. They then apply these general results to study the effect of bagging on the stability of a learning method and to prove non-asymptotic bounds on the predictive performance of bagging, which had not been possible with the existing stability theory for deterministic learning algorithms.
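To make the stability notion concrete, the following small sketch (not code from the paper; all function names and the 1-nearest-neighbour base learner are illustrative choices) measures an empirical proxy for hypothesis stability: the average change in a learner's prediction at a fixed test point when one training example is removed, computed for a deliberately unstable base learner and for its bagged version. The bagging randomness is held fixed via a seed, a simplification of the paper's expectation over the algorithm's random bits.

```python
import math
import random

def one_nn_predict(xs, ys, x):
    # 1-nearest-neighbour regression: an intentionally unstable base learner,
    # since its output at x flips entirely when the nearest point is removed
    i = min(range(len(xs)), key=lambda j: abs(xs[j] - x))
    return ys[i]

def bagged_predict(xs, ys, x, n_bags=200, seed=123):
    # Bagging: average base-learner predictions over bootstrap resamples.
    # The seed is fixed so that we measure sensitivity to the *data*,
    # not to fresh Monte Carlo noise on every call (a simplification).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_bags):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        total += one_nn_predict([xs[j] for j in idx], [ys[j] for j in idx], x)
    return total / n_bags

def loo_instability(predict, xs, ys, x):
    # Empirical leave-one-out instability: mean absolute change in the
    # prediction at x when a single training point is deleted.
    full = predict(xs, ys, x)
    diffs = []
    for i in range(len(xs)):
        xi, yi = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        diffs.append(abs(predict(xi, yi, x) - full))
    return sum(diffs) / len(diffs)

# Synthetic 1-D regression data (illustrative, not from the paper)
data_rng = random.Random(0)
n = 40
xs = sorted(data_rng.random() for _ in range(n))
ys = [math.sin(2 * math.pi * x) + 0.3 * data_rng.gauss(0, 1) for x in xs]
x_test = 0.5

single = loo_instability(one_nn_predict, xs, ys, x_test)
bagged = loo_instability(bagged_predict, xs, ys, x_test)
print(f"1-NN LOO instability:        {single:.4f}")
print(f"bagged 1-NN LOO instability: {bagged:.4f}")
```

Under the paper's results, averaging over bootstrap samples tends to dampen the influence of any single training point, so one would typically expect the bagged instability to be smaller; the script above only illustrates how such a quantity can be estimated empirically.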