Normalized Convergence in Stochastic Optimization
Abstract
A new concept of (normalized) convergence of random variables is introduced. Normalized convergence is preserved under Lipschitz transformations. It follows from convergence in mean and itself implies convergence in probability. If a sequence of random variables satisfies a limit theorem, then it is a normalized convergent sequence. The introduced concept is applied to the study of convergence rates for a statistical approach in stochastic optimization.