Almost sure convergence of dropout algorithms for neural networks

Research output: Contribution to journal › Article › Academic



We investigate the convergence and convergence rate of the stochastic training algorithms for Neural Networks (NNs) that, over the years, have spawned from Dropout (Hinton et al., 2012). Modeling the possibility that neurons in the brain may fail to fire, dropout algorithms in practice consist of multiplying the weight matrices of a NN component-wise by independently drawn random matrices with $\{0,1\}$-valued entries during each iteration of the Feedforward-Backpropagation algorithm. This paper presents a probability theoretical proof that, for any NN topology and differentiable, polynomially bounded activation functions, if we project the NN's weights onto a compact set and use a dropout algorithm, then the weights converge to a unique stationary set of a projected system of Ordinary Differential Equations (ODEs). We also establish an upper bound on the rate of convergence of Gradient Descent (GD) on the limiting ODEs of dropout algorithms for arborescences (a class of trees) of arbitrary depth and with linear activation functions.
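To make the mechanism in the abstract concrete, the sketch below (not the paper's code) shows one dropout-style gradient step: the weight matrices are multiplied component-wise by independently drawn $\{0,1\}$-valued masks, a forward and backward pass is run through the masked network, and the updated weights are projected onto a compact set. The two-layer linear network, squared-error loss, keep-probability, learning rate, and norm-ball projection radius are illustrative assumptions, not quantities taken from the paper.

```python
# Minimal sketch of a projected dropout gradient step, assuming a two-layer
# linear network and a Frobenius-norm ball as the compact projection set.
import numpy as np

rng = np.random.default_rng(0)

def project_to_ball(W, radius=10.0):
    """Project a weight matrix onto a Frobenius-norm ball (a compact set)."""
    norm = np.linalg.norm(W)
    return W if norm <= radius else W * (radius / norm)

def dropout_gd_step(W1, W2, x, y, p=0.8, lr=1e-2):
    """One iteration: draw {0,1} masks, forward/backward pass, projected update."""
    # Component-wise multiplication of the weights by independent {0,1} matrices.
    M1 = rng.binomial(1, p, size=W1.shape)
    M2 = rng.binomial(1, p, size=W2.shape)
    W1d, W2d = W1 * M1, W2 * M2

    # Forward pass with linear activations (as in the paper's rate analysis).
    h = W1d @ x
    y_hat = W2d @ h
    err = y_hat - y                      # gradient of the squared-error loss

    # Backpropagation through the masked weights: gradients only flow
    # through the entries that were kept in this iteration.
    gW2 = np.outer(err, h) * M2
    gW1 = np.outer(W2d.T @ err, x) * M1

    # Projected gradient step keeps the iterates in a compact set.
    W1 = project_to_ball(W1 - lr * gW1)
    W2 = project_to_ball(W2 - lr * gW2)
    return W1, W2

# Toy usage: a 3-2-1 linear network trained on a single example.
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(1, 2))
x, y = rng.normal(size=3), np.array([1.0])
for _ in range(100):
    W1, W2 = dropout_gd_step(W1, W2, x, y)
```

Averaging such steps over the random masks is what gives rise to the limiting projected ODE system whose stationary set the paper analyzes.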
Original language: English
Article number: 2002.02247v1
Number of pages: 20
Publication status: Published - 6 Feb 2020

Bibliographical note

20 pages, 2 figures


  • math.OC
  • cs.LG
  • math.PR

