Almost sure convergence of dropout algorithms for neural networks

    Research output: Contribution to journal › Journal article › Academic


    Abstract

    We investigate the convergence and convergence rate of stochastic training algorithms for Neural Networks (NNs) that have, over the years, emerged from Dropout (Hinton et al., 2012). Modeling the fact that neurons in the brain may fail to fire, dropout algorithms in practice multiply the weight matrices of a NN component-wise by independently drawn random matrices with $\{0,1\}$-valued entries during each iteration of the Feedforward-Backpropagation algorithm. This paper presents a probability-theoretical proof that for any NN topology and differentiable, polynomially bounded activation functions, if we project the NN's weights onto a compact set and use a dropout algorithm, then the weights converge to a unique stationary set of a projected system of Ordinary Differential Equations (ODEs). We also establish an upper bound on the rate of convergence of Gradient Descent (GD) on the limiting ODEs of dropout algorithms for arborescences (a class of trees) of arbitrary depth and with linear activation functions.
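    To make the masking-and-projection scheme described in the abstract concrete, below is a minimal sketch (not the paper's code) of one dropout training iteration: each weight matrix is multiplied component-wise by an independently drawn $\{0,1\}$-valued mask before the forward/backward pass, and after the gradient step the weights are projected onto a compact set (here, entrywise clipping). The network shape, keep probability `p_keep`, learning rate, and projection radius are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dropout_train_step(weights, x, y, lr=1e-2, p_keep=0.8, radius=10.0):
        """One iteration: mask the weights, run feedforward-backpropagation, project."""
        # Draw independent {0,1}-valued masks, one entry per weight.
        masks = [rng.binomial(1, p_keep, size=W.shape) for W in weights]
        masked = [W * M for W, M in zip(weights, masks)]

        # Forward pass with a differentiable activation (tanh as an example).
        activations = [x]
        for W in masked[:-1]:
            activations.append(np.tanh(activations[-1] @ W))
        out = activations[-1] @ masked[-1]

        # Backward pass for a squared loss; gradients w.r.t. the masked weights.
        delta = out - y
        grads = [None] * len(weights)
        grads[-1] = activations[-1].T @ delta
        for i in range(len(weights) - 2, -1, -1):
            delta = (delta @ masked[i + 1].T) * (1.0 - activations[i + 1] ** 2)
            grads[i] = activations[i].T @ delta

        # By the chain rule, the gradient w.r.t. an unmasked weight is the mask
        # times the gradient w.r.t. the masked weight. After the step, project
        # onto a compact set: clip each entry into [-radius, radius].
        return [np.clip(W - lr * (M * G), -radius, radius)
                for W, M, G in zip(weights, masks, grads)]

    if __name__ == "__main__":
        # Toy usage: a two-layer network on random data.
        weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 1))]
        x, y = rng.standard_normal((16, 4)), rng.standard_normal((16, 1))
        for _ in range(100):
            weights = dropout_train_step(weights, x, y)
    ```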
    Original language: English
    Article number: 2002.02247v1
    Number of pages: 20
    Journal: arXiv
    Status: Published - 6 Feb 2020

    Bibliographical note

    20 pages, 2 figures

