Asymptotic Convergence Rate of Dropout on Shallow Linear Neural Networks

Albert Senen-Cerda, Jaron Sanders

Research output: Contribution to journal › Journal article › Academic › peer review

3 Citations (Scopus)
1 Download (Pure)

Abstract

We analyze the convergence rate of gradient flows on objective functions induced by Dropout and Dropconnect when applied to shallow linear neural networks (NNs), which can also be viewed as performing matrix factorization with a particular regularizer. Dropout algorithms such as these are regularization techniques that use {0,1}-valued random variables to filter weights during training in order to avoid co-adaptation of features. By leveraging a recent result on nonconvex optimization and conducting a careful analysis of the set of minimizers as well as the Hessian of the loss function, we obtain (i) a local convergence proof of the gradient flow and (ii) a bound on the convergence rate that depends on the data, the dropout probability, and the width of the NN. Finally, we compare this theoretical bound to numerical simulations, which are in qualitative agreement with the convergence bound and match it when starting sufficiently close to a minimizer.
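
As a concrete illustration of the matrix-factorization view mentioned in the abstract, the display below sketches the expected Dropout objective for a shallow linear NN with hidden width d, keep probability p, and the usual 1/p rescaling; the scaling and filtering conventions here are assumptions for illustration and may differ from those used in the paper.

\[
  \mathbb{E}_{B}\Bigl[\bigl\| y - \tfrac{1}{p}\, W_2 \,\mathrm{Diag}(B)\, W_1 x \bigr\|^2\Bigr]
  = \bigl\| y - W_2 W_1 x \bigr\|^2
  + \frac{1-p}{p} \sum_{i=1}^{d} \bigl\| (W_2)_{:,i} \bigr\|^2 \bigl( (W_1 x)_i \bigr)^2 ,
  \qquad B_i \overset{\text{i.i.d.}}{\sim} \mathrm{Bernoulli}(p).
\]

Averaged over the data, the second term acts as a data-dependent regularizer on the factors W_1 and W_2; it vanishes as p approaches 1 (no dropout) and grows as p approaches 0.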

Original language: English
Article number: 32
Number of pages: 53
Journal: Proceedings of the ACM on Measurement and Analysis of Computing Systems
Volume: 6
Issue number: 2
DOIs
Status: Published - June 2022

Bibliographic note

Publisher Copyright:
© 2022 Owner/Author.
