On the influence of stochastic roundoff errors and their bias on the convergence of the gradient descent method with low-precision floating-point computation

Research output: Working paper › Preprint › Professional

Abstract

When the gradient descent method is implemented in low precision, stochastic rounding schemes help to prevent the stagnation of convergence caused by the vanishing-gradient effect. Unbiased stochastic rounding has zero bias: it preserves small updates with probabilities proportional to their relative magnitudes. This study provides a theoretical explanation for the stagnation of the gradient descent method in low-precision computation. Additionally, we propose two new stochastic rounding schemes that trade the zero-bias property for a larger probability of preserving small gradients. Our methods yield a constant rounding bias that, on average, lies in a descent direction. For convex problems, we prove that the proposed rounding methods typically have a beneficial effect on the convergence rate of gradient descent. We validate the theoretical analysis by comparing the performance of various rounding schemes when optimizing a multinomial logistic regression model and when training a simple neural network with an 8-bit floating-point format.
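
To make the mechanism concrete, the sketch below simulates unbiased stochastic rounding on a uniform grid in NumPy. It is illustrative only, not code from the paper: the grid spacing eps stands in for the gap between representable values of a low-precision format, and stochastic_round is a hypothetical helper name.

    import numpy as np

    def stochastic_round(x, eps, rng=None):
        """Round entries of x to a grid with spacing eps, stochastically.

        The lower grid point is rounded up with probability equal to the
        fractional distance past it, so E[stochastic_round(x)] == x and
        the rounding bias is zero.
        """
        rng = rng or np.random.default_rng()
        scaled = np.asarray(x, dtype=np.float64) / eps
        lower = np.floor(scaled)
        frac = scaled - lower                      # fractional part in [0, 1)
        round_up = rng.random(lower.shape) < frac  # P(round up) = frac
        return (lower + round_up) * eps

    # A small update that round-to-nearest would always flush to 0,
    # stalling gradient descent; stochastically it survives on average.
    g = np.full(100_000, 0.01)
    print(stochastic_round(g, eps=0.125).mean())   # approximately 0.01

With round-to-nearest, every update smaller than eps/2 is lost and the iterates stagnate; the unbiased scheme recovers such updates in expectation, which the biased schemes proposed in the paper strengthen by preserving small gradients with larger probability, at the cost of a nonzero bias.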
Original language: English
Status: Published - 24 Feb 2022

Keywords

  • cs.LG
  • cs.NA
  • math.NA
  • stat.ML
