Efficient and effective training of sparse recurrent neural networks

Research output: Contribution to journal › Article › Academic › peer-review



Recurrent neural networks (RNNs) have achieved state-of-the-art performance on various applications. However, RNNs are prone to being memory-bandwidth limited in practical settings and require long training and inference times. These problems are at odds with training and deploying RNNs on resource-limited devices, where the memory and floating-point operations (FLOPs) budgets are strictly constrained. Conventional model compression techniques usually focus on reducing inference costs and operate on a costly pre-trained model. Recently, dynamic sparse training has been proposed to accelerate the training process by training sparse neural networks directly from scratch. However, previous sparse training techniques are mainly designed for convolutional neural networks and multi-layer perceptrons. In this paper, we introduce a method to train intrinsically sparse RNN models with a fixed number of parameters and FLOPs during training. We demonstrate state-of-the-art sparse performance with long short-term memory and recurrent highway networks on widely used tasks: language modeling and text classification. We use these results to argue that, contrary to the general belief that training a sparse neural network from scratch leads to worse performance than dense networks, sparse training with adaptive connectivity can usually achieve better performance than dense models for RNNs.
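To make "sparse training with adaptive connectivity" concrete, here is a minimal, hypothetical sketch of one prune-and-regrow step in the style of dynamic sparse training (as in Sparse Evolutionary Training): the smallest-magnitude active weights are dropped and the same number of connections are regrown at random inactive positions, so the parameter count stays fixed throughout training. The function name, pruning fraction, and regrowth rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def prune_and_regrow(weights, mask, prune_frac=0.3, rng=None):
    """One hypothetical dynamic-sparse-training step on a weight matrix.

    Prunes the smallest-magnitude active weights, then regrows the same
    number of connections at random inactive positions, keeping the total
    number of nonzero parameters (and hence FLOPs) constant.
    """
    rng = np.random.default_rng() if rng is None else rng
    active = np.flatnonzero(mask)
    n_prune = int(prune_frac * active.size)

    # Prune: zero out the n_prune active weights with the smallest magnitude.
    order = np.argsort(np.abs(weights.flat[active]))
    pruned = active[order[:n_prune]]
    mask.flat[pruned] = 0
    weights.flat[pruned] = 0.0

    # Regrow: activate the same number of currently inactive positions,
    # initialized with small random values (illustrative choice).
    inactive = np.flatnonzero(mask == 0)
    regrown = rng.choice(inactive, size=n_prune, replace=False)
    mask.flat[regrown] = 1
    weights.flat[regrown] = rng.normal(0.0, 0.01, size=n_prune)
    return weights, mask
```

Applied to each recurrent weight matrix between training epochs, a step like this keeps the sparsity level fixed while letting the connectivity pattern adapt to the data.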
Original language: English
Pages (from-to): 9625-9636
Number of pages: 12
Journal: Neural Computing and Applications
Issue number: 15
Early online date: 26 Jan 2021
Publication status: Published - Aug 2021


  • Dynamic sparse training
  • Long short-term memory
  • Recurrent highway networks
  • Sparse recurrent neural networks

