Adaptive and Sparse Neural Networks

  • Elena Mocanu (Speaker)

Activity: Talk or presentation › Invited talk › Scientific


Following the success of deep learning in various domains, Artificial Neural Networks (ANNs) are among the most widely used artificial intelligence methods today. This talk details two perspectives on adaptive neural networks. The first part focuses on extending Deep Reinforcement Learning to perform multiple actions simultaneously. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, with the aim of devising an online optimization for scheduling electricity-consuming devices in residential buildings and aggregations of buildings. Still, due to obvious computational limitations, this approach cannot be successfully deployed on low-resource devices such as smart meters. Thus, in the second part, a new method for scalable training of ANNs with adaptive sparse connectivity is presented. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that ANNs, too, should not have fully-connected layers. Following an evolutionary and adaptive approach, we propose a novel algorithm which evolves an initial sparse topology between two consecutive layers of neurons into a scale-free topology during learning. Our method replaces fully-connected ANN layers with sparse ones before training, quadratically reducing the number of parameters with no decrease in accuracy. In the end, we argue that this will open up deep learning applications on low-resource devices. We hope that our approach will enable ANNs with billions of neurons and evolved topologies to handle complex real-world tasks that are intractable with state-of-the-art methods.
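The adaptive sparse-connectivity idea in the abstract (start from a sparse topology, then repeatedly prune the weakest connections and regrow new ones at random during training) can be illustrated with a minimal NumPy sketch. Function names, the Erdős–Rényi-style initialization, and the parameters `epsilon` and `zeta` are illustrative assumptions, not the speaker's exact implementation:

```python
import numpy as np

def init_sparse_mask(n_in, n_out, epsilon=10, rng=None):
    # Sparse stand-in for a fully-connected layer: each connection exists
    # with probability proportional to (n_in + n_out) / (n_in * n_out),
    # so the parameter count grows linearly rather than quadratically.
    # `epsilon` is an assumed sparsity-control hyperparameter.
    rng = rng or np.random.default_rng(0)
    p = min(1.0, epsilon * (n_in + n_out) / (n_in * n_out))
    return rng.random((n_in, n_out)) < p

def evolve_connections(weights, mask, zeta=0.3, rng=None):
    # One prune-and-regrow step between two consecutive layers:
    # remove the fraction `zeta` of active connections with the smallest
    # magnitude, then add the same number of new connections at random
    # empty positions, keeping the total number of parameters constant.
    rng = rng or np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_prune = int(zeta * active.size)
    if n_prune == 0:
        return weights, mask
    magnitudes = np.abs(weights.ravel()[active])
    prune_idx = active[np.argsort(magnitudes)[:n_prune]]
    new_mask = mask.copy().ravel()
    new_mask[prune_idx] = False
    empty = np.flatnonzero(~new_mask)
    grow_idx = rng.choice(empty, size=n_prune, replace=False)
    new_mask[grow_idx] = True
    new_mask = new_mask.reshape(mask.shape)
    new_weights = np.where(new_mask, weights, 0.0)
    # Freshly grown connections start at small random values.
    grown = new_mask & ~mask
    new_weights[grown] = rng.normal(0.0, 0.01, size=int(grown.sum()))
    return new_weights, new_mask
```

In a training loop, `evolve_connections` would be called between epochs, after the gradient updates to the masked weights; repeated pruning of weak links and random regrowth is what lets the topology drift toward a scale-free structure while the layer stays sparse throughout.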
Period: 24 May 2018
Held at: Centrum voor Wiskunde en Informatica, Netherlands