Learning with delayed synaptic plasticity

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

The plasticity of biological neural networks allows them to learn and optimize their behavior by changing their inner configuration. Inspired by biology, plasticity can be modeled in artificial neural networks by using Hebbian learning rules, i.e., rules that update synapses based on the neuron activations and a reinforcement signal received from the environment. However, the distal reward problem arises when the reinforcement signals are not available immediately after each network output, making it difficult to associate the neuron activations that contributed to receiving the signal. In this work, we extend Hebbian plasticity rules to allow learning in such distal reward cases. We propose neuron activation traces (NATs): additional data storage in each synapse that keeps track of the neuron activations while the network performs a task during an episode. Delayed reinforcement signals are provided after each episode, based on the performance of the network relative to its performance during the previous episode. We employ genetic algorithms to evolve delayed synaptic plasticity (DSP) rules that perform synaptic updates based on the NATs and the delayed reinforcement signals. We compare DSP with an analogous hill-climbing algorithm that does not incorporate the domain knowledge introduced by the NATs.
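The episode-level scheme the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the paper's formulation: the network dynamics, the exact form of the traces, the reward scaling, and the hand-coded update rule `dsp_update` are all assumptions made here for illustration (in the paper, the DSP rule itself is evolved by a genetic algorithm rather than written by hand).

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 4, 3
weights = rng.normal(0.0, 0.1, size=(n_pre, n_post))

# Neuron activation traces (NATs): one accumulator per synapse,
# recording pre/post co-activation over an episode. The paper's
# exact trace definition may differ.
nat = np.zeros((n_pre, n_post))

def run_episode(weights, nat, steps=50):
    """Run the network for one episode, accumulating NATs per synapse."""
    for _ in range(steps):
        pre = (rng.random(n_pre) > 0.5).astype(float)   # stand-in inputs
        post = (pre @ weights > 0.0).astype(float)      # binary activations
        nat += np.outer(pre, post)                      # record co-activation
    return nat

def dsp_update(weights, nat, reward, eta=0.01):
    """Delayed synaptic plasticity: one update per episode, scaled by the
    delayed reinforcement signal received after the episode ends."""
    scale = nat / nat.max() if nat.max() > 0 else nat
    weights += eta * reward * scale
    nat[:] = 0.0  # reset traces for the next episode
    return weights

prev_perf = 0.0
for episode in range(3):
    nat = run_episode(weights, nat)
    perf = rng.random()                  # placeholder task performance
    reward = np.sign(perf - prev_perf)   # relative to the previous episode
    weights = dsp_update(weights, nat, reward)
    prev_perf = perf
```

The key point the sketch captures is the decoupling of credit assignment from the reward: activations are logged in the NATs throughout the episode, and the synaptic update is applied only once the delayed, episode-relative reinforcement signal is available.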
Language: English
Title of host publication: The Genetic and Evolutionary Computation Conference
Publisher: arXiv.org
Number of pages: 10
State: Published - 2019
Event: 2019 Genetic and Evolutionary Computation Conference
Duration: 13 Jul 2019 – 17 Jul 2019

Conference

Conference: 2019 Genetic and Evolutionary Computation Conference
Period: 13/07/19 – 17/07/19

Fingerprint

Neurons
Plasticity
Chemical activation
Reinforcement
Neural networks
Genetic algorithms
Data storage equipment

Cite this

Yaman, A., Iacca, G., Mocanu, D., Fletcher, G., & Pechenizkiy, M. (2019). Learning with delayed synaptic plasticity. In The Genetic and Evolutionary Computation Conference [1903.09393v2]. arXiv.org. https://arxiv.org/abs/1903.09393

