Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Using task-specific components within a neural network in continual learning (CL) is a compelling strategy to address the stability-plasticity dilemma in fixed-capacity models without access to past data. Current methods focus only on selecting a sub-network for a new task that reduces forgetting of past tasks. However, this selection could limit the forward transfer of relevant past knowledge that helps future learning. Our study reveals that satisfying both objectives jointly is more challenging when a unified classifier is used for all classes of seen tasks, known as class-incremental learning (class-IL), as it is prone to ambiguities between classes across tasks. Moreover, the challenge increases as the semantic similarity of classes across tasks increases. To address this challenge, we propose a new CL method, named AFAF (code available at https://github.com/GhadaSokar/AFAF), that aims to Avoid Forgetting and Allow Forward transfer in class-IL using fixed-capacity models. AFAF allocates a sub-network that enables selective transfer of relevant knowledge to a new task while preserving past knowledge, reuses some of the previously allocated components to utilize the fixed capacity, and addresses class ambiguities when similarities exist. The experiments show the effectiveness of AFAF in providing models with multiple desirable CL properties, while outperforming state-of-the-art methods on various challenging benchmarks with different semantic similarities.
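
The abstract describes AFAF's mechanism only at a high level. As a rough illustration of the general idea of task-specific sparse sub-networks in a fixed-capacity model, the Python sketch below gates a single linear layer with per-task binary masks and reuses part of an earlier task's mask when allocating a new one. This is a minimal sketch under stated assumptions, not the authors' implementation: MaskedLinear, allocate_subnetwork, and the random reuse heuristic are hypothetical stand-ins for AFAF's actual selection rules (see the linked repository for those).

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Fixed-capacity linear layer gated by per-task binary masks."""

    def __init__(self, in_features, out_features, sparsity=0.8):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.sparsity = sparsity  # fraction of weights left inactive per task
        self.masks = {}           # task_id -> binary mask with the weight's shape

    def allocate_subnetwork(self, task_id, prev_task_id=None, reuse_frac=0.5):
        """Pick a sparse sub-network for a new task; optionally reuse part of a
        previous task's sub-network (a stand-in for selective transfer)."""
        n = self.weight.numel()
        k = int(n * (1.0 - self.sparsity))  # per-task connection budget
        mask = torch.zeros(n)
        if prev_task_id is not None:
            # Reusing previously allocated connections allows forward transfer
            # and keeps the fixed capacity from being exhausted.
            prev_active = self.masks[prev_task_id].flatten().nonzero().flatten()
            keep = prev_active[torch.randperm(prev_active.numel())[: int(k * reuse_frac)]]
            mask[keep] = 1.0
        # Fill the remaining budget with currently unused connections.
        free = (mask == 0).nonzero().flatten()
        grow = free[torch.randperm(free.numel())[: k - int(mask.sum().item())]]
        mask[grow] = 1.0
        self.masks[task_id] = mask.view_as(self.weight)

    def forward(self, x, task_id):
        # Only the task's sub-network participates in the forward pass.
        return F.linear(x, self.weight * self.masks[task_id])

# Usage: task 1's sub-network partially overlaps task 0's.
layer = MaskedLinear(784, 256)
layer.allocate_subnetwork(task_id=0)
layer.allocate_subnetwork(task_id=1, prev_task_id=0)
out = layer(torch.randn(32, 784), task_id=1)

In a full CL loop, weights belonging only to earlier tasks would additionally be frozen while training on the new task: the reuse step above sketches forward transfer, while the freezing (not shown) is what would prevent forgetting.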

Original language: English
Title of host publication: Machine Learning and Knowledge Discovery in Databases
Subtitle of host publication: European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part III
Editors: Massih-Reza Amini, Stéphane Canu, Asja Fischer, Tias Guns, Petra Kralj Novak, Grigorios Tsoumakas
Place of Publication: Cham
Publisher: Springer
Pages: 85-101
Number of pages: 17
ISBN (Electronic): 978-3-031-26409-2
ISBN (Print): 978-3-031-26408-5
DOIs
Publication status: Published - 17 Mar 2023
Event: 22nd Joint European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2022 - Grenoble, France
Duration: 19 Sept 2022 – 23 Sept 2022

Publication series

Name: Lecture Notes in Computer Science (LNCS)
Volume: 13715
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Name: Lecture Notes in Artificial Intelligence (LNAI)
Volume: 13715
ISSN (Print): 2945-9133
ISSN (Electronic): 2945-9141

Conference

Conference: 22nd Joint European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2022
Country/Territory: France
City: Grenoble
Period: 19/09/22 – 23/09/22

Keywords

  • Class-incremental learning
  • Continual learning
  • Sparse training
  • Stability-plasticity dilemma
