Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification

Gonzalo Nápoles, Yamisleydi Salgueiro, Isel Grau, Maikel León Espinosa

Research output: Contribution to journal › Article › Academic

Abstract

Machine learning solutions for pattern classification problems are nowadays widely deployed in society and industry. However, the lack of transparency and accountability of most accurate models often hinders their meaningful and safe use. Thus, there is a clear need for developing explainable artificial intelligence mechanisms. Model-agnostic methods exist that summarize feature contributions, but their interpretability is limited to specific predictions made by black-box models. An open challenge is to develop models that have intrinsic interpretability and produce their own explanations, even for classes of models that are traditionally considered black boxes, such as (recurrent) neural networks. In this paper, we propose a Long-Term Cognitive Network (LTCN)-based model for interpretable pattern classification of structured data. Our method provides its own explanations by quantifying the relevance of each feature in the decision process. To support interpretability without sacrificing performance, the model gains flexibility through a quasi-nonlinear reasoning rule whose degree of nonlinearity can be controlled. In addition, we propose a recurrence-aware decision model that avoids the issues posed by unique fixed points, together with a deterministic learning method for computing the learnable parameters. The simulations show that our interpretable model obtains competitive performance when compared to state-of-the-art white-box and black-box models.
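To make the abstract's two key ingredients concrete, the sketch below illustrates one plausible reading of a quasi-nonlinear reasoning rule with a controllable nonlinearity coefficient and a deterministic (closed-form) learning step. It is an illustration only: the update form A(t+1) = phi * f(A(t) W) + (1 - phi) * A(0), the sigmoid transfer function, the ridge-regularized pseudoinverse fit, and all names (phi, n_steps, fit_output_weights) are assumptions of this sketch, not the paper's exact formulation.

import numpy as np

def sigmoid(x):
    """Logistic transfer function squashing activations to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def quasi_nonlinear_reasoning(A0, W, phi=0.8, n_steps=5):
    """Illustrative quasi-nonlinear reasoning rule (an assumption):
    A(t+1) = phi * f(A(t) @ W) + (1 - phi) * A(0).
    phi in [0, 1] blends the nonlinear recurrent update with the raw
    inputs, so it controls the degree of nonlinearity; phi = 0 reduces
    the recurrence to a linear pass-through of the initial activations.
    """
    A = A0
    for _ in range(n_steps):
        A = phi * sigmoid(A @ W) + (1.0 - phi) * A0
    return A

def fit_output_weights(H, Y, ridge=1e-6):
    """Deterministic, closed-form learning of the output weights via a
    ridge-regularized pseudoinverse, a common choice for recurrent
    models with fixed inner weights (assumed here, not taken from the
    paper): W_out = (H^T H + ridge * I)^(-1) H^T Y.
    """
    return np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ Y)

# Toy usage: 100 samples, 4 features, 3 one-hot-encoded classes.
rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 4))            # feature activations in [0, 1]
Y = np.eye(3)[rng.integers(0, 3, 100)]    # one-hot class labels
W = rng.normal(scale=0.5, size=(4, 4))    # fixed inner weight matrix

H = quasi_nonlinear_reasoning(X, W)       # final hidden state per sample
W_out = fit_output_weights(H, Y)          # closed-form output weights
pred = (H @ W_out).argmax(axis=1)         # predicted class indices

Because the learning step is a single linear solve, training is deterministic and fast; the controllable coefficient phi is what lets such a model trade nonlinearity against the traceability of each feature's contribution.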
Original language: English
Article number: 2107.03423
Number of pages: 12
Journal: arXiv
Volume: 2021
Publication status: Published - 2021

