Abstract
Event-driven architectures have been shown to provide low-power, low-latency inference for artificial neural networks (ANNs), which is especially beneficial on edge devices, particularly when combined with sparse execution. Recurrent neural networks (RNNs) are ANNs that emulate memory: their recurrent connections reuse previous outputs to generate new ones. However, executing RNNs sparsely on event-driven architectures raises novel challenges in synchronization and in the handling of sparse data. In this work, these challenges are systematically analyzed, and mechanisms to overcome them are proposed. Experimental results from a monocular depth estimation use case on the NeuronFlow architecture show that sparsity in RNNs can be exploited effectively on event-driven architectures.
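The core difficulty the abstract points at, reusing the previous hidden state while reacting only to sparse changes, can be made concrete with a delta-style RNN update. The sketch below is illustrative only: it is not the paper's NeuronFlow implementation, and all names and thresholds are hypothetical. A component emits an "event" only when its value has drifted more than a threshold since its last event, so the recurrent matrix-vector work scales with the number of events rather than with the layer width.

```python
import numpy as np

def init_state(n_in, n_hid, rng=None):
    """Weights plus the bookkeeping a delta-style update needs."""
    rng = rng or np.random.default_rng(0)
    return {
        "Wx": rng.standard_normal((n_hid, n_in)) * 0.1,   # input weights
        "Wh": rng.standard_normal((n_hid, n_hid)) * 0.1,  # recurrent weights
        "m": np.zeros(n_hid),      # accumulated pre-activation
        "h": np.zeros(n_hid),      # current hidden state (reused next step)
        "x_ref": np.zeros(n_in),   # input values at their last event
        "h_ref": np.zeros(n_hid),  # hidden values at their last event
    }

def delta_rnn_step(state, x, theta=0.05):
    """One sparse step approximating h = tanh(Wx @ x + Wh @ h_prev).

    Only components whose value changed by more than `theta` since their
    last event trigger a column update of the pre-activation accumulator,
    so the work per step scales with the number of events.
    """
    # Recurrent events: hidden units that moved since their last event.
    dh = state["h"] - state["h_ref"]
    ih = np.flatnonzero(np.abs(dh) > theta)
    state["m"] += state["Wh"][:, ih] @ dh[ih]
    state["h_ref"][ih] = state["h"][ih]

    # Input events: entries of x that moved since their last event.
    dx = x - state["x_ref"]
    ix = np.flatnonzero(np.abs(dx) > theta)
    state["m"] += state["Wx"][:, ix] @ dx[ix]
    state["x_ref"][ix] = x[ix]

    state["h"] = np.tanh(state["m"])   # new output, kept for the next step
    return state["h"]
```

On a slowly varying input stream, most steps touch only a few weight columns, which is the property an event-driven fabric can exploit; coordinating the recurrent events with newly arriving input events is exactly the kind of synchronization challenge the paper analyzes.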
Original language | English |
---|---|
Title of host publication | Proceedings of the 24th International Workshop on Software and Compilers for Embedded Systems, SCOPES 2021 |
Editors | Sander Stuijk |
Publisher | Association for Computing Machinery, Inc |
Pages | 17-22 |
Number of pages | 6 |
ISBN (Electronic) | 978-1-4503-9166-5 |
DOIs | |
Publication status | Published - 1 Nov 2021 |
Event | 24th International Workshop on Software and Compilers for Embedded Systems, SCOPES 2021 - Virtual, Online, Eindhoven, Netherlands. Duration: 1 Nov 2021 → 2 Nov 2021. Conference number: 24. https://scopesconf.org/scopes-21/ |
Conference
Conference | 24th International Workshop on Software and Compilers for Embedded Systems, SCOPES 2021 |
---|---|
Abbreviated title | SCOPES 2021 |
Country/Territory | Netherlands |
City | Eindhoven |
Period | 1/11/21 → 2/11/21 |
Internet address | https://scopesconf.org/scopes-21/ |
Funding
The authors acknowledge Fundação para a Ciência e a Tecnologia (FCT) for financial support (grant numbers SFRH/BD/44108/2008, SFRH/BD/78639/2011 and SFRH/BPD/91397/2012, and projects PTDC/EQU-ERQ/102771/2008 and UID/QUI/00100/2013).
Keywords
- asynchronous
- event-driven
- monocular depth estimation
- neural networks
- recurrent neural networks
- sparsity