BrainTTA: A 35 fJ/op Compiler Programmable Mixed-Precision Transport-Triggered NN SoC

Research output: Contribution to journal › Article › Academic


Abstract

Recently, accelerators for extremely quantized deep neural network (DNN) inference with operand widths as low as 1 bit have gained popularity due to their ability to drastically reduce the energy cost per inference. In this paper, a flexible SoC with mixed-precision support is presented. Contrary to the current trend of fixed-datapath accelerators, this architecture makes use of a flexible datapath based on a Transport-Triggered Architecture (TTA) and is fully programmable in C. The accelerator has a peak energy efficiency of 35/67/405 fJ/op (binary, ternary, and 8-bit precision, respectively) and a throughput of 614/307/77 GOPS, which is unprecedented for a programmable architecture.
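
The energy savings of binary inference come largely from replacing multiply-accumulate operations with bitwise XNOR plus popcount over packed operands. The sketch below is only an illustration of that standard technique in plain C; it is not code from the BrainTTA paper, and the function name `binary_dot` and the 32-bit packing are assumptions made for the example.

```c
/* Illustrative sketch (not from the paper): XNOR-popcount dot product, the
 * standard trick behind 1-bit DNN inference. One 32-bit XNOR plus a popcount
 * stands in for 32 multiply-accumulates, which is why energy per op drops. */
#include <stdint.h>

/* Dot product of two length-(32*n) binary vectors, packed 32 bits per word.
 * Each bit encodes +1 (bit = 1) or -1 (bit = 0). Returns the signed result. */
static int binary_dot(const uint32_t *a, const uint32_t *b, int n)
{
    int acc = 0;
    for (int i = 0; i < n; i++) {
        uint32_t agree = ~(a[i] ^ b[i]);            /* XNOR: 1 where signs match */
        acc += 2 * __builtin_popcount(agree) - 32;  /* matches minus mismatches  */
    }
    return acc;
}
```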
Original language: English
Article number: 2211.11331
Number of pages: 7
Journal: arXiv
Volume: 2022
DOIs
Publication status: Published - 2022
