Low- and Mixed-Precision Inference Accelerators

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review


Abstract

With the surging popularity of edge computing, the need to efficiently perform neural network inference on battery-constrained IoT devices has greatly increased. While algorithmic developments enable neural networks to solve increasingly complex tasks, deploying these networks on edge devices can be problematic due to stringent energy, latency, and memory requirements. One way to alleviate these requirements is to heavily quantize the neural network, i.e., lower the precision of its operands. Taking quantization to the extreme, e.g., by using binary values, opens new opportunities to increase energy efficiency. Several hardware accelerators exploiting the opportunities of low-precision inference have been created, all aiming to enable neural network inference at the edge. In this chapter, design choices and their implications for the flexibility and energy efficiency of several accelerators supporting extremely quantized networks are reviewed.
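To illustrate the central idea of the abstract, the following minimal NumPy sketch (not taken from the chapter; all names are illustrative) binarizes real-valued weights to {-1, +1}, so that each multiply-accumulate in a dot product reduces to a sign flip and an addition. Hardware accelerators exploit the same property, typically with XNOR/popcount datapaths instead of multipliers.

    import numpy as np

    def binarize(w):
        # Map real-valued weights to {-1, +1} by their sign.
        return np.where(w >= 0, 1, -1).astype(np.int8)

    def binary_dot(x, w_bin):
        # Dot product with binary weights: only additions and
        # subtractions remain, no full-precision multiplies.
        return int(np.sum(np.where(w_bin == 1, x, -x)))

    rng = np.random.default_rng(0)
    w = rng.standard_normal(8)            # full-precision weights
    x = rng.integers(-128, 128, size=8)   # e.g., 8-bit quantized activations
    print(binary_dot(x, binarize(w)))     # multiply-free accumulation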
Original language: English
Title of host publication: Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing
Subtitle of host publication: Hardware Architectures
Editors: Sudeep Pasricha, Muhammad Shafique
Place of publication: Cham
Publisher: Springer
Pages: 63-88
Number of pages: 26
ISBN (Electronic): 978-3-031-19568-6
ISBN (Print): 978-3-031-19567-9
DOIs
Publication status: Published - 1 Oct 2023
