Abstract

Tensor Core Units (TCUs) are specialized units first introduced by NVIDIA in the Volta microarchitecture to accelerate matrix multiplications for deep learning and linear algebra workloads. While these units have proven capable of delivering significant speedups for specific applications, they remain difficult to program for the average user. In this paper, we extend the Halide DSL and compiler with the ability to utilize these units when generating code for a CUDA-based NVIDIA GPGPU. To this end, we introduce a new scheduling directive along with custom lowering passes that automatically transform a Halide AST so that code can be generated for the TCUs. We evaluate the generated code and show that it can achieve over 5X speedup compared to manual Halide schedules without TCU support, while remaining within 20% of the NVIDIA cuBLAS implementations for mixed-precision GEMM and within 10% of manual CUDA implementations with WMMA intrinsics.
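The WMMA intrinsics that the abstract uses as a baseline expose Tensor Cores through warp-level fragment operations. As a rough illustration (not the paper's generated code), a minimal CUDA kernel in which one warp computes a single 16x16 mixed-precision tile of C = A * B might look like this:

```cuda
#include <mma.h>
using namespace nvcuda;

// One warp computes one 16x16x16 tile of C = A * B on the Tensor Cores.
// A and B are half precision; the accumulator is float (mixed precision).
// lda, ldb, ldc are the leading dimensions of the row-major matrices.
__global__ void wmma_tile(const half *a, const half *b, float *c,
                          int lda, int ldb, int ldc) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);          // zero the accumulator tile
    wmma::load_matrix_sync(a_frag, a, lda);     // cooperative warp-wide load
    wmma::load_matrix_sync(b_frag, b, ldb);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // D = A*B + C on TCUs
    wmma::store_matrix_sync(c, c_frag, ldc, wmma::mem_row_major);
}
```

A full GEMM tiles the K dimension with a loop of `load_matrix_sync`/`mma_sync` pairs and maps warps over output tiles; automating exactly this tiling and fragment management is what the proposed Halide scheduling directive abstracts away.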

Original language: English
Title: Proceedings of the 23rd International Workshop on Software and Compilers for Embedded Systems, SCOPES 2020
Editors: Sander Stuijk
Publisher: Association for Computing Machinery, Inc
Pages: 36-41
Number of pages: 6
Electronic ISBN: 9781450371315
DOIs
Status: Published - 25 May 2020
Event: 23rd International Workshop on Software and Compilers for Embedded Systems, SCOPES 2020 - St. Goar, Germany
Duration: 25 May 2020 - 26 May 2020

Conference

Conference: 23rd International Workshop on Software and Compilers for Embedded Systems, SCOPES 2020
Country: Germany
City: St. Goar
Period: 25/05/20 - 26/05/20

