Analysis and performance estimation of the Conjugate Gradient method on multiple GPUs

M. Verschoor, A.C. Jalba

Research output: Contribution to journal › Article › Academic › peer-review

33 Citations (Scopus)


The Conjugate Gradient (CG) method is a widely used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large number of Sparse-Matrix Vector (SpMV) multiplications, vector reductions and other vector operations to be performed. We present a number of mappings for the SpMV operation on modern programmable GPUs using the Block Compressed Sparse Row (BCSR) format. Further, we show that reordering matrix blocks substantially improves the performance of the SpMV operation, especially when small blocks are used, so that our method outperforms existing state-of-the-art approaches in most cases. Finally, a thorough analysis of the performance of both SpMV and CG methods is performed, which allows us to model and estimate the expected maximum performance for a given (unseen) problem.

Keywords: Conjugate Gradient method; Sparse-Matrix Vector multiplication; Block Compressed Sparse Row format; Performance analysis; Performance estimation; Multiple GPUs
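The paper's GPU kernels are not reproduced here, but the structure the abstract describes — CG iterations driven by one SpMV on a block-sparse matrix plus a handful of vector reductions and updates — can be sketched on the CPU. The sketch below is an illustrative assumption, not the authors' implementation; it uses SciPy's BSR (Block Sparse Row) matrix type, which is the standard CPU counterpart of the BCSR format discussed in the paper:

```python
import numpy as np
from scipy.sparse import bsr_matrix

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain CG for a symmetric positive-definite A.

    Each iteration costs one SpMV (the dominant operation),
    two dot-product reductions, and a few AXPY-style vector updates.
    """
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r             # vector reduction
    for _ in range(max_iter):
        Ap = A @ p             # SpMV: the expensive step on GPUs
        alpha = rs_old / (p @ Ap)
        x += alpha * p         # vector update
        r -= alpha * Ap        # vector update
        rs_new = r @ r         # vector reduction
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD tridiagonal system stored with 2x2 blocks (BSR/BCSR layout)
n = 8
dense = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = bsr_matrix(dense, blocksize=(2, 2))
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b, atol=1e-8))
```

Storing the matrix in fixed-size blocks (here 2×2) is what makes BCSR attractive on GPUs: each block's entries are contiguous, so a thread (or small thread group) can process a whole block with coalesced loads — which is also why the block ordering studied in the paper matters for performance.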
Original language: English
Pages (from-to): 552-575
Number of pages: 24
Journal: Parallel Computing
Issue number: 10-11
Publication status: Published - 2012


