Abstract
The Conjugate Gradient (CG) method is a widely used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large number of Sparse-Matrix Vector (SpMV) multiplications, vector reductions and other vector operations to be performed. We present a number of mappings of the SpMV operation onto modern programmable GPUs using the Block Compressed Sparse Row (BCSR) format. Further, we show that reordering matrix blocks substantially improves the performance of the SpMV operation, especially when small blocks are used, so that our method outperforms existing state-of-the-art approaches in most cases. Finally, a thorough analysis of the performance of both the SpMV and CG methods is performed, which allows us to model and estimate the expected maximum performance for a given (unseen) problem.
Keywords: Conjugate Gradient method; Sparse-Matrix Vector multiplication; Block Compressed Sparse Row format; Performance analysis; Performance estimation; Multiple GPUs
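To make the BCSR layout concrete, the sketch below shows a naive CUDA kernel for the SpMV operation (y = A·x) on a matrix stored in BCSR format, using one thread per block row of square R×R blocks. This is only a minimal illustration of the format under stated assumptions, not one of the optimized mappings evaluated in the paper; the array names (block_ptr, block_col, block_val) and the row-major per-block storage are assumptions.

```cuda
#include <cuda_runtime.h>

// Minimal illustrative BCSR SpMV sketch: y = A * x, square R x R blocks,
// values stored row-major within each block. One thread per block row.
// Array names and layout are assumptions, not the paper's actual kernels.
template <int R>
__global__ void bcsr_spmv(int num_block_rows,
                          const int   *block_ptr,  // size: num_block_rows + 1
                          const int   *block_col,  // block-column index per stored block
                          const float *block_val,  // dense values, R*R per stored block
                          const float *x,
                          float       *y)
{
    int brow = blockIdx.x * blockDim.x + threadIdx.x;
    if (brow >= num_block_rows) return;

    float acc[R];
    for (int r = 0; r < R; ++r) acc[r] = 0.0f;

    // Accumulate the contribution of every dense R x R block in this block row.
    for (int b = block_ptr[brow]; b < block_ptr[brow + 1]; ++b) {
        const float *blk  = block_val + (size_t)b * R * R;
        int          xoff = block_col[b] * R;
        for (int r = 0; r < R; ++r)
            for (int c = 0; c < R; ++c)
                acc[r] += blk[r * R + c] * x[xoff + c];
    }

    for (int r = 0; r < R; ++r)
        y[brow * R + r] = acc[r];
}

// Example launch for 2x2 blocks:
//   bcsr_spmv<2><<<(num_block_rows + 127) / 128, 128>>>(num_block_rows,
//       block_ptr, block_col, block_val, x, y);
```

Such a one-thread-per-block-row mapping is only the simplest possible baseline; as the abstract notes, block reordering and more elaborate mappings can substantially improve on it, especially for small block sizes.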
| Original language | English |
| --- | --- |
| Pages (from-to) | 552-575 |
| Number of pages | 24 |
| Journal | Parallel Computing |
| Volume | 38 |
| Issue number | 10-11 |
| DOIs | |
| Publication status | Published - 2012 |