Evaluating automatically parallelized versions of the support vector machine

V. Codreanu, B. Dröge, D. Williams, B. Yasar, P. Yang, B. Liu, F. Dong, O. Surinta, L.R.B. Schomaker, J.B.T.M. Roerdink, M.A. Wiering

Research output: Contribution to journal › Article › Academic › peer-review

10 Citations (Scopus)

Abstract

The support vector machine (SVM) is a supervised learning algorithm used for recognizing patterns in data. It is a very popular technique in machine learning and has been successfully used in applications such as image classification, protein classification, and handwriting recognition. However, the computational complexity of the kernelized version of the algorithm grows quadratically with the number of training examples. To tackle this high computational complexity, we have developed a directive-based approach that converts a gradient-ascent based training algorithm for the CPU to an efficient graphics processing unit (GPU) implementation. We compare our GPU-based SVM training algorithm to the standard LibSVM CPU implementation, to a highly optimized GPU-LibSVM implementation, and to a directive-based OpenACC implementation. The results on different handwritten digit classification datasets demonstrate a substantial speed-up for our approach compared to the CPU and OpenACC versions. Furthermore, our solution is almost as fast as, and sometimes even faster than, the highly optimized CUBLAS-based GPU-LibSVM implementation, without sacrificing the algorithm's accuracy.
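The abstract refers to a gradient-ascent based training algorithm for the kernelized SVM. The paper's own implementation is not reproduced here, but a minimal sketch of the general technique is projected gradient ascent on the SVM dual objective, W(α) = Σᵢαᵢ − ½ ΣᵢΣⱼ αᵢαⱼyᵢyⱼK(xᵢ,xⱼ) subject to 0 ≤ αᵢ ≤ C. The function names, the RBF kernel choice, and the hyperparameter values below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2).
    d = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d)

def train_svm(X, y, C=1.0, gamma=1.0, lr=0.01, epochs=500):
    # Projected gradient ascent on the SVM dual objective:
    #   W(a) = sum(a) - 0.5 * a^T (K * y y^T) a,  with 0 <= a_i <= C.
    # The O(n^2) kernel matrix is what makes the kernelized SVM expensive
    # (and what a GPU can parallelize well).
    K = rbf_kernel(X, X, gamma)
    Q = K * np.outer(y, y)
    a = np.zeros(len(y))
    for _ in range(epochs):
        grad = 1.0 - Q @ a                    # gradient dW/da
        a = np.clip(a + lr * grad, 0.0, C)    # project onto the box [0, C]
    return a

def predict(X_train, y_train, a, X_test, gamma=1.0):
    # Decision function (bias term omitted in this sketch):
    #   sign( sum_i a_i y_i K(x_i, x) )
    return np.sign(rbf_kernel(X_test, X_train, gamma) @ (a * y_train))
```

Note that each gradient step is dominated by the dense matrix-vector product `Q @ a`, which is exactly the kind of regular, data-parallel work that maps well onto a GPU, whether written by hand in CUDA or generated from compiler directives as in the paper.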

Original language: English
Pages (from-to): 2274-2294
Number of pages: 21
Journal: Concurrency and Computation: Practice & Experience
Volume: 28
Issue number: 7
DOIs
Publication status: Published - 1 May 2016

Keywords

  • automatic parallelization
  • GPU
  • handwritten digit recognition
  • machine learning
  • support vector machine
