Parallel Computing of Support Vector Machines: A Survey

S. Tavara

ACM Computing Surveys, 51(6), Article 123, 2019.

The immense amount of data created by digitalization makes parallel computing indispensable for machine-learning methods. While there are many parallel implementations of support vector machines (SVMs), no single implementation is clearly preferable in every application scenario. Many factors, including the optimization algorithm, problem size and dimension, kernel function, parallel programming stack, and hardware architecture, impact the efficiency of an implementation. It is up to the user to balance the trade-offs, particularly between computation time and classification accuracy. In this survey, we review state-of-the-art parallel implementations of SVMs, discuss their pros and cons, and suggest possible avenues for future research.
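As a rough illustration of the time/accuracy trade-off the abstract refers to (not taken from the survey itself), the following sketch uses scikit-learn's sequential SVC to compare training time and accuracy across kernel choices and problem sizes; the dataset sizes, feature count, and C value are arbitrary assumptions chosen only for demonstration.

```python
# Hypothetical illustration: how kernel choice and problem size affect
# SVM training time and accuracy (values below are arbitrary assumptions).
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

for n_samples in (1_000, 5_000):
    # Synthetic binary classification data of increasing size.
    X, y = make_classification(n_samples=n_samples, n_features=50, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for kernel in ("linear", "rbf"):
        clf = SVC(kernel=kernel, C=1.0)
        t0 = time.perf_counter()
        clf.fit(X_tr, y_tr)                     # training cost grows with n_samples
        elapsed = time.perf_counter() - t0
        acc = clf.score(X_te, y_te)             # held-out classification accuracy
        print(f"n={n_samples:5d}  kernel={kernel:6s}  time={elapsed:6.2f}s  acc={acc:.3f}")
```

Because kernel SVM training scales super-linearly in the number of samples, runs like this quickly show why parallel implementations become necessary for large problems.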

Supplementary information is available at https://schlieplab.org/Supplements/ParSVMSurvey2018/.

DOI: 10.1145/3280989.
