This table compares three Iterative Re-Weighted Least Squares (IRWLS) methods for training SVMs, proposed by Morales and Vazquez [2016, 2017].
Here, we highlight the most interesting findings for the IRWLS methods compared to standard software such as LibSVM, SVMLight, and PS-SVM, as well as to each other where possible.
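For background, here is a minimal sketch of one generic IRWLS step in our own notation (the symbols $C$, $\xi_i$, $a_i$, $\boldsymbol{\beta}$, $b$, and $K$ are ours; the exact weight rule and convergence safeguards in the papers may differ): the hinge-loss term of the SVM objective is replaced by a weighted quadratic surrogate that matches it at the current iterate, the resulting weighted least-squares problem is solved, and the weights are then refreshed from the new errors.

$$
a_i = \begin{cases} 2C/\xi_i, & \xi_i = 1 - y_i f(x_i) > 0,\\ 0, & \text{otherwise,} \end{cases}
\qquad
\min_{\boldsymbol{\beta},\,b}\; \frac{1}{2}\boldsymbol{\beta}^{\top} K \boldsymbol{\beta} + \frac{1}{2}\sum_i a_i \bigl(y_i - f(x_i)\bigr)^2,
$$

where $f(x) = \sum_j \beta_j k(x_j, x) + b$ and $K$ is the kernel matrix of the training samples. Each step reduces to a linear system over the samples with $a_i > 0$, and the methods compared below are parallel implementations of this kind of iteration.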
Each column is described as follows:
- "Algorithm" indicates the corresponding algorithm's name or approach.
- "Compared to LibSVM" shows the performance of the corresponding algorithm compared to LibSVM.
- "Compared to SVMLight" describes the performance of the corresponding algorithm compared to SVMLight.
- "Compared to PS-SVM" explains the performance of the corresponding algorithm compared to PS-SVM.
- "Compared to each other" compares the performance of the corresponding algorithm to other algorithms in the table.
| Algorithm | Compared to LibSVM | Compared to SVMLight | Compared to PS-SVM | Compared to each other | Reference |
|-----------|--------------------|----------------------|--------------------|------------------------|-----------|
| PSIRWLS (Parallel Semi-parametric IRWLS) | Less efficient using 500 centroids and one core<br>98% less runtime using 50 centroids and 16 nodes<br>Up to 87% less runtime using 50 centroids and only one core<br>Slightly lower accuracy<br>Obtains fewer SVs<br>The number of SVs and the runtime scale better with the number of training samples | Less efficient using 500 centroids and only one core<br>99% less runtime using 50 centroids and 16 cores<br>The number of SVs and the runtime scale better with the number of training samples | The parallel Cholesky factorization in PSIRWLS is faster than the parallel block matrix in PS-SVM (see the sketch after the table)<br>67% less runtime using 16 nodes and up to 47% less runtime using 8 cores<br>Scales better with the number of cores<br>Linear speedup using 16 cores | PSIRWLS has the best speedup compared to PIRWLS and PS-SVM<br>Has the best scalability<br>All perform non-linear classification | Morales and Vazquez [2016] |
| PIRWLS (Parallel IRWLS) | Less efficient at solving the full SVM using only one core<br>Up to 78% less runtime using 16 cores<br>Obtains fewer SVs<br>The number of SVs and the runtime scale better with the number of training samples | Not efficient using only one core<br>Up to 97% less runtime using 16 cores<br>The runtime and the number of SVs scale better with the number of training samples | PS-SVM outperforms PIRWLS using 1, 8, and 16 cores<br>Obtains a larger number of SVs | PSIRWLS outperforms PIRWLS using 1, 8, and 16 cores<br>PSIRWLS obtains fewer SVs<br>The semi-parametric SVM (PSIRWLS) is more efficient than solving the full SVM<br>All perform non-linear classification | Morales and Vazquez [2016] |
| LIBIRWLS (LIBrary IRWLS) | Outperforms LibSVM using only one core<br>Up to 83% less runtime with comparable accuracy<br>Obtains fewer SVs<br>Scales better with the number of training samples | Outperforms SVMLight using only one core<br>Up to 98% less runtime with comparable accuracy<br>Obtains fewer SVs<br>Scales better with the number of training samples | PS-SVM with 50 centroids outperforms LIBIRWLS, at slightly lower accuracy | PSIRWLS with 50 centroids outperforms LIBIRWLS using 1 and 16 cores<br>All perform non-linear classification | Morales and Vazquez [2017] |
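To make the semi-parametric idea behind PSIRWLS concrete, the sketch below is a minimal single-core Python illustration, not the authors' code or the LIBIRWLS API: the classifier is restricted to a small set of centroids, each pass solves a small weighted least-squares system, and the Cholesky factorization of that system is the step the table above reports as parallelized. The function names, the simplified weight rule, the weight cap, and the omission of a bias term are assumptions made for brevity.

```python
import numpy as np


def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)


def semiparametric_irwls(X, y, centroids, C=1.0, gamma=1.0, n_iter=20, eps=1e-8):
    """Illustrative semi-parametric IRWLS sketch (not the LIBIRWLS implementation).

    The model is f(x) = sum_j beta_j * k(c_j, x) over a small set of centroids
    c_j, so each iteration only factorizes an m x m matrix, where m is the
    number of centroids rather than the number of support vectors.
    """
    Knm = rbf_kernel(X, centroids, gamma)          # n x m basis matrix
    Kmm = rbf_kernel(centroids, centroids, gamma)  # m x m regularization block
    m = centroids.shape[0]
    beta = np.zeros(m)
    for _ in range(n_iter):
        xi = 1.0 - y * (Knm @ beta)                # margin violations
        # IRWLS-style weights, capped here purely as a numerical safeguard
        a = np.where(xi > 0, np.minimum(2.0 * C / np.maximum(xi, eps), 1e8), 0.0)
        # Weighted regularized LS: (Kmm + Knm^T diag(a) Knm) beta = Knm^T diag(a) y
        A = Kmm + Knm.T @ (a[:, None] * Knm) + eps * np.eye(m)
        rhs = Knm.T @ (a * y)
        L = np.linalg.cholesky(A)                  # the factorization PSIRWLS parallelizes
        beta = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
    return beta


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0.0, 1.0, -1.0)
    centroids = X[rng.choice(len(X), size=50, replace=False)]  # a 50-centroid budget
    beta = semiparametric_irwls(X, y, centroids, C=10.0, gamma=0.5)
    acc = np.mean(np.sign(rbf_kernel(X, centroids, 0.5) @ beta) == y)
    print(f"training accuracy with a 50-centroid model: {acc:.3f}")
```

Because the factorized matrix in this sketch is only $m \times m$, shrinking the centroid budget (e.g., from 500 to 50, as in the comparisons above) directly shrinks the dominant per-iteration cost, which is consistent with the runtime pattern reported in the table.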