GPU Acceleration of Logistic Regression with CUDA


Date

2011-11


Publisher

Computer Science & Engineering Society c/o Department of Computer Science and Engineering, University of Moratuwa.

Abstract

Logistic regression (LR) is a widely used machine learning algorithm, but it is often regarded as too slow for high-dimensional problems compared to other machine learning algorithms such as SVMs, decision trees, and the Bayes classifier. In this paper we exploit the data-parallel nature of the algorithm to implement it on NVIDIA GPUs. We have implemented this GPU-based LR on the latest generation of GPUs with the Compute Unified Device Architecture (CUDA). Our GPU implementation is based on the BFGS optimization method, and it was extended to multiple-GPU and cluster environments. This paper describes the performance gains obtained in the GPU environment.
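
To make the data-parallel structure of LR concrete, below is a minimal CUDA sketch (not the paper's actual implementation) of the per-sample gradient computation that a BFGS optimizer would consume: one thread handles one training sample, computes the residual sigma(w . x_i) - y_i, and atomically accumulates that sample's contribution to the gradient. All names, dimensions, and the toy data are illustrative assumptions.

// Hypothetical sketch of a data-parallel LR gradient kernel.
// grad = sum_i (sigmoid(w . x_i) - y_i) * x_i, one thread per sample.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void lrGradientKernel(const float *X,  // n x d features, row-major
                                 const float *y,  // n labels in {0, 1}
                                 const float *w,  // d weights
                                 float *grad,     // d outputs, zero-initialized
                                 int n, int d)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Dot product w . x_i for this thread's sample.
    float z = 0.0f;
    for (int j = 0; j < d; ++j)
        z += w[j] * X[i * d + j];

    // Sigmoid prediction and residual for this sample.
    float p = 1.0f / (1.0f + expf(-z));
    float r = p - y[i];

    // Accumulate this sample's contribution to the shared gradient.
    for (int j = 0; j < d; ++j)
        atomicAdd(&grad[j], r * X[i * d + j]);
}

int main()
{
    const int n = 4, d = 2;
    float hX[n * d] = {0, 0,  0, 1,  1, 0,  1, 1};  // toy samples
    float hy[n]     = {0, 0, 0, 1};                 // toy labels
    float hw[d]     = {0.5f, 0.5f};                 // current weights

    float *dX, *dy, *dw, *dg;
    cudaMalloc(&dX, sizeof(hX));
    cudaMalloc(&dy, sizeof(hy));
    cudaMalloc(&dw, sizeof(hw));
    cudaMalloc(&dg, d * sizeof(float));
    cudaMemcpy(dX, hX, sizeof(hX), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, sizeof(hy), cudaMemcpyHostToDevice);
    cudaMemcpy(dw, hw, sizeof(hw), cudaMemcpyHostToDevice);
    cudaMemset(dg, 0, d * sizeof(float));

    lrGradientKernel<<<1, 64>>>(dX, dy, dw, dg, n, d);

    float hg[d];
    cudaMemcpy(hg, dg, sizeof(hg), cudaMemcpyDeviceToHost);
    printf("gradient = (%f, %f)\n", hg[0], hg[1]);

    cudaFree(dX); cudaFree(dy); cudaFree(dw); cudaFree(dg);
    return 0;
}

In a realistic high-dimensional setting the per-dimension atomicAdd loop would be replaced by a parallel reduction, and the BFGS optimizer would invoke this kernel once per gradient evaluation during its line search.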

Keywords

Machine learning, Classification, CUDA, Logistic regression, GPGPU

Citation

******
