A Three-Step Performance Automatic Tuning Strategy using Statistical Model for OpenCL Implementation of Krylov Subspace Methods
Abstract
In this work, we propose a three-step automatic performance tuning strategy that helps developers write applications with self-adaptive performance. We use OpenCL as our programming language and Krylov subspace methods as our test problems. By applying machine learning techniques, we build statistical performance models of a specific runtime environment from data collected through automatically executed experiments. These models are then used to search for tuning parameters that optimize computational performance. Finally, we further refine the choice of these parameters by exploiting the iterative nature of Krylov subspace methods. The choice of tuning parameters and the statistical modeling strategy are crucial to the effectiveness of our approach. In this paper, we evaluate the statistical models built for autotuning. The results show that the accuracy of the SVM classification model reaches 100% on the training dataset and 94.32% on the test dataset for SpMV, and 100% and 96.21%, respectively, for SAXPY.
- 2012-03-19
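The abstract describes an SVM classification model that maps problem characteristics to good tuning parameters for OpenCL kernels such as SpMV and SAXPY. As a rough illustration only, the sketch below (Python with scikit-learn, a toolchain not mentioned in the paper) trains such a classifier to pick an OpenCL work-group size for SpMV; the matrix features and candidate work-group sizes are assumptions made for illustration, not the paper's actual parameter space.

```python
# Minimal sketch, not the authors' code: an SVM classifier that selects an
# OpenCL work-group size for an SpMV kernel from simple sparse-matrix features.
# Features and candidate work-group sizes are illustrative assumptions.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Training data gathered from automated benchmark runs:
# each row = (rows, nnz, mean nnz per row, std of nnz per row),
# each label = work-group size that gave the fastest SpMV on this device.
X_train = [
    [10000,  50000,  5.0, 1.2],
    [20000, 400000, 20.0, 8.5],
    [5000,  250000, 50.0, 3.1],
]
y_train = [64, 128, 256]  # best observed work-group size per matrix

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
model.fit(X_train, y_train)

# Before launching the kernel on a new matrix, predict a work-group size;
# the Krylov iteration loop can then refine the choice by timing a few
# alternative configurations during its early iterations.
print(model.predict([[15000, 300000, 20.0, 7.9]]))
```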
Authors
-
Reiji Suda
Department of Computer Science, Graduate School of Information Science and Technology, University of Tokyo (presently with CREST, JST)
-
Cong Li
Department of Computer Science, Graduate School of Information Science and Technology, University of Tokyo
Related Papers
- An execution time prediction analytical model for GPU with instruction-level and thread-level parallelism awareness
- A precise measurement tool for power dissipation of CUDA kernels
- A Three-Step Performance Automatic Tuning Strategy using Statistical Model for OpenCL Implementation of Krylov Subspace Methods
- Efficient Monte Carlo Optimization with ATMathCoreLib
- Evaluation of Impact of Noise on Collective Algorithms in Repeated Computation Cycle
- The Future of Accelerator Programming: Abstraction, Performance or Can We Have Both?