Our problem is to solve linear systems of modest dimensions (typically, the number of variables equals 32) accurately on a general purpose graphics processing unit. The linear systems originate from the application of Newton’s method to polynomial systems of (moderately) large degrees. Newton’s method is applied as a corrector in a path following method, so the linear systems are solved in sequence and not simultaneously. One solution path may require the solution of thousands of linear systems. In previous work we reported good speedups with our implementation to evaluate and differentiate polynomial systems on the NVIDIA Tesla C2050. Although the cost of evaluation and differentiation often dominates the cost of linear system solving, the limited bandwidth of the communication between CPU and GPU means we cannot afford to send each linear system to the CPU for solving.
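For reference, the corrector step that generates each linear system is the standard Newton update: for a polynomial system f(x) = 0 with Jacobian matrix J_f, one corrector iteration solves

\[ J_f(x_k)\, \Delta x = -f(x_k), \qquad x_{k+1} = x_k + \Delta x, \]

so tracking one solution path with many predictor-corrector steps leads to the thousands of linear solves mentioned above.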
Because of the large degrees, the Jacobian matrix may contain values of extreme magnitude, requiring extended precision and leading to a significant overhead. This overhead of multiprecision arithmetic is an additional motivation to develop a massively parallel algorithm. To handle overdetermined linear systems, we solve the linear systems in the least squares sense, computing the QR decomposition of the matrix by the modified Gram-Schmidt algorithm. We describe our implementation of the modified Gram-Schmidt orthogonalization method for the NVIDIA Tesla C2050, using double double and quad double arithmetic. Our experimental results show that the achieved speedups are sufficiently high to compensate for the overhead of one extra level of precision.
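As a point of reference, the following is a minimal sequential sketch, in plain double precision C++, of a least squares solve via modified Gram-Schmidt QR. It is not the GPU code: the implementation described in this paper runs the inner loops massively in parallel and replaces double by double double or quad double arithmetic. The function name mgs_least_squares and the row-major data layout are assumptions made only for this illustration.

```cpp
// Sequential reference sketch of least squares solving via modified
// Gram-Schmidt QR, in plain double precision.  The GPU implementation
// parallelizes the inner loops and uses double double or quad double
// arithmetic instead of double; names here are illustrative only.
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;  // row-major, A[i][j]

// Solves A*x = b in the least squares sense for an m-by-n matrix A
// with m >= n, overwriting A with Q (orthonormal columns).
std::vector<double> mgs_least_squares(Matrix A, std::vector<double> b) {
    const std::size_t m = A.size(), n = A[0].size();
    Matrix R(n, std::vector<double>(n, 0.0));

    for (std::size_t k = 0; k < n; ++k) {
        // Normalize column k: R[k][k] = ||a_k||, q_k = a_k / R[k][k].
        double norm = 0.0;
        for (std::size_t i = 0; i < m; ++i) norm += A[i][k] * A[i][k];
        R[k][k] = std::sqrt(norm);
        for (std::size_t i = 0; i < m; ++i) A[i][k] /= R[k][k];
        // Orthogonalize the remaining columns against q_k; the
        // "modified" scheme updates each column immediately.
        for (std::size_t j = k + 1; j < n; ++j) {
            double dot = 0.0;
            for (std::size_t i = 0; i < m; ++i) dot += A[i][k] * A[i][j];
            R[k][j] = dot;
            for (std::size_t i = 0; i < m; ++i) A[i][j] -= dot * A[i][k];
        }
    }
    // Form y = Q^T * b, then solve the triangular system R*x = y
    // by back substitution.
    std::vector<double> y(n, 0.0), x(n, 0.0);
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t i = 0; i < m; ++i) y[k] += A[i][k] * b[i];
    for (std::size_t k = n; k-- > 0; ) {
        double s = y[k];
        for (std::size_t j = k + 1; j < n; ++j) s -= R[k][j] * x[j];
        x[k] = s / R[k][k];
    }
    return x;
}
```

In the modified (as opposed to classical) Gram-Schmidt scheme, each remaining column is updated immediately after its projection onto q_k is computed, which gives better numerical stability; the inner loops over the m rows are the natural candidates for mapping onto GPU threads.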
Based on our computational experiments, our conclusions are twofold:
- Despite the low dimensions, we experimentally show that the extra cost of multiprecision arithmetic can be compensated for by a GPU.
- Combined with the projected speedups of our massively parallel evaluation and differentiation implementation, these results pave the way for a path tracker that runs entirely on a GPU.
In this paper we apply the regular notion of speedup, comparing the timings on the GPU with the timings of one core on the CPU for the same computations. When considering different levels of precision, a common practice is to compare the accuracy obtained at each precision; running such independent calculations at different levels of precision is a pleasingly parallel computation which comes almost for free.
Using a massively parallel algorithm for the modified Gram-Schmidt orthogonalization on an NVIDIA Tesla C2050 Computing Processor, we can compensate for the cost of one extra level of precision, already for modest dimensions.