This paper shows two examples of how the analysis of option pricing problems can lead to computational methods that are efficiently implemented in parallel and that outperform "general purpose" methods (e.g., Monte Carlo or finite difference methods). We present the GPU implementation of two numerical algorithms to price two specific derivatives: continuous barrier options and realized variance options. These algorithms are implemented in CUDA subroutines ready to run on Graphics Processing Units (GPUs), and their performance is studied. The realization of these subroutines is motivated by the extensive use of the derivatives considered in the financial markets to hedge or to take risk, and by the interest of financial institutions in using state-of-the-art hardware and software to speed up the decision process.
The performance of these algorithms is measured using the (CPU/GPU) speed-up factor, that is, the ratio between the (wall-clock) times required to execute the code on a CPU and on a GPU. The reference CPU and GPU used to evaluate the speed-up factors presented are stated. The outstanding performance of the algorithms developed is due to the mathematical properties of the pricing formulae used and to the ad hoc software implementation. In the case of realized variance options, when the computation is done in single precision, the comparison between CPU and GPU execution times gives speed-up factors of the order of a few hundred. For barrier options, the corresponding speed-up factors are about fifteen to twenty.
We propose parallel algorithms, implemented in CUDA subroutines ready to run on Graphics Processing Units (GPUs), to price two kinds of financial derivatives: continuous barrier options and realized variance options. The very satisfactory parallel performance of these subroutines is mainly due to two reasons: the mathematical treatment of the problems considered, which leads to pricing formulae very well suited for GPU parallel computing, and the flexibility of the CUDA language, which makes it possible to implement parallel algorithms that exploit these formulae. The analysis of the parallel performance on a GPU of the CUDA implementation of these algorithms shows that GPU computing can be of great benefit in financial applications.

The use of parallel computing has gained importance in finance in the last decades because the speed of the decision-making process is a key success factor in financial market operations. The decision-making process in financial institutions (banks, insurance companies, hedge funds, …) often involves the evaluation of a huge number of contracts of the same type, varying the values assigned to some of the parameters that define the contracts and/or the models used to price them (such as, for example, strike price, maturity time, volatility, drift). This feature of the decision-making process makes it very well suited for parallel and/or distributed computing. Moreover, the evaluation of an individual contract is often done using Monte Carlo methods or reduces to the evaluation of an integral via numerical quadrature. These kinds of computation are also very well suited for parallel and/or distributed computing.
The CUDA subroutines to price barrier options and realized variance options can be downloaded from the website http://www.econ.univpm.it/recchioni/finance/w13. A more general presentation of the work in mathematical finance of some of the authors and of their coauthors can be found at the website http://www.econ.univpm.it/recchioni/finance/.
Lorella Fatone, Marco Giacinti, Francesca Mariani, Maria Cristina Recchioni and Francesco Zirilli. Parallel option pricing on GPU: barrier options and realized variance option. Journal of Supercomputing, 2012. [DOI: 10.1007/s11227-012-0813-7]