GPU Implementation of an Automatic Target Detection and Classification Algorithm for Hyperspectral Image Analysis
The detection of (moving or static) targets in remotely sensed hyperspectral images often requires real-time responses for swift decisions, which in turn depend on high-performance computing implementations of the analysis algorithms. The automatic target detection and classification algorithm (ATDCA) has been widely used for this purpose. In this letter, we develop several optimizations for accelerating the computational performance of ATDCA. The first focuses on the use of the Gram–Schmidt orthogonalization method instead of the orthogonal projection process adopted by the classic algorithm. The second is a new implementation of the algorithm on commodity graphics processing units (GPUs). The proposed GPU implementation exploits the GPU architecture at a low level, using shared memory and providing coalesced accesses to memory that lead to very significant speedup factors, thus taking full advantage of the computational power of GPUs. The GPU implementation is specifically tailored to hyperspectral imagery and the special characteristics of this kind of data, achieving real-time performance of ATDCA for the first time in the literature. The proposed optimizations are evaluated not only in terms of target detection accuracy but also in terms of computational performance on two different NVIDIA GPU architectures, the Tesla C1060 and the GeForce GTX 580, taking advantage of the performance of single-precision floating-point operations. Experiments are conducted on hyperspectral data sets collected by three different hyperspectral imaging instruments. The results reveal considerable acceleration factors while retaining the same target detection accuracy for the algorithm.
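To make the first optimization concrete: ATDCA iteratively selects target pixels, and the Gram–Schmidt variant grows an orthonormal basis of the target subspace one vector at a time instead of recomputing a full orthogonal-projection operator at each step. The following is a minimal NumPy sketch of that loop (function name, synthetic data, and details are illustrative assumptions; the letter's actual implementation is a CUDA kernel, not this CPU code):

```python
import numpy as np

def atdca_gs(pixels, num_targets):
    """ATDCA target selection sketch using Gram-Schmidt orthogonalization.

    pixels: (n_pixels, n_bands) array, one spectrum per row.
    Returns the indices of the selected target pixels.
    """
    # First target: the brightest pixel (largest spectral norm).
    norms = np.linalg.norm(pixels, axis=1)
    idx = [int(np.argmax(norms))]
    # Orthonormal basis of the target subspace, grown one GS step at a
    # time rather than rebuilding a projection matrix each iteration.
    basis = [pixels[idx[0]] / norms[idx[0]]]
    for _ in range(num_targets - 1):
        # Project every pixel onto the orthogonal complement of the basis.
        residual = pixels.astype(float).copy()
        for q in basis:
            residual -= np.outer(residual @ q, q)
        # Next target: the pixel farthest from the current target subspace.
        idx.append(int(np.argmax(np.linalg.norm(residual, axis=1))))
        v = residual[idx[-1]]
        basis.append(v / np.linalg.norm(v))
    return idx
```

On synthetic data with a few spectrally distinct bright pixels embedded in low-amplitude noise, the loop recovers exactly those pixels as targets.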
In this letter, we have developed the first real-time implementation of ATDCA, implemented with Gram–Schmidt (GS) orthogonalization, on GPU architectures. The proposed implementation has been specifically tailored to the particular aspects of hyperspectral data processing and makes advanced use of the GPU architecture, including the arrangement of the data in the GPU local and shared memories so as to ensure coalesced memory accesses and low memory-transfer times. Although the results obtained with a variety of hyperspectral images are very encouraging, GPUs are still far from being exploited in real missions due to power consumption and radiation tolerance issues that remain to be addressed in future developments. Future work will also explore merging different kernels in order to reduce kernel-invocation overhead.
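The data-arrangement point can be illustrated on the host side: storing the hyperspectral cube band-sequentially places all per-pixel values of one band contiguously, which is what allows consecutive GPU threads (one thread per pixel) to issue coalesced reads of a band. A small NumPy sketch of the two layouts (this is an illustrative assumption about the layout choice, not the authors' code):

```python
import numpy as np

bands, n_pixels = 4, 6

# Pixel-interleaved (BIP) layout: each row is the full spectrum of one
# pixel, so the values of a single band are strided across memory.
cube_bip = np.arange(bands * n_pixels, dtype=np.float32).reshape(n_pixels, bands)

# Band-sequential (BSQ) layout: transposing and making the result
# contiguous places all pixel values of each band side by side.
cube_bsq = np.ascontiguousarray(cube_bip.T)

# With one GPU thread per pixel, reading band b means accessing
# cube_bsq[b, tid]: consecutive threads then touch consecutive
# addresses, which is the coalesced-access pattern.
band0_bsq = cube_bsq[0]        # contiguous slice
band0_bip = cube_bip[:, 0]     # strided slice of the same values
```

The same values are read either way; only the memory stride seen by consecutive threads differs, and on a GPU that stride determines whether the reads coalesce into few memory transactions.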
López, S., Plaza, A., and Sarmiento, R. GPU Implementation of an Automatic Target Detection and Classification Algorithm for Hyperspectral Image Analysis. IEEE Geoscience and Remote Sensing Letters. 2012. [doi: 10.1109/LGRS.2012.2198790] [Free PDF]