Appro, a leading provider of supercomputing solutions, today announced the deployment of Appro HyperPower™ Clusters, based on the Appro CPU/GPU GreenBlade System, to provide the Lawrence Livermore National Laboratory (LLNL) Computing Center with a new visualization cluster called “Edge,” geared to support data analysis and visualization projects. The cluster will also be used by code developers who are porting simulation codes to run on GPUs. Appro will showcase the HyperPower™ Cluster Technology at the SC 2010 Conference and Exposition in New Orleans, LA, November 15-18, 2010, at booth #1939.
LLNL is a National Nuclear Security Administration (NNSA) laboratory with a mission to advance and apply science and technology to ensure the safety, security, and reliability of the U.S. nuclear deterrent; reduce or counter threats to national and global security; enhance the energy and environmental security of the nation; and strengthen the nation’s economic competitiveness.
“LLNL scientists required a platform with the latest GPU technology in order to take advantage of the performance increases available to visualization tools and other application codes. Visualization specialists are dealing with multi-terabyte data sets with tens of billions of zones, thousands of files per time step, and hundreds of time steps,” said Becky Springmeyer, Computational Systems and Software Environment Lead of the Advanced Simulation and Computing program at LLNL. “Post-processing tasks are heavily I/O bound, so specialized visualization servers that optimize I/O rather than CPU speed are better suited for this work, which will now be enabled through the ‘Edge’ cluster.”
“The inclusion of GPU boards provides a critical technology for the increasingly complicated visualization and data analysis applications needed to support petascale simulations and beyond,” said Bert Still, Exascale Computing Research Project Leader for LLNL. “They also provide a test bed for our code developers to assess the path forward to exascale computing.”
The Appro HyperPower cluster, based on the Appro GreenBlade System with CPU/GPU nodes, consists of six racks housing a total of 216 CPU nodes based on six-core Intel® Xeon® processors, combined with 208 NVIDIA Tesla GPUs. It delivers a total of 29 teraflops of computing power and up to 20 terabytes of memory to support the increased level of I/O operations inherent in the use of interactive data analysis tools. Compute and graphics nodes in the system are interconnected by a QDR InfiniBand fabric and managed by TOSS, a cluster operating system.
“Providing access to a machine with CUDA (Compute Unified Device Architecture) support is important to the laboratory’s research programs,” said Trent D’Hooge, Cluster Integration Lead at LLNL. “This system will be the first data analysis cluster that has GPUs with ECC support and increased double-precision floating-point performance.”
“Appro is proud to be able to provide Lawrence Livermore National Laboratory with a powerful GPU cluster for its visualization and exascale software development computing projects,” said John Lee, VP of Advanced Technology Solutions for Appro. “This cluster solution demonstrates Appro’s continued growth in hybrid computing deployments, which require the high memory capacity and I/O bandwidth needed for efficient data analysis and complex visualization tasks.”