AccelerEyes announced the new release of Jacket v2.3. This new version of Jacket brings even greater performance improvements through GPU computing for MATLAB code. (Click here to download v2.3) With v2.3, support has been added for CUDA 5.0, which enables computation on the latest Kepler K20 GPUs in the NVIDIA Tesla product line.
This is a must-have release for all Jacket users. The performance improvements are generally felt across the board:
- PINV gained GFOR support and a 10x performance improvement (switched from an SVD-based to a QR-based approach)
- Jacket HPC supports all operating systems (Windows, Linux, Mac) and no longer requires jacketd
- installer now includes FlexLM utilities (lmgrd, lmutil, …) for managing Jacket HPC and Concurrent Network Licenses
- Linux installer size decreased (faster Jacket JMC startup)
- FFT performance improved
- MIN and MAX indexed output is double precision when the input is double precision
- GINFO now returns both (a) the number of GPUs present and (b) the number of GPUs licensed, and no longer prints add-ons if none are present
- gselect('unused') returns the least-utilized GPU, for coordinating Jacket MGL
- SUM, MIN, MAX, ANY, and ALL now work inside GFOR for scalar inputs
- fixed a licensing error for MRDIVIDE
- POWER now works inside GFOR for scalar exponents
- avoids unnecessary GPU initialization in multi-GPU scenarios, saving memory (SPT-719)
- fixed memory leak when pushing MATLAB data to device (SPT-679)
- Jacket JMC now works alongside Jacket DLA, Jacket MGL, Jacket SLA, etc. (SPT-719)
- fixed IMERODE memory leak (SPT-521)
- Jacket HPC network licensing correctly uses license files instead of requiring environment variables (SPT-658)
- detects and warns if an arithmetic array is larger than 2 GB, which is unsupported (JKT-2538)
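Several of the items above concern GFOR, Jacket's construct for running loop iterations in parallel on the GPU. A minimal sketch of the idea, assuming a licensed Jacket installation (`gdouble` and `gzeros` create GPU-resident arrays, and `gfor`/`gend` replace `for`/`end`; exact usage may vary by version):

```matlab
% Minimal GFOR sketch -- requires Jacket installed and licensed.
N = 256;
A = gdouble(rand(N, N, 8));   % push data to the GPU
B = gzeros(N, N, 8);          % GPU-resident result array

% gfor evaluates all 8 iterations in parallel on the GPU;
% as of v2.3, PINV is supported inside GFOR as well.
gfor k = 1:8
    B(:,:,k) = pinv(A(:,:,k));
gend

B = double(B);                % pull the result back to the host
```

The same loop with a plain `for` would run the iterations serially; GFOR batches them into one GPU launch, which is where the per-function GFOR support listed above matters.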
Existing Jacket code can be accelerated further simply by downloading and upgrading to this new v2.3 release; no code modifications are required. Full release notes are here. Along with the Jacket v2.3 release, AccelerEyes offers a variety of technical computing resources, including monthly GPU computing webinars.
[via AccelerEyes blog]