At the GPU Technology Conference 2016, Nvidia presented its new DGX-1 supercomputer, designed for deep-learning applications such as artificial intelligence and neural networks. This powerful machine is built around Tesla P100 graphics modules and, in deep-learning workloads, delivers the performance of roughly 250 traditional x86 servers.
The main computing power of the Nvidia DGX-1 comes from eight Tesla P100 graphics cards with 16 GB of HBM2 memory each, giving the system a total of 28,672 Nvidia CUDA cores. Fast data transfer between the GPUs is handled by the NVLink Hybrid Cube Mesh interconnect, which is 5 to 12 times faster than the traditional PCIe Gen3 interconnect. Nvidia also chose two 16-core Intel Xeon E5-2698 v3 CPUs, clocked at 2.3 GHz (base frequency) or up to 3.6 GHz in Turbo mode. With Hyper-Threading support, 64 CPU threads are available. The DGX-1 additionally carries 512 GB of 2133 MHz DDR4 LRDIMM memory. Altogether, the system delivers 170 TFLOPS of half-precision (FP16) computing power.
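The aggregate figures above follow from simple per-device arithmetic. A minimal sketch, assuming the per-GPU numbers from Nvidia's published Tesla P100 specifications (3,584 CUDA cores and roughly 21.2 TFLOPS of peak FP16 throughput per card, neither of which is stated in this article):

```python
# Back-of-the-envelope check of the DGX-1 aggregate figures.
# Per-GPU values are assumptions taken from Nvidia's published
# Tesla P100 specifications, not from this article.

NUM_GPUS = 8
CUDA_CORES_PER_P100 = 3584       # published Tesla P100 spec (assumption)
FP16_TFLOPS_PER_P100 = 21.2      # published peak FP16 throughput (assumption)

total_cuda_cores = NUM_GPUS * CUDA_CORES_PER_P100
total_fp16_tflops = NUM_GPUS * FP16_TFLOPS_PER_P100

# CPU side: two 16-core Xeons with Hyper-Threading (2 threads per core).
NUM_CPUS = 2
CORES_PER_CPU = 16
total_cpu_threads = NUM_CPUS * CORES_PER_CPU * 2

print(total_cuda_cores)          # 28672 CUDA cores, matching the text
print(round(total_fp16_tflops))  # ~170 TFLOPS FP16
print(total_cpu_threads)         # 64 CPU threads
```

Multiplying out confirms the headline numbers: 8 × 3,584 = 28,672 cores and 8 × 21.2 ≈ 170 TFLOPS.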
What else? Storage is provided by four 2 TB SSDs in a RAID 0 configuration. A supercomputer of this kind also needs fast network connectivity, so Nvidia included dual 10 Gb Ethernet and quad 100 Gb InfiniBand interfaces. According to the published data, the whole system provides 768 GB/s of aggregate data-transfer bandwidth.
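The storage and network figures can be sanity-checked with the same kind of arithmetic. A small sketch, noting that RAID 0 stripes data across drives (capacities add, but there is no redundancy) and that the external network links alone fall well short of 768 GB/s, so that figure must be a system-wide aggregate rather than network bandwidth:

```python
# Quick arithmetic on the quoted DGX-1 storage and network figures.

# Storage: four 2 TB SSDs striped in RAID 0 -> capacities add,
# but a single drive failure loses the whole array.
SSD_COUNT = 4
SSD_CAPACITY_TB = 2
total_storage_tb = SSD_COUNT * SSD_CAPACITY_TB  # 8 TB, zero fault tolerance

# Network: dual 10 Gb Ethernet plus quad 100 Gb InfiniBand links.
ethernet_gbps = 2 * 10
infiniband_gbps = 4 * 100
total_network_gbps = ethernet_gbps + infiniband_gbps  # 420 Gb/s of link bandwidth

print(total_storage_tb)    # 8 TB of striped storage
print(total_network_gbps)  # 420 Gb/s, i.e. ~52.5 GB/s of external bandwidth
```

420 Gb/s is about 52.5 GB/s, an order of magnitude below 768 GB/s, which is consistent with reading Nvidia's number as internal aggregate throughput.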
The Nvidia DGX-1 supercomputer will be available in June, but only in the United States; the rest of Nvidia's customers will have to wait until Q3. The price is steep: $129,000.