TYAN Exhibits Artificial Intelligence and Deep Learning Optimized Server Platforms at GTC 2018
Delivering maximum performance, including the upcoming 2U Thunder HX TA88-B7107 with eight NVIDIA Tesla V100 SXM2 GPUs, and maximum-GPU-density server platforms with NVIDIA Tesla V100 32GB for Machine Learning, Artificial Intelligence, and Deep Neural Networks
San Jose, Calif. – GPU Technology Conference – Mar 27, 2018 – TYAN®, an industry-leading server platform design manufacturer and subsidiary of MiTAC Computing Technology Corporation, is showcasing a wide range of server platforms with support for NVIDIA® Tesla® V100, V100 32GB, P40, P4 PCIe and V100 SXM2 GPU accelerators that target Machine Learning, Artificial Intelligence, Deep Neural Networks, and Inference applications at the GPU Technology Conference (GTC) in San Jose, Calif., through March 29.
HPC workloads benefit tremendously from high-throughput communication between GPU accelerators in a server. Workloads common in Machine Learning and Artificial Intelligence require frequent memory transfers between GPUs and perform best with minimal latency and maximum device-to-device bandwidth. GPUs packaged with NVIDIA NVLink™ interconnect technology have a total of 150GB/s unidirectional (300GB/s bidirectional) bandwidth between GPU accelerators - nearly 10 times the bandwidth of GPU accelerators packaged in standard PCIe form factors. TYAN’s Thunder HX TA88-B7107 takes full advantage of NVIDIA NVLink technology, offering eight NVIDIA Tesla V100 SXM2 GPU accelerators packed within a 2U server enclosure. With four PCIe x16 slots available for high-speed networking and 24 DIMM slots supporting up to 3TB of system RAM, the TA88-B7107 is TYAN’s highest-performance GPU server option.
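The "nearly 10 times" figure can be sanity-checked with a quick back-of-the-envelope calculation. This is a sketch, not from the release itself: it assumes the PCIe comparison point is a PCIe 3.0 x16 link (8 GT/s per lane, 16 lanes, 128b/130b encoding, roughly 15.75 GB/s per direction), which is the slot type the V100 PCIe cards of this era used.

```python
# Back-of-the-envelope: NVLink vs. PCIe 3.0 x16 bandwidth, per direction.
# NVLink figure is from the text (V100 SXM2: 150 GB/s unidirectional,
# i.e. 6 NVLink 2.0 links at 25 GB/s per direction each).
nvlink_unidirectional_gbs = 150.0

# Assumed PCIe 3.0 x16 baseline:
# 8 GT/s per lane * 16 lanes * 128/130 encoding overhead / 8 bits per byte
pcie3_x16_unidirectional_gbs = 8.0 * 16 * (128 / 130) / 8  # ~15.75 GB/s

ratio = nvlink_unidirectional_gbs / pcie3_x16_unidirectional_gbs
print(f"PCIe 3.0 x16: {pcie3_x16_unidirectional_gbs:.2f} GB/s per direction")
print(f"NVLink advantage: {ratio:.1f}x")  # ~9.5x, i.e. "nearly 10 times"
```

Under these assumptions the ratio works out to roughly 9.5x, consistent with the release's "nearly 10 times" claim.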
TYAN is also exhibiting standard PCIe GPU servers with support for the new NVIDIA Tesla V100 32GB, which doubles the memory capacity of the original V100, as well as the P40 and P4 PCIe GPU accelerators. These include a pair of 4U server systems: the Thunder HX FT77D-B7109, with support for up to eight GPUs for massively parallel workloads such as scientific computing and large-scale facial recognition, and the Thunder HX FA77-B7119, which supports up to ten GPUs within a single server enclosure and is ideal for running multiple jobs in parallel in a virtualized environment.
The Intel® Xeon® Scalable Processor-based Thunder HX GA88-B5631 and AMD EPYC™ processor-based Transport HX GA88-B8021 both support up to four NVIDIA Tesla V100 32GB GPUs within a 1U server, making them the highest-density GPU servers on the market. Both platforms offer an additional PCIe x16 slot next to the GPU cards to accommodate high-speed networking adapters up to 100Gb/s, such as EDR InfiniBand or 100 Gigabit Ethernet. These platforms are ideal for Artificial Intelligence, Machine Learning, and Deep Neural Network workloads. Additionally, the GA88-B8021 can support up to six NVIDIA Tesla P4 GPU accelerators for inference applications.
“AI is transforming every industry by enabling more accurate decisions based on the massive amounts of data being collected. TYAN’s leading portfolio of GPU server platforms is based on the latest NVIDIA Tesla technology and optimized to deliver faster overall performance, greater efficiency, and lower energy and cost per unit of computation, providing our customers an efficient GPU computing platform for the AI revolution,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's TYAN Business Unit.
"The NVIDIA GPU computing platform is the engine for modern AI, accelerating all major deep learning frameworks,” said Paresh Kharya, Product Marketing Manager for the Accelerated Computing Group at NVIDIA. "Tesla V100 32GB GPUs now available in TYAN servers provide twice the memory capacity to drive up to 50% faster results on deeper and more accurate AI models."
TYAN GTC 2018 Exhibits
- 2U/8-GPU Thunder HX TA88-B7107: 2U dual-socket Intel Xeon Scalable Processor-based platform with support for up to eight NVIDIA Tesla V100 SXM2 GPU accelerators, 24 DDR4 DIMM slots, and two 2.5" NVMe U.2 drives
- 4U/10-GPU Thunder HX FA77-B7119: 4U dual-socket Intel Xeon Scalable Processor-based platform with support for up to ten NVIDIA Tesla GPU accelerators, 24 DDR4 DIMM slots, 11 PCIe x16 slots, and 14 2.5" hot-swap SATA 6Gb/s devices, four of which support NVMe U.2 drives
- 4U/8-GPU Thunder HX FT77D-B7109: 4U dual-socket Intel Xeon Scalable Processor-based platform with support for up to eight NVIDIA Tesla GPU accelerators, 24 DDR4 DIMM slots, 9 PCIe x16 slots, and 14 2.5" hot-swap SATA 6Gb/s devices, four of which support NVMe U.2 drives
- 1U/4-GPU Thunder HX GA88-B5631: 1U single-socket Intel Xeon Scalable Processor-based platform with support for up to four NVIDIA Tesla GPU accelerators, 12 DDR4 DIMM slots, five PCIe x16 slots, and two 2.5" hot-swap SATA 6Gb/s devices
- 1U/4-GPU Transport HX GA88-B8021: 1U single-socket AMD EPYC processor-based platform with support for up to four NVIDIA Tesla P100 or V100 GPU accelerators or six Tesla P4 GPU accelerators, 16 DDR4 DIMM slots, five PCIe x16 slots, and two 2.5" hot-swap SATA 6Gb/s devices
- 4U/1-GPU Thunder SX FA100-B7118: 4U dual-socket Intel Xeon Scalable Processor-based platform with support for a single NVIDIA Tesla P4 GPU accelerator on a PCIe x16 connection, 16 DDR4 DIMM slots, and 100 3.5" hot-swap SATA 6Gb/s devices