IBM POWER9 Servers
The Best Server for Enterprise AI
IBM® Power System™ Accelerated Compute Server (AC922) delivers unprecedented performance for modern HPC, analytics, and artificial intelligence (AI). Enterprises can now deploy data-intensive workloads, like deep learning frameworks and accelerated databases, with confidence.
AC922 enables the cutting-edge AI innovation data scientists desire, with the dependability IT requires. This is IT infrastructure redesigned for enterprise AI.
Faster I/O – up to 5.6x more I/O bandwidth than x86 servers
The AC922 includes a variety of next-generation I/O architectures, including PCIe Gen4, CAPI 2.0, OpenCAPI and NVLink. These interconnects provide 2 to 5.6 times as much bandwidth for today’s data-intensive workloads compared with the PCIe Gen3 bus found in x86 servers.
The best GPUs – 2-6 NVIDIA® Tesla® V100 GPUs with NVLink
The AC922 pairs POWER9 CPUs with NVIDIA Tesla V100 GPUs over NVLink, delivering up to 5.6x the I/O bandwidth for each CPU-GPU pairing. This is the only server capable of delivering this level of I/O performance between CPUs and GPUs, providing massive throughput for HPC, deep learning and AI workloads.
Extraordinary CPUs – 2x POWER9 CPUs, designed for AI
While blazingly fast on their own, POWER9 CPUs truly excel in their ability to maximize the performance of everything around them. Built for the AI era, the POWER9 supports up to 5.6x more I/O and 2x more threads than its x86 contemporaries. The AC922 server is available in configurations with between 16 and 44 POWER9 cores.
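The "2x more threads" claim follows from POWER9's SMT4 (four hardware threads per core) versus the two-way simultaneous multithreading typical of x86 parts. A minimal sketch of that arithmetic, using the AC922's maximum core count and an illustrative x86 comparison at the same core count:

```python
# Hedged sketch: hardware thread counts from cores x SMT width.
# POWER9 supports SMT4; typical x86 server CPUs offer 2-way SMT.
def hw_threads(cores: int, smt_width: int) -> int:
    """Total hardware threads exposed by a given core/SMT configuration."""
    return cores * smt_width

power9_threads = hw_threads(44, 4)  # max AC922 config: 44 cores, SMT4
x86_threads = hw_threads(44, 2)     # same core count, 2-way SMT (illustrative)

print(power9_threads, x86_threads, power9_threads // x86_threads)  # 176 88 2
```

At any equal core count, the SMT4 design exposes twice the hardware threads, which is where the 2x figure comes from.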
Simplest AI architecture – Share RAM across CPUs & GPUs
AI models easily outgrow GPU memory capacity in most x86 servers. CPU-to-GPU coherence in the AC922 addresses this by allowing accelerated applications to use system memory as GPU memory. This coherence also simplifies programming by eliminating data-movement and locality requirements. Additionally, by leveraging the 5.6x faster NVLink interconnect, sharing memory between CPUs and GPUs doesn’t bottleneck down to PCIe Gen3 speeds, as it would on x86 servers.
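The capacity argument above can be made concrete with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not a sizing guide: a 16 GB V100 variant and an example 1 TB system-memory configuration.

```python
# Hedged sketch: why CPU-GPU coherence matters for large models.
# A model's working set can exceed a single GPU's HBM2 capacity;
# coherence lets the GPU address system memory over NVLink instead.
GPU_HBM_GB = 16        # NVIDIA Tesla V100, 16 GB variant (assumption)
SYSTEM_RAM_GB = 1024   # example AC922 system-memory config (assumption)

def placement(model_gb: float) -> str:
    """Classify where an accelerated workload's data can live."""
    if model_gb <= GPU_HBM_GB:
        return "fits in GPU memory"
    if model_gb <= GPU_HBM_GB + SYSTEM_RAM_GB:
        return "spills to coherent system memory over NVLink"
    return "exceeds addressable memory"

for size_gb in (8, 64, 2048):
    print(f"{size_gb:>5} GB working set: {placement(size_gb)}")
```

On an x86 server without coherence, the middle case would require explicit staging of data between host and device buffers; here the address space is simply shared.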
Enterprise-ready – PowerAI DL frameworks with IBM support
PowerAI DL frameworks simplify deep-learning deployment and improve performance, unlocking a new, simpler end-to-end toolchain for AI users. Proven AI performance and scalability let you start with one node, then scale to a rack or thousands of nodes with near-linear scaling efficiency.
Next Gen PCIe – PCIe Gen4, 2x faster than the PCIe Gen3 in x86 servers
The AC922 is the industry’s first server to feature the next generation of the industry standard PCIe interconnect. PCIe generation 4 delivers approximately 2x the data bandwidth of the PCIe generation 3 interconnect found in x86 servers. Built for the world’s biggest AI challenges.
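The "approximately 2x" figure falls out of the PCIe signalling rates: Gen3 runs at 8 GT/s per lane and Gen4 at 16 GT/s, both with 128b/130b encoding. A small sketch of that per-direction x16 throughput calculation (the encoding-overhead model is a simplification; protocol overheads are ignored):

```python
# Hedged sketch: approximate per-direction PCIe x16 bandwidth from the
# per-lane signalling rate and 128b/130b line-encoding payload ratio.
def pcie_x16_gb_per_s(gt_per_s: float, lanes: int = 16) -> float:
    """Usable GB/s per direction: GT/s * (128/130 payload) / 8 bits * lanes."""
    return gt_per_s * (128 / 130) / 8 * lanes

gen3 = pcie_x16_gb_per_s(8.0)   # PCIe Gen3: 8 GT/s per lane
gen4 = pcie_x16_gb_per_s(16.0)  # PCIe Gen4: 16 GT/s per lane

print(f"Gen3 x16 ~ {gen3:.1f} GB/s, Gen4 x16 ~ {gen4:.1f} GB/s, "
      f"ratio = {gen4 / gen3:.1f}x")
```

Doubling the signalling rate while keeping the same encoding yields exactly twice the raw bandwidth, hence the 2x headline.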
The AC922 is the backbone of the CORAL Summit supercomputer, on track to deliver 200+ PetaFlops of HPC performance and 3 ExaFlops of AI-as-a-service performance. With its efficiency and ease of AI deployment, it’s also ideally suited to address your organization’s AI aspirations.
IBM PowerAI deep learning frameworks
PowerAI helps accelerate this journey to cognitive computing by bringing together a collection of the most popular open source frameworks for deep learning, along with supporting software and libraries in a single installable package.
IBM Power Accelerated Compute Server (AC922)
Modern AI, HPC and analytics workloads are driving an ever-growing set of data intensive challenges that can only be met with accelerated infrastructure.