Developing AI and HPC solutions? Check out the new AMD ROCm 6.2 release

The latest release of AMD’s free and open software stack for developing AI and HPC solutions delivers 5 important enhancements. 


If you develop AI and HPC solutions, you’ll want to know about the most recent release of AMD ROCm software, version 6.2.

ROCm, in case you’re unfamiliar with it, is AMD’s free and open software stack. It’s aimed at developers of artificial intelligence and high-performance computing (HPC) solutions on AMD Instinct accelerators. It's also great for developing AI and HPC solutions on AMD Instinct-powered servers from Supermicro. 

First introduced in 2016, ROCm open software now includes programming models, tools, compilers, libraries, runtimes and APIs for GPU programming.

ROCm version 6.2, announced recently by AMD, delivers 5 key enhancements:

  • Improved vLLM support 
  • Boosted memory efficiency & performance with Bitsandbytes
  • New Offline Installer Creator
  • New Omnitrace & Omniperf Profiler Tools (beta)
  • Broader FP8 support

Let’s look at each separately and in more detail.

vLLM Support

To enhance the efficiency and scalability of its Instinct accelerators, AMD is expanding vLLM support. vLLM is an easy-to-use library for the large language models (LLMs) that power Generative AI.

ROCm 6.2 lets AMD Instinct developers integrate vLLM into their AI pipelines. The benefits include improved performance and efficiency.
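Here's roughly what that looks like in practice. The following is a minimal sketch, assuming a ROCm-enabled build of vLLM is installed on a system with an AMD Instinct accelerator; the model name and sampling settings are purely illustrative:

    # Minimal vLLM inference sketch (assumes a ROCm build of vLLM and
    # an AMD Instinct GPU visible to PyTorch; model name is illustrative).
    from vllm import LLM, SamplingParams

    llm = LLM(model="meta-llama/Llama-2-7b-hf")
    params = SamplingParams(temperature=0.8, max_tokens=128)

    # Generate completions for a batch of prompts on the accelerator.
    outputs = llm.generate(["What is high-performance computing?"], params)
    for output in outputs:
        print(output.outputs[0].text)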

Bitsandbytes

Developers can now integrate Bitsandbytes with ROCm for AI model training and inference, reducing their memory and hardware requirements on AMD Instinct accelerators. 

Bitsandbytes is an open-source Python library that brings 8-bit and 4-bit quantization to LLMs, boosting memory efficiency and performance. AMD says this will let AI developers work with larger models on limited hardware, broadening access, saving costs and expanding opportunities for innovation.
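For a sense of what this enables, here's a minimal sketch that loads a model through Bitsandbytes' Hugging Face Transformers integration. It assumes ROCm builds of PyTorch and Bitsandbytes are installed; the model name is illustrative:

    # Sketch: loading an LLM with 8-bit quantized weights via Bitsandbytes.
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # store weights as int8

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-13b-hf",      # illustrative model name
        quantization_config=bnb_config,   # roughly halves memory vs. FP16 weights
        device_map="auto",                # place layers on the available GPU(s)
    )
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

    inputs = tokenizer("Hello, Instinct!", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))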

Offline Installer Creator

The new ROCm Offline Installer Creator aims to simplify the installation process. This tool creates a single installer file that includes all necessary dependencies.

That makes deployment straightforward with a user-friendly GUI that allows easy selection of ROCm components and versions.

As the name implies, the Offline Installer Creator can be used on developer systems that lack internet access.

Omnitrace and Omniperf Profiler

The new Omnitrace and Omniperf Profiler Tools, both now in beta release, provide comprehensive performance analysis and a streamlined development workflow.

Omnitrace offers a holistic view of system performance across CPUs, GPUs, NICs and network fabrics. This helps developers identify and address bottlenecks.

Omniperf delivers detailed GPU kernel analysis for fine-tuning.

Together, these tools help to ensure efficient use of developer resources, leading to faster AI training, AI inference and HPC simulations.

FP8 Support

Broader FP8 support can improve the performance of AI inferencing.

FP8 is an 8-bit floating point format that provides a common, interchangeable format for both AI training and inference. It lets AI models operate and perform consistently across hardware platforms.

In ROCm, FP8 support improves the process of running AI models, particularly in inferencing. It does this by addressing key challenges such as the memory bottlenecks and high latency associated with higher-precision formats. In addition, FP8's reduced-precision calculations can decrease the latency involved in data transfers and computations, with little to no loss of accuracy.

ROCm 6.2 expands FP8 support across its ecosystem, from frameworks to libraries and more, enhancing performance and efficiency.
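To make the memory savings concrete, recent PyTorch builds expose FP8 data types directly. The snippet below is a sketch only; the exact FP8 APIs available on ROCm 6.2 vary by framework and version:

    # Sketch: FP8 (E4M3) uses half the bytes of FP16 for the same
    # number of values, cutting memory use and data-transfer traffic.
    import torch

    x_fp16 = torch.randn(1024, 1024, dtype=torch.float16)
    x_fp8 = x_fp16.to(torch.float8_e4m3fn)  # E4M3: 1 sign, 4 exponent, 3 mantissa bits

    print(x_fp16.element_size())  # 2 bytes per element
    print(x_fp8.element_size())   # 1 byte per element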

HBM: Your memory solution for AI & HPC

High-bandwidth memory shortens the information commute to keep pace with today’s powerful GPUs.


As AI powered by GPUs transforms computing, conventional DDR memory can’t keep up.

The solution? High-bandwidth memory (HBM).

HBM is memory chip technology that essentially shortens the information commute. It does this using ultra-wide communication lanes.

An HBM device contains vertically stacked memory chips. They’re interconnected by microscopic wires known as through-silicon vias, or TSVs for short.

HBM also provides more bandwidth per watt. And, with a smaller footprint, the technology can also save valuable data-center space.

Here’s how: A single HBM stack can contain up to eight DRAM modules, with each module connected by two channels. This makes an HBM implementation of just four chips roughly equivalent to 30 DDR modules, and in a fraction of the space.

All this makes HBM ideal for workloads that utilize AI and machine learning, HPC, advanced graphics and data analytics.

Latest & Greatest

The latest iteration, HBM3, was introduced in 2022, and it’s now finding wide application in market-ready systems.

Compared with the previous version, HBM3 adds several enhancements:

  • Higher bandwidth: Up to 819 GB/sec., up from HBM2’s max of 460 GB/sec. (see the quick math after this list)
  • More memory capacity: 24GB per stack, up from HBM2’s 8GB
  • Improved power efficiency: Delivering more data throughput per watt
  • Reduced form factor: Thanks to a more compact design
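That headline bandwidth figure follows directly from HBM’s ultra-wide interface. Here’s the quick math, assuming the standard 1,024-bit per-stack interface and HBM3’s 6.4 Gbit/sec.-per-pin data rate:

    # Back-of-envelope: HBM3 peak bandwidth per stack.
    bus_width_bits = 1024   # HBM's ultra-wide per-stack interface
    pin_rate_gbps = 6.4     # HBM3 data rate per pin, in Gbit/sec.

    bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8  # bits -> bytes
    print(f"{bandwidth_gbs:.1f} GB/sec. per stack")     # -> 819.2 GB/sec.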

However, it’s not all sunshine and rainbows. For one, HBM-equipped systems are more expensive than those fitted out with traditional memory solutions.

Also, HBM stacks generate considerable heat. Advanced cooling systems are often needed, adding further complexity and cost.

Compatibility is yet another challenge. Systems must be designed or adapted to HBM3’s unique interface and form factor.

In the Market

As mentioned above, HBM3 is showing up in new products. That includes both the AMD Instinct MI300A and MI300X series accelerators.

The AMD Instinct MI300A accelerator combines a CPU and GPU for running HPC/AI workloads. It offers HBM3 as the dedicated memory with a unified capacity of up to 128GB.

Similarly, the AMD Instinct MI300X is a GPU-only accelerator designed for low-latency AI processing. It contains HBM3 as the dedicated memory, but with a higher capacity of up to 192GB.

For both of these AMD Instinct MI300 accelerators, the peak theoretical memory bandwidth is a speedy 5.3TB/sec.

The AMD Instinct MI300X is also the main processor in Supermicro’s AS-8125GS-TNMR2, an H13 8U 8-GPU system. This system offers a huge 1.5TB of HBM3 memory in single-server mode, and an even larger 6.144TB at rack scale.

Are your customers running AI with fast GPUs, only to have their systems held back by conventional memory? Tell them to check out HBM.

Tech Explainer: What is CXL — and how can it help you lower data-center latency?

High latency is a data-center manager’s worst nightmare. Help is here from an open industry standard known as CXL. It works by maintaining “memory coherence” between the CPU’s memory and memory on attached devices.


Latency is a crucial measure for every data center. It’s the time data takes to travel from one point in a system or network to another, so lower is generally better. A network with high latency has slower response times. Not good.

Fortunately, the industry has come up with an open standard that provides a low-latency link between processors, accelerators and memory devices such as RAM and SSD storage. It’s known as Compute Express Link, or CXL for short.

CXL is designed to solve a couple of common problems. Once a processor uses up the capacity of its direct-attached memory, it relies on an SSD. This introduces a three-order-of-magnitude latency gap that can hurt both performance and total cost of ownership (TCO).

Another problem is that multicore processors are starving for memory bandwidth. This has become an issue because processors have been scaling in terms of cores and frequencies faster than their main memory channels. The resulting deficit leads to suboptimal use of the additional processor cores, as the cores have to wait for data.

CXL overcomes these issues by introducing a low-latency, memory cache coherent interconnect. CXL works for processors, memory expansion and AI accelerators such as the AMD Instinct MI300 series. The interconnect provides more bandwidth and capacity to processors, which increases efficiency and enables data-center operators to get more value from their existing infrastructure.

Cache-coherence refers to IT architecture in which multiple processor cores share the same memory hierarchy, yet retain individual L1 caches. The CXL interconnect reduces latency and increases performance throughout the data center.

The latest iteration of CXL, version 3.1, adds features to help data centers keep up with high-performance computational workloads. Notable upgrades include new peer-to-peer direct memory access, enhancements to memory pooling, and CXL Fabric improvements.

3 Ways to CXL

Today, there are three main types of CXL devices:

  • Type 1: Devices with no integrated local memory of their own, such as SmartNICs. CXL lets these devices coherently access and cache the host processor’s memory.
  • Type 2: These devices include integrated memory, but also share CPU memory. They leverage CXL to enable coherent memory-sharing between the CPU and the CXL device.
  • Type 3: A class of devices designed to augment existing CPU memory. CXL enables the CPU to access this external memory for increased bandwidth and reduced latency. (See the sketch after this list.)
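On Linux servers, a Type 3 memory expander typically appears as a CPU-less NUMA node. The sketch below is hypothetical: it uses the standard numactl utility to bind a process’s allocations to such a node, and the node ID will vary by system:

    # Sketch: binding a workload's memory to a CXL-backed NUMA node.
    import subprocess

    CXL_NODE = 1  # hypothetical NUMA node ID for CXL-attached memory

    # numactl --membind forces the child's allocations onto the given node.
    subprocess.run(
        ["numactl", f"--membind={CXL_NODE}", "python3", "-c",
         "buf = bytearray(1 << 30)  # 1 GiB served from the CXL node"],
        check=True,
    )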

Hardware Support

As data-center architectures evolve, more hardware manufacturers are supporting CXL devices. One such example is Supermicro’s All-Flash EDSFF and NVMe servers.

Supermicro’s cutting-edge appliances are optimized for resource-intensive workloads, including data-center infrastructure, data warehousing, hyperscale/hyperconverged and software-defined storage. To facilitate these workloads, Supermicro has included support for up to eight CXL 2.0 devices for advanced memory-pool sharing.

Of course, CXL can be utilized only on server platforms designed to support communication between the CPU, memory and CXL devices. That’s why CXL is built into the 4th gen AMD EPYC server processors.

These AMD EPYC processors include up to 96 ‘Zen 4’ 5nm cores and 32MB of L3 cache per CCD, as well as up to 12 DDR5 memory channels supporting as much as 12TB of memory.

CXL memory expansion is built into the AMD EPYC platform. That makes these CPUs ideally suited for advanced AI and GenAI workloads.

Crucially, AMD also includes 256-bit AES-XTS and secure multikey encryption. This enables hypervisors to encrypt address space ranges on CXL-attached memory.

The Near Future of CXL

Like many add-on devices, CXL devices are often connected via the PCI Express (PCIe) bus. However, implementing CXL over PCIe 5.0 in large data centers has some drawbacks.

Chief among them is the way its memory pools remain isolated from each other. This adds latency and hampers significant resource-sharing.

The next generation of PCIe, version 6.0, is coming soon and will offer a solution. CXL over PCIe 6.0 will offer twice the throughput of PCIe 5.0.

The new PCIe standard will also add new memory-sharing functionality within the transaction layer. This will help reduce system latency and improve accelerator performance.

CXL is also ushering in disaggregated computing, in which resources residing in different physical enclosures can be made available to several applications.

Are your customers suffering from too much latency? The solution could be CXL.

Meet AMD's new Alveo V80 Compute Accelerator Card

AMD’s new Alveo V80 Compute Accelerator Card has been designed to overcome performance bottlenecks in compute-intensive workloads that include HPC, data analytics and network security.


Are you or your customers looking for an accelerator for memory-bound applications with large data sets that require FPGA hardware adaptability? If so, then check out the new AMD Alveo V80 Compute Accelerator Card.

It was introduced by AMD at ISC High Performance 2024, an event held recently in Hamburg, Germany.

The thinking behind the new component is that for large-scale data processing, raw computational power is only half the equation. You also need lots of memory bandwidth.

Indeed, AMD’s new hardware adaptable accelerator is purpose-built to overcome performance bottlenecks for compute-intensive workloads with large data sets common to HPC, data analytics and network security applications. It’s powered by AMD’s 7nm Versal HBM Series adaptive system-on-chip (SoC).

Substantial gains

AMD says that compared with the previous-generation Alveo U55C, the new Alveo V80 offers up to 2x the memory bandwidth (820GB/sec.), 2x the PCIe bandwidth, 2x the logic density and 4x the network bandwidth.

The card also features 4x200G networking, PCIe Gen4 and Gen5 interfaces, and DDR4 DIMM slots for memory expansion.

Appropriate workloads for the new AMD Alveo V80 include HPC, data analytics, FinTech/Blockchain, network security, computational storage, and AI compute.

In addition, the AMD Alveo V80 can scale to hundreds of nodes over Ethernet, creating compute clusters for HPC applications that include genomic sequencing, molecular dynamics and sensor processing.

Developers, too

A production board in a PCIe form factor, the AMD Alveo V80 is designed to offer a faster path to production than designing your own PCIe card.

Indeed, for FPGA developers, the V80 is fully enabled for traditional development via the Alveo Versal Example Design (AVED), which is available on GitHub.

This example design provides an efficient starting point using a pre-built subsystem implemented on the AMD Versal adaptive SoC. More specifically, it targets the new AMD Alveo V80 accelerator.

Supermicro offering

The new AMD accelerator is already shipping in volume, and you can get it from either AMD or an authorized distributor.

In addition, you can get the Alveo V80 already integrated into a partner-provided server.

Supermicro is integrating the new AMD Alveo V80 with its AMD EPYC processor-powered A+ servers. These include the Supermicro AS-4125GS-TNRT, a compact 4U server for deployments where compute density and memory bandwidth are critical.

Early user

AMD says one early customer for the new accelerator card is the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the national research organization of Australia.

CSIRO plans to upgrade an older setup with 420 previous-generation AMD Alveo U55C accelerator cards, replacing them with the new Alveo V80.

 Because the new part is so much more powerful than its predecessor, the organization expects to reduce the number of cards it needs by two-thirds. That, in turn, should shrink the data-center footprint required and lower system costs.

If those sound like benefits you and your customers would find attractive, check out the AMD Alveo V80 links below.

Supermicro, Vast collaborate to deliver turnkey AI storage at rack scale

Supermicro and Vast Data are jointly offering an AMD-based turnkey solution that promises to simplify and accelerate AI and data pipelines.


Supermicro and Vast Data are collaborating to deliver a turnkey, full-stack solution for creating and expanding AI deployments.

This joint solution is aimed at hyperscalers, cloud service providers (CSPs) and large, data-centric enterprises in fintech, adtech, media and entertainment, chip design and high-performance computing (HPC).

Applications that can benefit from the new joint offering include enterprise NAS and object storage; high-performance data ingestion; supercomputer data access; scalable data analysis; and scalable data processing.

Vast, founded in 2016, offers a software data platform that enterprises and CSPs use for data-intensive computing. The platform is based on a distributed systems architecture, called DASE, that allows a system to run read and write operations at any scale. Vast’s customers include Pixar, Verizon and Zoom.

By collaborating with Supermicro, Vast hopes to extend its market. Currently, Vast sells to infrastructure providers at a variety of scales. Some of its largest customers have built 400 petabyte storage systems, and a few are even discussing systems that would store up to 2 exabytes, according to John Mao, Vast’s VP of technology alliances.

Supermicro and Vast have engaged with many of the same CSPs separately, supporting various parts of the solution. By formalizing this collaboration, they hope to extend their reach to new customers while increasing their sell-through to current customers.

Vast is also looking to the Supermicro alliance to expand its global reach. While most of Vast’s customers today are U.S.-based, Supermicro operates in over 100 countries worldwide. Supermicro also has the infrastructure to integrate, test and ship 5,000 fully populated racks per month from its manufacturing plants in California, Netherlands, Malaysia and Taiwan.

There’s also a big difference in size. Where privately held Vast has about 800 employees, publicly traded Supermicro has more than 5,100.

Rack solution

Now Vast and Supermicro have developed a new converged system using Supermicro’s Hyper A+ servers with AMD EPYC 9004 processors. The solution combines Vast’s 2 separate server types in a single system.

This converged system is well suited to large service providers, where the typical Supermicro-powered Vast rack configuration will start at about 2PB, Mao adds.

Rack-scale configurations can cut costs by eliminating the need for single-box redundancy. This converged design makes the system more scalable and more cost-efficient.

Under the hood

One highlight of the joint project: It puts Vast’s DASE architecture on Supermicro’s industry-standard servers. Each server will have both the compute and storage functions of a Vast cluster.

At the same time, the architecture is disaggregated via a high-speed Ethernet NVMe fabric. This allows each node to access all drives in the cluster.

The Vast platform architecture is built from units the company calls EBoxes. Each EBox contains 2 kinds of storage servers running in a container environment: CNodes (short for Compute Node) and DNodes (short for Data Node). In a typical EBox, one CNode interfaces with client applications and writes directly to two DNode containers.

In this configuration, Supermicro’s storage servers can act as a hardware building block to scale Vast to hundreds of petabytes. It supports Vast’s requirement for multiple tiers of solid-state storage media, an approach that’s unique in the industry.

CPU to GPU

At the NAB Show, held recently in Las Vegas, Supermicro’s demos included storage servers, each powered by a single-socket AMD EPYC 9004 Series processor.

With up to 128 PCIe Gen 5 lanes, the AMD processor empowers the server to connect more SSDs via NVMe with a single CPU. The Supermicro storage server also supports Nvidia’s GPUDirect Storage protocol, letting users move data directly from storage to GPU memory via RDMA, essentially bypassing the GPU cluster’s CPU.

If you or your customers are interested in the new Vast solution, get in touch with your local Supermicro sales rep or channel partner. Under the terms of the new partnership, Supermicro is acting as a Vast integrator and OEM. It’s also Vast’s only rack-scale partner.

AMD and Supermicro: Pioneering AI Solutions

In the constantly evolving landscape of AI and machine learning, the synergy between hardware and software is paramount. Enter AMD and Supermicro, two industry titans who have joined forces to empower organizations in the new world of AI with cutting-edge solutions.


Bringing AMD Instinct to the Forefront

In the constantly evolving landscape of AI and machine learning, the synergy between hardware and software is paramount. Enter AMD and Supermicro, two industry titans who have joined forces to empower organizations in the new world of AI with cutting-edge solutions. Their shared vision? To enable organizations to unlock the full potential of AI workloads, from training massive language models to accelerating complex simulations.

The AMD Instinct MI300 Series: Changing The AI Acceleration Paradigm

At the heart of this collaboration lies the AMD Instinct MI300 Series—a family of accelerators designed to redefine performance boundaries. These accelerators combine high-performance AMD EPYC™ 9004 Series CPUs with the powerful AMD Instinct™ MI300X GPU accelerators and 192GB of HBM3 memory, creating a formidable force for AI, HPC, and technical computing.

Supermicro’s H13 Generation of GPU Servers

Supermicro’s H13 generation of GPU Servers serves as the canvas for this technological masterpiece. Optimized for leading-edge performance and efficiency, these servers integrate seamlessly with the AMD Instinct MI300 Series. Let’s explore the highlights:

8-GPU Systems for Large-Scale AI Training:

  • Supermicro’s 8-GPU servers, equipped with the AMD Instinct MI300X OAM accelerator, offer raw acceleration power. The AMD Infinity Fabric™ Links enable up to 896GB/s of peak theoretical P2P I/O bandwidth, while the 1.5TB of HBM3 GPU memory fuels large-scale AI models. (See the quick math after this list.)
  • These servers are ideal for LLM Inference and training language models with trillions of parameters, minimizing training time and inference latency, lowering the TCO and maximizing throughput.
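Those two headline numbers are easy to sanity-check. The quick math below assumes seven Infinity Fabric links per GPU (one to each peer in an 8-GPU system) at 128GB/sec. each, plus 192GB of HBM3 per MI300X:

    # Back-of-envelope: per-GPU P2P bandwidth and total HBM3 in an
    # 8-GPU MI300X system (link count and per-link rate are assumptions).
    links_per_gpu = 7      # one Infinity Fabric link to each of the 7 peers
    link_bw_gbs = 128      # assumed peak per link, GB/sec.
    hbm_per_gpu_gb = 192   # HBM3 capacity per MI300X

    print(f"{links_per_gpu * link_bw_gbs} GB/sec. peak P2P per GPU")  # -> 896
    print(f"{8 * hbm_per_gpu_gb / 1000:.1f} TB HBM3 per system")      # -> 1.5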

Benchmarking Excellence

But what about real-world performance? Fear not! Supermicro’s ongoing testing and benchmarking efforts have yielded remarkable results. The continued engagement between the AMD and Supermicro performance teams enabled Supermicro to test pre-release ROCm versions with the latest performance optimizations, including publicly released optimizations like Flash Attention 2 and vLLM. The Supermicro AMD-based system AS-8125GS-TNMR2 showcases AI inference prowess, especially on models like Llama-2 70B, Llama-2 13B and Bloom 176B. The performance? Equal to or better than AMD’s published results from the Dec. 6 Advancing AI event.


Charles Liang’s Vision

In the words of Charles Liang, President and CEO of Supermicro:

“We are very excited to expand our rack scale Total IT Solutions for AI training with the latest generation of AMD Instinct accelerators. Our proven architecture allows for fully integrated liquid cooling solutions, giving customers a competitive advantage.”

Conclusion

The AMD-Supermicro partnership isn’t just about hardware and software stacks; it’s about pushing boundaries, accelerating breakthroughs, and shaping the future of AI. So, as we raise our virtual glasses, let’s toast to innovation, collaboration, and the relentless pursuit of performance and excellence.

Supermicro Adds AI-Focused Systems to H13 JumpStart Program

Supermicro is now letting you validate, test and benchmark AI workloads on its AMD-based H13 systems right from your browser. 


Supermicro has added new AI-workload-optimized GPU systems to its popular H13 JumpStart program. This means you and your customers can validate, test and benchmark AI workloads on a Supermicro H13 system right from your PC’s browser.

The JumpStart program offers remote sessions to fully configured Supermicro systems with SSH, VNC, and web IPMI. These systems feature the latest AMD EPYC 9004 Series Processors with up to 128 ‘Zen 4c’ cores per socket, DDR5 memory, PCIe 5.0, and CXL 1.1 peripherals support.

In addition to previously available models, Supermicro has added the H13 4U GPU System with dual AMD EPYC 9334 processors and Nvidia L40S AI-focused universal GPUs. This H13 configuration is designed for heavy AI workloads, including applications that leverage machine learning (ML) and deep learning (DL).

3 simple steps

The engineers at Supermicro know the value of your customer’s time. So, they made it easy to initiate a session and get down to business. The process is as simple as 1, 2, 3:

  • Select a system: Go to the main H13 JumpStart page, then scroll down and click one of the red “Get Access” buttons to browse available systems. Then click “Select Access” to pick a date and time slot. On the next page, select the configuration and press “Schedule” and then “Confirm.”
  • Sign in: Log in with a Supermicro SSO account to access the JumpStart program. If you or your customers don’t already have an account, creating a new account is both free and easy.
  • Initiate secure access: When the scheduled time arrives, begin the session by visiting the JumpStart page. Each server will include documentation and instructions to help you get started quickly.
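Once a session begins, the system behaves like any remote server. Here’s a hypothetical sketch that opens the SSH connection from Python using the paramiko library; the hostname and credentials are placeholders for the ones JumpStart issues you:

    # Hypothetical sketch: connecting to a JumpStart session over SSH.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    # Placeholder host and credentials; use the ones provided
    # when your scheduled JumpStart session starts.
    client.connect("jumpstart-proxy.example.com", port=22,
                   username="demo_user", password="demo_pass")

    # Inspect the EPYC CPUs on the demo system.
    _, stdout, _ = client.exec_command("lscpu")
    print(stdout.read().decode())
    client.close()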

So very secure

Security is built into the program. For instance, the server is not on a public IP address. Nor is it directly addressable to the Internet. Supermicro sets up the jump server as a proxy, and this provides access to only the server you or your customer are authorized to test.

And there’s more. After your JumpStart session ends, the server is manually secure-erased, the BIOS and firmware are re-flashed, and the OS is reinstalled with new credentials. That way, you can be sure any data you’ve sent to the H13 system will disappear once the session ends.

Supermicro is serious about its security policies. However, the company still warns users to keep sensitive data to themselves. The JumpStart program is meant for benchmarking, testing and validation only. In their words, “processing sensitive data on the demo server is expressly prohibited.”

Keep up with the times

Supermicro’s expertly designed H13 systems are at the core of the JumpStart program, with new models added regularly to address typical workloads.

In addition to the latest GPU systems, the program also features hardware focused on evolving data center roles. This includes the Supermicro H13 CloudDC system, an all-in-one rackmount platform for cloud data centers. Supermicro CloudDC systems include single AMD EPYC 9004 series processors and up to 10 hot-swap NVMe/SATA/SAS drives.

You can also initiate JumpStart sessions on Supermicro Hyper Servers. These multi-use machines are optimized for tasks including cloud, 5G core, edge, telecom and hyperconverged storage.

Supermicro Hyper Servers included in the company’s JumpStart program offer single or dual processor configurations featuring AMD EPYC 9004 processors and up to 8TB of DDR5 memory in a 1U or 2U form factor.

Helping your customers test and validate a Supermicro H13 system for AI is now easy. Just get a JumpStart.

AMD CTO: ‘AI across our entire portfolio’

In a presentation for industry analysts, AMD chief technology officer Mark Papermaster laid out the company’s vision for artificial intelligence everywhere — from PC and edge endpoints to the largest hyperscaler servers.


The current buildout of the artificial intelligence infrastructure is an event as big as the original launch of the internet.

AI, now mainly an expense, will soon be monetized. Thousands of AI applications are coming.

And AMD plans to embed AI across its entire product portfolio. That will include components and software on everything from PCs and edge sensors to the largest servers used by the big cloud hyperscalers.

These were among the comments of Mark Papermaster, AMD’s executive VP and CTO, during a recent fireside chat hosted by stock research firm Arete Research. During the hour-long virtual presentation, Papermaster answered questions from moderator Brett Simpson of Arete and attending stock analysts. Here are the highlights.

The overall AI market

AMD has said it believes the total addressable market (TAM) for AI through 2027 is $400 billion. “That surprised a lot of people,” Papermaster said, but AMD believes a huge AI infrastructure is needed.

That will begin with the major hyperscalers. AWS, Google Cloud and Microsoft Azure are among those looking at massive AI buildouts.

But there’s more. AI is not only in the domain of these massive clusters. Individual businesses will be looking for AI applications that can drive productivity and enhance the customer experience.

The models for these kinds of AI systems are typically smaller. They can be run on smaller clusters, too, whether on-premises or in the cloud.

AI will also make its way into endpoint devices. They’ll include PCs, embedded devices, and edge sensors.

Also, AI is more than just compute. AI systems also require robust memory, storage and networking.

“We’re thrilled to bring AI across our entire product portfolio,” Papermaster said.

Looking at the overall AI market, AMD expects to see a compound annual growth rate of 70%. “I know that seems huge,” Papermaster said. “But we are investing to capture that growth.”

AI pricing

Pricing considerations need to take into account more than just the price of a GPU, Papermaster argued. You really have to look at the total cost of ownership (TCO).

The market is operating with an underlying premise: Demand for AI compute is insatiable. That will drive more and more compute into a smaller area, delivering better power efficiency per FLOP, the most common measure of AI compute performance.

Right now, the AI compute model is dominated by a single player. But AMD is now bringing the competition. That includes the recently announced MI300 accelerator. But as Papermaster pointed out, there’s more, too. “We have the right technology for the right purpose,” he said.

That includes using not only GPUs, but also (where appropriate) CPUs. These workloads can include AI inference, edge computing, and PCs. In this way, user organizations can better manage their overall CapEx spend.

As moderator Simpson reminded him, Papermaster is fond of saying that customers buy road maps. So naturally he was asked about AMD’s plans for the AI future. Papermaster mainly deferred, saying more details will be forthcoming. But he also reminded attendees that AMD’s investments in AI go back several years and include its ROCm software enablement stack.

Training vs. inference

Training and inference are currently the two biggest AI workloads. Papermaster believes we’ll see the AI market bifurcate along their two lines.

Training depends on raw computational power in a vast cluster. For example, the popular ChatGPT generative AI tool uses a model with over a trillion parameters. That’s where AMD’s MI300 comes into play, Papermaster said, “because it scales up.”

This trend will continue, because for large language models (LLMs), the issue is latency. How quickly can you get a response? That requires not only fast processors, but also equally fast memory.

More specific inferencing applications, typically run after training is completed, are a different story, Papermaster said, adding: “Essentially, it’s ‘I’ve trained my model; now I want to organize it.’” These workloads are more concise and less demanding of both power and compute, meaning they can run on more affordable GPU-CPU combinations.

Power needs for AI

User organizations face a challenge: While running an AI system requires a lot of power, many data centers are what Papermaster called “power-gated.” In other words, they’re unable to drive up compute capacity to AI levels using current technology.

AMD is on the case. In 2020, the company committed itself to driving a 30x improvement in power efficiency for its products by 2025. Papermaster said the company is still on track to deliver that.

To do so, he added, AMD is thinking in terms of “holistic design.” That means not just hardware, but all the way through an application to include the entire stack.

One promising area involves AI workloads that can use AI approximation. These are applications that, unlike HPC workloads, do not need incredible levels of accuracy. As a result, performance is better for lower-precision arithmetic than it is for high-precision. “Not all AI models are created equally,” Papermaster said. “You’ll need smaller models, too.”

AMD is among those who have been surprised by the speed of AI adoption. In response, AMD has increased its projection of AI sales this year from $2 billion to $3.5 billion, what Papermaster called the fastest ramp AMD has ever seen.

AMD Instinct MI300 Series: Take a deeper dive in this advanced technology

Take a look at the innovative technology behind the new AMD Instinct MI300 Series accelerators.


Earlier this month, AMD took the wraps off its highly anticipated AMD Instinct MI300 Series of generative AI accelerators and data-center acceleration processing units (APUs). During the announcement event, AMD president Victor Peng said the new components had been “designed with our most advanced technologies.”

Advanced technologies indeed. With the AMD Instinct MI300 Series, AMD is writing a brand-new chapter in the story of AI-adjacent technology.

Early AI developments relied on the equivalent of a hastily thrown-together stock car constructed of whichever spare parts happened to be available at the time. But those days are over.

Now the future of computing has its very own Formula 1 race car. It’s extraordinarily powerful and fine-tuned to nanometer tolerances.

A new paradigm

At the heart of this new accelerator series is AMD’s CDNA 3 architecture. This third generation employs advanced packaging that tightly couples CPUs and GPUs to bring high-performance processing to AI workloads.

AMD’s new architecture also uses 3D packaging technologies that integrate up to 8 vertically stacked accelerator complex dies (XCDs) and four I/O dies (IODs) that contain system infrastructure. The various systems are linked via AMD Infinity Fabric technology and are connected to 8 stacks of high-bandwidth memory (HBM).

High-bandwidth memory can provide far more bandwidth and yet much lower power consumption compared with the GDDR memory found in standard GPUs. Like many of AMD’s notable innovations, its HBM employs a 3D design.

In this case, the memory modules are stacked vertically to shorten the distance the data needs to travel. This also allows for smaller form factors.

AMD has implemented the HBM using a unified memory architecture. This is an increasingly popular design in which a single array of main-memory modules supports both the CPU and GPU simultaneously, speeding tasks and applications.

Unified memory is more efficient than traditional memory architecture. It offers the advantage of faster speeds along with lower power consumption and ambient temperatures. Also, data need not be copied from one set of memory to another.

Greater than the sum of its parts

What really makes AMD CDNA 3 unique is its chiplet-based architecture. The design employs a single logical processor that contains a dozen chiplets.

Each chiplet, in turn, is fabricated for either compute or memory. To communicate, all the chiplets are connected via the AMD Infinity Fabric network-on-chip.

The primary 5nm XCDs contain the computational elements of the processor along with the lowest levels of the cache hierarchy. Each XCD includes a shared set of global resources, including the scheduler, hardware queues and 4 asynchronous compute engines (ACE).

The 6nm IODs are dedicated to the memory hierarchy. These chiplets carry a newly redesigned AMD Infinity Cache and an HBM3 interface to the on-package memory. The AMD Infinity Cache boosts generational performance and efficiency by increasing cache bandwidth and reducing the number of off-chip memory accesses.

Scaling ever upward

System architects are constantly in the process of designing and building the world’s largest exascale-class supercomputers and AI systems. As such, they are forever reaching for more powerful processors capable of astonishing feats.

The AMD CDNA 3 architecture is an obvious step in the right direction. The new platform takes communication and scaling to the next level.

In particular, the advent of AMD’s 4th Gen Infinity Architecture Fabric offers architects a new level of connectivity that could help produce a supercomputer far more powerful than anything we have access to today.

It’s reasonable to expect that AMD will continue to iterate its new line of accelerators as time passes. AI research is moving at a breakneck pace, and enterprises are hungry for more processing power to fuel their R&D.

What will researchers think of next? We won’t have to wait long to find out.

Supermicro debuts 3 GPU servers with AMD Instinct MI300 Series APUs

The same day that AMD introduced its new AMD Instinct MI300 series accelerators, Supermicro debuted three GPU rackmount servers that use the new AMD accelerated processing units (APUs). One of the three new systems also offers energy-efficient liquid cooling.


Supermicro didn’t waste any time.

The same day that AMD introduced its new AMD Instinct MI300 series accelerators, Supermicro debuted three GPU rackmount servers that use the new AMD accelerated processing units (APUs). One of the three new systems also offers energy-efficient liquid cooling.

Here’s a quick look, plus links for more technical details:

Supermicro 8-GPU server with AMD Instinct MI300X: AS-8125GS-TNMR2

This big 8U rackmount system is powered by a pair of AMD EPYC 9004 Series CPUs and 8 AMD Instinct MI300X accelerator GPUs. It’s designed for training and inference on massive AI models with a total of 1.5TB of HBM3 memory per server node.

The system also supports 8 high-speed 400G networking cards, which provide direct connectivity for each GPU; 128 PCIe 5.0 lanes; and up to 16 hot-swap NVMe drives.

It’s an air-cooled system with 5 fans up front and 5 more in the rear.

Quad-APU systems with AMD Instinct MI300A accelerators: AS-2145GH-TNMR and AS-4145GH-TNMR

These two rackmount systems are aimed at converged HPC-AI and scientific computing workloads.

They’re available in the user’s choice of liquid or air cooling. The liquid-cooled version comes in a 2U rack format, while the air-cooled version is packaged as a 4U.

Either way, these servers are powered by four AMD Instinct MI300A accelerators, which combine CPUs and GPUs in an APU. That gives each server a total of 96 AMD ‘Zen 4’ cores, 912 compute units, and 512GB of HBM3 memory. Also, PCIe 5.0 expansion slots allow for high-speed networking, including RDMA to APU memory.

Supermicro says the liquid-cooled 2U system provides a 50%+ cost savings on data-center energy. Another difference: The air-cooled 4U server provides more storage and an extra 8 to 16 PCIe acceleration cards.
