Performance Intensive Computing

Capture the full potential of IT

AMD’s new ROCm 6.3 makes GPU programming even better


AMD recently introduced version 6.3 of ROCm, its open software stack for GPU programming. New features include expanded OS support and other optimizations.


There’s a new version of AMD ROCm, the open software stack designed to enable GPU programming from the low-level kernel all the way up to end-user applications.

The latest version, ROCm 6.3, adds features that include expanded operating system support, an open-source toolkit and more.

Rock On

AMD ROCm provides the tools for HIP (the heterogeneous-computing interface for portability), OpenCL and OpenMP. These include compilers, APIs, libraries for high-level functions, debuggers, profilers and runtimes.

ROCm is optimized for Generative AI and HPC applications, and migrating existing code to it is straightforward. Developers can use ROCm to fine-tune workloads, while partners and OEMs can integrate seamlessly with AMD to create innovative solutions.

The latest release builds on ROCm 6, which AMD introduced last year. Version 6 added expanded support for AMD Instinct MI300A and MI300X accelerators, key AI support features, optimized performance, and an expanded support ecosystem.

The senior VP of AMD’s AI group, Vamsi Boppana, wrote in a recent blog post: “Our vision is for AMD ROCm to be the industry’s premier open AI stack, enabling choice and rapid innovation.”

New Features

Here’s some of what’s new in AMD ROCm 6.3:

  • rocJPEG: A high-performance JPEG decode SDK for AMD GPUs.
  • ROCm compute profiler and system profiler: Previously known as Omniperf and Omnitrace, these have been renamed to reflect their new direction as part of the ROCm software stack.
  • Shark AI toolkit: This open-source toolkit is for high-performance serving of GenAI and LLMs. The initial release includes support for the AMD Instinct MI300.
  • PyTorch 2.4 support: PyTorch is a machine learning library used for applications such as computer vision and natural language processing. Originally developed by Meta AI, it’s now part of the Linux Foundation umbrella.
  • Expanded OS support: This includes added support for Ubuntu 24.04.2 and 22.04.5; RHEL 9.5; and Oracle Linux 8.10. In addition, ROCm 6.3.1 includes support for both Debian 12 and the AMD Instinct MI325X accelerator.
  • Documentation updates: ROCm 6.3 offers clearer, more comprehensive guidance for a wider variety of use cases and user needs.

Super for Supermicro

Developers can use ROCm 6.3 to tune workloads and create solutions for Supermicro GPU systems based on AMD Instinct MI300 accelerators.

Supermicro offers three such systems:

Are your customers building AI and HPC systems? Then tell them about the new features offered by AMD ROCm 6.3.


2024: A look back at the year’s best


Let's look back at 2024, a year when AI was everywhere, AMD introduced its 5th Gen EPYC processors, and Supermicro led with liquid cooling.


You couldn't call 2024 boring.

If anything, the year was almost too exciting, too packed with important events, and moving much too fast.

Looking back, a handful of 2024’s technology events stand out. Here are a few of our favorite things.

AI Everywhere

In March AMD’s chief technology officer, Mark Papermaster, made some startling predictions that turned out to be absolutely true.

Speaking at an investors’ event sponsored by Arete Research, Papermaster said, “We’re thrilled to bring AI across our entire product portfolio.” AMD has indeed done that, offering AI capabilities from PCs to servers to high-performance GPU accelerators.

Papermaster also said the buildout of AI is an event as big as the launch of the internet. That certainly sounds right.

He also said AMD believes the total addressable market for AI through 2027 to be $400 billion. If anything, that was too conservative. More recently, consultants Bain & Co. predicted that figure will reach $780 billion to $990 billion.

Back in March, Papermaster said AMD had increased its projection for full-year AI sales from $2 billion to $3.5 billion. That’s probably too low, too.

AMD recently reported revenue of $3.5 billion for its data-center group for just the third quarter alone. The company attributed at least some of the group’s 122% year-on-year increase to the strong ramp of AMD Instinct GPU shipments.

5th Gen AMD EPYC Processors

October saw AMD introduce the fifth generation of its powerful line of EPYC server processors.

The 5th Gen AMD EPYC processors use the company’s new ‘Zen 5’ core architecture. The line includes over 25 SKUs offering anywhere from 8 to 192 cores, among them a model—the AMD EPYC 9575F—designed specifically to work with GPU-powered AI solutions.

The market has taken notice. During the October event, AMD CEO Lisa Su told the audience that roughly one in three servers worldwide (34%) are now powered by AMD EPYC processors. And Supermicro launched its new H14 line of servers that will use the new EPYC processors.

Supermicro Liquid Cooling

As servers gain power to add AI and other compute-intensive capabilities, they also run hotter. For data-center operators, that presents multiple challenges. One big one is cost: air conditioning is expensive. What’s more, AC may be unable to cool the new generation of servers.

Supermicro has a solution: liquid cooling. For some time, the company has offered liquid cooling as a data-center option.

In November the company took a new step in this direction. It announced a server that comes with liquid cooling only.

The server in question is the Supermicro 2U 4-node FlexTwin, model number AS-2126FT-HE-LCC. It’s a high-performance, hot-swappable, high-density compute system designed for HPC workloads.

Each 2U system comprises 4 nodes, and each node is powered by dual AMD EPYC 9005 processors. (The previous-gen AMD EPYC 9004s are supported, too.)

To keep cool, the FlexTwin server uses a direct-to-chip (D2C) cold plate liquid cooling setup. Each system also runs 16 counter-rotating fans. Supermicro says this cooling arrangement can remove up to 90% of server-generated heat.

AMD Instinct MI325X Accelerator

A big piece of AMD’s product portfolio for AI is its Instinct line of accelerators. This year the company promised to maintain a yearly cadence of new Instinct models.

Sure enough, in October the company introduced the AMD Instinct MI325X Accelerator. It’s designed for Generative AI performance and working with large language models (LLMs). The system offers 256GB of HBM3E memory and up to 6TB/sec. of memory bandwidth.

Looking ahead, AMD expects to formally introduce the line’s next member, the AMD Instinct MI350, in the second half of next year. AMD has said the new accelerator will be powered by a new AMD CDNA 4 architecture, and will improve AI inferencing performance by up to 35x compared with the older Instinct MI300.

Supermicro Edge Server

A lot of computing now happens at the edge, far beyond either the office or corporate data center.

Even more edge computing is on tap. Market watcher IDC predicts double-digit growth in edge-computing spending through 2028, when it believes worldwide sales will hit $378 billion.

Supermicro is on it. At the 2024 MWC, held in February in Barcelona, the company introduced an edge server designed for the kind of edge data centers run by telcos.

Known officially as the Supermicro A+ Server AS-1115SV-WTNRT, it’s a 1U short-depth server powered by a single AMD EPYC 8004 processor with up to 64 cores. That’s edgy.

Happy Holidays from all of us at Performance Intensive Computing. We look forward to serving you in 2025.


Faster is better. Supermicro with 5th Gen AMD is faster


Supermicro servers powered by the latest AMD processors are up to 9 times faster than a previous generation, according to a recent benchmark.


When it comes to servers, faster is just about always better.

With faster processors, workloads get completed in less time. End users get their questions answered sooner. Demanding high-performance computing (HPC) and AI applications run more smoothly. And multiple servers get all their jobs done more rapidly.

And if you’ve installed, set up or managed one of these faster systems, you’ll look pretty smart.

That’s why the latest benchmark results from Supermicro are so impressive, and also so important.

The tests show that Supermicro servers powered by the latest AMD processors are up to 9 times faster than a previous generation. These systems can make your customer happy—and make you look good.

SPEC Check

The benchmarks in question are those of the Standard Performance Evaluation Corp., better known as SPEC. It’s a nonprofit consortium that sets benchmarks for running complete applications.

Supermicro ran its servers on SPEC’s CPU 2017 benchmark, a suite of 43 benchmarks that measure and compare compute-intensive performance. All of them stress a system’s CPU, memory subsystem and compiler—emphasizing all three of these components working together, not just the processor.

To provide a comparative measure of integer and floating-point compute-intensive performance, the benchmark uses two main metrics. The first is speed, or how much time a server needs to complete a single task. The second is throughput, in which the server runs multiple concurrent copies.

The results are given as comparative scores. In general, higher is better.
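As a simple illustration of how such a comparative score works (the runtimes below are made up for the example, not actual SPEC results), a benchmark score is essentially the reference machine’s runtime divided by the tested machine’s runtime, which is why higher is better:

```python
# Hypothetical runtimes, for illustration only -- not real SPEC data.
reference_runtime_s = 1000   # time the fixed reference machine needs for a task
measured_runtime_s = 125     # time the tested server needs for the same task

# SPEC-style comparative score: reference time / measured time.
score = reference_runtime_s / measured_runtime_s
print(score)   # 8.0 -> the tested server completes the task 8x faster
```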

Super Server

The server tested was the Supermicro H14 Hyper server, model number AS-2126HS-TN. It’s powered by dual AMD EPYC 9965 processors and loaded with 1.5TB of memory.

This server has been designed for applications that include HPC, cloud computing, AI inferencing and machine learning.

In the floating-point measure, the new server was 8x faster than a Supermicro server powered by an earlier-gen AMD EPYC 7601.

In the Integer Rate measure, compared with a circa-2018 Supermicro server, it’s almost 9x faster.

Impressive results. And remember, when it comes to servers, faster is better.


Tech Explainer: Why does PCIe 5.0 matter? And what’s coming next?


PCIe 5.0 connects high-speed components to servers and PCs. Versions 6 & 7, coming soon, will deliver even higher speeds for tomorrow’s AI workloads.


You’ve no doubt heard of PCIe 5.0. But what is it exactly? And why does it matter?

As the name and number imply, PCIe 5.0 is the fifth generation of the Peripheral Component Interconnect Express interface standard. PCIe essentially sets the rules for connecting high-speed components such as GPUs, networking cards and storage devices to servers, desktop PCs and other devices.

To be sure, these components could be connected via a number of other interface standards, such as USB-C and SATA.

But PCIe 5.0 alone offers extremely high bandwidth and low latency. That makes it a better choice for mission-critical enterprise IT operations and resource-intensive AI workloads.

Left in the Dust

The 5th generation of PCIe was released in May 2019, bringing significant improvements over PCIe 4.0. These include:

  • Increased Bandwidth. PCIe 5.0 has a maximum throughput of 32 giga-transfers per second (GT/s)—effectively double the bandwidth of its predecessor. In terms of data transfer, 32 GT/s translates to around 4 GB of data throughput per lane in each direction. That allows for a total of 64 GB/s in each direction for a 16-lane PCIe-based GPU. That’s perfect for modern GPU-dependent workflows such as AI inferencing.
  • Lower Latency. Keeping latency as low as possible is crucial for applications like gaming, high-performance computing (HPC) and AI workloads. High latency can inhibit data retrieval and processing, which in turn hurts both application performance and the user experience. The latency of PCIe 5.0 varies depending on multiple factors, including network connectivity, attached devices and workloads. But it’s safe to assume an average latency of around 100 nanoseconds (ns) — roughly 50% less than PCIe 4.0. And again, with latency, lower is better.
  • Enhanced Data-Center Features. Modern data-center operations are among the most demanding. That’s especially true for IT operations focused on GenAI, machine learning and telecom. So it’s no surprise that PCIe 5.0 includes several features focused on enhanced operations for data centers. Among the most notable is increased bandwidth and faster data access for NVMe storage devices. PCIe 5.0 also includes features that enhance power management and efficiency.
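The bandwidth figures above can be sanity-checked with a little arithmetic. This sketch assumes PCIe 5.0’s 128b/130b line encoding; the article’s round numbers (4 GB/s per lane, 64 GB/s for x16) come from this math:

```python
# Back-of-the-envelope PCIe 5.0 bandwidth check.
raw_gt_per_s = 32          # giga-transfers per second, per lane
encoding = 128 / 130       # 128b/130b encoding: 128 data bits per 130 transferred
bits_per_byte = 8

# Effective throughput per lane, per direction (in GB/s).
gb_per_lane = raw_gt_per_s * encoding / bits_per_byte   # ~3.94, rounds to "4 GB/s"

# A 16-lane (x16) link, per direction.
gb_x16 = gb_per_lane * 16                               # ~63, rounds to "64 GB/s"

print(f"per lane: {gb_per_lane:.2f} GB/s, x16: {gb_x16:.1f} GB/s")
```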

Leveraging PCIe 5

AMD is a front-runner in the race to help enterprises cope with modern AI workloads. And the company has been quick to take advantage of PCIe 5.0’s performance improvements. Take, for example, the AMD Instinct MI325X Accelerator.

This system is a leading-edge accelerator module for generative AI, inference, training and HPC. Each discrete AMD Instinct MI325X offers a 16-lane PCIe Gen 5 host interface and seven AMD Infinity Fabric links for full connectivity between eight GPUs in a ring.

By leveraging a PCIe 5.0 connection, AMD’s accelerator can offer I/O-to-host-CPU and scale-out network bandwidths of 128 GB/sec.

AMD is also using PCIe on its server processors. The new 5th generation AMD EPYC server processors take full advantage of PCIe 5.0’s capabilities. Specifically, the AMD EPYC 9005 Series processors support 128 PCIe 5 I/O lanes in a single-socket server. For dual-socket servers, support increases to 160 lanes.

Supermicro is another powerful force in enterprise IT operations. The company’s behemoth H14 8-GPU system (model number AS-8126GS-TNMR2) leverages AMD EPYC processors and AMD Instinct accelerators to help enterprises deploy the largest AI and large language models (LLMs).

The H14’s standard configuration includes eight PCIe 5.0 x16 low-profile slots and two full-height slots. Users can also opt for a PCIe expansion kit, which adds two additional PCIe 5.0 slots. That brings the grand total to an impressive 12 PCIe 5.0 16-lane expansion slots.

PCIe 6.0 and Beyond

PCIe 5.0 is now entering its sixth year of service. That’s not a long time in the grand scheme of things. But the current version might feel ancient to IT staff who need to eke out every shred of bandwidth to support modern AI workloads.

Fortunately, a new PCIe generation is in the works. The PCIe 6.0 specification, currently undergoing testing and development, will offer still more performance gains over its predecessor.

PCI-SIG, an organization committed to developing and enhancing the PCI standard, says the 6.0 platform’s upgrades will include:

  • A data rate of up to 64 GT/sec., double the current rate and providing a maximum bidirectional bandwidth of up to 256 GB/sec for x16 lanes
  • Pulse Amplitude Modulation with 4 levels (PAM4)
  • Lightweight Forward Error Correction (FEC) and Cyclic Redundancy Check (CRC) to mitigate the bit error rate increase associated with PAM4 signaling
  • Backwards compatibility with all previous generations of PCIe technology
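The PCI-SIG headline figure above checks out with quick arithmetic. This sketch ignores FLIT/FEC encoding overhead for simplicity, so it’s the idealized ceiling rather than delivered throughput:

```python
# PCIe 6.0 headline bandwidth math.
gt_per_s = 64    # data rate per lane (double PCIe 5.0's 32 GT/s)
lanes = 16       # an x16 link

gb_per_dir = gt_per_s / 8 * lanes   # GB/s in one direction
gb_bidir = gb_per_dir * 2           # bidirectional total, as PCI-SIG quotes it

print(gb_per_dir, gb_bidir)   # 128.0 256.0
```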

There’s even a next generation after that, PCIe 7.0. This version could be released as soon as 2027, according to the PCI-SIG. That kind of speed makes sense considering the feverish rate at which new technology is being developed to enable and expand AI operations.

It’s not yet clear how accurate those release dates are. But one thing’s for sure: You won’t have to wait long to find out.


Supermicro JumpStart remote test site adds latest 5th Gen AMD EPYC processors


Register now to test the Supermicro H14 2U Hyper with dual AMD EPYC 9965 processors from the comfort and convenience of your office.


Supermicro’s JumpStart remote test site will soon let you try out a server powered by the new 5th Gen AMD EPYC processors from any location you choose.

The server is the Supermicro H14 2U Hyper with dual AMD EPYC 9965 processors. It will be available for remote testing on the Supermicro JumpStart site starting on Dec. 2. Registration is open now.

The JumpStart site lets you use a Supermicro server solution online to validate, test and benchmark your own workloads, or those of your customers. And using JumpStart is free.

All test systems on JumpStart are fully configured with SSH (the Secure Shell network protocol); VNC (Virtual Network Computing remote-access software); and Web IPMI (the Intelligent Platform Management Interface). During your test, you can open one session of each.

Using the Supermicro JumpStart remote testing site is simple:

Step 1: Select the system you want to test, and the time slot when you want to test it.

Step 2: At the scheduled time, log in to the JumpStart site using your Supermicro single sign-on (SSO) account. If you don’t have an account yet, create one and then use it to log in to JumpStart. (Creating an account is free.)

Step 3: Use the JumpStart site to validate, test and benchmark your workloads!

Rest assured, Supermicro will protect your privacy. Once you’re done testing a system on JumpStart, Supermicro will manually erase the server, reflash the BIOS and firmware, and re-install the OS with new credentials.

Hyper power

The AMD-powered server recently added to JumpStart is the Supermicro H14 2U Hyper, model number AS-2126HS-TN. It’s powered by dual AMD EPYC 9965 processors. Each of these CPUs offers 192 cores and a maximum boost clock of 3.7 GHz.

This Supermicro server also features 3.8TB of storage and 1.5TB of memory. The system is built in the 2U rackmount form factor.

Are you eager to test this Supermicro server powered by the latest AMD EPYC CPUs? JumpStart is here to help you.


Supermicro FlexTwin now supports 5th gen AMD EPYC CPUs


FlexTwin, part of Supermicro’s H14 server line, now supports the latest AMD EPYC processors — and keeps things chill with liquid cooling.

 


Wondering about the server of the future? It’s available for order now from Supermicro.

The company recently added support for the latest 5th Gen AMD EPYC 9005 Series processors on its 2U 4-node FlexTwin server with liquid cooling.

This server is part of Supermicro’s H14 line and bears the model number AS-2126FT-HE-LCC. It’s a high-performance, hot-swappable and high-density compute system.

Intended users include oil & gas companies, climate and weather modelers, manufacturers, scientific researchers and research labs. In short, anyone who requires high-performance computing (HPC).

Each 2U system comprises four nodes. And each node, in turn, is powered by a pair of 5th Gen AMD EPYC 9005 processors. (The previous-gen AMD EPYC 9004 processors are supported, too.)

Memory on this Supermicro FlexTwin maxes out at 9TB of DDR5, courtesy of up to 24 DIMM slots. Expansion cards connect via PCIe 5.0; one slot per node is standard, with more available as an option.

The 5th Gen AMD EPYC processors, introduced last month, are designed for data center, AI and cloud customers. The series launched with over 25 SKUs offering up to 192 cores and all using AMD’s new “Zen 5” or “Zen 5c” architectures.

Keeping Cool

To keep things chill, the Supermicro FlexTwin server is available with liquid cooling only. This allows the server to be used for HPC, electronic design automation (EDA) and other demanding workloads.

More specifically, the FlexTwin server uses a direct-to-chip (D2C) cold plate liquid cooling setup, and each system also runs 16 counter-rotating fans. Supermicro says this cooling arrangement can remove up to 90% of server-generated heat.

The server’s liquid cooling also covers the 5th gen AMD processors’ more demanding cooling requirements; they’re rated at up to 500W of thermal design power (TDP). By comparison, some members of the previous, 4th gen AMD EPYC processors have a default TDP as low as 200W.

Build & Recycle

The Supermicro FlexTwin server also adheres to the company’s “Building Block Solutions” approach. Essentially, this means end users purchase these servers by the rack.

Supermicro says its Building Blocks let users optimize for their exact workload. Users also gain efficient upgrading and scaling.

Looking even further into the future, once these servers are ready for an upgrade, they can be recycled through the Supermicro recycling program.

In Europe, Supermicro follows the EU’s Waste Electrical and Electronic Equipment (WEEE) Directive. In the U.S., recycling is free in California; users in other states may have to pay a shipping charge.

Put it all together, and you’ve got a server of the future, available to order today.


Tech Explainer: What is the AMD “Zen” core architecture?


Originally launched in 2017, this CPU architecture now delivers high performance and efficiency with ever-thinner processes.


The recent release of AMD’s 5th generation processors—formerly codenamed Turin—also heralded the introduction of the company’s “Zen 5” core architecture.

“Zen” is AMD’s name for a design ethos that prioritizes performance, scalability and efficiency. As any CTO will tell you, these 3 aspects are crucial for success in today’s AI era.

AMD originally introduced its “Zen” architecture in 2017 as part of a broader campaign to steal market share and establish dominance in the all-important enterprise IT space.

Subsequent generations of the “Zen” design have markedly increased performance and efficiency while delivering ever-thinner manufacturing processes.

Now and Zen

Since the “Zen” core’s original appearance in AMD Ryzen 1000-series processors, the architecture’s design philosophy has maintained its focus on a handful of vital aspects. They include:

  • A modular design. Known as Infinity Fabric, it facilitates efficient connectivity among multiple CPU cores and other components. This modular architecture enhances scalability and performance, both of which are vital for modern enterprise IT infrastructure.
  • High core counts and multithreading. Both are common to EPYC and Ryzen CPUs built using the AMD “Zen” core architecture. Simultaneous multithreading enables each core to process 2 threads. In the case of EPYC processors, this makes AMD’s CPUs ideal for multithreaded workloads that include Generative AI, machine learning, HPC and Big Data.
  • Advanced manufacturing processes. These allow faster, more efficient communication among individual CPU components, including multithreaded cores and multilevel caches. Back in 2017, the original “Zen” architecture was manufactured using a 14-nanometer (nm) process. Today’s new “Zen 5” and “Zen 5c” architectures (more on these below) reduce the lithography to just 4nm and 3nm, respectively.
  • Enhanced efficiency. This enables IT staff to better manage complex enterprise IT infrastructure. Reducing heat and power consumption is crucial, too, both in data centers and at the edge. The AMD “Zen” architecture makes this possible by offering enterprise-grade EPYC processors that offer up to 192 cores, yet require a maximum thermal design power (TDP) of only 500W.
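A bit of quick arithmetic ties the figures above together: with simultaneous multithreading giving 2 threads per core, and a 500W TDP on a top-end 192-core EPYC, the per-core power budget is remarkably small:

```python
# SMT and power math for a top-end "Zen 5c" EPYC, per the figures above.
cores = 192
threads_per_core = 2            # simultaneous multithreading
tdp_watts = 500                 # maximum thermal design power

threads = cores * threads_per_core   # hardware threads presented to the OS
watts_per_core = tdp_watts / cores   # rough per-core power budget

print(threads, round(watts_per_core, 2))   # 384 2.6
```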

The Two-Fold Path

The latest, fifth generation “Zen” architecture is divided into two segments: “Zen 5” and “Zen 5c.”

“Zen 5” employs a 4-nanometer (nm) manufacturing process to deliver up to 128 cores operating at up to 4.1GHz. It’s optimized for high per-core performance.

“Zen 5c,” by contrast, offers a 3nm lithography that’s reserved for AMD EPYC 96xx, 97xx, 98xx, and 99xx series processors. It’s optimized for high density and power efficiency.

The most powerful of these CPUs—the AMD EPYC 9965—includes an astonishing 192 cores, a maximum boost clock speed of 3.7GHz, and an L3 cache of 384MB.

Both “Zen 5” and “Zen 5c” are key components of the 5th gen AMD EPYC processors introduced earlier this month. Both have also been designed to achieve double-digit increases in instructions per clock cycle (IPC) and equip the core with the kinds of data handling and processing power required by new AI workloads.

Supermicro’s Satori

AMD isn’t the only brand offering bold, new tech to harried enterprise IT managers.

Supermicro recently introduced its new H14 servers, GPU-accelerated systems and storage servers powered by AMD EPYC 9005 Series processors and AMD Instinct MI325X Accelerators. A number of these servers also support the new AMD “Turin” CPUs.

The new product line features updated versions of Supermicro’s vaunted Hyper system, Twin multinode servers, and AI-inferencing GPU systems. All are now available with the user’s choice of either air or liquid cooling.

Supermicro says its collection of purpose-built powerhouses represents one of the industry’s most extensive server families. That should be welcome news for organizations intent on building a fleet of machines to meet the highly resource-intensive demands of modern AI workloads.

By designing its next-generation infrastructure around AMD 5th Generation components, Supermicro says it can dramatically increase efficiency by reducing customers’ total data-center footprints by at least two-thirds.

Enlightened IT for the AI Era

While AMD and Supermicro’s advances represent today’s cutting-edge technology, tomorrow is another story entirely.

Keeping up with customer demand and the dizzying pace of AI-based innovation means these tech giants will soon return with more announcements, tools and design methodologies. AMD has already promised a new accelerator, the AMD Instinct MI350, will be formally announced in the second half of 2025.

As far as enterprise CTOs are concerned, the sooner, the better. To survive and thrive amid heavy competition, they’ll need an evolving array of next-generation technology. That will help them reduce costs even as they increase their product offerings—a kind of technological nirvana.


Research Roundup: AI and data centers, cybersec spending, AI for competitive advantage & sales


Catch up on the latest IT industry market research and surveys. 


The rapid adoption of artificial intelligence is putting new stress on data centers. Cybersecurity spending is growing faster than expected. Business leaders say AI is a competitive advantage. And AI could even help salespeople meet their quotas.

That’s some of the latest from top IT research and polling organizations. And here’s your roundup.

AI Needs More Juice

So much AI, so few data centers. That’s one of the more surprising side effects of the AI explosion.

Demand for data centers is rising. Also rising are data centers’ electric bills, says market watcher IDC.

All data centers use a lot of electric power. Add AI to the mix, and demand for juice rockets even higher.

That’s important because electricity already accounts for nearly half (46%) of the average enterprise data center’s total operational cost, and even more (60%) for the average service-provider’s data center, IDC says.

IDC now predicts that AI data center energy consumption will rise by a compound average growth rate (CAGR) of nearly 45% from now through 2028, when it will reach a global total of 146.2 terawatt hours.

Further, IDC expects overall global data center electricity consumption to more than double between 2023 and 2028, reaching 847 terawatt hours. That’s equivalent to a five-year CAGR of 19.5%.
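IDC’s two figures are mutually consistent, which a quick compound-growth check confirms: a 19.5% CAGR over the five years from 2023 to 2028 does indeed "more than double" consumption:

```python
# Compound annual growth rate (CAGR) check on IDC's forecast.
cagr = 0.195    # 19.5% per year
years = 5       # 2023 through 2028

growth = (1 + cagr) ** years    # total growth factor over the period
print(f"{growth:.2f}x")         # ~2.44x, i.e. "more than double"
```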

Cybersec Spending: Up!

Cybersecurity spending rose nearly 10% in the second quarter of this year, reaching a worldwide total of $21.1 billion, according to industry analysts Canalys.

That fast rate of growth left Canalys surprised. It had expected both closer scrutiny of cyber budgets and slower contract signings due to uncertainty about the economy.

Instead, vendors focused on cross-selling their platforms. Canalys says the top 12 cyber providers collectively accounted for more than half (53.2%) of total spending in Q2.

“Vendors are positioning their cybersecurity platforms to reduce customers’ complexity by consolidating redundant and legacy point products,” says Canalys chief analyst Matthew Ball. “But this also reduces organizations’ resilience by increasing dependency on fewer vendors.”

Looking ahead, Canalys expects even bigger growth in spending on cyber services (as opposed to cyber technology). For the full year 2024, Canalys predicts cyber-services spending to grow by nearly 13% year-on-year, reaching a global total of $163.3 billion.

AI: The New Competitive Advantage

Nearly 7 in 10 business leaders (68%) say their organizations’ competitive advantage now depends on making the best use of artificial intelligence. So finds a new poll conducted by Forrester on behalf of credit-reporting site Experian.

In the survey, roughly 6 in 10 respondents (62%) also said their top AI use case is analyzing alternative data sources with Generative AI.

But business leaders are also looking for faster results. More than half the respondents (55%) said developing and deploying AI and machine-learning models takes them too much time.

The survey, conducted earlier this year, reached 1,320 business leaders in 10 countries across the EMEA and Asia-Pacific regions.

AI for Sales? Yes, Please

Add sales to the list of jobs that can be enhanced with AI. A new forecast from researchers at Gartner posits that sellers who partner effectively with AI tools are 3.7 times more likely to meet their quotas than are those who don’t use AI.

The forecast is based on Gartner’s recent survey of more than 1,025 B2B sellers.

Gartner also says that in response, senior sales officers will need to prepare their staff for a world with AI. That could include training salespeople with new AI skills, setting new sales priorities, and refining compensation and even career paths.

One possible snag: In Gartner’s survey, nearly three-quarters of the salespeople (72%) said they’re already overwhelmed by the number of skills required for their job. And fully half (50%) said they’re similarly overwhelmed by the amount of technology needed.


The AMD Instinct MI300X Accelerator draws top marks from leading AI benchmark


In the latest MLPerf testing, the AMD Instinct MI300X Accelerator with ROCm software stack beat the competition with strong GenAI inference performance. 


New benchmarks using the AMD Instinct MI300X Accelerator show impressive performance that surpasses the competition.

This is great news for customers operating demanding AI workloads, especially those underpinned by large language models (LLMs) that require super-low latency.

Initial platform tests using MLPerf Inference v4.1 measured AMD’s flagship accelerator against the Llama 2 70B benchmark. This test is representative of real-world applications, including natural language processing (NLP) and large-scale inferencing.

MLPerf is the industry’s leading benchmarking suite for measuring the performance of machine learning and AI workloads from domains that include vision, speech and NLP. It offers a set of open-source AI benchmarks, including rigorous tests focused on Generative AI and LLMs.

Gaining high marks from the MLPerf Inference benchmarking suite represents a significant milestone for AMD. It positions the AMD Instinct MI300X accelerator as a go-to solution for enterprise-level AI workloads.

Superior Instincts

The results of the Llama 2 70B test are particularly significant, because this benchmark produces an apples-to-apples comparison of competing solutions.

In this benchmark, the AMD Instinct MI300X was compared with NVIDIA’s H100 Tensor Core GPU. The results showed that AMD’s full-stack inference platform outperformed the H100 at serving high-performance LLMs, a workload that requires both robust parallel computing and a well-optimized software stack.

The testing also showed that because the AMD Instinct MI300X offers the largest GPU memory available (192GB of HBM3), it was able to fit the entire Llama 2 70B model into the memory of a single GPU. Keeping the model whole avoided the network overhead of splitting it across devices, which in turn maximized inference throughput and produced superior results.
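The memory arithmetic behind that claim is easy to sketch. Here's a rough, weights-only estimate (a hypothetical helper, assuming 2 bytes per parameter for FP16/BF16 weights; the KV cache and activations need additional headroom on top of this):

```python
# Back-of-the-envelope check: do the FP16 weights of a 70B-parameter model
# fit in a single MI300X's 192GB of HBM3? Weights only, so treat this as
# a lower bound on the real memory footprint.

def model_weight_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

weights_gb = model_weight_gb(70e9)   # Llama 2 70B at FP16
print(f"{weights_gb:.0f} GB of weights; fits in 192 GB: {weights_gb < 192}")
```

At 140GB of weights, the model clears the 192GB capacity with room left over for the KV cache, which is what makes single-GPU serving possible at all.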

Software also played a big part in the success of the AMD Instinct series. The AMD ROCm software platform accompanies the AMD Instinct MI300X. This open software stack includes programming models, tools, compilers, libraries and runtimes for AI solution development on the AMD Instinct MI300 accelerator series and other AMD GPUs.

The testing showed that the scaling efficiency from a single AMD Instinct MI300X, combined with the ROCm software stack, to a complement of eight AMD Instinct accelerators was nearly linear. In other words, the system’s performance improved proportionally by adding more GPUs.
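Scaling efficiency itself is a simple ratio: measured multi-GPU throughput divided by the single-GPU figure times the GPU count. The throughput numbers below are invented purely to illustrate the calculation; they are not MLPerf results:

```python
# Scaling efficiency: how close measured multi-GPU throughput comes to
# ideal linear scaling (single-GPU throughput x number of GPUs).

def scaling_efficiency(single_gpu_tput: float, n_gpus: int,
                       measured_tput: float) -> float:
    """Return measured throughput as a fraction of ideal linear scaling."""
    ideal = single_gpu_tput * n_gpus
    return measured_tput / ideal

# Hypothetical: one GPU serves 1,000 tokens/s; eight GPUs measure 7,600 tokens/s.
eff = scaling_efficiency(1000, 8, 7600)
print(f"{eff:.0%} of linear scaling")
```

A ratio near 100% is what "nearly linear" means in practice: adding the eighth GPU buys almost as much throughput as adding the second.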

That test demonstrated the AMD Instinct MI300X’s ability to handle the largest MLPerf inference model to date, with 70 billion parameters.

Thinking Inside the Box

Benchmarking the AMD Instinct MI300X required AMD to create a complete hardware platform capable of addressing strenuous AI workloads. For this task, AMD engineers chose as their testbed the Supermicro AS-8125GS-TNMR2, a massive 8U complete system.

Supermicro’s GPU A+ Server systems are designed for both versatility and redundancy. Designers can outfit the system with an impressive array of hardware, starting with two AMD EPYC 9004-series processors and up to 6TB of ECC DDR5 main memory.

Because AI workloads consume massive amounts of storage, Supermicro has also outfitted this 8U server with 12 front hot-swap 2.5-inch NVMe drive bays. There’s also the option to add four more drives via an additional storage controller.

The Supermicro AS-8125GS-TNMR2 also includes room for two hot-swap 2.5-inch SATA bays and two M.2 drives, each with a capacity of up to 3.84TB.

Power for all those components is delivered courtesy of six 3,000-watt redundant titanium-level power supplies.

Coming Soon: Even More AI power

AMD engineers continually push the limits of silicon and human ingenuity to expand the capabilities of their hardware. So it should come as little surprise that new iterations of the AMD Instinct series are expected to be released in the coming months. This past May, AMD officials said they plan to introduce AMD Instinct MI325, MI350 and MI400 accelerators.

Forthcoming Instinct accelerators, AMD says, will deliver advances including additional memory, support for lower-precision data types, and increased compute power.

New features are also coming to the AMD ROCm software stack. Those changes should include software enhancements including kernel improvements and advanced quantization support.

Are your customers looking for a high-powered, low-latency system to run their most demanding HPC and AI workloads? Tell them about these benchmarks and the AMD Instinct MI300X accelerators.


Developing AI and HPC solutions? Check out the new AMD ROCm 6.2 release


The latest release of AMD’s free and open software stack for developing AI and HPC solutions delivers 5 important enhancements. 


If you develop AI and HPC solutions, you’ll want to know about the most recent release of AMD ROCm software, version 6.2.

ROCm, in case you’re unfamiliar with it, is AMD’s free and open software stack. It’s aimed at developers of artificial intelligence and high-performance computing (HPC) solutions on AMD Instinct accelerators. It's also great for developing AI and HPC solutions on AMD Instinct-powered servers from Supermicro. 

First introduced in 2016, ROCm open software now includes programming models, tools, compilers, libraries, runtimes and APIs for GPU programming.

ROCm version 6.2, announced recently by AMD, delivers 5 key enhancements:

  • Improved vLLM support 
  • Boosted memory efficiency & performance with Bitsandbytes
  • New Offline Installer Creator
  • New Omnitrace & Omniperf Profiler Tools (beta)
  • Broader FP8 support

Let’s look at each separately and in more detail.

LLM support

To enhance the efficiency and scalability of its Instinct accelerators, AMD is expanding vLLM support. vLLM is an easy-to-use library for inference and serving of the large language models (LLMs) that power Generative AI.

ROCm 6.2 lets AMD Instinct developers integrate vLLM into their AI pipelines. The benefits include improved performance and efficiency.

Bitsandbytes

Developers can now integrate Bitsandbytes with ROCm for AI model training and inference, reducing their memory and hardware requirements on AMD Instinct accelerators. 

Bitsandbytes is an open-source Python library that brings low-bit quantization to LLMs, boosting memory efficiency and performance. AMD says this will let AI developers work with larger models on limited hardware, broadening access, saving costs and expanding opportunities for innovation.
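The memory savings from quantization follow directly from the bit width. A rough, weights-only sketch (a hypothetical helper; optimizer state, gradients and activations add more memory during training):

```python
# Approximate weight memory for an LLM at different precisions. Quantizing
# from 16-bit to 8-bit halves the weight footprint; 4-bit halves it again.

def weight_memory_gb(num_params: float, bits: int) -> float:
    """Weights-only memory estimate in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bits / 8 / 1e9

params = 70e9  # a 70B-parameter LLM
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {weight_memory_gb(params, bits):.0f} GB")
```

This is the arithmetic behind "larger models on limited hardware": an 8-bit 70B model needs roughly the weight memory of a 16-bit 35B model.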

Offline Installer Creator

The new ROCm Offline Installer Creator aims to simplify the installation process. This tool creates a single installer file that includes all necessary dependencies.

That makes deployment straightforward with a user-friendly GUI that allows easy selection of ROCm components and versions.

As the name implies, the Offline Installer Creator can be used on developer systems that lack internet access.

Omnitrace and Omniperf Profiler

The new Omnitrace and Omniperf Profiler Tools, both now in beta release, provide comprehensive performance analysis and a streamlined development workflow.

Omnitrace offers a holistic view of system performance across CPUs, GPUs, NICs and network fabrics, helping developers identify and address bottlenecks.

Omniperf delivers detailed GPU kernel analysis for fine-tuning.

Together, these tools help to ensure efficient use of developer resources, leading to faster AI training, AI inference and HPC simulations.

FP8 Support

Broader FP8 support can improve the performance of AI inferencing.

FP8 is an 8-bit floating point format that provides a common, interchangeable format for both AI training and inference. It lets AI models operate and perform consistently across hardware platforms.

In ROCm, FP8 support improves the process of running AI models, particularly in inferencing. It does this by addressing key challenges such as the memory bottlenecks and high latency associated with higher-precision formats. In addition, FP8’s reduced-precision calculations can decrease the latency of data transfers and computations with little to no loss of accuracy.
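To make the format concrete, here is a small decoder for the E4M3 variant of FP8 (1 sign bit, 4 exponent bits, 3 mantissa bits, exponent bias 7). This sketch follows the common OCP FP8 convention, in which the all-ones pattern encodes NaN and there are no infinities; it is for illustration, not production use:

```python
# Decode one FP8 E4M3 byte into a Python float.
# Layout: S EEEE MMM (sign, 4-bit exponent, 3-bit mantissa), bias 7.

def decode_e4m3(byte: int) -> float:
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0xF
    mant = byte & 0x7
    if exp == 0xF and mant == 0x7:
        return float("nan")            # all-ones pattern is NaN (no infinities)
    if exp == 0:                       # subnormal: value = (mant/8) * 2^-6
        return sign * (mant / 8) * 2 ** -6
    return sign * (1 + mant / 8) * 2 ** (exp - 7)

print(decode_e4m3(0x38))   # exponent field 7, mantissa 0 -> 1.0
print(decode_e4m3(0x7E))   # largest finite magnitude -> 448.0
```

With only 256 bit patterns and a maximum finite value of 448, E4M3 trades dynamic range for density, which is why scaling factors matter so much when quantizing models to FP8.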

ROCm 6.2 expands FP8 support across its ecosystem, from frameworks to libraries and more, enhancing performance and efficiency.
