Performance Intensive Computing

Capture the full potential of IT

AMD’s Infinity Guard Selected by Google Cloud for Confidential Computing


Google Cloud has been working with AMD over the past several years to develop new on-chip security protocols. Read more about the release of the AMD EPYC™ 9004 Series processors in this part three of a four-part series.


 
 
Google Cloud has been working over the past several years with AMD on developing new on-chip security protocols, which have seen further innovation with the release of the AMD EPYC™ 9004 Series processors. These protocols directly benefit performance-intensive computing applications, particularly by supporting higher-density virtual machines (VMs), by keeping data flows within the confines of what Google calls confidential VMs, and by further isolating VM hypervisors. Google Cloud offers a collection of N2D and C2D instances that support these confidential VMs.
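As an illustration, here is a minimal sketch of launching one of those confidential VMs on an AMD-based N2D instance. It assumes the gcloud CLI is installed and authenticated; the project, zone, machine type and image values are illustrative placeholders rather than recommendations.

    # Minimal sketch: create a Google Cloud confidential VM on an AMD-based
    # N2D instance via the gcloud CLI. Project, zone and image values are
    # placeholders; confidential VMs require a supported guest image.
    import subprocess

    def create_confidential_vm(name: str, project: str, zone: str) -> None:
        """Create an N2D confidential VM (AMD SEV memory encryption) with gcloud."""
        subprocess.run(
            [
                "gcloud", "compute", "instances", "create", name,
                "--project", project,
                "--zone", zone,
                "--machine-type", "n2d-standard-4",   # AMD EPYC-based N2D instance
                "--confidential-compute",             # enable the confidential VM feature
                "--maintenance-policy", "TERMINATE",  # required for confidential VMs
                "--image-family", "ubuntu-2204-lts",  # must be a supported image family
                "--image-project", "ubuntu-os-cloud",
            ],
            check=True,
        )

    if __name__ == "__main__":
        create_confidential_vm("demo-confidential-vm", "my-project", "us-central1-a")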
 
“Product security is always our top focus,” said AMD CTO Mark Papermaster. “We are continuously investing and collaborating in the security of these technologies.” 
 
Royal Hansen, VP of engineering for Google Cloud, said: “Our customers expect the most trustworthy computing experience on the planet. Google and AMD have a long history and a variety of relationships with the deepest experts on security and chip development. This was at the core of our going to market with AMD’s security solutions for datacenters.”
 
The two companies also worked together on a joint security analysis of these technologies.
 
Collectively called Infinity Guard, the security technologies they’ve been working on involve four initiatives:
 
1. Secure encrypted virtualization provides each VM with its own unique encryption key known only to the processor.
 
2. Secure nested paging complements this virtualization, protecting each VM from malicious hypervisor attacks and providing an isolated and trusted environment.
 
3. AMD’s secure boot, along with Trusted Platform Module attestation of the confidential VMs, happens every time a VM boots, ensuring the VM’s integrity and mitigating persistent threats.
 
4. AMD’s secure memory encryption and integration into the memory channels speed performance.
 
These technologies are combined and communicate over the AMD Infinity Fabric pathways to deliver breakthrough performance along with more secure communications. A quick way to confirm from inside a running guest that these protections are active is sketched below.
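As a rough way to see the first two protections from inside a guest, the sketch below scans the kernel log for the memory-encryption banner that recent Linux kernels print when SEV or SEV-SNP is active. The exact message text varies by kernel version, so treat this as a heuristic check rather than an official interface.

    # Heuristic check (assumption: Linux guest, recent kernel, permission to
    # read the kernel log): look for the "Memory Encryption Features active"
    # banner that the kernel logs when AMD SEV / SEV-SNP protects the VM.
    import subprocess

    def sev_banner_lines() -> list[str]:
        """Return kernel-log lines that mention active memory-encryption features."""
        dmesg = subprocess.run(["dmesg"], capture_output=True, text=True, check=True)
        return [line for line in dmesg.stdout.splitlines()
                if "Memory Encryption Features active" in line]

    if __name__ == "__main__":
        lines = sev_banner_lines()
        print("\n".join(lines) if lines else "No SEV memory-encryption banner found.")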
 


Understanding the New Core Architecture of the AMD EPYC 9004 Series Processors


AMD’s announcement of its fourth generation EPYC 9004 Series processors includes major advances in how these chipsets are designed and produced. Part 2 of 4.

AMD’s announcement of its fourth-generation EPYC 9004 Series processors includes major advances in how these chipsets are designed and produced to deliver the highest performance levels. Central to these advances is a hybrid multi-die architecture.
 
This architecture uses two different production processes for the cores and the I/O pathways: the former uses 5-nanometer dies, while the latter uses 6-nanometer dies. Each processor package can have up to 12 CPU dies, each with eight cores, for a total of 96 cores in the maximum configuration. Each eight-core assembly has its own set of eight dedicated 1 MB L2 caches, and the overall assembly can access a shared 32 MB L3 cache, as shown in the diagram below.
 
[Diagram: eight-core die with eight dedicated 1 MB L2 caches and a shared 32 MB L3 cache]
In addition to these changes, AMD announced microarchitecture improvements, called Zen 4, that boost instructions-per-clock counts and overall clock speeds. AMD promises roughly 29 percent faster single-core CPU performance for Zen 4 relative to Zen 3, a claim affirmed by Ars Technica’s tests earlier this fall. (Zen 3 chips used the older 7-nanometer dies.)
 
 
This configuration provides a great deal of flexibility in how the CPU, memory channels and I/O paths are arranged. The multi-die setup can reduce fabrication waste and offer better support for parallel processing. In addition, AMD EPYC processors are produced in single- and dual-socket configurations, with the latter offering more I/O pathways and dedicated PCIe Gen 5 I/O connections.
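To make the die, core and cache figures above concrete, here is a small worked example; the totals are simple arithmetic derived from the numbers in the text, not additional specifications.

    # Worked arithmetic for the maximum AMD EPYC 9004 configuration described
    # above: 12 CPU dies x 8 cores, 1 MB of dedicated L2 per core and a
    # 32 MB shared L3 cache per die.
    DIES_PER_PACKAGE = 12
    CORES_PER_DIE = 8
    L2_PER_CORE_MB = 1
    L3_PER_DIE_MB = 32

    total_cores = DIES_PER_PACKAGE * CORES_PER_DIE      # 96 cores
    total_l2_mb = total_cores * L2_PER_CORE_MB          # 96 MB of L2 overall
    total_l3_mb = DIES_PER_PACKAGE * L3_PER_DIE_MB      # 384 MB of L3 overall

    print(f"Cores: {total_cores}, L2 total: {total_l2_mb} MB, L3 total: {total_l3_mb} MB")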
 


AMD Announces Fourth-Generation EPYC™ CPUs with the 9004 Series Processors


AMD announces its fourth-generation EPYC™ CPUs. The new EPYC 9004 Series processors demonstrate advances in hybrid, multi-die architecture by decoupling core and I/O processes. Part 1 of 4.

AMD very recently announced its fourth-generation EPYC™ CPUs. This generation provides innovative solutions that can satisfy the most demanding performance-intensive computing requirements for cloud computing, AI and highly parallelized data analytics applications. The design decisions AMD made for this processor generation strike a good balance among specifications, including higher CPU power and I/O performance, latency reductions and improvements in overall data throughput. This lets a single CPU socket address an increasingly larger world of complex workloads.
 
The new AMD EPYC™ 9004 Series processors demonstrate advances in hybrid, multi-die architecture by decoupling core and I/O processes. The new chip dies support 12 DDR5 memory channels, doubling the I/O throughput of previous generations. The new CPUs also increase core counts from 64 cores in the previous EPYC 7003 chips to 96 cores in the new chips using 5-nanometer processes. The new generation of chips also increases the maximum memory capacity from 4TB of DDR4-3200 to 6TB of DDR5-4800 memory.
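As a back-of-the-envelope check on that throughput claim, theoretical peak memory bandwidth is roughly channels × transfer rate × 8 bytes per transfer. The sketch below compares the two generations on that basis; real-world throughput will be lower.

    # Rough peak-memory-bandwidth comparison implied by the paragraph above:
    # 8 channels of DDR4-3200 (EPYC 7003) vs. 12 channels of DDR5-4800 (EPYC 9004).
    def peak_bandwidth_gbs(channels: int, mega_transfers_per_s: int,
                           bytes_per_transfer: int = 8) -> float:
        """Theoretical peak bandwidth in GB/s for a given memory configuration."""
        return channels * mega_transfers_per_s * bytes_per_transfer / 1000.0

    old_gen = peak_bandwidth_gbs(8, 3200)    # ~204.8 GB/s
    new_gen = peak_bandwidth_gbs(12, 4800)   # ~460.8 GB/s
    print(f"EPYC 7003: {old_gen:.1f} GB/s, EPYC 9004: {new_gen:.1f} GB/s, "
          f"ratio: {new_gen / old_gen:.2f}x")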
 
 
 
There are three major innovations evident in the AMD EPYC™ 9004 processor series:
  1. A new hybrid multi-die chip architecture coupled with multi-processor server innovations and a new and more advanced Zen 4 instruction set, along with support for an increase in dedicated L2 and shared L3 cache storage
  2. Security enhancements to AMD’s Infinity Guard
  3. Advances to system-on-chip designs that extend and enhance AMD Infinity switching fabric technology
Taken together, the new AMD EPYC™ 9004 Series processors offer plenty of innovation and performance advantages. The new processors deliver better performance per watt of power consumed and better per-core performance, too.
 


Unlocking the Value of the Cloud for Mid-size Enterprises



Organizations around the world are requiring new options for their next-generation computing environments. Mid-size organizations, in particular, are facing increasing pressure to deliver cost-effective, high-performance solutions within their hyperconverged infrastructures (HCI). A recent collaboration between Supermicro, Microsoft Azure and AMD, leveraging their collective technologies, has created a fresh approach that lets enterprises maintain performance at a lower operational cost while helping to reduce the organization’s carbon footprint in support of sustainability initiatives. This cost-effective 1U system (a 2U version is available) offers power, flexibility and modularity in large-scale GPU deployments.

The results of the collaboration combine the latest technologies, supporting multiple CPU, GPU, storage and networking options optimized to deliver uniquely configured and highly scalable systems. The product can be optimized for SQL and Oracle databases, VDI, productivity applications and database analytics. This white paper explores why this universal GPU architecture is an intriguing and cost-effective option for CTOs and IT administrators who are planning to rapidly implement hybrid cloud, data center modernization, branch office/edge networking or Kubernetes deployments at scale.

Get the 7-page white paper, which provides the details you need to assess the solution for yourself, including the new Azure Stack HCI certified system, its specifications, cost justification and more.

 


Enter Your Animation in Pixar’s RenderMan NASA Space Images Art Challenge




One of the biggest uses of performance-intensive computing is the creation of high-resolution graphic animations used for entertainment and commercial applications. To that end, AMD and Pixar Animation Studios have announced the ninth RenderMan Art Challenge, which is open to the public. The idea is to encourage creative types to use some of the same tools that professional graphic designers and animators use to build something based on actual NASA data.

 

The winners will be determined by a panel of judges from Pixar, NASA and Industrial Light & Magic. Projects must be submitted by November 15, and the winning entries will be announced at the end of November.

 

This year’s challenge provides access to AMD-powered Azure virtual machines, letting contestants use the highest-performing compute instances. Contestants will be given access to the AMD Creator Cloud, a render farm built on Azure HBv3 instances and powered by high-performance AMD EPYC™ processors with AMD 3D V-Cache™ technology.

 

For the first time, challengers can run their designs using thousands of AMD EPYC™ core CPUs, enabling artists to develop the most complex animations and the most amazing visualizations. “The contestants have access to this professional-grade render farm just like the pros. It levels the playing field,” said James Knight, the director of entertainment for AMD. “You can make scenes that weren’t possible before on your own PC,” he said.

 

The topic focus for this year’s challenge is space-related, in keeping with NASA’s involvement. The challenge provides scientifically accurate 3D NASA models, including telescopes, space stations, suits and planets. One of the potential advantages: many winners of past contests have ended up working at Pixar. “The RenderMan challenge gives everyone a chance to learn new things and show their abilities and creativity. The whole experience was great,” said Khachik Astvatsatryan, a previous RenderMan Challenge winner.

 

Dylan Sisson, a RenderMan digital artist at Pixar, said “With the advancements we are seeing in hardware and software, individual artists are now able to create images of ever-increasing sophistication and complexity. It is a great opportunity for challengers to unleash their creative vision with these state-of-the-art technologies."


Register to Watch Supermicro's Sweeping A+ Launch Event on Nov. 10


Join Supermicro online Nov. 10 to watch the unveiling of the company’s new A+ systems, featuring next-generation AMD EPYC™ processors. The company can’t share any more details right now, but you can register for a link to the event by signing up on this page.


Energy-Efficient AMD EPYC™ Processors Bring Significant Savings


Cut electricity consumption by up to half with AMD’s power-saving EPYC™ processors.


Nokia was able to target up to a 40% reduction in server power consumption using EPYC™ processors. DBS and Ateme each experienced a 50% drop in energy costs. AMD’s EPYC™ processors can deliver big energy savings, letting you meet your most demanding application performance requirements while still reducing your environmental impact.

For example: to host a collection of 1,200 virtual machines, an AMD-based deployment would require 10 servers, compared with 15 servers built on equivalent Intel CPUs. This translates into a 41% lower total cost of ownership over a three-year period, along with a third less energy consumption and correspondingly lower carbon emissions. For more detail, including links to case studies from the companies mentioned above showing how they saved significantly on energy costs while reducing their carbon footprints, check out the infographic.
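The consolidation math behind those numbers can be sketched as follows. The per-server power figure is a placeholder assumption used only to illustrate the one-third energy reduction, not a measured value.

    # Back-of-the-envelope version of the consolidation example above:
    # 1,200 VMs on 10 AMD-based servers vs. 15 servers using equivalent
    # Intel CPUs. WATTS_PER_SERVER is an illustrative assumption.
    TOTAL_VMS = 1200
    AMD_SERVERS, INTEL_SERVERS = 10, 15
    WATTS_PER_SERVER = 500  # placeholder; assumes comparable per-server draw

    print(f"VM density: AMD {TOTAL_VMS // AMD_SERVERS} per server, "
          f"Intel {TOTAL_VMS // INTEL_SERVERS} per server")

    amd_kw = AMD_SERVERS * WATTS_PER_SERVER / 1000
    intel_kw = INTEL_SERVERS * WATTS_PER_SERVER / 1000
    print(f"Estimated draw: AMD {amd_kw:.1f} kW vs. Intel {intel_kw:.1f} kW "
          f"({1 - amd_kw / intel_kw:.0%} less)")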

 


The Perfect Combination: The Weka Next-Gen File System, Supermicro A+ Servers and AMD EPYC™ CPUs


Weka’s file system, WekaFS, unifies your entire data lake into a shared global namespace where you can more easily access and manage trillions of files stored in multiple locations from one directory.


One of the challenges of building machine learning (ML) models is managing data. Your infrastructure must be able to process very large data sets rapidly as well as ingest both structured and unstructured data from a wide variety of sources.

 

That kind of data is typically generated in performance-intensive computing areas like GPU-accelerated applications, structural biology and digital simulations. Such applications typically have three problems: how to efficiently fill a data pipeline, how to easily integrate data across systems and how to manage rapid changes in data storage requirements. That’s where Weka.io comes into play, providing higher-speed data ingestion and avoiding unnecessary copies of your data while making it available across the entire ML modeling space.

 

Weka’s file system, WekaFS, has been developed just for this purpose. It unifies your entire data lake into a shared global namespace where you can more easily access and manage trillions of files stored in multiple locations from one directory. It works across both on-premises and cloud storage repositories and is optimized for cloud-intensive storage so that it will provide the lowest possible network latencies and highest performance.

 

This next-generation data storage file system has several other advantages: it is easy to deploy and entirely software-based, and it provides all-flash-level performance, NAS simplicity and manageability, cloud scalability and breakthrough economics. It was designed to run on any standard x86-based server hardware and commodity SSDs, or to run natively in public clouds such as AWS.

 

Weka’s file system is designed to scale to hundreds of petabytes, thousands of compute instances and billions of files. Read and write latency for file operations against active data is as low as 200 microseconds in some instances.

 

Supermicro has produced its own NVMe Reference Architecture that supports WekaFS on some of its servers, including the Supermicro A+ AS-1114S-WN10RT and AS-2114S-WN24RT using AMD EPYC™ 7402P processors with at least 2TB of memory, expandable to 4TB. Both servers support hot-swappable NVMe storage modules for ultimate performance. Also check out the Supermicro WekaFS AI and HPC Solution Bundle.

 

 


Mercedes-AMG F1 Racing Team Gains an Edge with AMD’s EPYC™ Processors


In F1, fast cars and fast computers go hand in hand. Computational performance became even more important when F1’s IT authorities added rules that dictate how much computing and wind-tunnel time each team can use. Mercedes was the top finisher in 2021, giving it the biggest compute and wind-tunnel handicap. So when it selected a new computer system, it opted for AMD EPYC™ processors, gaining a 20% performance improvement to get more modeling done in less time.


In the high-stakes world of Formula One racing, finding that slight edge to build a better-performing car often means using the most powerful computers to model aerodynamics. The Mercedes-AMG Petronas F1 racing team found that using AMD EPYC™ processors helps gain that edge. Since 2010, the team has brought home 124 race wins and nine drivers’ championships across the F1 racing circuit.

 

Thanks to the increased performance of these AMD EPYC™ CPUs, the team is able to run twice the number of daily simulations. The key is having the best computational fluid dynamics models available. And time is of the essence because the racing association’s IT authorities have added rules that dictate how much computing and wind tunnel time each team can use, along with a dollar limit on computing resources to level the playing field despite resource differences.

 

Teams that traditionally have been top finishers are allowed a third less computing time, and since the Mercedes team was the top 2021 finisher, it has the smallest computing allocation. The 2022 season limited computing expenditures to $140M, and for 2023 the number will be further cut to $135M. The result is that teams are focused on finding the highest-performing computers at the lowest cost. In F1, fast cars and fast computers go hand in hand.

 

“Performance was the key driver of the decision making,” said Simon Williams, Head of Aero Development Software for the team. “We looked at AMD and the competitors. We needed to get this right, because we’re going to be using this hardware for the next three years.” Mercedes replaced its existing three-year-old computers with AMD EPYC™-based systems and gained a 20% performance improvement, letting it run many more simulations in parallel. “I can’t stress enough how important the fast turnaround is,” Williams said. “It’s been great having AMD help us achieve that.”

 

Servers such as the Supermicro A+ series can bring home big wins as well.


Eliovp Increases Blockchain-Based App Performance with Supermicro Servers


Eliovp, which brings together computing and storage solutions for blockchain workloads, rewrote its code to take full advantage of AMD’s Instinct MI100 and MI250 GPUs. As a result, Eliovp’s blockchain calculations run up to 35% faster than what it saw on previous generations of its servers.


When you’re building blockchain-based applications, you typically need a lot of computing and storage horsepower. This is the niche that Belgium-based Eliovp fills: the company has developed a line of extremely fast cloud-based servers designed to run demanding blockchain workloads.

 

Eliovp has been recognized as the top Filecoin storage provider in Europe. Filecoin is a decentralized blockchain-based protocol that lets anyone rent out spare local storage and is a key Web3 component.

 

To satisfy these compute and storage needs, Eliovp employs Supermicro’s A+ AS-1124US® and AS-4124GS® servers, running AMD EPYC 7543 and 7313 CPUs and as many as eight AMD Instinct MI100 and MI250 GPUs to further boost performance.

 

What makes these servers especially potent is that Eliovp rewrote its code to run on this specific AMD Instinct GPU family. As a result, Eliovp’s blockchain calculations run up to 35% faster than what it saw on previous generations of its servers.

 

One of the attractions of the Supermicro servers is the ability to leverage the high-density core count and higher clock speeds, as well as the 32 memory slots. And it all comes packaged in a relatively small form factor.

 

“By working with Supermicro, we get new generations of servers with AMD technology earlier in our development cycle, enabling us to bring our products to market faster," said Elio Van Puyvelde, CEO of Eliovp. The company was able to take advantage of new CPU and GPU instructions and memory management to make its code more efficient and effective. Eliovp was also able to reduce overall server power consumption, which is always important in blockchain applications that span dozens of machines.

 
