
Performance Intensive Computing

Capture the full potential of IT

What is the AMD Instinct MI300A APU?


Accelerate HPC and AI workloads with the combined power of CPU and GPU compute. 


The AMD Instinct MI300A APU, set to ship in this year’s second half, combines the compute power of a CPU with the capabilities of a GPU. Your data-center customers should be interested if they run high-performance computing (HPC) or AI workloads.

More specifically, the AMD Instinct MI300A is an integrated data-center accelerator that combines AMD Zen 4 cores, AMD CDNA3 GPUs and high-bandwidth memory (HBM) chiplets. In all, it has more than 146 billion transistors.

This AMD component uses 3D die stacking to enable extremely high bandwidth among its parts: nine 5nm chiplets are 3D-stacked on top of four 6nm chiplets, with high-bandwidth memory surrounding them.

And it’s coming soon. The AMD Instinct MI300A is currently in AMD’s labs. It will soon be sampled with customers. And AMD says it’s scheduled for shipments in the second half of this year. 

‘Most complex chip’

The AMD Instinct MI300A was publicly displayed for the first time earlier this year, when AMD CEO Lisa Su held up a sample of the component during her CES 2023 keynote. “This is actually the most complex chip we’ve ever built,” Su told the audience.

A few tech blogs have gotten their hands on early samples. One of them, Tom’s Hardware, was impressed by the “incredible data throughput” among the Instinct MI300A’s CPU, GPU and memory dies.

The Tom’s Hardware reviewer added that this will let the CPU and GPU work on the same data in memory simultaneously, saving power, boosting performance and simplifying programming.

Another blogger, Karl Freund, a former AMD engineer who now works as a market researcher, wrote in a recent Forbes blog post that the Instinct MI300 is a “monster device” (in a good way). He also congratulated AMD for “leading the entire industry in embracing chiplet-based architectures.”

Previous generation

The new AMD accelerator builds on a previous generation, the AMD Instinct MI200 Series. It’s now used in a variety of systems, including Supermicro’s A+ Server 4124GQ-TNMI. This completely assembled system supports the AMD Instinct MI250 OAM (OCP Acceleration Module) accelerator and AMD Infinity Fabric technology.

The AMD Instinct MI200 accelerators are designed with the company’s 2nd Gen AMD CDNA Architecture, which encompasses the AMD Infinity Architecture and Infinity Fabric. Together, they offer an advanced platform for tightly connected GPU systems, empowering workloads to share data quickly and efficiently.

The MI200 series offers P2P connectivity via up to eight intelligent 3rd Gen AMD Infinity Fabric Links, delivering up to 800 GB/sec of peak total theoretical I/O bandwidth. That’s 2.4x the GPU P2P theoretical bandwidth of the previous generation.
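The bandwidth figures above imply a simple per-link number. A quick back-of-envelope check (using only the values quoted in this article):

```python
# Back-of-envelope check of the MI200-series peak P2P bandwidth figures
# quoted above: 8 Infinity Fabric links, 800 GB/s aggregate, 2.4x gain.

NUM_LINKS = 8            # 3rd Gen AMD Infinity Fabric links per accelerator
AGGREGATE_GBPS = 800     # peak total theoretical I/O bandwidth, GB/s

per_link = AGGREGATE_GBPS / NUM_LINKS      # implied bandwidth per link
previous_gen = AGGREGATE_GBPS / 2.4        # implied previous-gen aggregate

print(f"Per link: {per_link:.0f} GB/s")
print(f"Implied previous-gen total: {previous_gen:.0f} GB/s")
```

That works out to 100 GB/sec per link, and an implied previous-generation aggregate of roughly 333 GB/sec.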

Supercomputing power

The same kind of performance now available to commercial users of the AMD-Supermicro system is also being applied to scientific supercomputers.

The AMD Instinct MI250X accelerator is now used in the Frontier supercomputer built by the U.S. Dept. of Energy. That system’s peak performance is rated at 1.6 exaflops, or more than a billion billion floating-point operations per second.

The AMD Instinct MI250X accelerator provides Frontier with flexible, high-performance compute engines, high-bandwidth memory, and scalable fabric and communications technologies.

Looking ahead, the AMD Instinct MI300A APU will be used in Frontier’s successor, known as El Capitan. Scheduled for installation late this year, this supercomputer is expected to deliver at least 2 exaflops of peak performance.
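To make the "billion billion" framing concrete, here is the conversion of the two peak ratings mentioned above into raw floating-point operations per second:

```python
# Converting the quoted peak ratings into raw FLOPS.
# 1 exaflop = 10**18 floating-point operations per second
# ("a billion billion").

EXA = 10**18

frontier_peak = 1.6 * EXA     # Frontier's rated peak
el_capitan_min = 2.0 * EXA    # El Capitan's expected minimum

print(f"Frontier:   {frontier_peak:.2e} FLOPS")
print(f"El Capitan: {el_capitan_min:.2e} FLOPS")
```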

 


Learn, Earn and Win with AMD Arena


Channel partners can learn about AMD products and technologies at the AMD Arena site. It’s your site for AMD partner training courses, redeemable points and much more.


Interested in learning more about AMD products while also earning points you can redeem for valuable merch? Then check out the AMD Arena site.

There, you can:

  • Stay current on the latest AMD products with training courses, sales tools, webinars and quizzes;
  • Earn points, unlock levels and secure your place on the leaderboard;
  • Redeem those points for valuable products, experiences and merchandise in the AMD Rewards store.

Registering for AMD Arena is quick, easy and free. Once you’re in, you’ll have an Arena Dashboard as your control center. It’s where you can manage your profile, begin a mission, track your progress and view your collection of badges.

Missions are made of learning objectives that take you through training courses, sales tools, webinars and quizzes. Complete a mission, and you can earn points, badges and chips; unlock levels; and climb the leaderboard.

The more missions you complete, the more rewards you’ll earn. These include points you can redeem for merchandise, experiences and more from the AMD Arena Rewards Store.

Courses galore

Training courses are at the heart of the AMD Arena site. Here are two of the many training courses waiting for you now:

  • AMD EPYC Processor Tool: Leverage the AMD processor-selector and total cost of ownership (TCO) tools to match your customers’ needs with the right AMD EPYC processor.
  • AMD EPYC Processor – Myth Busters: Get help fighting the myths and misconceptions around these powerful CPUs. Then show your data-center customers the way AMD EPYC delivers performance, security and scalability.

Get started

There’s lots more training in AMD Arena, too. The site supports virtually all AMD products across all business segments. So you can learn about products you already sell as well as new products you’d like to cross-sell in the future.

To learn more, you can take this short training course: Introducing AMD Arena. In just 10 minutes, this course covers how to register for an AMD Arena account, use the Dashboard, complete missions and earn rewards.

Ready to learn, earn and win with AMD Arena? Visit AMD Arena now

 

 


AMD and Supermicro Sponsor Two Fastest Linpack Scores at SC22’s Student Cluster Competition


The Student Cluster Competition made its 16th appearance at the Supercomputing 2022 (SC22) conference in Dallas. The two student teams running AMD EPYC™ CPUs and AMD Instinct™ GPUs were the two teams that aced the Linpack benchmark. That's the test used to determine the TOP500 supercomputers in the world.


Last month, the annual Supercomputing Conference 2022 (SC22) was held in Dallas, and with it returned the Student Cluster Competition (SCC), which began in 2007. The SCC offers an immersive high-performance computing (HPC) experience to undergraduate and high school students.

 

According to the SC22 website: “Student teams design and build small clusters, learn scientific applications, apply optimization techniques for their chosen architectures and compete in a non-stop, 48-hour challenge at the SC conference to complete real-world scientific workloads, showing off their HPC knowledge for conference attendees and judges.”

 

Each team consists of six students, including a student team leader, plus at least one faculty advisor, and is associated with vendor sponsors, which provide the equipment. AMD and Supermicro jointly sponsored both the Massachusetts Green Team from MIT, Boston University and Northeastern University and the 2MuchCache team from UC San Diego (UCSD) and the San Diego Supercomputer Center (SDSC). Running AMD EPYC™ CPUs and AMD Instinct™-based GPUs supplied by AMD and Supermicro, the two teams came in first and second in the SCC Linpack test.

 

The Linpack benchmarks measure a system's floating-point computing power, according to Wikipedia. The latest version of these benchmarks is used to determine the TOP500 list, which ranks the world's most powerful supercomputers.
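The idea behind Linpack can be sketched in a few lines: time the solution of a dense linear system and divide the operation count by the elapsed time. The snippet below is purely illustrative and nothing like the real HPL benchmark, but it shows the principle; HPL's standard operation count for an n-by-n solve is roughly (2/3)n³ + 2n².

```python
# A toy Linpack-style measurement: solve a dense system Ax = b and
# estimate the floating-point rate. Illustrative only; the actual HPL
# benchmark is far more elaborate (blocked, distributed, tuned).
import time
import numpy as np

n = 500
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)      # LU factorization + triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3 + 2 * n**2   # HPL's nominal operation count
print(f"~{flops / elapsed / 1e9:.2f} GFLOPS solving a {n}x{n} system")
```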

 

In addition to chasing high scores on benchmarks, the teams must operate their systems without exceeding a power limit. For 2022, the competition used a variable limit: the power available to each team's competition hardware was at times as high as 4,000 watts and at times as low as 1,500 watts, but usually fell somewhere in between.
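A variable cap like this means teams have to check their power draw against whatever limit is in force at each moment. A minimal sketch of that kind of check (a hypothetical helper; the competition's actual tooling is not described here):

```python
# Given (timestamp, watts) samples and a cap schedule, flag any samples
# that exceed the cap in force at that moment. Hypothetical example data.

def cap_at(t, schedule):
    """Return the cap in force at time t; schedule is a sorted list of
    (start_time, cap_watts) pairs."""
    cap = schedule[0][1]
    for start, watts in schedule:
        if t >= start:
            cap = watts
    return cap

def violations(samples, schedule):
    """Return the samples that draw more power than the current cap."""
    return [(t, w) for t, w in samples if w > cap_at(t, schedule)]

schedule = [(0, 4000), (100, 1500)]              # cap drops at t=100
samples = [(50, 3800), (120, 1600), (150, 1450)]
print(violations(samples, schedule))             # [(120, 1600)]
```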

 

The “2MuchCache” team offers a poster page with extensive detail about its competition hardware: two third-generation AMD EPYC™ 7773X CPUs, each with 64 cores, 128 threads and 768MB of stacked-die cache, in one Supermicro AS-4124GQ-TNMI system with four AMD Instinct™ MI250 GPUs with 53 simultaneous threads.

 

The “Green Team’s” poster page likewise lists two third-generation AMD EPYC™ 7003-series processors and AMD Instinct™ MI210 GPUs connected with AMD Infinity Fabric. The Green Team utilized two Supermicro AS-4124GS-TNR GPU systems.

 

The Students of 2MuchCache:

Longtian Bao, role: Lead for Data Centric Python, Co-lead for HPCG

Stefanie Dao, role: Lead for PHASTA, Co-lead for HPL

Michael Granado, role: Lead for HPCG, Co-lead for PHASTA

Yuchen Jing, role: Lead for IO500, Co-lead for Data Centric Python

Davit Margarian, role: Lead for HPL, Co-lead for LAMMPS

Matthew Mikhailov Major, role: Team Lead, Lead for LAMMPS, Co-lead for IO500

 

The Students of Green Team:

Po Hao Chen, roles: Team leader, theory & HPC, benchmarks, reproducibility

Carlton Knox, roles: Computer Arch., Benchmarks, Hardware

Andrew Nguyen, roles: Compilers & OS, GPUs, LAMMPS, Hardware

Vance Raiti, roles: Mathematics, Computer Arch., PHASTA

Yida Wang, roles: ML & HPC, Reproducibility

Yiran Yin, roles: Mathematics, HPC, PHASTA

 

Congratulations to both teams!


Some Key Drivers behind AMD’s Plans for Future EPYC™ CPUs


A video discussion between Charles Liang, Supermicro CEO, and Dr. Lisa Su, AMD CEO.

 


Higher clock rates, more cores and larger onboard memory caches are some of the traditional areas of improvement for generational CPU upgrades. Performance improvements are almost a given with a new CPU generation. Increasingly, however, the more difficult challenges for data centers and performance-intensive computing are energy efficiency and managing heat. Energy costs have spiked in many parts of the world, and “performance per watt” is what many companies are looking for. AMD’s 4th-gen EPYC™ CPU runs a little hotter than its predecessor, but its performance gains far outpace the thermal rise, making for much greater performance per watt. It’s a trade-off that makes sense, especially for performance-intensive computing, such as HPC and technical computing applications.

In addition to the energy efficiency and heat dissipation concerns, Dr. Su and Mr. Liang discuss the importance of the AMD EPYC™ roadmap. You’ll learn one or two nuances about AMD’s plans. Supermicro is ready with 15 products that leverage Genoa, AMD’s fourth-generation EPYC™ CPU. This under-15-minute video, recorded on November 15, 2022, will bring you up to date on all things AMD EPYC™. Click the link to see the video:

Supermicro & AMD CEOs Video – The Future of Data Center Computing

 

 

 

 


Match CPU Options to Your Apps and Workloads to Maximize Efficiency


The CPU package is configurable at time of purchase with various options that you can match up to the specific characteristics of your workloads. Ask yourself the three questions the story poses.


In a previous post, Performance-Intensive Computing explored the benefits of making your applications and workloads more parallel. Chief among the payoffs is the ability to take advantage of the latest innovations in performance-intensive computing.

 

Although it isn’t strictly a parallel approach, the CPU package is configurable at the time of purchase with various options that you can match to the specific characteristics of your workloads. The goal of this story is to outline how to match the appropriate features and purchase the best processors for your particular application collection. For starters, ask yourself these three questions:

 

Question 1. Does your application require a great deal of memory and storage? Applications become memory-bound when they must manipulate large amounts of data. To alleviate potential bottlenecks, purchase a CPU with the largest possible onboard caches to avoid swapping data from storage. Apps such as Reveal and others used in the oil and gas industry typically require large onboard CPU caches to help prevent memory bottlenecks as data moves in and out of the processor.

 

Question 2. Do you have the right amount and type of storage for your data requirements? Storage has many different parameters, and how it interacts with the processor and your application isn’t one-size-fits-all. Performance-Intensive Computing has previously written about specialized file systems, such as the one developed and sold by WekaIO, that can aid in onboarding and manipulating large data collections.

 

Question 3. Does your application spend a lot of time communicating across networks, or is it bound by the limits of your processor? Either situation might mean you need CPUs with more cores and/or higher clock speeds. This is the case, for example, with molecular dynamics apps such as GROMACS and LAMMPS. These situations might call for parts such as AMD’s Threadripper.
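The decision logic behind the three questions can be sketched as a simple lookup from a workload's dominant trait to the CPU feature to prioritize. The traits and recommendations below are illustrative, drawn only from the guidance above, not an official selection guide:

```python
# A hedged sketch of the selection logic the three questions imply:
# map a workload's dominant trait to the CPU feature to prioritize.

def cpu_priority(trait):
    recommendations = {
        "memory_bound":  "largest possible onboard caches",
        "storage_bound": "storage matched to the data (e.g., a specialized file system)",
        "network_bound": "more cores and/or higher clock speeds",
        "compute_bound": "more cores and/or higher clock speeds",
    }
    return recommendations.get(trait, "profile the workload first")

print(cpu_priority("memory_bound"))
```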

 

As you can see, figuring out the right kind of CPU – and its supporting chipsets – is a lot more involved than just purchasing the highest clock speed and largest number of cores. Knowing your data and applications will guide you to buying CPU hardware that makes your business more efficient.


Locating Where to Drill for Oil in Deep Waters with Supermicro SuperServers® and AMD EPYC™ CPUs


Energy company Petrobras, based in Brazil, is using high-performance computing techniques to aid its oil and gas exploration, especially in deep-water situations. Petrobras used system integrator Atos to provide more than 250 Supermicro SuperServers. The cluster, named Pegaso, is ranked number 33 on the current TOP500 list.

Featured Companies: Atos

Brazilian energy company Petrobras is using high-performance computing techniques to aid its oil and gas exploration, especially in deep-water situations. These techniques can help reduce costs and make finding and extracting new hydrocarbon deposits quicker. Petrobras' geoscientists and software engineers quickly modify algorithms to take advantage of new capabilities as new CPU and GPU technologies become available.

 

The energy company used system integrator Atos to provide more than 250 Supermicro SuperServer AS-4124GO-NART+ servers running dual AMD EPYC™ 7512 processors. The cluster goes by the name Pegaso (Portuguese for the mythological winged horse Pegasus) and is currently listed at number 33 on the TOP500 list of the world's fastest computing systems. Atos is a global leader in digital transformation with 112,000 employees worldwide. It has built other systems that have appeared on the TOP500 list, and AMD powers 38 of them.

 

Petrobras has had three other systems listed on previous iterations of the TOP500 list, using other processors. Pegaso is now the largest supercomputer in South America and is expected to become fully operational next month. Each of its servers runs CentOS and has 2TB of memory, for a total of 678TB. The cluster contains more than 230,000 processor cores, runs more than 2,000 GPUs and is connected via an InfiniBand HDR networking system running at 400Gb/s. To give you an idea of how much gear is involved, Pegaso took more than 30 truckloads to deliver and comprises over 30 tons of hardware.

 

The geophysics team has a series of applications that require all this computing power, including seismic acquisition apps that collect data, which is then processed to deliver high-resolution subsurface imaging that precisely locates oil and gas deposits. Having GPU accelerators in the cluster helps reduce processing time, so the drilling teams can place their rigs more precisely.

 

For more information, see this case study about Pegaso.


Choosing the Right AI Infrastructure for Your Needs


AI architecture must scale effectively without sacrificing cost efficiency. One size does not fit all.


Building an agile, cost-effective environment that delivers on a company’s present and long-term AI strategies can be a challenge, and the impact of decisions made around that architecture will have an outsized effect on performance.

 

“AI capabilities are probably going to be 10%-15% of the entire infrastructure,” says Ashish Nadkarni, IDC group vice president and general manager, infrastructure systems, platforms and technologies. “But the amount the business relies on that infrastructure, the dependence on it, will be much higher. If that 15% doesn’t behave in the way that is expected, the business will suffer.”

 

Experts like Nadkarni note that companies can, and should, avail themselves of cloud-based options to test and ramp up AI capabilities. But the costs associated with cloud computing can rise significantly as workloads scale or the enterprise expands its usage, making on-premises architecture a valid alternative worth considering.

 

No matter the industry, to build a robust and effective AI infrastructure, companies must first accurately diagnose their AI needs. What business challenges are they trying to solve? What forms of high-performance computing power can deliver solutions? What type of training is required to deliver the right insights from data? And what’s the most cost-effective way for a company to support AI workloads at scale and over time? Cloud may be the answer to get started, but for many companies on-prem solutions are viable alternatives.

 

“It’s a matter of finding the right configuration that delivers optimal performance for [your] workloads,” says Michael McNerney, vice president of marketing and network security at Supermicro, a leading provider of AI-capable, high-performance servers, management software and storage systems. “How big is your natural language processing or computer vision model, for example? Do you need a massive cluster for AI training? How critical is it to have the lowest latency possible for your AI inferencing? If the enterprise does not have massive models, does it move down the stack into smaller models to optimize infrastructure and cost on the AI side as well as in compute, storage and networking?”

 

Get perspective on these and other questions about selecting the right AI infrastructure for your business in the Nov. 20, 2022, Wall Street Journal paid program article: Investing in Infrastructure.

 


Supermicro H13 Servers Maximize Your High-Performance Data Center


Featured Companies: AMD

The modern data center must be both highly performant and energy efficient. Massive amounts of data are generated at the edge and then analyzed in the data center. New CPU technologies are constantly being developed that can analyze data, determine the best course of action, and speed up the time to understand the world around us and make better decisions.

With digital transformation continuing, a wide range of data acquisition, storage and computing systems continues to evolve with each CPU generation. The latest CPU generations continue to innovate within their core computational units and in the technology used to communicate with memory, storage devices, networking and accelerators.

Servers, and by extension the CPUs within them, form a continuum of computing and I/O power. The combination of cores, clock rates, memory access, path width and performance suits specific servers to specific workloads. In addition, the server that houses the CPUs may take different form factors for environments with airflow or power restrictions. The key for a server manufacturer to address a wide range of applications is a building-block approach to designing new systems. In this way, a range of systems can be released simultaneously in many form factors, each tailored to its operating environment.

The new H13 Supermicro product line, based on 4th Generation AMD EPYC™ CPUs, supports a broad spectrum of workloads and excels at helping a business achieve its goals.

Get speeds, feeds and other specs on Supermicro’s latest line-up of servers


Supermicro Debuts New H13 Server Solutions Using AMD’s 4th-Gen EPYC™ CPUs



Last week, Supermicro announced its new H13 A+ server solutions, featuring the latest fourth-generation AMD EPYC™ processors. The new AMD “Genoa”-class Supermicro A+ configurations will be able to handle up to 96 Zen 4 CPU cores running up to 6TB of 12-channel DDR5 memory, using a separate channel for each stick of memory.
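As a sanity check on that 6TB figure, here is one configuration that reaches it. The DIMM count and module size below are assumptions for illustration (the article itself states only the 12 channels and the 6TB total):

```python
# One way to reach the quoted 6TB maximum: 12 DDR5 channels, with an
# assumed 2 DIMMs per channel and assumed 256GB modules.

CHANNELS = 12            # stated in the article
DIMMS_PER_CHANNEL = 2    # assumption
DIMM_GB = 256            # assumption

max_tb = CHANNELS * DIMMS_PER_CHANNEL * DIMM_GB / 1024
print(f"Max memory: {max_tb:.0f} TB")   # 6 TB
```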

The various systems are designed to support the highest performance-intensive computing workloads over a wide range of storage, networking and I/O configuration options. They also feature tool-less chassis and hot-swappable modules for easier access to internal parts, as well as I/O drive trays on both front and rear panels. All the new equipment can handle a range of power conditions, including 120 to 480 volts AC operation and 48-volt DC power attachments.

The new H13 systems have been optimized for AI, machine learning and complex calculation tasks for data analytics and other kinds of HPC applications. Supermicro’s 4th-Gen AMD EPYC™ systems employ the latest PCIe 5.0 connectivity throughout their layouts to speed data flows and provide high network and cluster internetworking performance. At the heart of these systems is the AMD EPYC™ 9004 series CPUs, which were also announced last week.

The Supermicro H13 GrandTwin® systems can handle up to six SATA3 or NVMe drive bays, which are hot-pluggable. The H13 CloudDC systems come in 1U and 2U chassis designed for cloud-based workloads and data centers; they can handle up to 12 hot-swappable drive bays and support Open Compute Platform I/O modules. Supermicro has also announced its H13 Hyper configuration for dual-socket systems. All of the twin-socket server configurations support 160 PCIe 5.0 data lanes.

There are several GPU-intensive configurations in another series of 4U and 8U servers that can support up to 10 PCIe GPU accelerator cards, including the latest graphics processors from AMD and Nvidia. The 4U family of servers supports both AMD Infinity Fabric Link and NVIDIA NVLink Bridge technologies, so users can choose the right balance of computation, acceleration, I/O and local storage specifications.

To get a deep dive on H13 products, including speeds, feeds and specs, download this whitepaper from the Supermicro site: Supermicro H13 Servers Enable High-Performance Data Centers.


How the New EPYC CPUs Deliver System-on-Chip Electronics


CPU chipsets are not normally considered systems-on-chip (SoC), but the fourth generation of AMD EPYC processors incorporates numerous I/O functions at a high level of integration.

CPU chipsets are not normally considered systems-on-chip (SoC), but the fourth generation of AMD EPYC processors incorporates numerous I/O functions at a high level of integration. Previous generations delivered this functionality on external chipsets. The SoC design helps reduce power consumption and packaging costs, and improves data throughput by reducing interconnection latencies.
 
The new EPYC processors have 12 DDR5 memory controllers, 50 percent more than any other x86 CPU, which keeps up with the higher memory demands of performance-intensive computing applications. As we mentioned in an earlier blog, these controllers also include inline encryption engines supporting AMD's Infinity Guard features, including an integrated security processor that establishes a secure root of trust and performs other security tasks.
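Those 12 controllers translate directly into peak memory bandwidth. A rough calculation, assuming DDR5-4800 modules and a 64-bit data path per channel (the article states only the channel count, so the speed grade is an assumption):

```python
# Rough peak theoretical memory bandwidth implied by 12 DDR5 channels,
# assuming DDR5-4800 and a 64-bit (8-byte) data path per channel.

CHANNELS = 12            # stated in the article
MT_PER_S = 4800          # assumed DDR5-4800 transfer rate (MT/s)
BYTES_PER_TRANSFER = 8   # 64-bit channel width

gbps = CHANNELS * MT_PER_S * BYTES_PER_TRANSFER / 1000
print(f"Peak theoretical bandwidth: {gbps:.1f} GB/s")   # 460.8 GB/s
```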
 
They also include 128 or 160 lanes of PCIe Gen5 connectivity, which helps with the higher I/O throughput of these more demanding applications. These lanes use the same physical interfaces as the Infinity Fabric connections and provide remote memory access among CPUs at up to 36 GB/sec. The new Zen 4 CPU cores can make use of one or two interfaces.
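The lane counts imply substantial aggregate throughput. PCIe Gen5 signals at 32 GT/s with 128b/130b encoding, so each lane carries just under 4 GB/sec per direction; for the 128-lane configuration:

```python
# Rough per-lane and aggregate PCIe Gen5 throughput for the 128-lane
# configuration. Gen5 runs at 32 GT/s with 128b/130b line coding.

LANES = 128
GT_PER_S = 32
ENCODING = 128 / 130     # 128b/130b coding efficiency

per_lane = GT_PER_S * ENCODING / 8      # GB/s per lane, one direction
total = per_lane * LANES
print(f"Per lane: {per_lane:.2f} GB/s, total: {total:.0f} GB/s")
```

That comes out to roughly 3.94 GB/sec per lane and about 504 GB/sec aggregate per direction.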
 
The PCIe Gen5 I/O is supported in the I/O die by eight serializer/deserializer (SerDes) controllers, each with an independent set of traces supporting one port of 16 PCIe lanes.
 
 
