Tech Explainer: How does Gaming as a Service work?


Gaming as a Service is a streaming platform that pushes content from the cloud to personal devices on demand. Though it’s been around for years, in some ways it’s just getting started.


The technology known as Gaming as a Service (GaaS) has been around for some 20 years. But in many ways it’s just getting started.

The technology is already enjoyed by millions of gamers worldwide. But new advances in AI and edge computing are making a big difference, as are faster, more consistent internet connections.

Coming soon is a mix of virtual and augmented reality (VR and AR) headsets. They could bring gaming to a whole new level.

But how does GaaS work? Let’s take a look.

Cloud + edge = GaaS

GaaS is to video games what Netflix is to movies. Like Netflix, GaaS is a streaming platform that pushes content from the cloud to PCs, smartphones and other personal devices (including gaming consoles with the appropriate updates) on demand.

GaaS originates in the cloud. There, data centers packed with powerful servers maintain the gaming environment, process user commands, determine interaction between players and the virtual world, and deliver real-time results to players.
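To make that concrete, here is a highly simplified sketch (in Python, with hypothetical names) of the kind of tick loop a cloud game server runs: collect queued player commands, advance the shared world state, then push the results back to every connected client. Real GaaS back ends also render and video-encode each frame before streaming it.

```python
# Hypothetical, minimal sketch of a cloud game server's tick loop.
# Real GaaS servers also render and video-encode frames before streaming them.
import time

TICK_RATE = 60                       # world updates per second
world = {"players": {}}              # stand-in for the full game state

def apply_input(player_id, command):
    # Apply one player command to the shared world (here: a simple 2D move).
    pos = world["players"].setdefault(player_id, [0, 0])
    pos[0] += command.get("dx", 0)
    pos[1] += command.get("dy", 0)

def broadcast(tick):
    # Stand-in for streaming real-time results back to every connected client.
    print(f"tick {tick}: {world['players']}")

def game_loop(input_queue, ticks=10):
    for tick in range(ticks):
        start = time.perf_counter()
        while input_queue:                     # 1) process user commands
            apply_input(*input_queue.pop(0))
        broadcast(tick)                        # 2) deliver results to players
        # 3) sleep off the rest of the tick to hold a steady rate
        time.sleep(max(0.0, 1 / TICK_RATE - (time.perf_counter() - start)))

game_loop([("player-1", {"dx": 1}), ("player-1", {"dy": 2})])
```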

If the cloud is GaaS’s brains, then edge computing networks are its arms. They reach out to a worldwide base of users, connecting their devices to the gaming cloud.

Edge devices also keep things speedy by augmenting or, if necessary, taking over various processing duties. This helps reduce latency, the time lag between when a command is issued and when it’s executed.

Latency is especially detrimental to gamers. They rely on split-second actions that can make the difference between winning and losing. For them, lower latency is always better.
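For a feel for what’s being measured, here is a small, self-contained Python sketch that times round trips over UDP. The loopback echo thread is only a stand-in for a real GaaS edge node; in practice, the round trip crosses the internet and also includes rendering and video-encoding time.

```python
# Round-trip latency probe over UDP. The echo thread below stands in for a
# GaaS edge node; point the client at a real server to measure an actual link.
import socket, statistics, threading, time

def echo(sock):
    while True:
        data, addr = sock.recvfrom(64)
        if data == b"stop":
            return
        sock.sendto(data, addr)               # bounce the packet straight back

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                 # loopback stand-in for an edge node
threading.Thread(target=echo, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
samples = []
for seq in range(100):
    start = time.perf_counter()
    client.sendto(seq.to_bytes(4, "big"), server.getsockname())
    client.recvfrom(64)                       # wait for the echo
    samples.append((time.perf_counter() - start) * 1000)   # milliseconds

client.sendto(b"stop", server.getsockname())
print(f"median round-trip time: {statistics.median(samples):.3f} ms")
```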

Device choice

GaaS is innovative at the user end, too. It can interface with a wide array of client devices, offering gamers far more flexibility than they get with traditional gaming models.

With GaaS, users are no longer tied to a specific gaming PC or console such as the Microsoft Xbox or Sony PlayStation. Instead, gamers can use any supported device with a decent GPU and a stable internet connection speed of at least 10 to 15 Mbps.
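Why is a relatively modest 10 to 15 Mbps enough? Because what travels over the wire is heavily compressed video. A rough back-of-the-envelope calculation, assuming a 1080p stream at 60 frames per second, shows how much work the video codec has to do:

```python
# Rough arithmetic: raw 1080p60 video versus a 15 Mbps GaaS stream.
width, height, bytes_per_pixel, fps = 1920, 1080, 3, 60
raw_bits_per_sec = width * height * bytes_per_pixel * 8 * fps
print(f"uncompressed: {raw_bits_per_sec / 1e9:.1f} Gbit/s")     # ~3.0 Gbit/s
stream_bits_per_sec = 15e6                                      # 15 Mbps target
print(f"compression needed: ~{raw_bits_per_sec / stream_bits_per_sec:.0f}x")
```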

To be sure, some GaaS games—one example is the super-popular Fortnite—require a mobile or desktop app. But these apps are usually free.

Other cloud-based games are designed to work with any standard web browser. This lets a gamer pick up wherever they left off, using nearly any internet-connected device anywhere in the world.

Big business

If all this sounds attractive, it is. One of the first GaaS titles, World of Warcraft, is still active nearly 20 years after its initial launch. In 2015—the last time its publisher, Blizzard Entertainment, reported usage numbers—World of Warcraft had 5.5 million players.

Even more popular is Fortnite, introduced in 2017. Today it has more than 350 million registered users. In part, that’s because of the game’s flexible business model: Fortnite players can sign up and enjoy basic gameplay for free.

Instead of charging these users a fee, Fortnite’s developer, Epic Games, makes money from millions of microtransactions. These include in-game purchases of weapons and accessories, access to tournaments and other gated experiences, and the purchase of a new “season,” released four times a year.

Super-popular games like Fortnite and World of Warcraft have helped create a lucrative and compelling business model. This, in turn, has given rise to a new breed of GaaS tech providers.

One such operation is Blacknut, a France-based cloud gaming platform. Together with Australian outfit Radian Arc, Blacknut provides a GaaS digital infrastructure powered by AMD-based GPU servers designed and distributed by Supermicro.

What could go wrong?

Does GaaS have a downside? Sure. No platform is without its flaws.

For one, cloud gamers are at the mercy of the cloud. If a cloud provider experiences a slowdown or outage, a game can disappear until the issue is resolved.

For another, unlike games bought on physical media, GaaS titles are never really owned by the gamers who play them. If Epic decided to shut down Fortnite tomorrow, for example, its 350+ million gamers would have no choice but to look for alternate entertainment.

Internet access can be an issue, too. Those of us in first-world cities tend to take our high-speed connections for granted. The rest of the world may not be so lucky.

Future of GaaS

Looking ahead, the future of GaaS appears bright.

Advances in AI-powered cloud and edge computing will encourage game developers to create more nuanced and immersive content than ever before.

Faster and more consistent internet connections will help. They’ll give more power to both the bandwidth-hungry devices we use today and the shiny, new objects of desire we’ll clamor for tomorrow.

Tomorrow’s devices will surely include a mixture of VR and AR headsets. These could attach to other smart devices that enhance gameplay, like the interactive bodysuits foretold by movies such as Ready Player One.

GaaS will get smaller, too, as new mobile devices come to market. Cloud-gaming titles, already a mainstay of mobile gaming, should be further empowered by next-generation mobile processors and faster, more reliable wireless data connections like 5G.

We’re witnessing the evolution of gaming as multiple clients interact with low latencies and high-quality graphics. Welcome to the future.

 


What is the AMD Instinct MI300A APU?

Accelerate HPC and AI workloads with the combined power of CPU and GPU compute. 


The AMD Instinct MI300A APU, set to ship in this year’s second half, combines the compute power of a CPU with the capabilities of a GPU. Your data-center customers should be interested if they run high-performance computing (HPC) or AI workloads.

More specifically, the AMD Instinct MI300A is an integrated data-center accelerator that combines AMD Zen 4 cores, AMD CDNA3 GPUs and high-bandwidth memory (HBM) chiplets. In all, it has more than 146 billion transistors.

This AMD component uses 3D die stacking to enable extremely high bandwidth among its parts. Nine 5nm chiplets are 3D-stacked on top of four 6nm chiplets, with high-bandwidth memory surrounding them.

And it’s coming soon. The AMD Instinct MI300A is currently in AMD’s labs, will soon be sampled with customers and, AMD says, is scheduled to ship in the second half of this year.

‘Most complex chip’

The AMD Instinct MI300A was publicly displayed for the first time earlier this year, when AMD CEO Lisa Su held up a sample of the component during her CES 2023 keynote. “This is actually the most complex chip we’ve ever built,” Su told the audience.

A few tech blogs have gotten their hands on early samples. One of them, Tom’s Hardware, was impressed by the “incredible data throughput” among the Instinct MI300A’s CPU, GPU and memory dies.

The Tom’s Hardware reviewer added that this will let the CPU and GPU work on the same data in memory simultaneously, saving power, boosting performance and simplifying programming.

Another blogger, Karl Freund, a former AMD engineer who now works as a market researcher, wrote in a recent Forbes blog post that the Instinct MI300 is a “monster device” (in a good way). He also congratulated AMD for “leading the entire industry in embracing chiplet-based architectures.”

Previous generation

The new AMD accelerator builds on a previous generation, the AMD Instinct MI200 Series. It’s now used in a variety of systems, including Supermicro’s A+ Server 4124GQ-TNMI. This completely assembled system supports the AMD Instinct MI250 OAM (OCP Acceleration Module) accelerator and AMD Infinity Fabric technology.

The AMD Instinct MI200 accelerators are designed with the company’s 2nd gen AMD CDNA Architecture, which encompasses the AMD Infinity Architecture and Infinity Fabric. Together, they offer an advanced platform for tightly connected GPU systems, empowering workloads to share data fast and efficiently.

The MI200 series offers P2P connectivity over as many as 8 intelligent 3rd Gen AMD Infinity Fabric links, delivering up to 800 GB/sec of peak total theoretical I/O bandwidth. That’s 2.4x the GPU P2P theoretical bandwidth of the previous generation.
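As a quick back-of-the-envelope check of those figures (assuming the 800 GB/sec aggregate is spread evenly across the 8 links):

```python
# Sanity-check the quoted MI200 Infinity Fabric numbers, assuming an even
# split of the 800 GB/sec aggregate across all 8 links.
links = 8
total_peak_gb_per_sec = 800
print(f"per link: {total_peak_gb_per_sec / links:.0f} GB/sec")            # 100 GB/sec
print(f"previous gen: ~{total_peak_gb_per_sec / 2.4:.0f} GB/sec total")   # the 2.4x baseline
```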

Supercomputing power

The same kind of performance now available to commercial users of the AMD-Supermicro system is also being applied to scientific supercomputers.

The AMD Instinct MI250X accelerator is now used in the Frontier supercomputer built for the U.S. Dept. of Energy. That system’s peak performance is rated at 1.6 exaflops, or over a billion billion floating-point operations per second.
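That figure is easy to sanity-check: one exaflop is 10^18 floating-point operations per second.

```python
# Convert Frontier's 1.6-exaflop peak rating into plain numbers.
exaflops = 1.6
flops = exaflops * 10**18            # 1 exaflop = 10^18 floating-point ops/sec
print(f"{flops:.2e} operations per second")   # 1.60e+18, i.e. 1.6 billion billion
```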

The AMD Instinct MI250X accelerator provides Frontier with flexible, high-performance compute engines, high-bandwidth memory, and scalable fabric and communications technologies.

Looking ahead, the AMD Instinct MI300A APU will be used in Frontier’s successor, known as El Capitan. Scheduled for installation late this year, this supercomputer is expected to deliver at least 2 exaflops of peak performance.

 


AMD and Supermicro Sponsor Two Fastest Linpack Scores at SC22’s Student Cluster Competition

The Student Cluster Competition made its 16th appearance at the Supercomputing 2022 (SC22) event in Dallas. The two student teams running AMD EPYC™ CPUs and AMD Instinct™ GPUs were the two teams that aced the Linpack benchmark, the test used to determine the TOP500 supercomputers in the world.


Last month, the annual Supercomputing Conference 2022 (SC22) was held in Dallas. With it came the Student Cluster Competition (SCC), which began in 2007. The SCC offers an immersive high-performance computing (HPC) experience to undergraduate and high school students.

 

According to the SC22 website: “Student teams design and build small clusters, learn scientific applications, apply optimization techniques for their chosen architectures and compete in a non-stop, 48-hour challenge at the SC conference to complete real-world scientific workloads, showing off their HPC knowledge for conference attendees and judges.”

 

Each team has six students, including a student team leader, plus at least one faculty advisor, and is associated with vendor sponsors, which provide the equipment. AMD and Supermicro jointly sponsored both the Massachusetts Green Team from MIT, Boston University and Northeastern University and the 2MuchCache team from UC San Diego (UCSD) and the San Diego Supercomputer Center (SDSC). Running AMD EPYC™ CPUs and AMD Instinct™-based GPUs supplied by AMD and Supermicro, the two teams came in first and second in the SCC Linpack test.

 

The Linpack benchmarks measure a system’s floating-point computing power, according to Wikipedia. The latest version of these benchmarks is used to determine the TOP500 list, which ranks the world’s most powerful supercomputers.
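To see what Linpack actually measures, here is a minimal Python/NumPy sketch of the idea: time the solution of a dense system of linear equations and convert the standard LU-factorization operation count into a FLOP rate. (The benchmark actually used for the TOP500 is HPL, a far more rigorous, distributed implementation.)

```python
# Linpack-style FLOP-rate probe: time a dense solve of Ax = b and convert the
# classic LU operation count (~2/3·n^3) into GFLOP/s. A sketch of the idea only;
# the official TOP500 benchmark is HPL.
import time
import numpy as np

n = 4000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)               # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3 + 2 * n**2       # standard operation count for an LU solve
print(f"{flops / elapsed / 1e9:.1f} GFLOP/s in {elapsed:.2f} s")
```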

 

In addition to chasing high scores on benchmarks, the teams must operate their systems without exceeding a power limit. For 2022, the competition used a variable power limit: at times, the power available to each team for its competition hardware was as high as 4,000 watts (but was usually lower), and at times it was as low as 1,500 watts (but was usually higher).

 

The “2MuchCache” team offers a poster page with extensive detail about its competition hardware. The team used two third-generation AMD EPYC™ 7773X CPUs, each with 64 cores, 128 threads and 768MB of stacked-die cache, in one AS-4124GQ-TNMI system with four AMD Instinct™ MI250 GPUs with 53 simultaneous threads.

 

The “Green Team’s” poster page also lists two third-generation AMD EPYC™ 7003-series processors and AMD Instinct™ MI210 GPUs with AMD Infinity Fabric. The Green Team utilized two Supermicro AS-4124GS-TNR GPU systems.

 

The Students of 2MuchCache:

Longtian Bao, role: Lead for Data Centric Python, Co-lead for HPCG

Stefanie Dao, role: Lead for PHASTA, Co-lead for HPL

Michael Granado, role: Lead for HPCG, Co-lead for PHASTA

Yuchen Jing, role: Lead for IO500, Co-lead for Data Centric Python

Davit Margarian, role: Lead for HPL, Co-lead for LAMMPS

Matthew Mikhailov Major, role: Team Lead, Lead for LAMMPS, Co-lead for IO500

 

The Students of Green Team:

Po Hao Chen, roles: Team leader, theory & HPC, benchmarks, reproducibility

Carlton Knox, roles: Computer Arch., Benchmarks, Hardware

Andrew Nguyen, roles: Compilers & OS, GPUs, LAMMPS, Hardware

Vance Raiti, roles: Mathematics, Computer Arch., PHASTA

Yida Wang, roles: ML & HPC, Reproducibility

Yiran Yin, roles: Mathematics, HPC, PHASTA

 

Congratulations to both teams!


Choosing the Right AI Infrastructure for Your Needs

AI architecture must scale effectively without sacrificing cost efficiency. One size does not fit all.


Building an agile, cost-effective environment that delivers on a company’s present and long-term AI strategies can be a challenge, and the decisions made around that architecture will have an outsized effect on performance.

 

“AI capabilities are probably going to be 10%-15% of the entire infrastructure,” says Ashish Nadkarni, IDC group vice president and general manager, infrastructure systems, platforms and technologies. “But the amount the business relies on that infrastructure, the dependence on it, will be much higher. If that 15% doesn’t behave in the way that is expected, the business will suffer.”

 

Experts like Nadkarni note that companies can, and should, avail themselves of cloud-based options to test and ramp up AI capabilities. But as workloads scale or the enterprise expands its usage, the costs associated with cloud computing can rise significantly, making on-premises architecture a valid alternative worth considering.

 

No matter the industry, to build a robust and effective AI infrastructure, companies must first accurately diagnose their AI needs. What business challenges are they trying to solve? What forms of high-performance computing power can deliver solutions? What type of training is required to deliver the right insights from data? And what’s the most cost-effective way for a company to support AI workloads at scale and over time? Cloud may be the answer to get started, but for many companies on-prem solutions are viable alternatives.

 

“It’s a matter of finding the right configuration that delivers optimal performance for [your] workloads,” says Michael McNerney, vice president of marketing and network security at Supermicro, a leading provider of AI-capable, high-performance servers, management software and storage systems. “How big is your natural language processing or computer vision model, for example? Do you need a massive cluster for AI training? How critical is it to have the lowest latency possible for your AI inferencing? If the enterprise does not have massive models, does it move down the stack into smaller models to optimize infrastructure and cost on the AI side as well as in compute, storage and networking?”

 

Get perspective on these and other questions about selecting the right AI infrastructure for your business in the Nov. 20, 2022, Wall Street Journal paid program article:

 

Investing in Infrastructure

 


Eliovp Increases Blockchain-Based App Performance with Supermicro Servers

Eliovp, which brings together computing and storage solutions for blockchain workloads, rewrote its code to take full advantage of AMD’s Instinct MI100 and MI250 GPUs. As a result, Eliovp’s blockchain calculations run up to 35% faster than what it saw on previous generations of its servers.


When you’re building blockchain-based applications, you typically need a lot of computing and storage horsepower. This is the niche that Belgium-based Eliovp fills. They have developed a line of extremely fast cloud-based servers designed to run demanding blockchain workloads.

 

Eliovp has been recognized as the top Filecoin storage provider in Europe. Filecoin is a decentralized, blockchain-based protocol that lets anyone rent out spare local storage; it’s a key component of Web3.

 

To satisfy these compute and storage needs, Eliovp employs Supermicro’s A+ AS-1124US and AS-4124GS servers, running 32-core AMD EPYC™ 7543 and 16-core AMD EPYC™ 7313 CPUs and as many as 8 AMD Instinct™ MI100 and MI250 GPUs to further boost performance.

 

What makes these servers especially potent is that Eliovp rewrote its code to run on this specific AMD Instinct GPU family. As a result, Eliovp’s blockchain calculations run up to 35% faster than what it saw on previous generations of its servers.

 

One of the attractions of the Supermicro servers is the ability to leverage the high-density core count, higher clock speeds and 32 memory slots. And all of it comes packaged in a relatively small form factor.

 

“By working with Supermicro, we get new generations of servers with AMD technology earlier in our development cycle, enabling us to bring our products to market faster,” said Elio Van Puyvelde, CEO of Eliovp. The company was able to take advantage of new CPU and GPU instructions and memory management to make its code more efficient and effective. Eliovp was also able to reduce overall server power consumption, which is always important in blockchain applications that span dozens of machines.

 


Microsoft Azure’s More Capable Compute Instances Take Advantage of the Latest AMD EPYC™ Processors

Azure HBv3 series virtual machines (VMs) are optimized for HPC applications, such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, and various simulation tasks. HBv3 VMs feature up to 120 Third-Generation AMD EPYC™ 7v73X-series CPU cores with more than 450 GB of RAM.


Increasing demands for higher-performance computing mean that cloud-based computing needs to ratchet up its performance, too. Microsoft Azure has introduced more capable compute virtual machines (VMs) that take advantage of the latest AMD EPYC™ processors. This means developers can easily spin up VMs whose physical equivalents would normally cost thousands of dollars to purchase.

 

This story’s focus is on two of Azure’s series: HBv3 and NVv4. In most cases, a single virtual machine is used to take advantage of all its resources.

Azure HBv3 series VMs are optimized for HPC applications, such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, and various simulation tasks. HBv3 VMs feature up to 120 Third-Generation AMD EPYC™ 7V73X-series CPU cores with more than 450 GB of RAM and processor clock frequencies up to 3.5GHz. All HBv3-series VMs feature 200Gb/sec HDR InfiniBand networking to enable supercomputer-scale HPC workloads, and the VMs are connected and optimized to deliver the most consistent performance. Get more information about AMD EPYC and Microsoft Azure virtual machines.

 

A Dutch construction company, TBI, is using Azure NVv4 VMs to run computer-aided design and building modeling tasks on a series of virtual Windows desktops. The NVv4 VMs are available only running Windows, powered by from 4 to 32 AMD EPYC™ vCPUs and offering a partial to full AMD Instinct™ MI25 GPU with GPU memory ranging from 2GB to 16GB. Previous generations of NV instances used Intel CPUs and NVIDIA GPUs that offered less performance.

 

TBI chose this solution because it was cheaper, easier to support and made it simpler to keep its software collection updated. Using virtual desktops also meant that no client data was stored on any laptops, making things more secure. And these instances delivered equivalent performance, taking advantage of SR-IOV technology.

 

Supermicro offers a wide range of servers that incorporate AMD EPYC™ CPUs, along with a number of servers optimized for applications that use GPUs. These servers range from 1U rackmount systems to high-end 4U GPU-optimized systems. Whether you’re deploying on-prem or building your own cloud, Supermicro’s A+ servers are optimized for performance and technical computing applications, and they run Azure and other systems well. Get more information about Supermicro servers with AMD’s EPYC™ CPUs.


Offering Distinct Advantages: The AMD Instinct™ MI210 and MI250 Series GPU Accelerators and Supermicro SuperBlades

Using a six-nanometer process and CDNA2 graphics dies, AMD has created its third generation of GPU accelerators, which have more than twice the performance of previous GPU processors and deliver 181 teraflops of mixed-precision peak computing power.


AMD and Supermicro have made it easier to exploit the most advanced combination of GPU and CPU technologies.

Derek Bouius, a senior product manager at AMD, said: “Using six nanometer processes and the CDNA2 graphics dies, we created the third generation of GPU chipsets that have more than twice the performance of previous GPU processors. They deliver 181 teraflops of mixed precision peak computing power.” Called the AMD Instinct MI210™ and AMD Instinct MI250™, the new accelerators have twice the memory (64 GB) to work with and deliver data at the rate of 1.6 TB/sec. Both use fourth-generation PCIe and come with direct connectors to Infinity Fabric bridges for faster I/O throughput between GPU cards, without having their traffic go through the standard PCIe bus.

The Instinct accelerators deliver immediate performance benefits in the most complex computational applications, such as molecular dynamics, computer-aided engineering, weather modeling, and oil and gas modeling.

"We provided optimized containerized applications that are pre-built to support the accelerator and run them out of the box," Bouius said. “It is a very easy lift to go from existing solutions to the AMD accelerator,” he added. It’s accomplished by bringing together AMD’s ROCm™ support libraries and tools with its HIP programming language and device drivers – all of which are open source. They can unlock the GPU performance enhancements to make it easier for software developers to take advantage of its latest processors. AMD offers a catalog of dozens of currently available applications.

Supermicro’s SuperBlade product line combines the new AMD Instinct™ GPU accelerators and AMD EPYC™ processors to deliver higher performance with lower latency for its enterprise customers.

One packaging option is to combine six chassis with 20 blades each, delivering 120 servers that provide a total of more than 3,000 teraflops of combined processing power. This equipment delivers more power efficiency in less space with fewer cables, providing a lower cost of ownership. The blade servers are all hot-pluggable and come with two onboard front-mounted 25 gigabit and two 10 gigabit Ethernet connectors.

“Everything is faster now for running enterprise workloads,” says Shanthi Adloori, senior director of product management for Supermicro. “This is why our Supermicro servers have won the world record in performance from the Standard Performance Evaluation Corp. three years in a row.” Another popular design for the SuperBlade is to provide an entire “private cloud in a box” that combines administration and worker nodes and handles deploying a Red Hat OpenShift platform to run Kubernetes-based deployments with minimal provisioning.



AMD and Supermicro Work Together to Produce the Latest High-Performance Computers


Solving some of business’s bigger computing challenges requires a solid partnership among the CPU vendor, system builders and channel partners. That is what AMD and Supermicro have brought to market with the third generation of AMD’s EPYC™ processors with AMD 3D V-Cache™ and AMD Instinct™ MI200 series GPU accelerators, wrapped up in SuperBlade servers built by Supermicro.

 

“This has immediate benefits for particular fields such as crash and digital circuit simulations and electronic design automation,” said David Weber, Senior Manager for AMD. “It means we can create virtual chips and track workflows and performance before we design and build the silicon." The same situation holds for computational fluid dynamics, he added, "in which we can determine the virtual air and water flows across wings and through water pumps and save a lot of time and money, and the AMD 3D V-Cache™ makes this process a lot faster.” Without any software coding changes, these applications are seeing 50% to 80% performance improvement, Weber said.

 

The chips are not just fast; they also come with several built-in security features, including support for Zen 3 and Shadow Stack. Zen 3 is the overall name for a series of improvements to AMD’s higher-end CPU line that deliver a 19% improvement in instructions per clock and lower-latency access to a doubled cache, compared with the earlier Zen 2 architecture chips.

 

These processors also support Microsoft’s Hardware-enforced Stack Protection, which helps detect and thwart control-flow attacks by checking the normal program stack against a secured, hardware-stored copy. This helps the system boot securely, protects the computer from firmware vulnerabilities, shields the operating system from attacks, and prevents unauthorized access to devices and data with advanced access controls and authentication systems.

 

Supermicro offers SuperBlade servers that take advantage of all these performance and security improvements. For more information, see this webcast.


Lawrence Livermore Labs Advances Scientific Research with AMD GPU Accelerators

The Lawrence Livermore National Laboratory chose to use a cluster of 120 servers running AMD EPYC™ processors with nearly 1,000 AMD Instinct™ GPU accelerators. The hardware, facilitated by Supermicro, was an excellent match for the molecular dynamics simulations required for the Lab's cutting-edge research, which combines machine learning with structural biology concepts.


Lawrence Livermore National Laboratory is one of the world’s leading centers of high-performance computing (HPC), and it is constantly upgrading its equipment to meet increasing computational demands. It houses one of the world's largest computing environments. One of its more pressing research goals derives from the COVID-19 crisis.

Lawrence Livermore researches and supports proposals from the COVID-19 HPC Consortium, which is composed of more than a dozen research organizations across government, academia and private industry. It aims to accelerate disease detection and treatment efforts, as well as to screen antibody candidates virtually and run several disease-related mathematical simulations.

"By leveraging the massive compute capabilities of the world’s [more] powerful supercomputers, we can help accelerate critical modeling and research to help fight the virus," said Forrest Norrod, senior vice president and general manager, AMD Datacenter and Embedded Systems Group.

The lab chose to use a cluster of 120 servers running AMD EPYC™ processors with nearly 1,000 AMD Instinct™ GPU accelerators. The servers were connected by Mellanox switches. The product choices had two benefits: First, the hardware, facilitated by Supermicro, was an excellent match for the molecular dynamics simulations required for this research. The lab is performing cutting-edge research that combines machine learning with structural biology concepts. Second, the gear was tested and packaged together, so it could become operational when it was delivered to the lab.

AMD software engineers and application specialists were able to modify components to run GPU-based applications. This is top-of-the-line gear: the AMD accelerators deliver up to 13.3 teraFLOPS of single-precision peak floating-point performance combined with 32GB of high-bandwidth memory. The scientists were able to reduce their simulation run times from seven hours to just 40 minutes, allowing them to test multiple modeling iterations efficiently.

For more information, see the Supermicro case study and Lawrence Livermore report.
