Performance Intensive Computing

AMD’s Infinity Guard Selected by Google Cloud for Confidential Computing

Google Cloud has been working over the past several years with AMD on developing new on-chip security protocols. This is part three of a four-part series covering the release of the AMD EPYC™ 9004 series processors.

Google Cloud has been working over the past several years with AMD on developing new on-chip security protocols, which have seen further innovation with the release of the AMD EPYC™ 9004 series processors. These have a direct benefit for performance-intensive computing applications, particularly for supporting higher-density virtual machines (VMs), preventing data flows from leaving the confines of what Google calls confidential VMs and further isolating VMs from their hypervisors. Google Cloud offers a collection of N2D and C2D instances that support these confidential VMs.
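As an illustration, here is a minimal sketch of requesting one of these confidential N2D instances with the google-cloud-compute Python client. The project, zone, VM name and image are placeholders, and the field names are based on the public Compute Engine API, so treat this as a sketch rather than a definitive recipe.

```python
# Sketch: create a Confidential VM on an AMD SEV-capable N2D instance.
# Project, zone, name and image values are placeholders.
from google.cloud import compute_v1

def create_confidential_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/n2d-standard-4",
        # Ask Compute Engine to enable AMD SEV memory encryption for this VM.
        confidential_instance_config=compute_v1.ConfidentialInstanceConfig(
            enable_confidential_compute=True
        ),
        # Confidential VMs cannot live-migrate; they must terminate on
        # host maintenance.
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts"
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the create operation finishes

create_confidential_vm("my-project", "us-central1-a", "sev-demo-vm")
```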
 
“Product security is always our top focus,” said AMD CTO Mark Papermaster. “We are continuously investing and collaborating in the security of these technologies.” 
 
Royal Hansen, VP of engineering for Google Cloud, said: “Our customers expect the most trustworthy computing experience on the planet. Google and AMD have a long history and a variety of relationships with the deepest experts on security and chip development. This was at the core of our going to market with AMD’s security solutions for data centers.”
 
The two companies also worked together on this security analysis.
 
Collectively called Infinity Guard, the security technologies they’ve been working on involve four initiatives:
 
1. Secure encrypted virtualization provides each VM with its own unique encryption key known only to the processor.
 
2. Secure nested paging complements this virtualization to protect each VM from any malicious hypervisor attacks and provide for an isolated and trusted environment.
 
3. AMD’s secure boot, along with Trusted Platform Module attestation of the confidential VMs, happens every time a VM boots, ensuring the VM’s integrity and mitigating persistent threats.
 
4. AMD’s secure memory encryption and integration into the memory channels speed performance.
 
These technologies are combined and communicate using the AMD Infinity Fabric pathways to deliver breakthrough performance along with more secure communications.
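A quick way to sanity-check from inside a Linux guest whether memory encryption is active is to look for the kernel's boot messages. The exact wording varies by kernel version, so the sketch below is a heuristic, not a substitute for the formal attestation flow described above.

```python
# Sketch: check from inside a Linux VM whether AMD SEV memory encryption
# is reported active. The exact dmesg wording varies by kernel version,
# so this is a heuristic rather than an authoritative attestation check.
import subprocess

def sev_active() -> bool:
    try:
        log = subprocess.run(
            ["dmesg"], capture_output=True, text=True, check=True
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return False
    # Recent kernels print a line such as:
    #   "Memory Encryption Features active: AMD SEV"
    return "Memory Encryption Features active" in log and "SEV" in log

print("SEV active:", sev_active())
```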
 

Are Your App Workloads Running in Parallel?

To be effective at delivering performance-intensive applications, it pays to split up your workloads and run them simultaneously, a.k.a. in parallel. In the past, we didn’t really think about the resources required to run workloads, because many business computers were all-purpose machines. There was also a tendency to run loads serially to avoid bogging down under heavy CPU utilization, heavy I/O and so on.

 

But computers have become much more capable of late. What were once thought of as “desktop” computers have approached the arena once occupied by minicomputers and mainframes. Like the larger systems, they serve multiple concurrent users and more demanding applications. As a result, we need to think more carefully about how their various components – processor, memory, storage and network connections – interact, and we need to find and eliminate the bottlenecks between these components to make them useful for higher-end workloads.
 

Straighten out Bottlenecks


One way to eliminate bottlenecks is to break your apps into smaller, more digestible pieces that can run concurrently. As the new processors employ more cores and more sophisticated components, more of your code can be executed in parallel across the entire CPU package. This is the inherent nature of parallel processing, and it is why the world’s fastest supercomputers now routinely span thousands (and in some cases millions) of cores.
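In Python, for instance, a CPU-bound job can be split into chunks and fanned out across every available core with a process pool. A minimal sketch, with a sum-of-squares loop standing in for real work:

```python
# Sketch: split a CPU-bound workload across all available cores.
from concurrent.futures import ProcessPoolExecutor
import os

def crunch(chunk: range) -> int:
    # Stand-in for a CPU-heavy task: sum of squares over one chunk.
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    # Eight chunks of one million numbers each.
    chunks = [range(i, i + 1_000_000) for i in range(0, 8_000_000, 1_000_000)]
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(crunch, chunks))  # chunks run in parallel
    print(sum(results))
```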


A company called Weka has developed a file system designed to provide higher-speed data ingestion, making it more appropriate for machine learning and advanced mathematical modeling applications. Understanding the particular type of data storage you need – whether it is a parallel file system such as Weka’s, more scratch space for computations or better backups – can make a big difference in overall performance.


But it is also important to understand how your apps work across the network. Is there a lot of back-and-forth between clients and servers, or does the app send a small chunk of data and then wait for a reply? This introduces a lot of downtime for the app, and these “wait states” should be identified and potentially eliminated.
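The difference is easy to see in code: issuing requests one at a time pays for every network round trip in sequence, while issuing them concurrently overlaps the waiting. A minimal sketch using asyncio; the aiohttp library and the URLs are illustrative placeholders, not anything from the article.

```python
# Sketch: overlap network round trips instead of paying for them serially.
# aiohttp and the URLs are illustrative placeholders.
import asyncio
import aiohttp

URLS = [f"https://example.com/item/{i}" for i in range(20)]

async def fetch(session: aiohttp.ClientSession, url: str) -> int:
    async with session.get(url) as resp:
        return resp.status

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # All 20 requests are in flight at once, so total wall time is
        # roughly one round trip instead of twenty.
        statuses = await asyncio.gather(*(fetch(session, u) for u in URLS))
    print(statuses)

asyncio.run(main())
```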
 

Offload Workloads


Does your application do a lot of calculation? As discussed in an earlier story appearing on Performance-Intensive Computing, complementary processors, such as co-processors and GPUs, can deliver a big performance boost, so long as the processor can move on to its next task, working in parallel, instead of waiting for data to return from the offloaded computation.
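That pattern can be sketched with a future: hand the heavy calculation to another execution unit, keep the main task busy, and block only when the answer is actually needed. Here a process pool stands in for a co-processor or GPU.

```python
# Sketch: offload a heavy calculation and keep working while it runs.
# A process pool stands in for a co-processor or GPU here.
from concurrent.futures import ProcessPoolExecutor

def heavy_calculation(n: int) -> int:
    return sum(i * i for i in range(n))

def other_useful_work() -> None:
    print("main task keeps making progress...")

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        future = pool.submit(heavy_calculation, 10_000_000)  # offload
        other_useful_work()        # runs in parallel with the offload
        print(future.result())     # block only when the answer is needed
```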

 

Working in parallel can be a challenge when your apps frequently pause to wait for data from another process, or when they are highly monolithic, designed to run in a serial fashion. Such apps may be challenging to rewrite to take advantage of cloud-native or parallel operations. At some point, you are going to have to make that break and put in the programming effort to modernize your apps, but only you or your company can decide when it’s right to do that.

 

But if you can modify your workloads for this parallel structure and your hardware was designed to support it, you will see big benefits.

Unlocking the Value of the Cloud for Mid-size Enterprises

Organizations around the world are requiring new options for their next-generation computing environments. Mid-size organizations, in particular, are facing increasing pressure to deliver cost-effective, high-performance solutions within their hyperconverged infrastructures (HCI). A recent collaboration between Supermicro, Microsoft Azure and AMD, leveraging their collective technologies, has created a fresh approach that lets enterprises maintain performance at a lower operational cost while helping to reduce the organization’s carbon footprint in support of sustainability initiatives. This cost-effective 1U system (a 2U version is available) offers power, flexibility and modularity in large-scale GPU deployments.

The results of the collaboration combine the latest technologies, supporting multiple CPU, GPU, storage and networking options optimized to deliver uniquely configured and highly scalable systems. The product can be optimized for SQL and Oracle databases, VDI, productivity applications and database analytics. This white paper explores why this universal GPU architecture is an intriguing and cost-effective option for CTOs and IT administrators who are planning to rapidly implement hybrid cloud, data center modernization, branch office/edge networking or Kubernetes deployments at scale.

Get the 7-page white paper that provides the detail to assess the solution for yourself, including the new Azure Stack HCI certified system, specifications, cost justification and more.

 

Register to Watch Supermicro's Sweeping A+ Launch Event on Nov. 10

Join Supermicro online Nov. 10 to watch the unveiling of the company’s new A+ systems, featuring next-generation AMD EPYC™ processors. They can’t tell us any more right now, but you can register for a link to the event by scrolling down and signing up on this page.
Energy-Efficient AMD EPYC™ Processors Bring Significant Savings

Cut electricity consumption by up to half with AMD’s power-saving EPYC™ processors.


Nokia was able to target up to a 40% reduction in server power consumption using EPYC processors. DBS and Ateme each experienced a 50% drop in energy costs. AMD’s EPYC™ processors can provide big energy-saving benefits, letting you meet your most demanding application performance requirements while still reducing your environmental impact.

For example: to host a collection of 1,200 virtual machines, a solution built on AMD EPYC processors would require 10 servers, compared with 15 servers built using equivalent Intel CPUs. This translates into a 41% lower total cost of ownership over a three-year period, with a third less energy consumption, saving on carbon emissions too. For deeper detail, including links to case studies showing how the companies mentioned above saved significantly on energy costs while reducing their carbon footprints, check out the infographic.
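To make the arithmetic concrete, here is a back-of-envelope sketch of how such a comparison is computed. Only the server counts come from the example above; every cost and power figure below is a hypothetical placeholder, and AMD's published 41% figure reflects its own, more detailed cost model.

```python
# Sketch: back-of-envelope TCO comparison for hosting 1,200 VMs.
# Server counts come from the example above; all cost and power
# figures below are hypothetical placeholders.
VMS_NEEDED = 1_200
VMS_PER_AMD_SERVER = 120      # yields 10 servers
VMS_PER_ALT_SERVER = 80       # yields 15 servers

CAPEX_PER_SERVER = 25_000     # USD, hypothetical
WATTS_PER_SERVER = 800        # hypothetical
USD_PER_KWH = 0.12            # hypothetical
HOURS_3_YEARS = 3 * 365 * 24

def tco(servers: int) -> float:
    energy_kwh = servers * WATTS_PER_SERVER / 1000 * HOURS_3_YEARS
    return servers * CAPEX_PER_SERVER + energy_kwh * USD_PER_KWH

amd_servers = -(-VMS_NEEDED // VMS_PER_AMD_SERVER)   # ceiling division -> 10
alt_servers = -(-VMS_NEEDED // VMS_PER_ALT_SERVER)   # ceiling division -> 15
saving = 1 - tco(amd_servers) / tco(alt_servers)
print(f"{amd_servers} vs {alt_servers} servers; TCO saving ~ {saving:.0%}")
```

With identical per-server assumptions the saving simply tracks the server-count ratio; differing per-server prices, power draw and licensing are what move the result toward figures like the 41% quoted above.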

 

The Perfect Combination: The Weka Next-Gen File System, Supermicro A+ Servers and AMD EPYC™ CPUs

Weka’s file system, WekaFS, unifies your entire data lake into a shared global namespace where you can more easily access and manage trillions of files stored in multiple locations from one directory.


One of the challenges of building machine learning (ML) models is managing data. Your infrastructure must be able to process very large data sets rapidly as well as ingest both structured and unstructured data from a wide variety of sources.

 

That kind of data is typically generated in performance-intensive computing areas like GPU-accelerated applications, structural biology and digital simulations. Such applications typically have three problems: how to efficiently fill a data pipeline, how to easily integrate data across systems and how to manage rapid changes in data storage requirements. That’s where Weka.io comes into play, providing higher-speed data ingestion and avoiding unnecessary copies of your data while making it available across the entire ML modeling space.

 

Weka’s file system, WekaFS, has been developed just for this purpose. It unifies your entire data lake into a shared global namespace where you can more easily access and manage trillions of files stored in multiple locations from one directory. It works across both on-premises and cloud storage repositories and is optimized for cloud-intensive storage so that it will provide the lowest possible network latencies and highest performance.

 

This next-generation data storage file system has several other advantages: it is easy to deploy and entirely software-based, and it provides all-flash performance, NAS simplicity and manageability, cloud scalability and breakthrough economics. It was designed to run on any standard x86-based server hardware and commodity SSDs, or to run natively in the public cloud, such as on AWS.

 

Weka’s file system is designed to scale to hundreds of petabytes, thousands of compute instances and billions of files. Read and write latency for file operations against active data is as low as 200 microseconds in some instances.
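Because WekaFS presents a POSIX interface, applications can exploit that scale with ordinary file I/O fanned out across many workers. A minimal sketch below reads every file under a mounted namespace in parallel; the mount path is a placeholder.

```python
# Sketch: fan file reads out across a shared POSIX namespace, such as a
# WekaFS mount. The mount path is a placeholder.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

MOUNT = Path("/mnt/weka/training-data")   # placeholder mount point

def read_size(path: Path) -> int:
    return len(path.read_bytes())         # stand-in for real ingest work

files = [p for p in MOUNT.rglob("*") if p.is_file()]
with ThreadPoolExecutor(max_workers=64) as pool:
    total = sum(pool.map(read_size, files))   # reads overlap in flight
print(f"read {total} bytes from {len(files)} files")
```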

 

Supermicro has produced its own NVMe Reference Architecture that supports WekaFS on some of its servers, including the Supermicro A+ AS-1114S-WN10RT and AS-2114S-WN24RT, using AMD EPYC™ 7402P processors with at least 2TB of memory, expandable to 4TB. Both servers support hot-swappable NVMe storage modules for maximum performance. Also check out the Supermicro WekaFS AI and HPC Solution Bundle.

 

 

Supermicro SuperBlades®: Designed to Power Through Distributed AI/ML Training Models

Running heavy AI/ML workloads can be a challenge for any server, but the SuperBlade offers extremely fast networking options, upgradability, the ability to run two AMD EPYC™ 7000-series 64-core processors and the Horovod open-source framework for scaling deep-learning training across multiple GPUs.


Running the largest artificial intelligence (AI) and machine learning (ML) workloads is a job for higher-performing systems; such loads are often tough even for otherwise capable machines. Supermicro’s SuperBlade combines blades built on AMD EPYC™ CPUs and GPUs in a single rack-mounted enclosure (such as the Supermicro SBE-820H-822), which leverages an extremely fast networking architecture for demanding applications that need to communicate with other servers to complete a task.

 

The Supermicro SuperBlade fits everything into an 8U chassis that can host up to 20 individual servers. This means a single chassis can be divided into separate training and model processing jobs. The components are key: servers can take advantage of the 200G HDR InfiniBand network switch without losing any performance. Think of this as delivering a cloud-in-a-box, providing both easier management of the cluster along with higher performance and lower latencies.

 

The Supermicro SuperBlade is also designed as a disaggregated server, meaning that components can be upgraded with newer and more efficient CPUs or memory as technology progresses. This feature significantly reduces E-waste.


The SuperBlade line supports a wide selection of configurations, including both CPU-only and mixed CPU/GPU models, such as the SBA-4119SG, which comes with up to two AMD EPYC™ 7000-series 64-core CPUs. These components are delivered on blades that slide right in, and they slide out just as easily when you need to replace a blade or the enclosure. The SuperBlade servers support a wide network selection as well, ranging from 10G to 200G Ethernet connections.

 

The SuperBlade employs Horovod, a distributed model-training framework built on a message-passing interface, to let multiple ML sessions run in parallel, maximizing performance. In a sample test, two SuperBlade nodes were able to process 3,622 GoogleNet images per second, and eight nodes scaled up to 13,475 GoogleNet images per second.
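Horovod keeps the per-worker code changes small: initialize, wrap the optimizer so gradients are averaged across workers, and broadcast the starting weights. A minimal PyTorch-flavored sketch of that pattern; the model and data are placeholders, not anything from the benchmark above.

```python
# Sketch: minimal Horovod data-parallel training loop (PyTorch flavor).
# Model and data are placeholders. Launch one process per device, e.g.:
#   horovodrun -np 8 python train.py
import torch
import horovod.torch as hvd

hvd.init()                                      # one process per GPU/core
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())

model = torch.nn.Linear(512, 10)                # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are averaged across all workers.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters()
)
# Start every worker from identical weights.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

for step in range(100):
    x = torch.randn(64, 512)                    # fake batch
    y = torch.randint(0, 10, (64,))
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()                             # gradients allreduced here
    optimizer.step()                            # synchronized update
```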


As you can see, Supermicro’s SuperBlade improves performance-intensive computing and boosts AI and ML use cases, enabling larger models and data workloads. The combined solution enables higher operational efficiency: it can automatically streamline processes, monitor for potential breakdowns, apply fixes, facilitate the flow of accurate and actionable data more efficiently, and scale up training across multiple nodes.

AMD’s Threadripper: Higher-Performance Computing from a Desktop Processor

The AMD Threadripper™ CPU may be a desktop processor, but desktop computing was never like this. The new chips come in a variety of multi-core versions, topping out at 64 cores running up to 128 threads, with 256MB of L3 cache and 2TB of 8-channel DDR4 memory. The newest Threadrippers are built with AMD’s latest 7-nanometer dies.

 

Content creators, designers, video animators and digital FX experts place much higher demands on their digital workstations than typical PC users. These disciplines often rely on heavily threaded applications such as Adobe After Effects, Unreal Engine or CAD apps such as those from Autodesk. What is needed is a corresponding increase in computing power to handle these applications.

 

That’s where one solution comes in handy for this type of power user: the AMD Ryzen Threadripper™ CPU, which now has a PRO 5000 update. One advantage of these newer chips is that they fit the same WRX80 motherboards that supported the earlier Threadripper series. There are other configurations, too, including the ProMagix HD150 workstation sold by Velocity Micro. That solution provider is looking at testing overclocking on both the MSI and ASRock motherboards it will include in its HD150 workstations; that’s right, this is a chip designed from the get-go to be overclocked. Benchmarks using the sample apps mentioned above ran about twice as fast as on competitors’ less-capable hardware. (Supermicro offers the M12SWA-TF motherboard with Threadripper support.)

 

Desktop Was Never Like This

 

The AMD Threadripper™ CPU may be a desktop processor, but desktop computing was never like this. The new chips come in a variety of multi-core versions, topping out at 64 cores running up to 128 threads, with 256MB of L3 cache and 2TB of 8-channel DDR4 memory. The newest Threadrippers are built with AMD’s latest 7-nanometer dies.

 

The Threadripper CPUs are not just fast; they also come with several built-in security features, including the protections of the Zen 3 architecture and Shadow Stack support. Zen 3 is the overall name for a series of improvements to AMD’s higher-end CPU line that have shown a 19% improvement in instructions per clock. The chips also offer lower latency and double the directly accessible L3 cache compared with the earlier Zen 2 architecture.

 

These processors also support Microsoft’s Hardware-enforced Stack Protection to help detect and thwart control-flow attacks by checking the normal program stack against a secured hardware-stored copy. This helps the system boot securely, protects the computer from firmware vulnerabilities, shields the operating system from attacks, and prevents unauthorized access to devices and data with advanced access controls and authentication systems.
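Whether the operating system can actually use the shadow stack depends on kernel support. One hedged way to check on Linux is to look for the relevant CPU flag; the exact spelling ("shstk" or "user_shstk") varies by kernel version, so the sketch below treats it as a heuristic.

```python
# Sketch: check /proc/cpuinfo for the CET shadow-stack CPU flag on Linux.
# The flag's exact spelling ("shstk" or "user_shstk") varies by kernel
# version, so this is a heuristic, not a definitive capability test.
def shadow_stack_flag() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return any("shstk" in flag for flag in line.split())
    except OSError:
        pass
    return False

print("shadow stack flag present:", shadow_stack_flag())
```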

Supermicro and Qumulo Deliver High-Performance File Data Management Solution


One of the issues that’s key to delivering higher-performing computing solutions is something that predates the PC itself: managing distributed file systems. The challenge becomes more acute when the applications involve manipulating large quantities of data. The tricky part is how these systems scale to support such data collections, which might consist of video security footage, life sciences data collections and other research projects.

 

Storage systems from Qumulo integrate well into a variety of existing environments, including those involving multiple storage protocols and file systems. The company supports a wide variety of use cases that allow for scaling up and out to handle petabyte-scale data quantities. Qumulo can run at the network edge, in the data center and in various cloud environments. Its systems run on Supermicro’s all-NVMe (non-volatile memory express) platform, the highest-performing protocol for manipulating data stored on SSDs. The servers are built on 24-core 2.8 GHz AMD EPYC™ processors.


 

Qumulo provides built-in near real-time data analytics that let IT administrators predict storage trends and better manage storage capacity so that they can proactively plan and optimize workflows.

 

The product handles seamless file and object data storage, is hardware agnostic, and supports a single data namespace and burstable computing running on the three major cloud providers (AWS, Google and Azure) with nearly instant data replication. Its distributed file system is designed to handle billions of files and works equally well on both small and large file sizes.

 

Qumulo also works on storage clusters, such as those created with Supermicro AS-1114S servers, which can accommodate up to 150TB per storage node. Qumulo Shift for Amazon S3 is a feature that lets users copy data to the native Amazon S3 format for easy access to AWS services that are not available in an on-prem data center.
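Because Shift writes native S3 objects, any standard AWS SDK can consume the replicated data directly. A minimal sketch with boto3; the bucket and prefix names are placeholders.

```python
# Sketch: read objects that Qumulo Shift replicated to Amazon S3. Shift
# writes native S3 objects, so standard boto3 calls work; the bucket
# and prefix names below are placeholders.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="my-qumulo-shift-bucket",
                               Prefix="projects/footage/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])   # each entry is a plain S3 object
```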

For more information, see the white paper on the Supermicro and Qumulo High-Performance File Data Management and Distributed Storage solution, powered by AMD EPYC™ processors.

Red Hat’s OpenShift Runs More Efficiently with Supermicro’s SuperBlade® Servers

The Supermicro SuperBlade's advantage for the Red Hat OCP environment is that it supports a higher-density infrastructure and lower-latency network configuration, along with benefits from reduced cabling, power and shared cooling features. SuperBlades feature multiple AMD EPYC™ processors using fast DDR4 3200MHz memory modules.


Red Hat’s OpenShift Container Platform (OCP) provides enterprise Kubernetes bundled with DevOps pipelines. It automates builds and container deployments and lets developers focus on application logic while leveraging best-in-class enterprise infrastructure.

 

OpenShift supports a broad range of programming languages, web frameworks, databases, connectors to mobile devices and external back ends. OCP supports cloud-native, stateless applications as well as traditional applications. Because of its flexibility and utility in running advanced applications, OCP has become one of the go-to platforms for high-performance computing.

 

Red Hat’s OCP comes in several deployment packages, including as a managed service running on the major cloud platforms, as virtual machines, and on “bare metal” servers, meaning a user installs all the software needed for the platform and is the sole tenant of the server.

 

It’s that last use case in which Supermicro’s SuperBlade servers are especially useful. Their advantage is that they support a higher-density infrastructure and lower-latency network configuration, along with benefits from reduced cabling, power and shared cooling features.

 

The SuperBlade comes in an 8U chassis with room for up to 20 hot-pluggable nodes (processor, network and storage) across more than a dozen models that support serial-attached SCSI, ordinary SATA drives and GPU modules. It sports multiple AMD EPYC™ processors using fast DDR4 3200MHz memory modules.

A chief advantage of the SuperBlade is that it can support a variety of higher-capacity OCP workload configurations within a single server chassis. This is critical because OCP requires a variety of server roles to deliver its overall functionality, and having these roles working inside one chassis yields performance and latency benefits. For example, you could partition a SuperBlade’s 20 nodes into various OCP components, such as administrative, management, storage, worker, infrastructure and load-balancer nodes, all operating within a single chassis. For deeper detail about running OCP on the SuperBlade, check out this Supermicro white paper.
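A cluster partitioned this way is easy to inspect, since OpenShift marks each node's role with a node-role.kubernetes.io/* label. A minimal sketch with the official Kubernetes Python client, assuming a kubeconfig with access to the cluster:

```python
# Sketch: group an OpenShift cluster's nodes by role using the official
# Kubernetes Python client. Assumes a kubeconfig with cluster access.
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()
nodes_by_role = defaultdict(list)

for node in client.CoreV1Api().list_node().items:
    for label in (node.metadata.labels or {}):
        # OpenShift encodes roles as labels such as
        # node-role.kubernetes.io/worker
        if label.startswith("node-role.kubernetes.io/"):
            role = label.split("/", 1)[1]
            nodes_by_role[role].append(node.metadata.name)

for role, names in sorted(nodes_by_role.items()):
    print(f"{role}: {len(names)} node(s) -> {', '.join(names)}")
```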
