Sponsored by AMD and Supermicro

Performance Intensive Computing

Capture the full potential of IT

How Ahrefs speeds SEO services with huge compute, memory & storage


Ahrefs, a supplier of search engine optimization tools, needed more robust tech to serve its tens of thousands of customers and crawl billions of web pages daily. The solution: More than 600 Supermicro Hyper servers powered by AMD processors and loaded with huge memory and storage.


Wondering how to satisfy customers who need big—really big—compute and storage? Take a tip from Ahrefs Ltd.

This company, based in Singapore, is a 10-year-old provider of search engine optimization (SEO) tools.

Ahrefs has a web crawler that processes up to 8 billion pages a day. That makes Ahrefs one of the world’s biggest web crawlers, up there with Google and Bing, according to internet hub Cloudflare Radar.

What’s more, Ahrefs’ business has been booming. The company now has tens of thousands of users.

That’s good news. But it also meant that to serve these customers, Ahrefs needed more compute power and storage capacity. And not just a little more. A lot.

Ahrefs also realized that its current generation of servers and CPUs couldn’t meet this rising demand. Instead, the company needed something new and more powerful.

Gearing up

For Ahrefs, that something new is its recent order of more than 600 Supermicro servers. Each system is equipped with dual 4th Gen AMD EPYC 9004 Series processors, a whopping 1.5 TB of DDR5 memory, and a massive 120+ TB of storage.

More specifically, Ahrefs selected Supermicro’s AS-2125HS-TNR servers. They’re powered by dual AMD EPYC 9554 processors, each with 64 cores and 128 threads, running at a base clock speed of 3.1 GHz and an all-core boost speed of 3.75 GHz.

For Ahrefs’ configuration, each Supermicro server also contains eight NVMe 15.3 TB SSD storage devices, for a storage total of 122 TB. Also, each server communicates with the Ahrefs data network via two 100 Gbps ports.
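Those figures make for a quick back-of-envelope sketch of the per-server workload. The page count and fleet size come from the article; spreading the crawl evenly across all servers is a simplifying assumption for illustration:

```python
# Back-of-envelope crawl math using figures from the article.
# The even split across servers is an assumption, not an Ahrefs disclosure.
PAGES_PER_DAY = 8_000_000_000   # stated daily crawl volume
SERVERS = 600                   # approximate fleet size
SECONDS_PER_DAY = 86_400

pages_per_server_per_day = PAGES_PER_DAY / SERVERS
pages_per_server_per_sec = pages_per_server_per_day / SECONDS_PER_DAY

print(f"{pages_per_server_per_day:,.0f} pages per server per day")
print(f"{pages_per_server_per_sec:.0f} pages per server per second")
```

At roughly 150 pages per second on every node, around the clock, it's easy to see why per-server memory capacity and NVMe bandwidth matter as much as raw core count.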

Did it work?

Yes. Ahrefs’ response times got faster, even as its volume increased. The company can now offer more services to more customers. And that means more revenue.

Ahrefs’ founder and CEO, Dimitry Gerasimenko, puts it this way: “Supermicro’s AMD-based servers were an ideal fit for our business.”

How about you? Have customers who need really big compute and storage? Tell them about Ahrefs.

 


Gaming as a Service gets a platform boost

Gaming as a Service gets a boost from Blacknut’s new platform for content providers that’s powered by Supermicro and Radian Arc.


Getting into Gaming as a Service? Cloud gaming provider Blacknut has released a new platform for content providers that’s powered by Supermicro and Radian Arc.

This comprehensive edge and cloud architecture provides content providers worldwide with bundled and fully managed game licensing, in-depth content metadata and a global hybrid-cloud solution.

If you’re not into gaming yet, you might want to be. Interactive entertainment and game streaming are on the rise.

Last year, an estimated 30 million paying users spent a combined $2.4 billion on cloud gaming services, according to research firm Newzoo. Looking ahead, Newzoo expects this revenue to more than triple by 2025, topping $8 billion. That would make the GaaS market an attractive investment for content providers.

What’s more, studies show that Gen Z consumers (aged 11 to 26 years old) spend over 12 hours a week playing video games. That’s more time than they spend watching TV, by about 30 minutes a week.

Paradigm shift

This data could signal a paradigm shift that challenges the dominance of traditional digital entertainment. That could include subscription video on demand (SVOD) such as Netflix as well as content platforms including ISPs, device manufacturers and media companies.

To help content providers capture younger, more tech-savvy consumers, Blacknut, Supermicro and Radian Arc have teamed up to deploy a fully integrated GaaS platform. Blacknut, based in France, offers cloud-based gaming. Australia-based Radian Arc provides digital infrastructure and cloud game technology.

The system offers IT hardware solutions at the edge and the core, system management software and extensive IP. Blacknut's considerable collection includes a catalog of more than 600 games, ranging from AAA titles to indies.

Blacknut is also providing white-glove services that include:

  • Onboarding of game wish lists and help establishing exclusive publisher agreements
  • Support for Bring Your Own Game (BYOG) and freemium game models
  • Assistance with the development of IP-licensed games designed in partnership with specialized studios
  • Marketing support to help providers develop go-to-market plans and manage subscriber engagement

The tech behind GaaS

Providers of cloud-based content know all too well the challenge of providing customers with high-availability, low-latency service. The right technology is a carefully choreographed ballet of hybrid cloud infrastructure, modern edge architecture and the IT expertise required to make it all run smoothly.

At the edge, Blacknut’s GaaS offering operates on Radian Arc’s GPU Edge Infrastructure-as-a-Service platform powered by Supermicro GPU Edge Infrastructure solutions.

These hardware solutions include flexible GPU servers featuring 6 to 8 directly attached GPUs and AMD EPYC processors. Also on board are cloud-optimized, scalable management servers and feature-rich ToR networking switches.

Combined with Blacknut’s public and private cloud infrastructure, an impressive array of hardware and software solutions come together. These can create new ways for content providers to quickly roll out their own cloud-gaming products and capture additional market share.

Going global

The Blacknut GaaS platform is already live in 45 countries and is expanding via distribution partnerships with over-the-top providers and carriers.

The solution can also be pre-embedded in set-top boxes and TV ecosystems. Indeed, it has already found its way onto such marquee devices as Samsung Gaming Hub, LG Gaming Shelf and Amazon Fire TV.

To learn more about the Blacknut GaaS platform powered by Radian Arc and Supermicro, check out the new solution brief.

 


How to help your customers invest in AI infrastructure

The right AI infrastructure can help your customers turn data into actionable information. But building and scaling that infrastructure can be challenging. Find out why—and how you can make it easier. 


Get smarter about helping your customers create an infrastructure for AI systems that leverage their data into actionable information.

A new Supermicro white paper, Investing in AI Infrastructure, shows you how.

As the paper points out, creating an AI infrastructure is far from easy.

For one, there’s the risk of underinvesting. Market watcher IDC estimates that AI will soon represent 10% to 15% of the typical organization’s total IT infrastructure. Organizations that fall short here could also fall short on delivering critical information to the business.

Sure, your customers could use cloud-based AI to test and ramp up. But cloud costs can rise fast. As The Wall Street Journal recently reported, some CIOs have even established internal teams to oversee and control their cloud spending. That makes an on-prem AI data center a viable option.

“Every time you run a job on the cloud, you’re paying for it,” says Ashish Nadkarni, general manager of infrastructure systems, platforms and technologies at IDC. “Whereas on-premises, once you buy the infrastructure components, you can run applications multiple times.”

Some of those cloud costs come from data-transfer fees. First, data needs to be moved into a cloud-based AI system; this is known as ingress. And once the AI's work is done, you'll want to transfer the new data somewhere else for storage or additional processing; that's known as egress.

Cloud providers typically charge 5 to 20 cents per gigabyte of egress. For casual users, that may be no big deal. But for an enterprise using massive amounts of AI data, it can add up quickly.
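The arithmetic behind "adds up quickly" is easy to sketch. This hypothetical example uses the 5-to-20-cents-per-gigabyte range quoted above; the 50 TB monthly volume is an assumption for illustration, not a figure from the article:

```python
# Hypothetical monthly egress bill, using the 5-20 cents/GB range above.
def egress_cost_usd(gigabytes: float, rate_per_gb: float) -> float:
    """Simple linear egress pricing; real cloud pricing tiers vary."""
    return gigabytes * rate_per_gb

monthly_tb = 50                  # assumed AI output moved out of the cloud monthly
monthly_gb = monthly_tb * 1_000
low = egress_cost_usd(monthly_gb, 0.05)
high = egress_cost_usd(monthly_gb, 0.20)
print(f"Egress for {monthly_tb} TB/month: ${low:,.0f} to ${high:,.0f}")
```

Even at the low end, that's thousands of dollars a month just to move data out, before any compute charges.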

4 questions to get started

But before your customer can build an on-prem infrastructure, they’ll need to first determine their AI needs. You can help by gathering all stakeholders and asking 4 big questions:

  • What are the business challenges we’re trying to solve?
  • Which AI capabilities and capacities can deliver the solutions we’ll need?
  • What type of AI training will we need to deliver the right insights from our data?
  • What software will we need?

Keep your customer's context in mind, too. That includes their industry; after all, a retailer has different needs than a manufacturer. It also includes their current technology: a company with extensive edge computing has different data needs than one without edge devices.

“It’s a matter of finding the right configuration that delivers optimal performance for the workloads,” says Michael McNerney, VP of marketing and network security at Supermicro.

Help often needed

One example of an application-optimized system for AI training is the Supermicro AS-8125GS-TNHR, which is powered by dual AMD EPYC 9004 Series processors. Another option is Supermicro's Universal GPU systems, which support AMD's Instinct MI250 accelerators.

The systems' modularized architecture helps standardize AI infrastructure design for scalability and power efficiency, even under the complex workload and workflow requirements enterprises face across AI, data analytics, visualization, simulation and digital twins.

Accelerators work with traditional CPUs to enable greater computing power, yet without slowing the system. They can also shave milliseconds off AI computations. While that may not sound like much, over time those milliseconds “add up to seconds, minutes, hours and days,” says Matt Kimball, a senior analyst at Moor Insights & Strategy.
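To make Kimball's point concrete, here's a hypothetical tally. Both the 5 ms saving and the 100-million-operations-per-day volume are illustrative assumptions, not figures from the article:

```python
# Illustrative only: how per-operation milliseconds compound at scale.
MS_SAVED_PER_RUN = 5          # assumed saving per AI computation from an accelerator
RUNS_PER_DAY = 100_000_000    # assumed daily AI operation volume

seconds_saved = MS_SAVED_PER_RUN / 1000 * RUNS_PER_DAY
hours_saved = seconds_saved / 3600
print(f"{hours_saved:,.0f} compute-hours reclaimed per day")
```

Under those assumptions, a few milliseconds per operation becomes well over a hundred compute-hours every day.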

Roll with partner power

To scale AI across an enterprise, you and your customers will likely need partners. Scaling workloads for critical tasks isn’t easy.

For one, there’s the challenge of getting the right memory, storage and networking capabilities to meet the new high-performance demands. For another, there’s the challenge of finding enough physical space, then providing the necessary electric power and cooling.

Tech suppliers including Supermicro are standing by to offer you agile, customizable and scalable AI architectures.

Learn more from the new Supermicro white paper: Investing in AI Infrastructure.

 


Do you know why 64 cores really matters?

In a recent test, Supermicro workstations and servers powered by 3rd gen AMD Ryzen Threadripper PRO processors ran engineering simulations nearly as fast as a dual-processor system, but needed only two-thirds as much power.


More cores per CPU sounds good, but what does it actually mean for your customers?

In the case of certain Supermicro workstations and servers powered by 3rd gen AMD Ryzen Threadripper PRO processors, it means running engineering simulations with dual-processor performance from a single-socket system. And with further cost savings, since power consumption is roughly a third lower.

That’s according to tests recently conducted by MVConcept, a consulting firm that provides hardware and software optimizations. The firm tested two Supermicro systems, the AS-5014A-TT SuperWorkstation and AS-2114GT-DPNR server.

A solution brief based on MVConcept’s testing is now available from Supermicro.

Test setup

For these tests, the Supermicro server and workstation were both tested in two AMD configurations:

  • One with the AMD Ryzen Threadripper PRO 5995WX processor
  • The other with an older, 2nd gen AMD Ryzen Threadripper PRO 3995WX processor

In the tests, both AMD processors were used to run 32-core as well as 64-core operations.

The Supermicro systems were tested running Ansys Fluent, fluid simulation software from Ansys Inc. Fluent models fluid flow, heat, mass transfer and chemical reactions. Benchmarks for the testing included aircraft wing, oil rig and pump.

The results

Among the results: The Supermicro systems delivered nearly dual-CPU performance with a single processor, while also consuming less electricity.

What’s more, the 3rd generation AMD 5995WX CPU delivered significantly better performance than the 2nd generation AMD 3995WX.

Systems with larger caches saw the biggest performance improvements. For example, a system with 256MB of L3 cache outperformed one with just 128MB.

BIOS settings proved especially important for getting optimal performance from the AMD Ryzen Threadripper PRO when running the tested applications. Specifically, Supermicro recommends using NPS=4 and SMT=OFF when running Ansys Fluent with AMD Ryzen Threadripper PRO. (NPS = NUMA nodes per socket, where NUMA is non-uniform memory access; SMT = simultaneous multithreading.)

Another cool factor involves taking advantage of the Supermicro AS-2114GT-DPNR server's two hot-pluggable nodes. First, one node can be used to pre-process the data. Then the other node can run Ansys Fluent.

Put it all together, and you get a powerful takeaway for your customers: These AMD-powered Supermicro systems offer data-center power on both the desktop and server rack, making them ideal for SMBs and enterprises alike.


 


Try before you buy with Supermicro’s H13 JumpStart remote access program

The Supermicro H13 JumpStart Remote Access program lets you and your customers test data-center workloads on Supermicro systems based on 4th Gen AMD EPYC 9004 Series processors. Even better, the program is free.


You and your customers can now try out systems based on 4th Gen AMD EPYC 9004 Series processors at no cost with the Supermicro remote access program.

Called H13 JumpStart, the free program offers remote access to Supermicro’s top-end H13 systems.

Supermicro’s H13 systems are designed for today’s advanced data-center workloads. They feature 4th Gen AMD EPYC 9004 Series processors with up to 96 Zen 4 cores per socket, DDR5 memory, PCIe 5.0, and support for Compute Express Link (CXL) 1.1+ peripherals.

The H13 JumpStart program lets you and your customers validate, test and benchmark workloads on either of two Supermicro systems:

  • Hyper AS-2025HS-TNR: Features dual AMD EPYC processors, 24 DIMMs, up to 3 accelerator cards, an AIOM network adapter, and 12 hot-swap NVMe/SAS/SATA drive bays.

  • CloudDC AS-2015CS-TNR: Features a single AMD processor, 12 DIMMs, 4 accelerator cards, dual AIOM network adapters, and a 240GB solid state drive.

Simple startup

Getting started with Supermicro’s H13 JumpStart program is simple. Just sign up with your name, email and a brief description of what you plan to do with the system.

Next, Supermicro will verify your information and your request. Assuming you qualify, you’ll receive a welcome email from Supermicro, and you’ll be scheduled to gain access to the JumpStart server.

Next, you’ll be given a unique username, password and URL to access your JumpStart account.

Run your test. Once you’re done, Supermicro will also ask you to complete a quick survey for your feedback on the program.

Other details

The JumpStart program does have a few limitations. One is the number of sessions you can have open at once. Currently, it’s limited to 1 VNC (virtual network computing), 1 SSH (secure shell), and 1 IPMI (intelligent platform management interface) session per user.

Also, the JumpStart test server is not directly addressable to the internet. However, the servers can reach out to the internet to get files.

You should test with JumpStart using anonymized data only. That’s because the Supermicro server’s security policies may differ from those of your organization.

But rest assured, once you’re done with your JumpStart demo, the server storage is manually erased, the BIOS and firmware are reflashed, and the OS is re-installed with new credentials. So your data and personal information are completely removed.

Get started

Ready to get a jump-start with Supermicro’s H13 JumpStart Remote Access program? Apply now to secure access.

Want to learn more about Supermicro’s H13 system portfolio? Check out a 5-part video series featuring Linus Sebastian of Linus Tech Tips. He takes a deep dive into how these Supermicro systems run faster and greener. 

 


How rackscale integration can help your customers get productive faster

Supermicro’s rack integration and deployment service can help your customers get productive sooner.

 


How would your key data-center customers like to improve their server performance, speed their rate of innovation, and lower their organization’s environmental impact—all while getting productive sooner?

Those are among the key benefits of Supermicro's rack integration and deployment service. It's basically a one-stop shop: a defined process, run by experts, for designing and building an effective, efficient cloud or enterprise hardware solution.

Supermicro’s dedicated team can provide everything from early design to onsite integration. That includes design, assembly, configuration, testing and delivery.

Hardware covered by Supermicro’s rack integration service includes servers, storage, switches and rack products. That includes systems based on the latest 4th Generation AMD EPYC server processors. Supermicro’s experts can also work closely with your customer to design a test plan that includes application loading, performance tuning and testing.

All these can be used for a wide range of optimized solutions. These include AI and deep learning, big data and Hadoop refreshes, and vSAN.

Customers of Supermicro’s rackscale systems can also opt for liquid cooling. This can reduce your customer’s operating expenses by more than 40%. And by lowering fan speeds, liquid cooling can further reduce their power needs, delivering a PUE (power usage effectiveness metric) of close to 1.0. All that typically provides an ROI in just 1 year, according to Supermicro.

Five-phase integration

When your customers work with Supermicro on rack integration, they’ll get support through 5 phases:

  • Design: Supermicro learns your customer’s business problems and requirements, develops a proof-of-concept to validate the solution, then selects the most suitable hardware and works with your customer on power requirements and budgets. Then it creates a bill of materials, followed by a detailed rack-level engineering diagram.
  • Assembly: Supermicro technicians familiar with the company’s servers assemble the system, either on your customer’s site or pre-shipment at a Supermicro facility. This includes all nodes, racks, cabling and third-party equipment.
  • Configuration: Each server’s BIOS is updated, optimized and tested. Firmware gets updated, too. OSes and custom images are pre-installed or deployed to specific nodes as needed.
  • Testing: This includes a performance analysis, a check for multi-vendor compatibility, and full rack burn-in testing for a standard 8 hours.
  • Logistics: Supermicro ships the complete system to your customer’s site, can install it, and provides ongoing customer service.

Big benes

For your customers, the benefits of working with Supermicro and AMD can include better performance per watt and per dollar, faster time to market with IT innovation, a reduced environmental impact, and lower costs.

Further, once the system is installed, Supermicro's support can significantly reduce lead times to fix system issues. The company keeps the whole process from L6 to L12 (board-level assembly through full rack integration) in-house, and it maintains a vast inventory of spare parts on campus.

Wherever your customers are located, Supermicro likely has an office nearby. With a global footprint, Supermicro operates across the U.S., EMEA and Taiwan. Supermicro has invested heavily in rack-integration testing facilities, too. These centers are now being expanded to test rack-level air and liquid cooling.

For your customers with cloud-based systems, there are additional benefits. These include optimizing the IT environment for their clouds, and meeting co-location requirements.

There’s business for channel partners, too. You can add specific software to the rack system. And you can work with your customer on training and more.


 


AMD-based servers support enterprise applications — and break OLTP records

AMD EPYC server processors are designed to help your data-center customers get their workloads done faster and with fewer computing resources.

 


AMD EPYC™ server processors are designed to help your data-center customers get their workloads done faster and with fewer computing resources.

AMD EPYC server processors offer a consistent set of features across a range of choices from 8 to 96 cores. This balanced set of resources found in AMD EPYC processors lets your customers right-size server configurations to fit their workloads.

What’s more, these AMD CPUs include models that offer high per-core performance optimized for frequency-sensitive and single-threaded workloads. This can help reduce the TCO for core-based software licenses.

AMD introduced the 4th Generation AMD EPYC processors in late 2022. The first of this generation are the AMD EPYC 9004 series CPUs. They’ve been designed to support performance and efficiency, help keep data secure, and use the latest industry features and architectures.

AMD continues to ship and support the previous 3rd Generation AMD EPYC 7002 and 7003 series processors. These processors power servers that are now available from a long list of leading hardware suppliers, including Supermicro.

Record-breaking

Good as all that may sound, you and your customers still need hard evidence that AMD processors can truly speed up their enterprise applications. Well, a new independent test of AMD-based Supermicro servers has provided just that.

The test was performed by the Telecommunications Technology Association (TTA), an IT standardization association based in Seongnam, South Korea. The TTA tested several Supermicro database and web servers powered by 3rd Gen AMD EPYC 7343 processors.

The results: The Supermicro servers set a world performance record for a non-cluster system, with 507,802 transactions per minute (tpmC).

That test was conducted using the TPC-C benchmark, which measures a server's online transaction processing (OLTP) performance. The tpmC metric measures how many new-order transactions a system can generate in a minute while executing business transactions under specific response-time requirements.

What’s more, when compared with servers based on the previous 2nd Gen AMD EPYC processors, the newer Supermicro servers were 33% faster, as shown in the chart below:
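Those two figures also let you back out the previous generation's approximate result. The baseline below is derived arithmetic for illustration, not a number published by the TTA:

```python
# Derive the implied 2nd-gen result from the article's two figures.
NEW_TPMC = 507_802   # 3rd Gen AMD EPYC Supermicro result
SPEEDUP = 1.33       # "33% faster" than the previous generation

implied_baseline = NEW_TPMC / SPEEDUP
print(f"Implied 2nd-gen result: {implied_baseline:,.0f} tpmC")
```

That puts the older systems somewhere in the neighborhood of 380,000 tpmC for the same workload.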

DATA: Telecommunications Technology Association

All that leads the TTA to conclude that Supermicro servers powered by the latest AMD processors “empower organizations to create deployments that deliver data insights faster than ever before.”


Note:

1. https://www.tpc.org/1809

 


Protect Customer Data Centers with AMD Infinity Guard

AMD’s 4th Gen EPYC server processors can keep your customers safe with Infinity Guard, a set of innovative and powerful security features.


When AMD released its 4th generation EPYC server processors, the company also doubled down on its commitment to enterprise data-center security. AMD did so with a set of security features it calls AMD Infinity Guard.

The latest EPYC processors—previously code-named Genoa—include an array of silicon-level security assets designed to resist increasingly sophisticated cyberattacks.

CIOs and IT managers who deploy AMD’s latest security tech may sigh with relief as they sidestep mounting threats such as ransomware, malicious virtual machines (VMs) and hypervisor-based attacks like data replay and memory re-mapping.

Growing concerns

Hackers are relentless. Beguiled by the siren song of easy riches through cybercrime, they spend countless hours devising new ways to exploit even the slightest hardware vulnerability. The bigger the organization, the more money these cyber criminals can extort—which is why they often target enterprise data centers.

AMD took this into account when designing the EPYC server processor series. The company had three goals: to address hardware-level vulnerabilities, eliminate likely threat vectors, and deny hackers access to any surface they could exploit.

Perhaps just as vital, AMD set a goal of addressing security concerns without impacting system performance. This is especially important for modern application workloads that require both high performance and low latency.

For instance, organizations that offer streaming content and mass storage could be just as easily crushed by glitches and malfunctions as they could by a significant security breach.

Security tech within

AMD is taking a decidedly ain’t-messin’-around approach to its latest security tech. Rather than paying lip service to IT Ops’ concerns, AMD engineers went deep down into the heart of their processor architecture to identify and remedy threat vectors.

The impressive security portfolio includes 4 primary tools to guard against threats:

  • Secure Encrypted Virtualization: SEV provides individual encryption for every virtual machine on a given server. Each VM is assigned one of up to 509 unique encryption keys known only to the processor. This protects data confidentiality in the event that a malicious VM breaches a system’s memory, or a compromised hypervisor reaches into a guest VM.
  • Secure Memory Encryption: Full memory encryption protects against internal and physical attacks such as the dreaded cold boot attack. There, an attacker with physical access to a computer conducts a memory dump by performing a hard reset of the target machine. SME ensures that the data remains encrypted even if the main memory is physically removed from a server.
  • Secure Boot: To help mitigate the threat of malware, AMD EPYC processors employ an embedded security checkpoint called a “root of trust.” This validates the initial BIOS software boot without corruption.
  • Shadow Stack: It may sound like a Marvel superhero, but in fact this guards against threat vectors such as return-oriented programming (ROP) attacks. Shadow Stack does this by compiling a record of return addresses so a comparison can be made to help ensure software-code integrity.

A well-rounded engine

A modern server processor serves many masters. While addressing security concerns is vitally important, so are ensuring high performance, impressive energy efficiency and a decent return on investment (ROI).

Your customers may appreciate knowing that AMD’s latest EPYC processor series addresses these factors. Rather than focusing solely on headline-grabbing tech like speeds & feeds, AMD took a more holistic approach, addressing many issues endemic to modern data-center operations.

EPYC CPUs also boast broad ecosystem support. For AMD, this means fostering collaboration with a network of solution providers. And for your customers, this means worry-free migration and seamless integration with their existing x86 infrastructures.

Your data-center customers are probably concerned about security. Who isn’t, these days? So talk to them about AMD Infinity Guard. After all, a secure customer is a happy customer.

 


For Greener Data Centers, Look to Energy-Efficient Components

Energy-efficient systems can help your customers lower their data-center costs while supporting a cleaner environment. 


Creating a more energy-efficient data center isn’t only good for the environment, but also a great way for your customers to lower their total cost of ownership (TCO).

In many organizations, the IT department is the single biggest consumer of power. Data centers are filled with power-hungry components, including servers, storage devices, air conditioning and cooling systems.

A large data center can use anywhere from 2 to 4 terawatt-hours (TWh) of electricity per year. Collectively, data centers account for nearly 3% of total global energy use, according to Supermicro. Looking ahead, that share is forecast to reach as high as 8% by 2030.

One important measure of data-center efficiency is Power Usage Effectiveness (PUE). It's calculated by taking the total electricity used by a data center and dividing it by the electricity used by the center's IT components alone. The difference is how much electricity goes to cooling, lighting and other non-IT components.

The lower a data center's PUE, the better; the theoretical ideal is 1.0, meaning every watt goes to IT equipment. The average PUE worldwide last year was 1.55, says the Uptime Institute, a benchmarking organization. That marked a slight improvement over 2021, when the average PUE was 1.57.
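In code, the PUE calculation described above is just a ratio. The kilowatt figures here are hypothetical, chosen to land on the worldwide average:

```python
# PUE = total facility power / IT-equipment power (same units for both).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Hypothetical facility drawing 1,550 kW overall, 1,000 kW of it for IT gear:
print(f"PUE = {pue(1550, 1000):.2f}")
```

In that sketch, the 550 kW gap between total draw and IT draw is what cooling, lighting and other overhead consume.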

Costly power

All that power is expensive, too. Among the short list of ways your customers can lower that cost, moving to energy-efficient server CPUs is especially effective.

For example, AMD says that 11 servers based on its 4th Gen AMD EPYC processors can use up to 29% less power per year than the 17 servers based on competing CPUs that would be needed to handle the same workload volume. And that can help reduce an organization's capital expenditures by up to 46%, according to AMD.

As that example shows, CPUs with more cores can also reduce power needs by handling the same workloads with fewer physical servers.

Yes, a high-core CPU typically consumes more power than one with fewer cores, especially when run at the same frequency. But by handling more workload volume, a high-core CPU lets your customer do the same or more work with fewer racks. That can also reduce the real estate footprint and lower the need for cooling.
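The consolidation math in AMD's example above works out like this. The server counts come from the article; the percentage is derived:

```python
# Fewer servers for the same work: AMD's 17-vs-11 consolidation example.
OLD_SERVERS = 17   # servers with competing CPUs (from the article)
NEW_SERVERS = 11   # 4th Gen AMD EPYC servers for the same workload

reduction = 1 - NEW_SERVERS / OLD_SERVERS
print(f"Server-count reduction: {reduction:.0%}")
```

A roughly one-third cut in server count is what drives the knock-on savings in rack space, cooling and real estate.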

Greener tactics

Other tactics can contribute to a greener data center, too.

One approach involves what Supermicro calls a “disaggregated” server architecture. Essentially, this means that a server’s subsystems—including its CPU, memory and storage—can be upgraded without having to replace the entire chassis. For a double benefit, this lowers TCO while reducing E-waste.

Another approach involves designing servers that can share certain resources, such as power supplies and fans. This can lower power needs by up to 10%, Supermicro says.

Yet another approach is designing servers for maximum airflow, another Supermicro feature. This allows the CPU to operate at higher temperatures, reducing the need for air cooling.

It can also lower the load on a server’s fans. That’s a big deal, because a server’s fans can consume up to 15% of its total power.

Supermicro is also designing systems for liquid cooling. This allows a server’s fans to run at lower speeds, reducing their power needs. Liquid cooling can also lower the need for air conditioning, which in turn lowers PUE.

Liquid cooling works much like a car’s radiator system: a closed loop in which an external “chiller” cools the liquid, which is then pumped through one or more pipes running over a server’s CPU and GPU. Heat from those components warms the liquid, and the now-hot liquid returns to the chiller to be cooled and recirculated.

Green vendors

Leading suppliers can help you help your customers go green.

AMD, for one, has pledged to deliver a 30x increase in the energy efficiency of its processors and accelerators by 2025. That goal translates into a roughly 97% reduction in energy use per computation.

Similarly, Supermicro is working hard to help customers create green data centers. The company participates in industry consortia focused on new cooling alternatives and is a leader in the Liquid Cooling Standing Working Group of The Green Grid, a membership organization that fosters energy-efficient data centers.

Supermicro also applies its disaggregated rack-scale design approach across its product line, delivering higher efficiency at lower cost.

What are Your Server Customers Looking For? It Depends on Who They Are


While hyperscalers and enterprises both buy servers powered by the latest CPUs, their purchase decisions are based on very different criteria. Knowing who you’re selling to, and what they’re looking for, can make all the difference.

Think all buyers of servers powered by the latest-generation CPUs are looking for the same thing? Think again.
 
It pays to think of these customers as falling into one of two major groups. On the one hand are the so-called hyperscalers, those large providers of public cloud services. On the other are CIOs and other IT executives at large enterprises who are looking to improve their on-premises data centers. 
 
Customers in both groups are serious buyers of the latest, greatest servers. But their buying criteria? Two very different things.
 
Hyperscalers: TCO, x86, VMs
 
When it comes to cutting-edge servers, hyperscalers including Amazon Web Services (AWS), Microsoft Azure and Google Cloud are attracted to the cost advantage.
 
As Mark Papermaster, chief technology officer at AMD, explained in a recent technology conference sponsored by Morgan Stanley, “For the hyperscalers, new server processors are an easy transition. Because they’re massive buyers, hyperscalers see the TCO [total cost of ownership] advantage.”
 
Hyperscalers also like the fact that most if not all new server CPUs still adhere to the x86 family of instruction-set architectures. “For their workloads,” Papermaster said, “it lifts and shifts.”
 
Big hyperscalers are also big implementers of containers and virtual machines. Those workloads map efficiently onto today’s high-density CPUs: the higher the core count, the more VMs a single server can support.
 
For example, AMD’s 4th gen EPYC processors (formerly code-named Genoa) pack in 96 cores, or 50% more than the previous generation. That kind of density suits hyperscalers well, because they have such extensive inventories of VMs.
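The density argument is easy to quantify. This sketch uses hypothetical VM sizing (4 vCPUs per VM, one vCPU per core), purely to illustrate how core count drives VM capacity:

```python
def vms_per_server(sockets: int, cores_per_socket: int, vcpus_per_vm: int,
                   overcommit: float = 1.0) -> int:
    """How many VMs fit on one server at a given vCPU-to-core ratio."""
    return int(sockets * cores_per_socket * overcommit // vcpus_per_vm)

# Hypothetical dual-socket servers, 4-vCPU VMs, no overcommit:
prev_gen = vms_per_server(sockets=2, cores_per_socket=64, vcpus_per_vm=4)
high_density = vms_per_server(sockets=2, cores_per_socket=96, vcpus_per_vm=4)
print(prev_gen, high_density)  # 32 48
```

A 50% jump in cores per socket yields a 50% jump in VMs per server, which is exactly why density matters to operators running huge VM fleets.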
 
Enterprise CIOs: different priorities
 
For CIOs and other enterprise IT executives, server priorities and buying criteria are quite different. These buyers are looking mainly for ease of migration, broad ecosystem support, robust security and energy efficiency (which can also be a component of TCO). 
 
CIOs also need to keep their CFOs and boards happy, so they’re also looking for a clear and easily explainable return on investment (ROI). They may also need to tie this calculation to their organization’s strategic goals. For example, if a company were looking to increase its market share, the CIO might want to explain how purchasing new servers could help achieve that goal. 
 
One relatively new and increasingly important priority is energy efficiency. Enterprises increasingly need to demonstrate their support for “green” initiatives. One way a company can do that is by showing how its computing technology gets more done with less electric power.
 
Also, many data centers are already receiving as much electric power as they’re configured for. In other words, they can’t add power to get more work done. But they can add energy-efficient servers able to get more work done with the same or even less power than the systems they replace.
 
A third group, too
 
During his recent Morgan Stanley presentation, Papermaster of AMD also discussed a third group of server buyers: organizations with hybrid IT environments, spanning both cloud and on-premises infrastructure, that want the ability to move workloads back and forth. Essentially, this means mimicking the cloud in an on-prem environment.
 
Looking ahead, Papermaster discussed a forthcoming EPYC processor, code-named Bergamo, which he said is “right on track” to ship in this year’s first half. 
 
The new CPU will be aimed at cloud-native applications that need high levels of both throughput and per-socket performance. As previously announced, Bergamo will have up to 128 “Zen 4c” cores, and will come with the same software and security features as Genoa. 
 
“We listen to our customers,” Papermaster said, “and we see where workloads are going.” That’s a good practice for channel partners, too.
 