Tech Explainer: What’s the deal with AMD’s 3D V-Cache technology?

AMD’s unique 3D V-Cache technology offers a marked performance boost. By stacking the cache vertically, AMD produces a 3x increase in the amount of a CPU’s L3 cache. This enables faster calculations and a noticeable increase in overall processor speed. 


The more cache, the merrier.

The cache’s job is to keep as much frequently used data as possible directly on the CPU die. And the cache’s proximity to the processor makes it an ideal data-delivery system.

That might not be much of a sales pitch to folks who ask comparatively little of their devices. But when it comes to high-performance data centers, extra on-die storage can make a world of difference.

That’s because the faster and more efficiently a CPU can access data, the quicker it can complete the complex calculations required to return the requested result.

The trouble is, the usefulness of a cache has so far been limited by its small storage capacity. Unlike other computer storage devices, such as RAM and SSDs, your average cache is relatively tiny.

In a modern computer, it’s not uncommon to get 1 to 4 terabytes of storage space. But on that same machine, the processor cache would typically hold only 50 to 60 megabytes.

It stands to reason, then, that a cache able to hold 3 times as much data could dramatically increase the system’s efficiency. And that’s what AMD’s latest innovation, known as AMD 3D V-Cache technology, can do.

A whole new dimension

Modern caches are designed as a succession of levels labeled L1, L2 and L3. The “L” stands for Level.

L1 is closest to the processor and is the fastest, but it also has the smallest capacity. L2 is a bit bigger, but also a bit slower. And L3 provides the most data storage of the three, though at the slowest speed of the trio.

AMD calls its innovation 3D V-Cache because of the unique design: additional L3 cache is stacked vertically, directly on top of the processor die, adding a third dimension to the chip.

Because the stacked cache sits directly above the existing L3, it stays just as close to the compute cores. The extra capacity arrives without pushing data farther from the processor, so access speeds remain essentially unchanged.

What’s more, AMD 3D V-Cache’s extra capacity lets the processor keep more data and instructions close at hand, yet without increasing the die’s footprint. As a result, the CPU does its job much faster and more efficiently than a similarly powered processor with a traditional cache.
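
To make that concrete, here’s a minimal C sketch of the kind of measurement that shows why cache capacity matters: the same pointer-chasing loop runs far faster when its working set fits in the L3 than when it spills out to main memory. The working-set sizes and step count below are illustrative assumptions, not AMD benchmark figures.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Follow a randomized chain of indices so each step is a dependent load the
   prefetcher can't predict; the loop's speed then tracks cache/memory latency. */
static double chase(const size_t *next, size_t steps) {
    clock_t t0 = clock();
    size_t i = 0;
    for (size_t s = 0; s < steps; s++)
        i = next[i];
    volatile size_t sink = i;   /* keep the compiler from deleting the loop */
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    /* Two working sets: one small enough for a large L3, one that spills to DRAM. */
    size_t sizes_mb[] = { 32, 512 };
    for (int k = 0; k < 2; k++) {
        size_t n = sizes_mb[k] * 1024 * 1024 / sizeof(size_t);
        size_t *next = malloc(n * sizeof *next);
        if (!next) return 1;
        for (size_t i = 0; i < n; i++) next[i] = i;
        for (size_t i = n - 1; i > 0; i--) {   /* Sattolo shuffle: one big cycle */
            size_t j = (size_t)rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }
        printf("%4zu MB working set: %.2f s for 50M dependent loads\n",
               sizes_mb[k], chase(next, 50 * 1000 * 1000));
        free(next);
    }
    return 0;
}
```

Triple the L3, as 3D V-Cache does, and more real-world working sets land in that fast column.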

Out in the wild

AMD first introduced 3D V-Cache technology in 2022 as part of its gaming-focused Ryzen 7 5800X3D processor. That first iteration offered a 96MB L3 cache feeding 8 cores, each with a maximum clock speed of 4.5GHz.

This past February, AMD introduced its Ryzen 7000-series processors for content creators. Two of those CPUs—the AMD Ryzen 9 7950X3D and 7900X3D—featured L3 caches with 128MB of capacity.

But AMD knows the market for 3D V-Cache-enabled chips is far bigger than just gamers. Power-hungry data centers are cropping up everywhere, helping to power our private and public clouds, AI-based applications and other miraculous virtual innovations.

To power those data centers, AMD now offers 4th generation EPYC processors featuring AMD 3D V-Cache technology. These data-crunching monsters are truly on the cutting edge. The biggest of them all, the AMD EPYC 9684X, packs 96 cores and an astounding 1,152MB of L3 cache.

Taking the show on the road

With the introduction of the Ryzen 9 7945HX3D mobile processor, AMD announced to the world that speed and portability aren’t mutually exclusive. The first laptop gaming chip with AMD 3D V-Cache technology, this 16-core powerhouse packs an impressive 128MB of L3 cache.

This past August, AMD launched its new silicon marvel inside the ROG STRIX SCAR 17 X3D, a titanically powerful gaming laptop with a 240Hz QHD display. AMD managed to get a mobile version to market barely a year after the launch of its first AMD 3D V-Cache Technology-enabled desktop chip. That’s an impressive cadence by any standard.

This is a laptop that was always going to be faster than most. But the addition of the Ryzen chip makes the STRIX SCAR one of the fastest mobile gaming rigs on this—or any other—planet.

Go ahead, try to find one that’s faster.


Meet the newest 3rd gen AMD EPYC processors

AMD extends 3rd Gen EPYC CPU lineup to deliver greater price/performance for mainstream server applications.


Are you or your customers looking for server processors that offer great price/performance, modern security, and energy efficiency for less technically demanding mainstream business workloads?

If so, AMD has some new server CPUs for you.

“We have seen a clear opportunity to give our customers more options,” says Dan McNamara, GM of AMD’s server business.

Expanded value options

The six new SKUs are actually part of AMD’s 3rd generation EPYC processor family. Originally introduced nearly three years ago, the 3rd gen AMD EPYCs have since been joined by the company’s 4th Gen.

So why might your customers be interested in new 3rd gen CPUs from AMD?

Several reasons. One, they might not need the latest AMD processor features, which include support for both DDR5 memory and PCIe Gen 5 connectivity. The new members of AMD’s 3rd gen EPYC family, which use the company’s Zen 3 cores, support up to 8 channels of the older DDR4 memory and up to 128 lanes of PCIe Gen 4.

By contrast, AMD’s 4th gen EPYC processors, with their Zen 4 cores, support both DDR5 and PCIe Gen 5. For some companies, AMD says, the upgrade to these newer technologies is “still high on the cost curve.”

Price/performance, too

Another reason is to get in on the new price/performance gains. Four of the six new SKUs are 8- and 16-core processors, and their retail prices range from just under $340 to just over $600. The other two CPUs offer 48 and 56 cores.

Yet another reason: modern security features that include AMD Infinity Guard. It’s a full suite of security features, built into silicon, that includes Secure Encrypted Virtualization (SEV), Secure Nested Paging (SEV-SNP), Secure Memory Encryption (SME) and AMD Shadow Stack, which offers hardware-enforced protection against malware.

The new AMD processors are fully compatible with existing AMD EPYC 7003 series-based server systems. And AMD’s major partners, including Supermicro, have said they’ll support the new 3rd Gen AMD EPYC processors in their enterprise server solutions.

Supermicro works closely with AMD to offer a wide range of application-optimized servers in its H13 product line. These systems, which support the new AMD EPYC processors, are designed for applications that need powerful processing performance, but may have thermal constraints.


Research Roundup: GenAI, 10 IT trends, cybersecurity, CEOs, and privacy

Catch up on the latest IT research and analysis from leading market watchers.


Generative AI is booming. Ten trends will soon rock your customers’ world. While cybersecurity spending is up, CEOs lack cyber confidence. And Americans worry about their privacy.

That’s some of the latest from leading IT market watchers. And here’s your Performance Intensive Computing roundup.

GenAI market to hit $143B by 2027

Generative AI is quickly becoming a big business.

Market watcher IDC expects that spending on GenAI software, related hardware and services will this year reach nearly $16 billion worldwide.

Looking ahead, IDC predicts GenAI spending will reach $143 billion by 2027. That would represent a compound annual growth rate (CAGR) over the years 2023 to 2027 of 73%—more than twice the growth rate in overall AI spending.
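
For readers who want to check the math, the CAGR follows directly from the two endpoints IDC cites. Here’s a quick sketch, assuming the 2023–2027 window counts as four compounding years:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double start_b = 16.0;    /* ~2023 GenAI spending IDC cites, $ billions */
    double end_b   = 143.0;   /* 2027 forecast, $ billions */
    int    years   = 4;       /* 2023 -> 2027 */

    double cagr = pow(end_b / start_b, 1.0 / years) - 1.0;
    printf("implied CAGR: %.1f%%\n", cagr * 100.0);   /* prints ~72.9%, i.e. IDC's ~73% */
    return 0;
}
```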

“GenAI is more than a fleeting trend or mere hype,” says IDC group VP Ritu Jyoti.

Initially, IDC expects, the largest GenAI investments will go to infrastructure, including hardware, infrastructure as a service (IaaS), and system infrastructure software. Then, once the foundation has been laid, spending is expected to shift to AI services.

Top 10 IT trends

What will be top-of-mind for your customers next year and beyond? Researchers at Gartner recently made 10 predictions:

1. AI productivity will be a primary economic indicator of national power.

2. Generative AI tools will reduce modernization costs by 70%.

3. Enterprises will collectively spend over $30 billion fighting “malinformation.”

4. Nearly half of all CISOs will expand their responsibilities beyond cybersecurity, driven by regulatory pressure and expanding attack surfaces.

5. Unionization among knowledge workers will increase by 1,000%, motivated by fears of job loss due to the adoption of GenAI.

6. About one in three workers will leverage “digital charisma” to advance their careers.

7. One in four large corporations will actively recruit neurodivergent talent—including people with conditions such as autism and ADHD—to improve business performance.

8. Nearly a third of large companies will create dedicated business units or sales channels for machine customers.

9. Due to labor shortages, robots will soon outnumber human workers in three industries: manufacturing, retail and logistics.

10. Monthly electricity rationing will affect fully half the G20 nations. One result: Energy efficiency will become a serious competitive advantage.

Cybersecurity spending in Q2 rose nearly 12%

Heightened threat levels are leading to heightened cybersecurity spending.

In the second quarter of this year, global spending on cybersecurity products and services rose 11.6% year-on-year, reaching a total of $19 billion worldwide, according to Canalys.

A mere 12 vendors received nearly half that spending, Canalys says. They include Palo Alto Networks, Fortinet, Cisco and Microsoft.

One factor driving the spending is fear, the result of a 50% increase in the number of publicly reported ransomware attacks. Also, the number of breached data records more than doubled in the first 8 months of this year, Canalys says.

All this increased spending should be good for channel sellers. Canalys finds that nearly 92% of all cybersecurity spending worldwide goes through the IT channel.

CEOs lack cyber confidence

Here’s another reason why cybersecurity spending should be rising: Roughly three-quarters of CEOs (74%) say they’re concerned about their organizations’ ability to avert or minimize damage from a cyberattack.

That’s according to a new survey, conducted by Accenture, of 1,000 CEOs from large organizations worldwide.

Two findings from the Accenture survey really stand out:

  • Six in 10 CEOs (60%) say their organizations do not incorporate cybersecurity into their business strategies, products or services.
  • Nearly half (44%) of the CEOs believe cybersecurity can be handled with episodic interventions rather than with ongoing, continuous attention.

Despite those weaknesses, nearly all the surveyed CEOs (96%) say they believe cybersecurity is critical to their organizations’ growth and stability. Mind the gap!

How do Americans view data privacy?

Fully eight in 10 Americans (81%) are concerned about how companies use their personal data. And seven in 10 (71%) are concerned about how their personal data is used by the government.

So finds a new Pew Research Center survey of 5,100 U.S. adults. The study, conducted in May and published this month, sought to discover how Americans think about privacy and personal data.

Pew also found that Americans don’t understand how their personal data is used. In the survey, nearly eight in 10 respondents (77%) said they have little to no understanding of how the government uses their personal data. And two-thirds (67%) said the same thing about businesses, up from 59% a year ago.

Another key finding: Americans don’t trust social media CEOs. Over three-quarters of Pew’s respondents (77%) say they have very little or no trust that leaders of social-media companies will publicly admit mistakes and take responsibility.

And about the same number (76%) believe social-media companies would sell their personal data without their consent.


Tech Explainer: How does design simulation work? Part 2

Cutting-edge technology powers the virtual design process.


The market for simulation software is hot, growing at a compound annual growth rate (CAGR) of 13.2%, according to MarketsandMarkets. The research firm predicts that the global market for simulation software, worth an estimated $18.1 billion this year, will rise to $33.5 billion by 2027.

No surprise, then, that tech titans AMD and Supermicro would design an advanced hardware platform to meet the demands of this burgeoning software market.

AMD and Supermicro have teamed up with Ansys Inc., a U.S.-based designer of engineering simulation software. One result of this three-way collaboration is the Supermicro SuperBlade.

Shanthi Adloori, senior director of product management at Supermicro, calls the SuperBlade “one of the fastest simulation-in-a-box solutions.”

Adloori adds: “With a high core count, large memory capacity and faster memory bandwidth, you can reduce the time it takes to complete a simulation.”

One very super blade

Adloori isn’t overstating the case.

Supermicro’s SuperBlade can house up to 20 hot-swappable nodes in its 8U chassis. Each of those blades can be equipped with AMD EPYC CPUs and AMD Instinct GPUs. In fact, SuperBlade is the only platform of its kind designed to support both GPU and non-GPU nodes in the same enclosure.

Supermicro SuperBlade’s other tech specs may be less glamorous, but they’re no less impressive. Depending on the configuration, each blade can address up to 8TB or 16TB of DDR5-4800 memory.

Each node can also house 2 NVMe/SAS/SATA drives, while the chassis accommodates as many as eight 3000W Titanium Level power supplies.

Because networking is an essential element of enterprise-grade design simulation, SuperBlade includes redundant 25Gb/10Gb/1Gb Ethernet switches and up to 200Gbps/100Gbps InfiniBand networking for HPC applications.

For smaller operations, the Supermicro SuperBlade is also available in 6U and 4U configurations. These versions pack fewer nodes, which ultimately means they’re able to bring less power to bear. But, hey, not every design team makes passenger jets for a living.

It’s all about the silicon

If Supermicro’s SuperBlade is the tractor-trailer of design simulation technology, then AMD CPUs and GPUs are the engines under the hood.

The differing designs of these chips lend themselves to specific core competencies. CPUs can focus tremendous power on a few tasks at a time. Sure, they can multitask. But there’s a limit to how many simultaneous operations they can address.

AMD bills its EPYC 7003 Series CPUs as the world’s highest-performing server processors for technical computing. The addition of AMD 3D V-Cache technology delivers an expanded L3 cache to help accelerate simulations.

GPUs, on the other hand, come into their own when a simulation requires vast numbers of operations to be performed simultaneously. The AMD Instinct MI250X Accelerator contains 220 compute units with 14,080 stream processors.

Instead of throwing a ton of processing power at a small number of operations, the AMD Instinct can address thousands of less resource-intensive operations simultaneously. It’s that capability that makes GPUs ideal for HPC and AI-enabled operations, an increasingly essential element of modern design simulation.
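
A rough way to picture the difference: the C snippet below, with an OpenMP hint standing in for real GPU code, updates a million grid cells where each update is tiny and independent. On a CPU, a handful of cores share the loop; on an accelerator like the MI250X, each iteration would map onto one of those thousands of stream processors. The grid size and smoothing rule are arbitrary assumptions chosen for illustration.

```c
#include <stdio.h>

#define N (1 << 20)   /* one million independent cell updates, an arbitrary grid size */

int main(void) {
    static float temp[N], next[N];
    for (int i = 0; i < N; i++) temp[i] = (i % 100) * 0.01f;

    /* Each cell update depends only on its neighbors, so all N updates can run in parallel.
       Compile with -fopenmp to spread the loop across CPU cores; without that flag the
       pragma is simply ignored. On a GPU, each iteration would become one thread. */
    #pragma omp parallel for
    for (int i = 1; i < N - 1; i++)
        next[i] = 0.25f * temp[i - 1] + 0.5f * temp[i] + 0.25f * temp[i + 1];

    printf("sample cell after one smoothing step: %f\n", next[N / 2]);
    return 0;
}
```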

The future of design simulation

The development of advanced hardware like SuperBlade and the AMD CPUs and GPUs that power it will continue to progress as more organizations adopt design simulation as their go-to product development platform.

That progression will continue to manifest in global companies like Boeing and Volkswagen. But it will also find its way into small startups and single users.

Also, as the required hardware becomes more accessible, simulation software should become more efficient.

This confluence of market trends could empower millions of independent designers with the ability to perform complex design, testing and validation functions.

The result could be nothing short of a design revolution.

Part 1 of this two-part Tech Explainer explores the many ways design simulation is used to create new products, from tiny heart valves to massive passenger aircraft. Read Part 1 now.


Tech Explainer: How does design simulation work? Part 1

Design simulation lets designers and engineers create, test and improve designs of real-world airplanes, cars, medical devices and more while working safely and quickly in virtual environments. This workflow also reduces the need for physical tests and allows designers to investigate more alternatives and optimize their products.


Design simulation is a type of computer-aided engineering used to create new products, reducing the need for physical prototypes. The result is a faster, more efficient design process in which complex physics and math do much of the heavy lifting.

Rapid advances in the CPUs and GPUs used to perform simulations, along with the software that runs on them, have made it possible to shift product design from the physical world to a virtual one.

In this virtual space, engineers can create and test new designs as quickly as their servers can calculate the results and then render them with visualization software.

Getting better all the time

Designing via AI-powered virtual simulation offers significant improvements over older methods.

Back in the day, it might have taken a small army of automotive engineers years to produce a single new model. Prototypes were often sculpted from clay and carted into a wind tunnel to test aerodynamics.

Each new model went through a seemingly endless series of time-consuming physical simulations. The feedback from those tests would literally send designers back to the drawing board.

It was an arduous and expensive process. And the resources necessary to accomplish these feats of engineering often came at the expense of competition. Companies whose pockets weren’t deep enough might fail to keep up.

Fast-forward to the present. Now, we’ve got smaller design teams aided by increasingly powerful clusters of high-performance systems.

These engineers can tweak a car’s crumple zone in the morning … run the new version through a virtual crash test while eating lunch … and send revised instructions to the design team before day’s end.

Changing designs, saving lives

Faster access to this year’s Ford Mustang is one thing. But if you really want to know how design simulation is changing the world, talk to someone whose life was saved by a mechanical heart valve.

Using the latest tech, designers can simulate new prosthetics in relation to the physiology they’ll inhabit. Many factors come into play here, including size, shape, materials, fluid dynamics, failure models and structural integrity over time.

What’s more, it’s far better to theorize how a part will interact with the human body before the surgeon implants it. Simulations can warn medical pros about potential infections, rejections and physical mismatches. AI can play a big part in both these simulations and the manufacturing that follows.

Sure, perfection may be unattainable. But the closer doctors get to a perfect match between a prosthetic and its host body, the better the patient will fare after the procedure.

Making the business case

Every business wants to cut costs, increase efficiency and get an edge over the competition. Here, too, design simulation offers a variety of ways to achieve those lofty goals.

As mentioned above, simulation can drastically reduce the need for expensive physical prototypes. Creating and testing a new airplane design virtually means not having to come within 100 miles of a runway until the first physical prototype is ready to take flight. 

The aerospace and automotive industries rely heavily not only on the structural integrity of an assembly but also on computational fluid dynamics. In this way, simulation can potentially save an aerospace company billions of dollars over the long run.

What’s more, virtual airplanes don’t crash. They can’t be struck by lightning. And in a virtual passenger jet, test pilots don’t need to worry about their safety.

By the time a new aircraft design rolls onto the tarmac, it’s already been proven airworthy—at least to the extent that a virtual simulation can make those kinds of guarantees.

Greater efficiency

Simulation makes every aspect of design more efficient. For instance, iteration, a vital element of the design process, becomes infinitely more manageable in a simulated environment.

Want to find out how a convertible top will affect your new supercar’s 0-to-60 time? Simulation allows engineers to quickly replace the hard-top with some virtual canvas and then create a virtual drag race against the original model.
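
Here’s a toy version of that kind of what-if in C: a crude time-stepped 0-to-60 model comparing a hard-top against a slightly lighter, slightly draggier soft-top. Every number in it (mass, power, drag coefficient, tire grip) is a made-up placeholder, and a real simulation would model far more physics. The point is simply how cheaply a virtual iteration can be rerun.

```c
#include <math.h>
#include <stdio.h>

/* Step v' = (F_drive - F_drag) / m until the car hits 60 mph (26.8 m/s). */
static double zero_to_sixty(double mass_kg, double power_w, double cd, double area_m2) {
    const double rho = 1.225, g = 9.81, mu = 1.0;   /* air density, gravity, tire grip */
    const double target = 26.8, dt = 0.001;
    double v = 0.5, t = 0.0;                        /* small initial speed avoids divide-by-zero */
    while (v < target) {
        double drive = fmin(power_w / v, mu * mass_kg * g);   /* power- or traction-limited */
        double drag  = 0.5 * rho * cd * area_m2 * v * v;
        v += (drive - drag) / mass_kg * dt;
        t += dt;
    }
    return t;
}

int main(void) {
    /* Hypothetical hard-top vs. soft-top: lighter but less aerodynamic. */
    printf("hard-top: %.2f s\n", zero_to_sixty(1600.0, 350000.0, 0.32, 2.2));
    printf("soft-top: %.2f s\n", zero_to_sixty(1570.0, 350000.0, 0.36, 2.2));
    return 0;
}
```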

Simulation can take a product to the manufacturing phase, too. Once a design is finished, engineers can simulate its journey through a factory environment.

This virtual factory, or digital twin, can help determine how long it will take to build a product and how it will react to various materials and environmental conditions. It can even determine how many moves a robot arm will need to make and when human intervention might become necessary. This helps engineers optimize the manufacturing process.

In countless ways, simulation has never been more real.

In Part 2 of this 2-part blog, we’ll explore the digital technology behind design simulation. This cutting-edge technology is made possible by the latest silicon, vast swaths of high-speed storage, and sophisticated blade servers that bring it all together.


OCP Global Summit demos the power of collaboration for the data center


Thousands of data-center professionals will gather in Silicon Valley this month for the 2023 OCP Global Summit.

This in-person event, sponsored by the Open Compute Project, will be held in San Jose, Calif., on Oct. 17–19.

The theme for this year’s conference: “Scaling innovation through collaboration.”

About OCP

OCP members share data center products and best practices that apply open standards. The group’s projects include server design, data storage, rack design, energy-efficient data centers and open networking switches.

The OCP requires that all contributions meet at least 3 of its 5 core tenets:

  • Efficiency: including power delivery, thermal design, platform, overall cost and latencies
  • Impact: including efficiency gains, use of new tech and a more robust supply chain
  • Openness: strive to comply with existing open interfaces
  • Scalability: can be used in large-scale deployments
  • Sustainability: be transparent about environmental impact, and aspire to improve over time

OCP began as a Facebook project, launched in 2009, to build an energy-efficient data center. That led to the opening of a Facebook data center in Prineville, Ore., that the company says is 24% less expensive to run than its previous facilities.

That led to OCP being founded in 2011. Today the nonprofit organization has nearly 300 corporate members and over 6,000 active participants. Membership is available in four levels — community, silver, gold and platinum.

OCP’s membership list is a veritable who’s who of tech. Members include Amazon, AMD, Arm, AT&T, Cisco, Dell, Google, HPE, IBM, Lenovo, Meta, Supermicro and Tencent.

AMD & Supermicro participating

Among the keynote speakers at this year’s OCP Global Summit will be Forrest Norrod, executive VP and GM of the data center solutions business group at AMD. He’ll be giving a presentation on Oct. 17 entitled, “Together we advance the modern data center.”

Also, Supermicro will be showing three of its servers at the OCP Global Summit:

  • Supermicro CloudDC A+ Server: Designed for data center, web server, cloud computing and more, this 1U rackmount server is powered by a single AMD EPYC 9004 Series processor.
  • Supermicro Hyper A+ Server: This 2U server is intended for virtualization, AI inference and machine learning, software-defined storage, cloud computing, and use as an enterprise server. It’s powered by dual AMD EPYC 9004 Series processors.
  • Supermicro Storage A+ Server: This 2U storage device can handle software-defined storage, cloud, in-memory computing, data-intensive HPC workloads, and NVMe-over-fabric solutions. It’s powered by a single AMD EPYC 9004 Series processor.


Why M&E content creators need high-end VDI, rendering & storage

Content creators in media and entertainment need lots of compute, storage and networking. Supermicro servers with AMD EPYC processors help these content creators work faster by delivering improved rendering performance and high-speed storage, giving creative ideas the horsepower they need to become finished productions.


When content creators at media and entertainment (M&E) organizations create videos and films, they’re also competing for attention. And today that requires a lot of technology.

Making a full-length animated film involves no fewer than 14 complex steps, including 3D modeling, texturing, animating, visual effects and rendering. The whole process can take years. And it requires a serious quantity of high-end compute, storage and software.

From an IT perspective, three of the most compute-intensive activities for M&E content creators are VDI, rendering and storage. Let’s take a look at each.

* Virtual desktop infrastructure (VDI): While content creators work on personal workstations, they need the kind of processing power and storage capacity available from a rackmount server. That’s what they get with VDI.

VDI separates the desktop and associated software from the physical client device by hosting the desktop environment and applications on a central server. These assets are then delivered to the desktop workstation over a network.

To power VDI setups, Supermicro offers a 4U GPU server with up to 8 PCIe GPUs. The Supermicro AS -4125GS-TNRT server packs a pair of AMD EPYC 9004 processors, Nvidia RTX 6000 GPUs, and 6TB of DDR5 memory.

* Rendering: The last stage of film production, rendering is where the individual 3D images created on a computer are transformed into the stream of 2D frames ready to be shown to audiences. This process, conducted pixel by pixel (a sketch after this list illustrates the idea), is time-consuming and resource-hungry. It requires powerful servers, lots of storage capacity and fast networking.

For rendering, Supermicro offers its 2U Hyper system, the AS -2125HS-TNR. It’s configured with dual AMD EPYC 9004 processors, up to 6TB of memory, and your choice of NVMe, SATA or SAS storage.

* Storage: Content creation involves creating, storing and manipulating huge volumes of data. So the first requirement is simply having a great deal of storage capacity. But it’s also important to be able to retrieve and access that data quickly.

For these kinds of storage challenges, Supermicro offers Petascale storage servers based on AMD EPYC processors. They can pack up to 16 hot-swappable E3.S (7.5mm) NVMe drive bays. And they’ve been designed to store, process and move vast amounts of data.
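
To illustrate the pixel-by-pixel point from the rendering item above, here’s a bare-bones C sketch that computes one color per pixel and writes out a PPM image. A production renderer does enormously more work per pixel (ray tracing, shading, sampling) across hundreds of thousands of frames, which is why render farms lean on servers like the ones described here; the resolution and color rule below are arbitrary.

```c
#include <stdio.h>

int main(void) {
    const int w = 640, h = 360;
    FILE *f = fopen("frame.ppm", "wb");
    if (!f) return 1;
    fprintf(f, "P6\n%d %d\n255\n", w, h);

    /* One independent computation per pixel -- the unit of work a render farm parallelizes. */
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            unsigned char rgb[3] = {
                (unsigned char)(255 * x / (w - 1)),   /* red ramps left to right   */
                (unsigned char)(255 * y / (h - 1)),   /* green ramps top to bottom */
                64                                    /* constant blue             */
            };
            fwrite(rgb, 1, 3, f);
        }
    }
    fclose(f);
    return 0;
}
```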

M&E content creators are always looking to attract more attention. They’re getting help from today’s most advanced technology.


Need help turning your customers’ data into actionable insights?

Your customers already have plenty of data. What they need now are insights. Supermicro, AMD and Cloudera are here to help.


Your customers already have plenty of data. What they need now are insights.

Data just sits there, taking up costly storage and real estate. But actionable insights can help your customers strengthen their overall business, improve their business processes, and create new products and services.

Increasingly, these insights are based on data captured at the edge. For example, a retailer might collect customer and sales data using the point-of-sale terminals in its stores.

Supermicro is here to help. Its edge systems, including the latest WIO and short-depth servers powered by AMD processors, have been designed to collect data at the business edge.

These servers are powered by AMD’s EPYC 8004 Series processors. Introduced in September, these CPUs extend the company’s ‘Zen 4c’ architecture into lower-core-count processors designed for edge servers and compact form factors.

GrandTwin too

For more insights, tell your customers to check out Supermicro’s GrandTwin servers. They’re powered by AMD EPYC 9004 processors and can run Cloudera Data Flow (CDF), a scalable, real-time streaming analytics platform.

The Supermicro GrandTwin systems provide a multi-node rackmount platform for cloud data centers. They come in a 2U form factor with 4 nodes per chassis.

These systems offer AMD’s 4th Gen EPYC 9004 Series of general-purpose processors, which support DDR5-4800 memory and PCI Express Gen 5 I/O.

Distributed yet united

If you’re unfamiliar with Cloudera, the company’s approach is based on a simple idea: single clouds are passé. Instead, Cloudera supports a hybrid data platform, one that can be used with any cloud, any analytics and any data.

The company’s idea is that data-management components should be physically distributed, but treated as a cohesive whole with AI and automation.

Cloudera’s CDF solution ingests, curates and analyzes data for key insights and immediate actionable information. That can include issues or defects that need remediating. And AI and machine learning systems can use the data to suggest real-time improvements.

More specifically, CDF delivers flow management, edge management, streams processing, streams management, and streaming analytics.

The upshot: Your customers need actionable insights, not more data. And to get those insights, they can check out the powerful combination of Supermicro servers, AMD processors and Cloudera solutions.


Supermicro celebrates 30 years of business


Supermicro Inc. is celebrating its 30th year of research, development and manufacturing.

At the company, formed in 1993, some things remain the same. Founder Charles Liang remains Supermicro’s president and CEO. And the company is still based in California’s Silicon Valley.

Of course, in 30 years a lot has changed, too. For one, AI is now a critical component. And Supermicro, with help from component makers including AMD, is offering a range of solutions designed with AI in mind. Also, Supermicro has stated its intention to be a leader in the newer field of generative AI.

Another recent change is the industry’s focus on “green computing” and sustainability. Here, too, Supermicro has had a vision. The company’s Green IT initiative helps customers lower data-center TCO, take advantage of recyclable materials, and do more work with lower power requirements.

Another change is just how big Supermicro has grown. Revenue for its most recent fiscal year totaled $7.12 billion, a year-on-year increase of 37%. Looking ahead, Supermicro has told investors it expects an even steeper 47% revenue growth in the current fiscal year, for total revenue of $9.5 billion to $10.5 billion. 

All that growth has also led Supermicro to expand its manufacturing facilities. The company now runs factories in Silicon Valley, Taiwan and the Netherlands, and it has a new facility coming online in Malaysia. All that capacity, the company says, means Supermicro can now deliver more than 4,000 racks a month.

Top voices

Industry leaders are joining the celebration.

“Supermicro has been and continues to be my dream work,” CEO Liang wrote in an open letter commemorating the company’s 30th anniversary.

Looking ahead, Liang writes that the company’s latest initiative, dubbed “Supermicro 4.0,” will focus on AI, energy saving, and time to market.

AMD CEO Lisa Su adds, “AMD and Supermicro have a long-standing history of delivering leadership computing solutions. I am extremely proud of the expansive portfolio of data center, edge and AI solutions we have built together, our leadership high-performance computing solutions and our shared commitment to sustainability.”

Happy 30th anniversary, Supermicro!


Tech Explainer: What is the intelligent edge? Part 2

The intelligent edge has emerged as an essential component of the internet of things. By moving compute and storage close to where data is generated, the intelligent edge provides greater control, flexibility, speed and even security.


The Internet of Things (IoT) is all around us. It’s in the digital fabric of a big city, the brain of a modern factory, the way your smart home can be controlled from a tablet, and even the tech telling your fridge it’s time to order a quart of milk.

As these examples show, IoT is fast becoming a must-have. Organizations and individuals alike turn to the IoT to gain greater control and flexibility over the technologies they regularly use. Increasingly, they’re doing it with the intelligent edge.

The intelligent edge moves command and control from the core to the edge, closer to where today’s smart devices and sensors actually are installed. That’s needed because so many IoT devices and connections are now active, with more coming online every day.

Communicating with millions of connected devices via a few centralized data centers is the old way of doing things. The new method is a vast network of local nodes capable of collecting, processing, analyzing, and making decisions from the IoT information as close to its origin as possible.

Controlling IoT

To better understand the relationship between IoT and intelligent edge, let’s look at two use cases: manufacturing and gaming.

Modern auto manufacturers like Tesla and Rivian use IoT to control their industrial robots. Each robot is fitted with multiple sensors and actuators. The sensors report their current position and condition, and the actuators control the robot’s movements.

In this application, the intelligent edge acts as a small data center in or near the factory where the robots work. This way, instead of waiting for data to transfer to a faraway data center, factory managers can use the intelligent edge to quickly capture, analyze and process data—and then act just as quickly.

Acting on that data may include performing preventative or reactive maintenance, adjusting schedules to conserve power, or retasking robots based on product configuration changes. 
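
In code, that pattern looks something like the following C sketch: the edge node digests raw sensor samples locally and forwards only the exceptions to the core. The temperature range, threshold and alert stub are assumptions for illustration; a real deployment would read actual robot telemetry and push alerts over whatever uplink the site uses.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the real uplink (MQTT, HTTPS, etc.) back to the core data center. */
static void send_alert_to_core(int robot_id, double temp_c) {
    printf("ALERT -> core: robot %d joint temperature %.1f C\n", robot_id, temp_c);
}

int main(void) {
    const double limit_c = 95.0;          /* assumed over-temperature threshold */
    double sum = 0.0;
    int samples = 0, forwarded = 0;

    /* Simulated readings; an edge node would pull these from the robot's sensors. */
    for (int i = 0; i < 500; i++) {
        double temp_c = 60.0 + rand() % 40;   /* 60-99 C */
        sum += temp_c;
        samples++;
        if (temp_c > limit_c) {               /* only exceptions leave the factory floor */
            send_alert_to_core(1, temp_c);
            forwarded++;
        }
    }
    printf("summary: %d samples handled locally, %d alerts forwarded, avg %.1f C\n",
           samples, forwarded, sum / samples);
    return 0;
}
```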

The benefits of a hyper-localized setup like this can prove invaluable for manufacturers. Using the intelligent edge can save them time, money and person-hours by speeding both analysis and decision-making.

For manufacturers, the intelligent edge can also add new layers of security. That’s because data is significantly more vulnerable when in transit. Cut the distance the data travels and the use of external networks, and you also eliminate many cybercrime threat vectors.

Gaming is another marquee use case for the intelligent edge. Resource-intensive games such as “Fortnite” and “World of Warcraft” demand high-speed access to the data generated by the game itself and a massive online gaming community of players. With speed at such a high premium, waiting for that data to travel to and from the core isn’t an option.

Instead, the intelligent edge lets game providers collect and process data near their players. The closer proximity lowers latency by limiting the distance the data travels. It also improves reliability. The resulting enhanced data flow makes gameplay faster and more responsive.

Tech at the edge

The intelligent edge is sometimes described as a network of localized data centers. That’s true as far as it goes, but it’s not the whole story. In fact, the intelligent edge infrastructure’s size, function and location come with specific technological requirements.

Unlike a traditional data center architecture, the edge is often better served by rugged form factors housing low-cost, high-efficiency components. These components, including the recently released AMD EPYC 8004 Series processors, feature fewer cores, less heat and lower prices.

The AMD EPYC 8004 Series processors share the same 5nm ‘Zen4c’ core complex die (CCD) chiplets and 6nm AMD EPYC I/O Die (IOD) as the more powerful AMD EPYC 9004 Series.

However, the AMD EPYC 8004s offer a more efficiency-minded approach than their data center-focused cousins. Nowhere is this better illustrated than in the entry-level AMD EPYC 8024PN processor, which provides a scant 8 cores and a thermal design power (TDP) of just 80 watts. AMD says this can potentially save customers thousands of dollars in energy costs over a five-year period.
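
The arithmetic behind a claim like that is simple enough to sketch in C. The comparison TDP and electricity price below are assumptions, not AMD’s figures, and real savings depend on utilization, cooling overhead and local rates:

```c
#include <stdio.h>

/* Five-year, always-on electricity cost for a given sustained power draw. */
static double five_year_cost(double watts, double usd_per_kwh) {
    double kwh = watts / 1000.0 * 24.0 * 365.0 * 5.0;
    return kwh * usd_per_kwh;
}

int main(void) {
    const double price = 0.15;  /* assumed $/kWh; real rates vary widely by region */
    double edge  = five_year_cost(80.0,  price);   /* 80 W edge-class part */
    double heavy = five_year_cost(280.0, price);   /* assumed higher-TDP comparison CPU */
    printf("80 W part:  $%.0f over 5 years\n", edge);
    printf("280 W part: $%.0f over 5 years\n", heavy);
    printf("difference: $%.0f per socket; multiply by rack counts and the savings add up\n",
           heavy - edge);
    return 0;
}
```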

To deploy the AMD silicon, IT engineers can choose from an array of intelligent edge systems from suppliers, including Supermicro. The selection includes expertly designed form factors for industrial, intelligent retail and smart-city deployments.

High-performance rack mount servers like the Supermicro H13 WIO are designed for enterprise-edge deployments that require data-center-class performance. The capacity to house multiple GPUs and other hardware accelerators makes the Supermicro H13 an excellent choice for deploying AI and machine learning applications at the edge.

The future of the edge

The intelligent edge is another link in a chain of data capture and analysis that gets longer every day. As more individuals and organizations deploy IoT-based solutions, an intelligent edge infrastructure helps them store and mine that information faster and more efficiently.

The insights provided by an intelligent edge can help us improve medical diagnoses, better control equipment, and more accurately predict human behavior.

As the intelligent edge architecture advances, more businesses will be able to deploy solutions that enable them to cut costs and improve customer satisfaction simultaneously. That kind of deal makes the journey to the edge worthwhile.

Part 1 of this two-part blog series on the intelligent edge looked at the broad strokes of this emerging technology and how organizations use it to increase efficiency and reliability. Read Part 1 now.
