Where Are Blockchain and Web3 Taking Us? — Part 3: Web3 Emerging


This is the third in a four-part series on blockchain’s many facets, including being the primary pillar of the emerging Web3.


Part 1: First There Was Blockchain  |  Part 2: Delving Deeper into Blockchain  |  Part 4: The Web3 and Blockchain FAQ

Perhaps the most surprising aspect of Web3 is the DAO, an acronym that stands for Decentralized Autonomous Organization. A DAO is an emerging alternative type of organization that operates without centralized management. Instead, power is shared by token holders, who cast votes in a bottom-up approach, according to Investopedia. Activity in the DAO is recorded on the blockchain, where it is open to all. Smart contracts encode actions that help govern organizational process and policy. From the outside, a DAO appears to function much like a corporation; from the inside, it is very different. There’s no CEO, COO or president. Instead, DAOs are often managed by governing bodies, although many rules are pre-determined by smart contracts. For real-world examples of DAOs, see this Forbes article.
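To make the token-voting idea concrete, here is a minimal sketch in plain Python of how a token-weighted vote on a proposal might be tallied. It is an illustration only, not a real DAO framework; the member names, balances and approval threshold are invented.

```python
# Toy illustration of token-weighted DAO voting (not a real DAO framework).
# Token balances and the approval threshold are made-up examples.

token_balances = {"alice": 500, "bob": 300, "carol": 200}  # governance tokens held


class Proposal:
    def __init__(self, description, threshold=0.5):
        self.description = description
        self.threshold = threshold   # fraction of all voting tokens needed to pass
        self.votes = {}              # voter -> True (for) / False (against)

    def vote(self, voter, in_favor):
        if voter not in token_balances:
            raise ValueError(f"{voter} holds no governance tokens")
        self.votes[voter] = in_favor

    def tally(self):
        total = sum(token_balances.values())
        in_favor = sum(token_balances[v] for v, yes in self.votes.items() if yes)
        return in_favor / total >= self.threshold


proposal = Proposal("Fund community grant #42")
proposal.vote("alice", True)
proposal.vote("bob", False)
proposal.vote("carol", True)
print("Proposal passes:", proposal.tally())  # 700 of 1,000 tokens in favor -> True
```

In a real DAO, the balances would come from a governance token recorded on the blockchain, and the tally itself would be executed by a smart contract rather than by code anyone could edit.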

 

So, What is Web3, Anyway?

 

Web3 (also spelled Web 3.0) is a name for the next evolution of the web, following Web 1.0 and Web 2.0. It is expected to be built on open-source software, blockchain, NFTs (non-fungible tokens), smart contracts and other blockchain-related technologies. Gavin Wood, who founded Polkadot, co-founded Ethereum and originated the Web3 Foundation, coined the term Web3 in a 2014 blog post.

 

Web3 is not to be confused with another effort to remake the Web, also known as the Semantic Web and sometimes called Web 3.0.  The Semantic Web dates back to 1999, when Tim Berners-Lee coined the phrase, according to Wikipedia. The Semantic Web’s primary goal is to extend the standards set by the World Wide Web Consortium (W3C) to make the meaning of internet data machine-readable.

 

In April 2022, speaking to CNBC International, Wood defined Web3 “as an alternative vision of the web, where the services we use are not hosted by a single service provider but instead are purely algorithmic. They are in some sense hosted by everybody” in peer-to-peer fashion. “The idea being that all participants contribute a small slice of the ultimate service. No one really has any advantage over anyone else, not in the same sense at least as when you go to Amazon, eBay or Facebook, for example, where the company providing the service has power over how” that service is rendered and how your data are handled. In summing up, Wood said: “Web3 is about reducing the trust needed to use the internet services we use every day.”

 

Still Early Days

 

Web3 is available only in test-tube fashion today. A basic form of the tech stack can be cobbled together using the Ethereum blockchain, but doing so is challenging and still doesn’t create the seamless end-user experience that the eventual product set is expected to deliver.

 

The Web3 Foundation and others are working on different aspects of Web3.  Visit the Web3 Foundation for a look at Wood’s Web3 Technology Stack diagram.

 

The diagram describes the end-user software as a “protocol-extensible user-interface cradle ("browser")” that “a user would use to interact directly with the blockchain without needing to know implementation details. Examples include Status, MetaMask or MyCrypto.”

 

Web3 leaders would do well to remember that it was the user interface, in the form of the Mosaic web browser, whose explosive uptake made the world-wide web’s success a certainty. Available in public beta earlier that year, Mosaic 1.0 was released for Windows in November 1993. Just a year earlier, in November 1992, there were only 26 websites in the world (Wikipedia).

 

Juan Benet, founder and CEO of Protocol Labs and creator of Filecoin and IPFS, is another key Web3 visionary who is tracking the user experience. In 2018, he gave a speech at the Web3 Summit called What Exactly Is Web3? Among other things, he spoke about browsers and the Web3 user interface: “Web 3.0 browsers are very different. Some look like existing browsers and they browse the web that way. Some are a single webpage that connects you to the blockchain and lets you [initiate] transactions. Some are [“Web3 wallets”]. And some are extensions to your existing browser that add capabilities. We don’t really know what the browser of Web3 ought to be. We don’t have good usability yet. It’s a major challenge.”

 

Fervor

 

Web3 is a call to action for a user movement — like the user movement to PCs; like the movement to the world-wide web. When very large numbers of users insist upon a specific change, change happens. You don’t have to be clairvoyant to see that end-user security and privacy in the US has been severely compromised by our own intelligence agencies, big tech companies and foreign countries. It’s a bubbling pot waiting to boil over. How long before users demand a change?

 

There’s a fervor you sense from some of the people behind Web3 that you may not immediately understand. Blockchain is an open source-based system. It’s based on a P2P approach, which eliminates intervention from other parties, such as large tech companies, each of which controls access to a huge block of users. Together they hold an overlapping monopoly on the personally identifiable data of millions of people. Web3 seeks to use a new web technology stack, blockchain and user crypto wallets to give ownership of that data back to its users. For many, the prospect of using blockchain technology and Web3 principles to take back user privacy is empowering.

 

For a well-written and comprehensive primer on Web3, see Ethereum’s Introduction to Web3.

 

 

Other Stories in this Series:

Part 1: First There Was Blockchain

Part 2: Delving Deeper into Blockchain

Part 3: Web3 Emerging

Part 4: The Web3 and Blockchain FAQ

 


Where Are Blockchain and Web3 Taking Us? — Part 2: Delving Deeper into Blockchain


This is the second in a four-part series on blockchain’s many facets, including being the primary pillar of the emerging Web3.


Part 1: First There Was Blockchain  |  Part 3: Web3 Emerging  |  Part 4: The Web3 and Blockchain FAQ

To get a sound understanding of blockchain, you should be aware of some of the nagging issues and criticisms. For example, blockchain has no governance. It could really use the guidance of a small representative group of industry visionaries to help it chart a course, but that might lead to a more centralized orientation. You should also familiarize yourself with the related tools and technologies and what they do. NFTs, in particular, work hand in hand with blockchain and add protection for those who create digital content.

 

Getting NFTs

 

It has been effectively open season on digital content on the internet from the get-go. DRM technology didn’t solve the problem. Will the non-fungible token (NFT) make inroads? Its long-term success or lack thereof will largely be dependent on the success of blockchain. Make no mistake, blockchain is here to stay. It’s too useful a tool to leave behind. But Web3’s premise — that blockchain-based servers might someday run the internet — is by no means certain. (Come back for Part 3 which explores Web3.)

 

What are NFTs? “NFTs facilitate non-fraudulent trade for digital asset producers and consumers or collectors,” said Eric Frazier, senior solutions manager, Supermicro.

 

An NFT is a digital asset authentication system located on a blockchain that gives the holder proof of ownership of digital creations. It does this via metadata that make each NFT unique. Plus, no two people can own the same NFT, which also can’t be changed or destroyed.

 

Applications include digital artwork, but an NFT (sometimes called a "nifty") can be put to a wide variety of uses in music, gaming, entertainment, popular culture items (such as sports merchandise), virtual real estate, prevention of counterfeit products, domain name provenance and more. Down the road, NFTs may have a significant effect on software licensing, intellectual property rights and copyright. Land registry, birth and death certificates, and many other types of records are also potential future beneficiaries of NFTs.

 

If you’re wondering whether NFTs can be traded for cryptocurrency, they can be. What they are not is interchangeable. You may have an NFT for a piece of art that was sold as multiple copies by its owner. But each of those NFTs has unique metadata, so they cannot be exchanged one for the other.
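Here is a rough way to picture that non-interchangeability. The plain-Python sketch below is an illustration only, not the ERC-721 standard or any real minting API; the creator name, content URI and editions are invented. Two tokens minted from the same artwork still carry distinct token IDs and metadata, so their fingerprints differ.

```python
import hashlib
import json

# Toy model of NFT uniqueness: two tokens minted from the same artwork
# still differ because their token IDs (and thus their fingerprints) differ.
# Illustration only; real NFTs follow standards such as ERC-721 on-chain.

def mint(token_id, creator, content_uri, edition):
    metadata = {"token_id": token_id, "creator": creator,
                "content_uri": content_uri, "edition": edition}
    fingerprint = hashlib.sha256(
        json.dumps(metadata, sort_keys=True).encode()).hexdigest()
    return {"metadata": metadata, "fingerprint": fingerprint}


nft_a = mint(1, "artist.example", "ipfs://example-artwork.png", edition="1 of 2")
nft_b = mint(2, "artist.example", "ipfs://example-artwork.png", edition="2 of 2")

print(nft_a["fingerprint"] == nft_b["fingerprint"])  # False: same artwork, distinct tokens
```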

 

Smart Contracts Execute

 

A smart contract is a blockchain-based, self-executing contract containing code that runs automatically when the predetermined conditions set out in an agreement or transaction are met. So, a hypothetical example might be: on January 15, transfer X value of cryptocurrency in payment for a specific NFT owned by a specific person. Smart contracts are autonomous, trustless, traceable, transparent and irreversible. Key hallmarks of smart contracts are that they exclude intermediaries and third parties like lawyers and notaries. And they usually use simple language, require fewer steps and involve less paperwork.
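Real smart contracts are written in on-chain languages such as Solidity and executed by the blockchain itself. The plain-Python sketch below only mimics the control flow of the hypothetical example above; every name, date and value in it is invented for illustration.

```python
from datetime import date

# Toy mock-up of a self-executing agreement. Real smart contracts run on-chain
# (for example, as Solidity code on Ethereum); this only imitates the logic:
# when the agreed date arrives and the seller still owns the NFT, payment executes.

class EscrowContract:
    def __init__(self, buyer, seller, nft_id, price, release_date):
        self.buyer, self.seller = buyer, seller
        self.nft_id, self.price = nft_id, price
        self.release_date = release_date
        self.executed = False

    def check_and_execute(self, today, nft_owner):
        """In a real chain this check would run automatically with each new block."""
        if self.executed:
            return "already settled"
        if today >= self.release_date and nft_owner == self.seller:
            self.executed = True  # conditions met: settle payment and transfer the NFT
            return (f"{self.price} transferred from {self.buyer} to {self.seller}; "
                    f"NFT {self.nft_id} reassigned to {self.buyer}")
        return "conditions not yet met"


contract = EscrowContract("buyer.example", "artist.example", nft_id=42,
                          price="1.5 ETH", release_date=date(2023, 1, 15))
print(contract.check_and_execute(date(2023, 1, 14), nft_owner="artist.example"))
print(contract.check_and_execute(date(2023, 1, 15), nft_owner="artist.example"))
```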

 

Blockchain Power Consumption

 

Some blockchains gobble up electricity and are heavy users of compute and storage resources. But blockchains are not all created equal. Bitcoin is known to be resource hungry, while “Filecoin’s needs are materially less,” said Michael Fair, chief revenue officer and longtime channel expert, PiKNiK.

 

It’s also possible to make changes to some blockchains to make them less power hungry. For example, Ethereum switched from the Proof-of-Work (PoW) to the Proof-of-Stake (PoS) algorithm a few months ago, which reduced power consumption by over 99%. However, Ethereum is less decentralized as a result because it is now 80% hosted on AWS. (See the discussion on Understanding Decentralized in Part 1.)

 

“With the algorithm switch from PoW to PoS, Ethereum’s decentralization took a big hit because the majority of transactions and validations are running on Amazon’s cloud,” said Jörg Roskowetz, director of blockchain technology, AMD. “From my point of view, hybrid systems like Lightning on the Bitcoin network will keep all the parameters improving — scalability, latency and power-consumption challenges. This will likely take years to be developed and improved.”
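To see in miniature why proof-of-work consumes so much energy, consider the toy Python sketch below. It is not Bitcoin’s or Ethereum’s actual mining code, just the underlying idea: a miner must grind through nonces until a block hash meets a difficulty target, and each extra digit of difficulty multiplies the expected work (and therefore the power draw, at scale) by 16. Proof-of-stake removes this race entirely, which is where the 99% reduction comes from.

```python
import hashlib
import time

# Toy proof-of-work loop: find a nonce whose SHA-256 hash starts with N zero hex digits.
# Each additional digit multiplies the expected number of attempts by 16, which is,
# in miniature, why PoW chains burn so much more energy than PoS chains.

def mine(block_data, difficulty):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1


for difficulty in range(1, 6):
    start = time.perf_counter()
    nonce, digest = mine("example block", difficulty)
    print(f"difficulty {difficulty}: {nonce:>8} attempts, "
          f"{time.perf_counter() - start:.3f}s, hash {digest[:12]}...")
```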

 

Can Web3 Remain Decentralized?

 

Is the blockchain movement viable going forward? There are those who are skeptical: Scott Nover writing in Quartz, for example, and Moxie Marlinspike. Both pieces were published almost a year ago, in January 2022, well before the change at Ethereum.

 

Nover writes: “Even if blockchains are decentralized, the Web3 services that interact with them are controlled by a very small number of privately held companies. In fact, the industry emerging to support the decentralized web is highly consolidated, potentially undermining the promise of Web3.”

 

These are real concerns. But no one expected Web3 to exist in a world free of potentially undermining factors, including the consolidation of Web3 blockchain companies as well as some interaction with Web 2.0 companies. If Web3 succeeds, it will need to support a good user experience and be resilient enough to develop additional ways of shielding itself from centralizing influences. It’s not going to exist in a vacuum.

 

 

Other Stories in this Series:

Part 1: First There Was Blockchain

Part 2: Delving Deeper into Blockchain

Part 3: Web3 Emerging

Part 4: The Web3 and Blockchain FAQ

 


Where Are Blockchain and Web3 Taking Us? — Part 1: First There Was Blockchain


This is the first story in a four-part series on blockchain’s many facets, including being the primary pillar of the emerging Web3. 


Part 2: Delving Deeper into Blockchain  |  Part 3: Web3 Emerging  |  Part 4: The Web3 and Blockchain FAQ

There has been a lot of buzz about blockchain over the past five years, and yet seemingly not much movement. Long, long ago I concluded that the amount of truth to the reported value of a new technology was inversely proportional to the level of din its hype made. But as with so much else about blockchain, it defies conventional wisdom. Blockchain is a bigger deal than is generally realized.

 

Basic Blockchain Definition and Introduction

 

(Source: Wikipedia): Blockchain is a peer-to-peer (P2P), publicly decentralized ledger (a shared, distributed database) that consists of blocks of data bound together with cryptography. Each block contains a cryptographic hash of the previous block, a timestamp and transaction data. Because each block contains information from the previous block, they effectively form a chain – hence the name blockchain.

 

Blockchain transactions resist being altered once they are recorded because the data in any given block cannot be altered retroactively without altering all subsequent blocks that duplicate that data. As a P2P publicly distributed ledger, nodes collectively adhere to a consensus algorithm protocol to add and validate new transaction blocks.
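As a minimal illustration of the chained-hash structure described above, here is a toy Python sketch. It is not a production blockchain (there is no consensus protocol, signing or P2P replication here), but it shows why altering one block invalidates every block that follows it.

```python
import hashlib
import json
import time

# Toy hash-chained ledger: each block stores the hash of its predecessor,
# so altering an old block breaks the links for everything after it.
# Illustration only; real blockchains add consensus, signatures and replication.

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"timestamp": time.time(),
                  "transactions": transactions,
                  "prev_hash": prev_hash})

def is_valid(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))


chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
add_block(chain, ["carol pays dave 1"])
print(is_valid(chain))   # True

chain[1]["transactions"] = ["bob pays carol 200"]   # tamper with history
print(is_valid(chain))   # False: the chain no longer links up
```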

 

“A blockchain is a system of recording information in a way that makes it difficult or impossible to change, cheat or hack the system,” said Eric Frazier, senior solutions manager, Supermicro. “It is a digital ledger that is duplicated and distributed to a network of multiple nodes on the blockchain.”

 

Michael Fair, PiKNiK’s chief revenue officer and longtime channel expert added, “In the blockchain, data is immutable. It’s actually sealed within the network, which is monitored by the blockchain 24 x 7 x 365 days a year.”

 

Blockchain was created in 2008 by a person (or group) working under the apparent pseudonym Satoshi Nakamoto. Its original use was to provide a public distributed ledger for the bitcoin cryptocurrency, also created by the same entity. But the true promise of blockchain goes way beyond cryptocurrency. The downside is that blockchain operations are computationally intensive and tend to use lots of power. This issue will be covered in more detail later in the series.

 

Understanding “Decentralized”

 

Decentralization is probably the most important tenet of Web3, and it is at least partially delivered by blockchain. The word has a specific set of meanings, although it has become something of a buzzword, which tends to blur those meanings.

 

Gavin Wood is an Ethereum co-founder, the founder of Polkadot and the person who coined the term Web3 in 2014. Based on comments made by Wood in a January 2022 YouTube video from CNBC International, as well as other sources, decentralized means that no one company’s servers exclusively own a crucial part of the internet. There are two related meanings of decentralized that sometimes get confused:

 

1. In its most basic form, decentralized is about keeping data safe from monopolization by using blockchain and other technologies to make data and content independent. Data in a blockchain is copied to servers all over the world, none of which can change that information unilaterally. There’s no one place where this data exists, and that protects it. Blockchain makes it immutable.

 

2. Decentralized also means what Wood called “political decentralization,” wherein “no one will have the power to turn off content,” the way top execs could (in theory) at companies like Google, Facebook, Amazon, Microsoft and Twitter. Decentralization could potentially kick these and other companies out of the “Your Data” business. A key phrase that relates to this meaning of the term is highly consolidated. How many companies have Google, Amazon, Microsoft and Facebook purchased over the past couple of decades? Google purchased YouTube. Facebook bought Instagram. Microsoft nabbed LinkedIn. But that’s just the tip of the iceberg. Where once there were many companies, now there are a few very large companies exerting control over the internet. That’s what highly consolidated refers to. It’s a term that’s often used to describe the opposite of decentralized.

 

Blockchain Uses

 

Since 2019 or so, new ideas for blockchain applications have arrived fast and furiously. While many remain plausible theories, others have actually been put into production. If your company’s sector of the marketplace happens to be one of the areas blockchain has been identified with, chances are good that blockchain is at least on your company’s radar.

 

Many organizations are looking to blockchain to rejuvenate their product pipelines. The future of blockchain will very likely be determined by technocrats and developers who harness it to chase profits. In other words, thousands of enterprises are developing blockchain products and services tailored to their own needs, and if they succeed, many others will likely follow.

 

Beyond supporting cryptocurrency, three early uses of blockchain have been:

  • Financial services
  • Government use of blockchain for voting
  • Helping to keep track of supply chains. There’s a synergy in the way they work that makes blockchain and supply chain ideal for one another.

Blockchain has quickly spread to several areas of financial services like tokenizing assets and fiat currencies, P2P lending backed by assets, decentralized finance (DeFi) and self-enforcing smart contracts to name a few.

 

Blockchain voting could help put a stop to the corruption surrounding elections. Countries like Sierra Leone and Russia were early to it. But several other countries have tried it – including the U.S.

 

In healthcare, a handful of companies are attempting to revolutionize e-records by developing them on blockchain-based decentralized ledgers instead of storing them away in some company’s database. The medical community is also looking at blockchain as a way to store DNA information.

 

Storage systems are an early and important blockchain application. Companies like PiKNiK offer decentralized blockchain storage on a B2B basis.

 

Other Stories in this Series:

Part 1: First There Was Blockchain

Part 2: Delving Deeper into Blockchain

Part 3: Web3 Emerging

Part 4: The Web3 and Blockchain FAQ

 


Perspective: Don’t Back into Performance-Intensive Computing


To compete in the marketplace, enterprises are increasingly employing performance-intensive tools and applications like machine learning, artificial intelligence, data-driven insights and automation to differentiate their products and services. In doing so, they may be unintentionally backing into performance-intensive computing because these technologies are computationally and/or data intensive.


To compete in the marketplace, enterprises are increasingly employing performance-intensive tools and applications like machine learning, artificial intelligence, data-driven insights and decision-support analytics, technical computing, big data, modeling and simulation, cryptocurrency and other blockchain applications, automation and high-performance computing to differentiate their products and services.

 

In doing so, they may be unintentionally backing into performance-intensive computing because these technologies are computationally and/or data intensive. Without thinking through the compute performance you need as measured against your most demanding workloads – now and at least two years from now – you’re setting yourself up for failure or unnecessary expense. When it comes to performance-intensive computing: plan, don’t dabble.

 

There are questions you should ask before jumping in, too. In the cloud or on-premises? There are pluses and minuses to each. Is your data highly distributed? If so, you’ll need network services that won’t become a bottleneck. There’s a long list of environmental and technology needs that are required to make performance-intensive computing pay off. Among them is making it possible to scale. And, of course, planning and building out your environment in advance of your need is vastly preferable to stumbling into it.

 

The requirement that sometimes gets short shrift is organizational. Ultimately, this is about revealing data with which your company can make strategic decisions. There’s no longer anything mundane about enterprise technology and especially the data it manages. It has become so important that virtually every department in your company affects and is affected by it. If you double down on computational performance, the C-suite needs to be fully represented in how you use that power, not just the approval process. Leaving top leadership, marketing, finance, tax, design, manufacturing, HR or IT out of the picture would be a mistake. And those are just sample company building blocks. You also need measurable, meaningful metrics that will help your people determine the ROI of your efforts. Even so, it’s people who make the leap of faith that turns data into ideas.

 

Finally, if you don’t already have the expertise on staff to learn the ins and outs of this endeavor, hire or contract or enter into a consulting arrangement with smart people who clearly have the chops to do this right. You don’t want to be the company with a rocket ship that no one can fly.

 

So, don’t back into performance-intensive computing. But don’t back out of it either. Being able to take full advantage of your data at scale can play an important role in ensuring the viability of your company going forward.

 


Some Key Drivers behind AMD’s Plans for Future EPYC™ CPUs


A video discussion between Charles Liang, Supermicro CEO, and Dr. Lisa Su, AMD CEO.

 


Higher clock rates, more cores and larger onboard memory caches are some of the traditional areas of improvement for generational CPU upgrades. Performance improvements are almost a given with a new-generation CPU. Increasingly, however, the more difficult challenges for data centers and performance-intensive computing are energy efficiency and managing heat. Energy costs have spiked in many parts of the world, and “performance per watt” is what many companies are looking for. AMD’s 4th-gen EPYC™ CPU runs a little hotter than its predecessor, but its performance gains far outpace the thermal rise, making for much greater performance per watt. It’s a trade-off that makes sense, especially for performance-intensive computing such as HPC and technical computing applications.

In addition to the energy efficiency and heat dissipation concerns, Dr. Su and Mr. Liang discuss the importance of the AMD EPYC™ roadmap. You’ll learn one or two nuances about AMD’s plans. Supermicro is ready with 15 products that leverage Genoa, AMD’s fourth-generation EPYC™ CPU. This video, which runs under 15 minutes and was recorded on November 15, 2022, will bring you up to date on all things AMD EPYC™. Click the link to see the video:

Supermicro & AMD CEOs Video – The Future of Data Center Computing

 

 

 

 


Match CPU Options to Your Apps and Workloads to Maximize Efficiency


The CPU package is configurable at time of purchase with various options that you can match up to the specific characteristics of your workloads. Ask yourself the three questions the story poses.


In a previous post, Performance-Intensive Computing explored the benefits of making your applications and workloads more parallel. Chief among the payoffs is the ability to take advantage of the latest innovations in performance-intensive computing.

 

Although it isn’t strictly a parallel approach, the CPU package is configurable at the time of purchase with various options that you can match up to the specific characteristics of your workloads. The goal of this story is to outline how to match up the appropriate features so you can purchase the best processors for your particular application collection. For starters, you should be asking yourself these three questions:

 

Question 1. Does your application require a great deal of memory and storage? Memory-bound apps are typically found when an application has to manipulate a large amount of data. To alleviate potential bottlenecks, purchase a CPU with the largest possible onboard caches to avoid swapping data from storage. Apps such as Reveal and others used in the oil and gas industry will typically require large onboard CPU caches to help prevent memory bottlenecks as data moves in and out of the processor.

 

Question 2. Do you have the right amount and type of storage for your data requirements? Storage has a lot of different parameters and how it interacts with the processor and your application isn’t one-size-fits-all. Performance-Intensive Computing has previously written about specialized file systems such as the one developed and sold by WekaIO that can aid in onboarding and manipulating large data collections.

 

Question 3. Does your application spend a lot of time communicating across networks, or is your application bound by the limits of your processor? Either situation might mean you need CPUs with more cores and/or higher clock speeds. This is the case, for example, with molecular dynamics apps such as GROMACS and LAMMPS. These situations might call for parts such as AMD’s Threadripper.

 

As you can see, figuring out the right kind of CPU – and its supporting chipsets – is a lot more involved than just purchasing the highest clock speed and largest number of cores. Knowing your data and applications will guide you to buying CPU hardware that makes your business more efficient.
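As a modest first step toward that kind of knowledge, it can help to measure how a representative job actually behaves before you spec the hardware. The sketch below uses only the Python standard library (time and tracemalloc); the sample job and the rule-of-thumb interpretation at the end are placeholders you would replace with your own workloads and thresholds.

```python
import time
import tracemalloc

# Rough first-pass characterization of a workload: wall-clock time vs. peak memory.
# The sample job and the interpretation below are illustrative placeholders.

def characterize(workload, *args):
    tracemalloc.start()
    start = time.perf_counter()
    workload(*args)
    elapsed = time.perf_counter() - start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak_bytes / 2**20   # seconds, MiB


def sample_job(n):
    data = [i * i for i in range(n)]      # memory-heavy allocation
    return sum(x % 7 for x in data)       # compute pass over the data


seconds, peak_mib = characterize(sample_job, 2_000_000)
print(f"runtime {seconds:.2f}s, peak memory {peak_mib:.1f} MiB")
# Large peak memory relative to runtime hints at memory- or cache-bound behavior
# (favor big onboard caches and memory bandwidth); long runtimes with modest
# memory hint at compute-bound behavior (favor more cores or higher clocks).
```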


Choosing the Right AI Infrastructure for Your Needs


AI architecture must scale effectively without sacrificing cost efficiency. One size does not fit all.


Building an agile, cost-effective environment that delivers on a company’s present and long-term AI strategies can be a challenge, and the impact of decisions made around that architecture will have an outsized effect on performance.

 

“AI capabilities are probably going to be 10%-15% of the entire infrastructure,” says Ashish Nadkarni, IDC group vice president and general manager, infrastructure systems, platforms and technologies. “But the amount the business relies on that infrastructure, the dependence on it, will be much higher. If that 15% doesn’t behave in the way that is expected, the business will suffer.”

 

Experts like Nadkarni note that companies can, and should, avail themselves of cloud-based options to test and ramp up AI capabilities. But as workloads increase over time, the costs associated with cloud computing can rise significantly, especially when workloads scale or the enterprise expands its usage, making on-premises architecture a valid alternative worth consideration.

 

No matter the industry, to build a robust and effective AI infrastructure, companies must first accurately diagnose their AI needs. What business challenges are they trying to solve? What forms of high-performance computing power can deliver solutions? What type of training is required to deliver the right insights from data? And what’s the most cost-effective way for a company to support AI workloads at scale and over time? Cloud may be the answer to get started, but for many companies on-prem solutions are viable alternatives.

 

“It’s a matter of finding the right configuration that delivers optimal performance for [your] workloads,” says Michael McNerney, vice president of marketing and network security at Supermicro, a leading provider of AI-capable, high-performance servers, management software and storage systems. “How big is your natural language processing or computer vision model, for example? Do you need a massive cluster for AI training? How critical is it to have the lowest latency possible for your AI inferencing? If the enterprise does not have massive models, does it move down the stack into smaller models to optimize infrastructure and cost on the AI side as well as in compute, storage and networking?”

 

Get perspective on these and other questions about selecting the right AI infrastructure for your business in the Nov. 20, 2022, Wall Street Journal paid program article:

 

Investing in Infrastructure

 


Supermicro H13 Servers Maximize Your High-Performance Data Center



The modern data center must be both highly performant and energy efficient. Massive amounts of data are generated at the edge and then analyzed in the data center. New CPU technologies are constantly being developed that can analyze data, determine the best course of action, and speed up the time to understand the world around us and make better decisions.

With the digital transformation continuing, a wide range of data acquisition, storage and computing systems continue to evolve with each new CPU generation. The latest CPU generations continue to innovate within their core computational units and in the technology used to communicate with memory, storage devices, networking and accelerators.

Servers and, by extension, the CPUs within those servers form a continuum of computing and I/O power. The combination of cores, clock rates, memory access, path width and performance helps match specific servers to specific workloads. In addition, the server that houses the CPUs may take different form factors and be used where the environment has airflow or power restrictions. The key for a server manufacturer to be able to address a wide range of applications is to use a building-block approach to designing new systems. In this way, a range of systems can be released simultaneously in many form factors, each tailored to its operating environment.

The new H13 Supermicro product line, based on 4th Generation AMD EPYC™ CPUs, supports a broad spectrum of workloads and excels at helping a business achieve its goals.

Get speeds, feeds and other specs on Supermicro’s latest line-up of servers


Manage Your HPC Resources with Supermicro's SuperCloud Composer



Today’s data center has numerous challenges: provisioning hardware and cloud workloads, balancing the needs of performance-intensive applications across compute, storage and network resources, and having a consistent monitoring and analytics framework to feed intelligent systems management. Plus, you may have the need to deploy or re-deploy all these resources as needs shift, moment to moment.

Supermicro has created its own tool, called SuperCloud Composer (SCC), to assist with these decisions and to monitor and manage this broad IT portfolio. It combines a standardized web-based interface using an Open Distributed Infrastructure Management interface with a unified dashboard based on the Redfish message bus and service agents.
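For readers unfamiliar with Redfish, the sketch below shows what a generic query against the DMTF Redfish standard looks like in Python. This is only a general illustration of the kind of telemetry interface a Redfish-based dashboard sits on top of, not SuperCloud Composer’s own API; the BMC address and credentials are placeholders.

```python
import requests

# Generic DMTF Redfish query (illustration only; not SuperCloud Composer's API).
# Replace the BMC address and credentials with your own. verify=False is used here
# only because many BMCs ship with self-signed certificates.
BMC = "https://10.0.0.100"
AUTH = ("admin", "password")

# Walk the Systems collection from the Redfish service root and print basic health.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems.get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}",
                          auth=AUTH, verify=False).json()
    print(system.get("Model"),
          system.get("PowerState"),
          system.get("Status", {}).get("Health"))
```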

SCC can track the various resources and assign them to different pools with its own predictive analytics and telemetry. It delivers a single intelligent management solution that covers both existing on-premises IT equipment as well as a more software-defined cloud collection. Additional details can be found in this SuperCloud Composer white paper.

SuperCloud Composer makes use of a cluster-level PCIe network using the FabreX software from GigaIO Networks. It has the capability to flexibly scale storage systems up and out while using the lowest-latency paths available.

It also supports Weka.IO cluster members, which can be deployed across multiple systems simultaneously. See our story The Perfect Combination: The Weka Next-Gen File System, Supermicro A+ Servers and AMD EPYC™ CPUs.

SCC can create automated installation playbooks in Ansible, including a software boot image repository that can quickly deploy new images across the server infrastructure. It has a fast-deploy feature that allows a new image to be deployed within seconds.

SuperCloud Composer offers a robust analytics engine that collects historical and up-to-date analytics stored in an indexed database within its framework. This data can produce a variety of charts, graphs and tables so that users can better visualize what is happening with their server resources. Each end user is provided with analytics-capable charting covering IOPS, network, telemetry, thermal, power, composed node status, storage allocation and system status.

Last but not least, SCC also has both network provisioning and storage fabric provisioning features where build plans are pushed to data or fabric switches either as single-threaded or multithreaded operations, such that multiple switches can be updated simultaneously by shared or unique build plan templates.

For more information, watch this short SCC explainer video. Or schedule an online demo of SCC and request a free 90-day trial of the software.


Supermicro Debuts New H13 Server Solutions Using AMD’s 4th-Gen EPYC™ CPUs



Last week, Supermicro announced its new H13 A+ server solutions, featuring the latest fourth-generation AMD EPYC™ processors. The new AMD “Genoa”-class Supermicro A+ configurations will be able to handle up to 96 Zen4 CPU cores running up to 6TB of 12-channel DDR5 memory, using a separate channel for each stick of memory.

The various systems are designed to support the highest performance-intensive computing workloads over a wide range of storage, networking and I/O configuration options. They also feature tool-less chassis and hot-swappable modules for easier access to internal parts, as well as I/O drive trays on both front and rear panels. All the new equipment can handle a range of power conditions, including 120 to 480 V AC operation and 48 V DC power attachments.

The new H13 systems have been optimized for AI, machine learning and complex calculation tasks for data analytics and other kinds of HPC applications. Supermicro’s 4th-Gen AMD EPYC™ systems employ the latest PCIe 5.0 connectivity throughout their layouts to speed data flows and provide high network and cluster internetworking performance. At the heart of these systems is the AMD EPYC™ 9004 series CPUs, which were also announced last week.

The Supermicro H13 GrandTwin® systems can handle up to six SATA3 or NVMe drive bays, which are hot-pluggable. The H13 CloudDC systems come in 1U and 2U chassis that are designed for cloud-based workloads and data centers that can handle up to 12 hot-swappable drive bays and support the Open Compute Platform I/O modules. Supermicro has also announced its H13 Hyper configuration for dual-socketed systems. All of the twin-socket server configurations support 160 PCIe 5.0 data lanes.

There are several GPU-intensive configurations for another series of both 4U and 8U servers that can support up to 10 GPU PCIe accelerator cards, including the latest graphics processors from AMD and Nvidia. The 4U family of servers supports both AMD Infinity Fabric Link and NVIDIA NVLink Bridge technologies, so users can choose the right balance of computation, acceleration, I/O and local storage specifications.

To get a deep dive on H13 products, including speeds, feeds and specs, download this whitepaper from the Supermicro site: Supermicro H13 Servers Enable High-Performance Data Centers.
