Research Roundup: AI boosts project management & supply chains, HR woes, SMB supplier overload

Catch up on the latest IT market intelligence from leading researchers.

Artificial intelligence is boosting both project management and supply chains. Cybersecurity spending is on a tear. And small and midsize businesses are struggling with more suppliers than employees.

That’s some of the latest IT intelligence from leading industry watchers. And here’s your research roundup.

AI for PM 

What’s artificial intelligence good for? One area is project management.

In a new survey, nearly two-thirds of project managers (63%) reported improved productivity and efficiency with AI integration.

The survey was conducted by Capterra, an online marketplace for software and services. As part of a larger survey, the company polled 2,500 project managers in 12 countries.

Nearly half the respondents (46%) said they use AI in their project management tools. Capterra then dug deeper with this second group—totaling 1,153 project managers—to learn what kinds of benefits they’re enjoying with AI.

Among the findings:

  • Over half the AI-using project managers (54%) said they use the technology for risk management. That’s the top use case reported.
  • Project managers plan to increase their AI spending by an average of 36%.
  • Nine in 10 project managers (90%) said their AI investments earned a positive return in the last 12 months.
  • Nearly two-thirds of respondents (63%) reported improved productivity as a result of using AI.
  • Looking ahead, respondents expect the areas of greatest impact from AI to be task automation, predictive analytics and project planning.

AI for Supply Chains, Too

A new report from consulting firm Accenture finds that the most mature supply chains are 23% more profitable than others. These supply-chain leaders are also six times more likely than others to use AI and Generative AI widely.

To figure this out, Accenture analyzed nearly 1,150 companies in 15 countries and 10 industries. Accenture then identified the 10% of companies that scored highest on its supply-chain maturity scale.

This scale was based on the degree to which an organization uses GenAI, advanced machine learning and other new technologies for autonomous decision-making, advanced simulations and continuous improvement. The more an organization does this, the higher its score.

Accenture also found that supply-chain leaders achieved an average profit margin of 11.8%, compared with an average margin of 9.6% among the others. (That’s the 23% profit gain mentioned earlier.) The leaders also delivered 15% better returns to shareholders: 8.5% vs. 7.4% for others.
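
For anyone checking the math, the 23% figure falls straight out of those two margins. Here’s a quick back-of-envelope sketch (both margin figures come from the Accenture report above):

```python
# Back-of-envelope check on Accenture's "23% more profitable" claim,
# using the average profit margins reported above.
leader_margin = 11.8   # supply-chain leaders' average profit margin (%)
others_margin = 9.6    # everyone else's average profit margin (%)

relative_gain = (leader_margin - others_margin) / others_margin * 100
print(f"Relative profit advantage: {relative_gain:.1f}%")  # ~22.9%, i.e., roughly 23%
```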

HR: Help Wanted 

If solving customer pain points is high on your agenda—and it should be—then here’s a new pain point to consider: Fewer than 1 in 4 human resources (HR) functions say they’re getting full business value from their HR technology.

In other words, something like 75% of HR executives could use some IT help. That’s a lot of business.

The assessment comes from research and analysis firm Gartner, based on its survey of 85 HR leaders conducted earlier this year. Among Gartner’s findings:

  • Only about 1 in 3 HR executives (35%) feel confident that their approach to HR technology helps to achieve their organization’s business objectives.
  • Two out of three HR executives believe their HR function’s effectiveness will be hurt if they don’t improve their technology.

Employees are unhappy with HR technology, too. Earlier this year, Gartner also surveyed more than 1,200 employees. Nearly 7 in 10 reported experiencing at least one barrier when interacting with HR technology over the previous 12 months.

Cybersecurity’s Big Spend

Looking for a growth market? Don’t overlook cybersecurity.

Last year, worldwide spending on cybersecurity products totaled $106.8 billion. That’s a lot of money. But even better, it marked a 15% increase over the previous year’s spending, according to market watcher IDC.

Looking ahead, IDC expects this double-digit growth rate to continue for at least the next five years. By 2028, IDC predicts, worldwide spending on cybersecurity products will reach $200 billion—nearly double what was spent in 2023.
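
For the curious, IDC’s own numbers imply the growth rate. A minimal sketch of the arithmetic (both spending figures are from the forecast above):

```python
# Implied compound annual growth rate (CAGR) from IDC's forecast:
# $106.8 billion spent in 2023, rising to a predicted $200 billion by 2028.
spend_2023 = 106.8  # $ billions
spend_2028 = 200.0  # $ billions
years = 5

cagr = (spend_2028 / spend_2023) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # ~13.4% -- comfortably double-digit
```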

By category, the biggest cybersecurity spending last year went to network security: $27.4 billion. After that came endpoint security ($21.6 billion last year) and security analytics ($20 billion), IDC says.

Why such strong spending? In part because cybersecurity is now a board-level topic.

“Cyber risk,” says Frank Dickson, head of IDC’s security and trust research, “is business risk.”

SMBs: Too Many Suppliers

It’s not easy standing out as a supplier to small and midsize business customers. A new survey finds the average SMB has nine times more suppliers than it does employees—and actually uses only about 1 in 4 of those suppliers.

The survey, conducted by spend-management system supplier Spendesk, focused on customers in Europe. (Which makes sense, as Spendesk is headquartered in Paris.) Spendesk examined 4.7 million suppliers used by a sample of its 5,000 customers in the UK, France, Germany and Spain.

Keeping many suppliers while using only a few of them? That’s not only inefficient, but also costly. Spendesk estimates that its SMB customers could be collectively losing some $1.24 billion in wasted time and management costs.

And there’s more at stake, too. A recent study by management consultants McKinsey & Co. finds that small and midsize organizations—those with anywhere from 1 to 200 employees—are actually big business.

By McKinsey’s reckoning, SMBs account for more than 90% of all businesses by number … roughly half the global GDP … and more than two-thirds of all business jobs.

Fun fact: Nearly 1 in 5 of the largest businesses originally started as small businesses.

HBM: Your memory solution for AI & HPC

High-bandwidth memory shortens the information commute to keep pace with today’s powerful GPUs.

As AI powered by GPUs transforms computing, conventional DDR memory can’t keep up.

The solution? High-bandwidth memory (HBM).

HBM is a memory chip technology that essentially shortens the information commute. It does this using ultra-wide communication lanes.

An HBM device contains vertically stacked memory chips. They’re interconnected by microscopic wires known as through-silicon vias, or TSVs for short.

HBM also provides more bandwidth per watt. And, with a smaller footprint, the technology can also save valuable data-center space.

Here’s how: A single HBM stack can contain up to eight DRAM modules, with each module connected by two channels. This makes an HBM implementation of just four chips roughly equivalent to 30 DDR modules, while occupying a fraction of the space.

All this makes HBM ideal for workloads that utilize AI and machine learning, HPC, advanced graphics and data analytics.

Latest & Greatest

The latest iteration, HBM3, was introduced in 2022, and it’s now finding wide application in market-ready systems.

Compared with the previous version, HBM3 adds several enhancements:

  • Higher bandwidth: Up to 819 GB/sec., up from HBM2’s max of 460 GB/sec.
  • More memory capacity: 24GB per stack, up from HBM2’s 8GB
  • Improved power efficiency: Delivering more data throughput per watt
  • Reduced form factor: Thanks to a more compact design
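
Curious where that 819 GB/sec. figure comes from? The sketch below derives it from HBM3’s standard interface width and per-pin data rate, then compares it with a conventional DDR5 module. (The interface widths and data rates are published spec figures, not numbers from this article, and the DDR5-4800 DIMM is our assumption for comparison.)

```python
# How an HBM3 stack reaches ~819 GB/sec., vs. a conventional DDR5 DIMM.
# Interface widths and data rates below are standard published figures,
# assumed here for illustration.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth = bus width x per-pin data rate, converted to bytes/sec."""
    return bus_width_bits * data_rate_gbps / 8

hbm3_stack = peak_bandwidth_gbs(1024, 6.4)  # ultra-wide 1,024-bit interface
ddr5_dimm = peak_bandwidth_gbs(64, 4.8)     # typical DDR5-4800 module

print(f"HBM3 stack:     {hbm3_stack:6.1f} GB/sec.")        # ~819.2
print(f"DDR5-4800 DIMM: {ddr5_dimm:6.1f} GB/sec.")         # ~38.4
print(f"Per-device ratio: {hbm3_stack / ddr5_dimm:.0f}x")  # ~21x
```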

However, it’s not all sunshine and rainbows. For one, HBM-equipped systems are more expensive than those fitted out with traditional memory solutions.

Also, HBM stacks generate considerable heat. Advanced cooling systems are often needed, adding further complexity and cost.

Compatibility is yet another challenge. Systems must be designed or adapted to HBM3’s unique interface and form factor.

In the Market

As mentioned above, HBM3 is showing up in new products. That very definitely includes both the AMD Instinct MI300A and MI300X series accelerators.

The AMD Instinct MI300A accelerator combines a CPU and GPU for running HPC/AI workloads. It offers HBM3 as the dedicated memory with a unified capacity of up to 128GB.

Similarly, the AMD Instinct MI300X is a GPU-only accelerator designed for low-latency AI processing. It contains HBM3 as the dedicated memory, but with a higher capacity of up to 192GB.

For both of these AMD Instinct MI300 accelerators, the peak theoretical memory bandwidth is a speedy 5.3TB/sec.

The AMD Instinct MI300X is also the main processor in Supermicro’s AS-8125GS-TNMR2, an H13 8U 8-GPU system. This system offers a huge 1.5TB of HBM3 memory in single-server mode, and an even larger 6.144TB at rack scale.

Are your customers running AI with fast GPUs, only to have their systems held back by conventional memory? Tell them to check out HBM.

Tech Explainer: What is CXL — and how can it help you lower data-center latency?

High latency is a data-center manager’s worst nightmare. Help is here from an open industry standard known as CXL. It works by maintaining “memory coherence” between the CPU’s memory and memory on attached devices.

Latency is a crucial measure for every data center. Because latency measures the time it takes for data to travel from one point in a system or network to another, lower is generally better. A network with high latency has slower response times—not good.

Fortunately, the industry has come up with an open standard that provides a low-latency link between processors, accelerators and memory devices such as RAM and SSD storage. It’s known as Compute Express Link, or CXL for short.

CXL is designed to solve a couple of common problems. Once a processor uses up the capacity of its direct-attached memory, it relies on an SSD. This introduces a three-order-of-magnitude latency gap that can hurt both performance and total cost of ownership (TCO).
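
To make that three-order-of-magnitude gap concrete, here’s a minimal sketch using typical ballpark latencies (the specific numbers are illustrative assumptions, not figures from the article):

```python
# The latency cliff between direct-attached DRAM and an SSD.
# Both figures are typical ballpark values, assumed for illustration.
dram_latency_ns = 100        # direct-attached DDR memory: ~100 nanoseconds
ssd_latency_ns = 100_000     # fast NVMe SSD: ~100 microseconds

gap = ssd_latency_ns / dram_latency_ns
print(f"The SSD is ~{gap:,.0f}x slower than DRAM")  # ~1,000x: three orders of magnitude
# CXL-attached memory is designed to land between these two tiers,
# closing most of that gap.
```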

Another problem is that multicore processors are starving for memory bandwidth. This has become an issue because processors have been scaling in terms of cores and frequencies faster than their main memory channels. The resulting deficit leads to suboptimal use of the additional processor cores, as the cores have to wait for data.

CXL overcomes these issues by introducing a low-latency, memory cache coherent interconnect. CXL works for processors, memory expansion and AI accelerators such as the AMD Instinct MI300 series. The interconnect provides more bandwidth and capacity to processors, which increases efficiency and enables data-center operators to get more value from their existing infrastructure.

Cache-coherence refers to IT architecture in which multiple processor cores share the same memory hierarchy, yet retain individual L1 caches. The CXL interconnect reduces latency and increases performance throughout the data center.

The latest iteration of CXL, version 3.1, adds features to help data centers keep up with high-performance computational workloads. Notable upgrades include new peer-to-peer direct memory access, enhancements to memory pooling, and CXL Fabric improvements.

3 Ways to CXL

Today, there are three main types of CXL devices:

  • Type 1: Any device without integrated local memory. CXL enables these devices to coherently access and cache memory from the host processor.
  • Type 2: These devices include integrated memory, but also share CPU memory. They leverage CXL to enable coherent memory-sharing between the CPU and the CXL device.
  • Type 3: A class of devices designed to augment existing CPU memory. CXL enables the CPU to access external sources for increased bandwidth and reduced latency.
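
Under the hood, each device type corresponds to a different mix of the three CXL protocols: CXL.io, CXL.cache and CXL.mem. The mapping below is a summary sketch drawn from the public CXL specification rather than from this article:

```python
# The three CXL device types and the protocol mix each uses,
# per the public CXL specification (summarized here for illustration).
CXL_DEVICE_TYPES = {
    "Type 1": {  # e.g., a SmartNIC with no local memory
        "protocols": ("CXL.io", "CXL.cache"),
        "role": "coherently caches memory borrowed from the host CPU",
    },
    "Type 2": {  # e.g., an accelerator with its own integrated memory
        "protocols": ("CXL.io", "CXL.cache", "CXL.mem"),
        "role": "shares memory coherently in both directions with the CPU",
    },
    "Type 3": {  # e.g., a memory-expansion module
        "protocols": ("CXL.io", "CXL.mem"),
        "role": "exposes extra memory capacity and bandwidth to the CPU",
    },
}

for device_type, info in CXL_DEVICE_TYPES.items():
    print(f"{device_type}: {' + '.join(info['protocols'])} -- {info['role']}")
```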

Hardware Support

As data-center architectures evolve, more hardware manufacturers are supporting CXL devices. One such example is Supermicro’s All-Flash EDSFF and NVMe servers.

Supermicro’s cutting-edge appliances are optimized for resource-intensive workloads, including data-center infrastructure, data warehousing, hyperscale/hyperconverged and software-defined storage. To facilitate these workloads, Supermicro has included support for up to eight CXL 2.0 devices for advanced memory-pool sharing.

Of course, CXL can be utilized only on server platforms designed to support communication between the CPU, memory and CXL devices. That’s why CXL is built into the 4th gen AMD EPYC server processors.

These AMD EPYC processors include up to 96 ‘Zen 4’ 5nm cores and 32MB of L3 cache per CCD, as well as up to 12 DDR5 channels supporting as much as 12TB of memory.
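
Those 12 channels translate into a lot of aggregate bandwidth for the cores. A rough sketch, assuming DDR5-4800 DIMMs in every channel (the DIMM speed is our assumption, not a figure from the article):

```python
# Peak theoretical memory bandwidth of a 12-channel DDR5 socket,
# assuming DDR5-4800 in every channel (assumption for illustration).
channels = 12
bus_width_bits = 64      # data width per DDR5 channel
data_rate_gtps = 4.8     # DDR5-4800 transfers per second (billions)

peak_gbs = channels * bus_width_bits * data_rate_gtps / 8
print(f"Peak theoretical bandwidth: {peak_gbs:.1f} GB/sec.")  # ~460.8 GB/sec.
```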

CXL memory expansion is built into the AMD EPYC platform. That makes these CPUs ideally suited for advanced AI and GenAI workloads.

Crucially, AMD also includes 256-bit AES-XTS and secure multikey encryption. This enables hypervisors to encrypt address space ranges on CXL-attached memory.

The Near Future of CXL

Like many add-on devices, CXL devices are often connected via the PCI Express (PCIe) bus. However, implementing CXL over PCIe 5.0 in large data centers has some drawbacks.

Chief among them is the way its memory pools remain isolated from each other. This adds latency and hampers significant resource-sharing.

The next generation of PCIe, version 6.0, is coming soon and will offer a solution. CXL over PCIe 6.0 will offer twice the throughput of PCIe 5.0.

The new PCIe standard will also add new memory-sharing functionality within the transaction layer. This will help reduce system latency and improve accelerator performance.

CXL is also enabling the start of disaggregated computing, in which resources that reside in different physical enclosures can be made available to several applications.

Are your customers suffering from too much latency? The solution could be CXL.

At Computex, AMD & Supermicro CEOs describe AI advances you’ll be adopting soon

At Computex Taiwan, Lisa Su of AMD and Charles Liang of Supermicro delivered keynotes that focused on AI, liquid cooling and energy efficiency.

The chief executives of both AMD and Supermicro used their Computex keynote addresses to describe their companies’ AI products and, in the case of AMD, pre-announce important forthcoming products.

Computex 2024 was held this past week in Taipei, Taiwan, with the conference theme of “connecting AI.” Exhibitors included some 1,500 companies from around the world, and keynotes were delivered by some of the IT industry’s top executives.

That included Lisa Su, chairman and CEO of AMD, and Charles Liang, founder and CEO of Supermicro. Here’s some of what they previewed at Computex 2024.

Lisa Su, AMD: Top priority is AI

Su of AMD presented one of the first keynotes of this Computex. Anyone who thought she might discuss topics other than AI was quickly set straight.

“AI is our number one priority,” Su told the crowd. “We’re at the beginning of an incredibly exciting time for the industry as AI transforms virtually every business, improves our quality of life, and reshapes every part of the computing market.”

AMD intends to lead in AI solutions by focusing on three priorities, she added: delivering a broad portfolio of high-performance, energy-efficient compute engines (including CPUs, GPUs and NPUs); enabling an open and developer-friendly ecosystem; and co-innovating with partners.

The latter point was supported during Su’s keynote by brief visits from several partner leaders. They included Pavan Davuluri, corporate VP of Windows devices at Microsoft; Christian Laforte, CTO of Stability AI; and (via a video link) Microsoft CEO Satya Nadella.

Fairly late in Su’s hour-plus keynote, she held up AMD’s forthcoming 5th gen EPYC server processor, codenamed Turin. It’s scheduled to ship by year’s end.

As Su explained, Turin will feature up to 192 cores and 384 threads, up from the current generation’s max of 128 cores and 256 threads. Turin will contain 13 chiplets built with a mix of 3nm and 6nm process technology. Yet it will be available as a drop-in replacement for existing EPYC platforms, Su said.

Turin processors will use AMD’s new ‘Zen 5’ cores, which Su also announced at Computex. She described AMD’s ‘Zen 5’ as “the highest performance and most energy-efficient core we’ve ever built.”

Su also discussed AMD’s MI3xx family of accelerators. The MI300, introduced this past December, has become the fastest ramping product in AMD’s history, she said. Microsoft’s Nadella, during his short presentation, bragged that his company’s cloud was the first to deliver general availability of virtual machines using the AMD MI300X accelerator.

Looking ahead, Su discussed three forthcoming Instinct accelerators on AMD’s road map: The MI325, MI350 and MI400 series.

The AMD Instinct MI325, set to launch later this year, will feature more memory (up to 288GB) and higher memory bandwidth (6TB/sec.) than the MI300. But the new component will still use the same infrastructure as the MI300, making it easy for customers to upgrade.

The next series, MI350, is set for launch next year, Su said. It will then use AMD’s new CDNA4 architecture, which Su said “will deliver the biggest generational AI leap in our history.” The MI350 will be built on 3nm process technology, but will still offer a drop-in upgrade from both the MI300 and MI325.

The last of the three, the MI400 series, is set to start shipping in 2026. That’s also when AMD will deliver a new generation of CDNA, according to Su.

Both the MI325 and MI350 series will leverage the same industry standard universal baseboard OCP server design used by MI300. Su added: “What that means is, our customers can adopt this new technology very quickly.”

Charles Liang, Supermicro: Liquid cooling is the AI future

Liang dedicated his Computex keynote to the topics of liquid cooling and “green” computing.

“Together with our partners,” he said, “we are on a mission to build the most sustainable data centers.”

Liang predicted a big change from the present, where direct liquid cooling (DLC) has a less-than-1% share of the data center market. Supermicro is targeting 15% of new data center deployments in the next year, and Liang hopes that will hit 30% in the next two years.

Driving this shift, he added, are several trends. One, of course, is the huge uptake of AI, which requires high-capacity computing.

Another is the improvement of DLC technology itself. Where DLC system installations used to take 4 to 12 months, Supermicro is now doing them in just 2 to 4 weeks, Liang said. Where liquid cooling used to be quite expensive, now—when TCO and energy savings are factored in—“DLC can be free, with a big bonus,” he said. And where DLC systems used to be unreliable, now they are high performing with excellent uptime.

Supermicro now has capacity to ship 1,000 rack scale solutions with liquid cooling per month, Liang said. In fact, the company is shipping over 50 liquid-cooled racks per day, with installations typically completed within just 2 weeks.

“DLC,” Liang said, “is the wave of the future.”

Supermicro intros MicroCloud server powered by AMD EPYC 4004 CPUs

Supermicro’s latest 3U server, the Supermicro MicroCloud, supports up to 10 nodes of AMD’s entry-level server processor. With this server and its high-density enclosure, Supermicro offers an efficient and affordable solution for SMBs, corporate departments and branches, and hosted IT service providers.

Supermicro’s latest H13 server is powered by the AMD EPYC 4004 series processors introduced last month. Designated the Supermicro MicroCloud AS-3015MR-H10TNR, this server is designed to run cloud-native workloads for small and midsized businesses (SMBs), corporate departments and branch offices, and hosted IT service providers.

Intended workloads for the new server include web hosting, cloud gaming and content-delivery networks.

10 Nodes, 3U Form

This new Supermicro MicroCloud server supports up to 10 nodes in a 3U form factor. In addition, as many as 16 enclosures can be loaded into a single rack, providing a total of 160 individual nodes.

Supermicro says customers using the new MicroCloud server can increase their computing density by 3.3X compared with industry-standard 1U rackmount servers at rack scale.
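
That 3.3X figure checks out with simple arithmetic, shown in the sketch below (the 48U of usable rack space is our assumption):

```python
# Checking Supermicro's 3.3X rack-scale density claim.
# Assumes 48U of usable rack space, filled either with MicroCloud
# enclosures or with conventional 1U single-node servers.
enclosures = 16          # MicroCloud enclosures per rack
nodes_per_enclosure = 10
enclosure_height_u = 3

microcloud_nodes = enclosures * nodes_per_enclosure   # 160 nodes
rack_units = enclosures * enclosure_height_u          # 48U consumed
one_u_nodes = rack_units                              # 1 node per 1U server

print(f"Density gain: {microcloud_nodes / one_u_nodes:.1f}x")  # ~3.3x
```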

The new server also supports high-performance peripherals with either two PCIe 4.0 x8 add-on cards or one x16 full-height, full-width GPU accelerator. System memory maxes out at 192GB. And the unit is air-cooled by five heavy-duty fans.

4004 for SMBs

The AMD EPYC 4004 series processors bring an entry-level family of CPUs to AMD’s EPYC line. They’re designed for use in entry-level servers used by organizations that typically don’t require either hosting on the public cloud or more powerful server processors.

The new AMD EPYC 4004 series is initially offered as eight SKUs, all designed for use in single-processor systems. They offer from 8 to 16 ‘Zen 4’ cores with up to 32 threads; up to 128MB of L3 cache; 2 DDR5 channels with a memory capacity of up to 192GB; and 28 lanes of PCIe 5 connectivity.

More Than One

Supermicro is also using the new AMD EPYC 4004 series processors to power three other server lines.

That includes a 1U server designed for web hosting and SMB applications. A 2U server aimed specifically at companies in financial services. And towers intended for content creation, entry-level servers, workstations and even desktops.

All are designed to be high-density, efficient and affordable. Isn’t that what your SMB customers are looking for?

Meet AMD's new Alveo V80 Compute Accelerator Card

AMD’s new Alveo V80 Compute Accelerator Card has been designed to overcome performance bottlenecks in compute-intensive workloads that include HPC, data analytics and network security.

Are you or your customers looking for an accelerator for memory-bound applications with large data sets that require FPGA hardware adaptability? If so, then check out the new AMD Alveo V80 Compute Accelerator Card.

It was introduced by AMD at ISC High Performance 2024, an event held recently in Hamburg, Germany.

The thinking behind the new component is that for large-scale data processing, raw computational power is only half the equation. You also need lots of memory bandwidth.

Indeed, AMD’s new hardware adaptable accelerator is purpose-built to overcome performance bottlenecks for compute-intensive workloads with large data sets common to HPC, data analytics and network security applications. It’s powered by AMD’s 7nm Versal HBM Series adaptive system-on-chip (SoC).

Substantial gains

AMD says that compared with the previous-generation Alveo U55C, the new Alveo V80 offers up to 2x the memory bandwidth (820 GB/sec.), 2x the PCIe bandwidth, 2x the logic density, and 4x the network bandwidth.

The card also features 4x200G networking, PCIe Gen4 and Gen5 interfaces, and DDR4 DIMM slots for memory expansion.

Appropriate workloads for the new AMD Alveo V80 include HPC, data analytics, FinTech/Blockchain, network security, computational storage, and AI compute.

In addition, the AMD Alveo V80 can scale to hundreds of nodes over Ethernet, creating compute clusters for HPC applications that include genomic sequencing, molecular dynamics and sensor processing.

Developers, too

A production board in a PCIe form factor, the AMD Alveo V80 is designed to offer a faster path to production than designing your own PCIe card.

Indeed, for FPGA developers, the V80 is fully enabled for traditional development via the Alveo Versal Example Design (AVED), which is available on GitHub.

This example design provides an efficient starting point using a pre-built subsystem implemented on the AMD Versal adaptive SoC. More specifically, it targets the new AMD Alveo V80 accelerator.

Supermicro offering

The new AMD accelerator is already shipping in volume, and you can get it from either AMD or an authorized distributor.

In addition, you can get the Alveo V80 already integrated into a partner-provided server.

Supermicro is integrating the new AMD Alveo V80 with its AMD EPYC processor-powered A+ servers. These include the Supermicro AS-4125GS-TNRT, a compact 4U server for deployments where compute density and memory bandwidth are critical.

Early user

AMD says one early customer for the new accelerator card is the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the national research organization of Australia.

CSIRO plans to upgrade an older setup with 420 previous-generation AMD Alveo U55C accelerator cards, replacing them with the new Alveo V80.

Because the new part is so much more powerful than its predecessor, the organization expects to reduce the number of cards it needs by two-thirds. That, in turn, should shrink the data-center footprint required and lower system costs.

If those sound like benefits you and your customers would find attractive, check out the AMD Alveo V80 links below.

Research Roundup: AI edition

Catch up on the latest research and analysis around artificial intelligence.

Generative AI is the No. 1 AI solution being deployed. Three in 4 knowledge workers are already using AI. The supply of workers with AI skills can’t meet the demand. And supply chains can be helped by AI, too.

Here’s your roundup of the latest in AI research and analysis.

GenAI is No. 1

Generative AI isn’t just a good idea, it’s now the No. 1 type of AI solution being deployed.

In a survey recently conducted by research and analysis firm Gartner, more than a quarter of respondents (29%) said they’ve deployed and are now using GenAI.

That was a higher percentage than any other type of AI in the survey, including natural language processing, machine learning and rule-based systems.

The most common way of using GenAI, the survey found, is embedding it in existing applications, such as using Microsoft Copilot for Microsoft 365. This was cited by about 1 in 3 respondents (34%).

Other approaches mentioned by respondents included prompt engineering (cited by 25%), fine-tuning (21%) and using standalone tools such as ChatGPT (19%).

Yet respondents said only about half of their AI projects (48%) make it into production. Even when that happens, it’s slow. Moving an AI project from prototype to production took respondents an average of 8 months.

Other challenges loom, too. Nearly half the respondents (49%) said it’s difficult to estimate and demonstrate an AI project’s value. They also cited a lack of talent and skills (42%), lack of confidence in AI technology (40%) and lack of data (39%).

Gartner conducted the survey in last year’s fourth quarter and released the results earlier this month. In all, valid responses were culled from 644 executives working for organizations in the United States, the UK and Germany.

AI ‘gets real’ at work

Three in 4 knowledge workers (75%) now use AI at work, according to the 2024 Work Trend Index, a joint project of Microsoft and LinkedIn.

Among these users, nearly 8 in 10 (78%) are bringing their own AI tools to work. That’s inspired a new acronym: BYOAI, short for Bring Your Own AI.

“2024 is the year AI at work gets real,” the Work Trend report says.

2024 is also a year of real challenges. Like the Gartner survey, the Work Trend report finds that demonstrating AI’s value can be tough.

In the Microsoft/LinkedIn survey, nearly 8 in 10 leaders agreed that adopting AI is critical to staying competitive. Yet nearly 6 in 10 said they worry about quantifying the technology’s productivity gains. About the same percentage also said their organization lacks an AI vision and plan.

The Work Trend report also highlights the mismatch between AI skills demand and supply. Over half the leaders surveyed (55%) say they’re concerned about having enough AI talent. And nearly two-thirds (65%) say they wouldn’t hire someone who lacked AI skills.

Yet fewer than 4 in 10 users (39%) have received AI training from their company. And only 1 in 4 companies plan to offer AI training this year.

The Work Trend report is based on a mix of sources: a survey of 31,000 people in 31 countries; labor and hiring trends on the LinkedIn site; Microsoft 365 productivity signals; and research with Fortune 500 customers.

AI skills: supply-demand mismatch

The mismatch between AI skills supply and demand was also examined recently by market watcher IDC. It expects that by 2026, 9 of every 10 organizations will be hurt by an overall IT skills shortage. This will lead to delays, quality issues and revenue loss that IDC predicts will collectively cost these organizations $5.5 trillion.

To be sure, AI skills are currently the most in-demand skill for most organizations. The good news, IDC finds, is that more than half of organizations are now using or piloting training for GenAI.

“Getting the right people with the right skills into the right roles has never been more difficult,” says IDC researcher Gina Smith. Her prescription for success: Develop a “culture of learning.”

AI helps supply chains, too

Did you know AI is being used to solve supply-chain problems?

It’s a big issue. Over 8 in 10 global businesses (84%) said they’ve experienced supply-chain disruptions in the last year, finds a survey commissioned by Blue Yonder, a vendor of supply-chain solutions.

In response, supply-chain executives are making strategic investments in AI and sustainability, Blue Yonder finds. Nearly 8 in 10 organizations (79%) said they’ve increased their investments in supply-chain operations. Their 2 top areas of investment were sustainability (cited by 48%) and AI (41%).

The survey also identified the top supply-chain areas for AI investment. They are planning (cited by 56% of those investing in AI), transportation (53%) and order management (50%).

In addition, 8 in 10 respondents to the survey said they’ve implemented GenAI in their supply chains at some level. And more than 90% said GenAI has been effective in optimizing their supply chains and related decisions.

The survey, conducted by an independent research firm with sponsorship by Blue Yonder, was fielded in March, with the results released earlier this month. The survey received responses from more than 600 C-suite and senior executives, all of them employed by businesses or government agencies in the United States, UK and Europe.

AMD intros entry-level server CPUs for SMBs, enterprise branches, hosted service providers

The new AMD EPYC 4004 processors extend the company’s ‘Zen 4’ core architecture into a line of entry-level systems for small and midsized businesses, schools, branch IT and regional providers of hosted IT services.

AMD has just introduced the AMD EPYC 4004 processors, bringing a new entry-level line to its family of 4th gen server processors.

To deliver these new processors, AMD has combined the architecture of its Ryzen 7000 series processors with the packaging of its EPYC line of server processors. The result is a line of CPUs that lowers the entry-level pricing for EPYC-powered servers.

The AMD EPYC 4004 processors are designed for use in entry-level servers and towers, systems that typically retail for $1,500 to $3,000. That’s a price level affordable for most small and medium businesses, enterprise IT branches, public school districts, and regional providers of hosted IT services. It’s even less than the retail price of some high-end CPUs.

Many SMBs can’t afford either hosting on the public cloud or AMD’s more powerful server processors. As a result, they often make do with using PCs as servers. The new AMD processors aim to change that.

There are lots of reasons why a real server offers a better solution, including greater performance and scalability, higher dependability and easier management.

Under the hood

The new AMD EPYC 4004 series is initially offered as eight SKUs, all designed for use in single-processor systems. They offer from 8 to 16 ‘Zen 4’ cores with up to 32 threads; up to 128MB of L3 cache; 2 DDR5 channels with a memory capacity of up to 192GB; and 28 lanes of PCIe 5 connectivity.

Two of the new SKUs—4584PX and 4484PX—offer AMD’s 128MB 3D V-Cache technology. As the name implies, V-Cache is a 3D vertical cache designed to offer greater interconnect density, energy efficiency and per-core performance for cache-hungry applications.

All the new AMD EPYC 4004 processors use AMD’s AM5 socket. That makes them incompatible with AMD’s higher-end EPYC 8004 and EPYC 9004 server processors, which use a different socket.

OEM support

AMD is working with several server OEMs to get systems built around the new EPYC 4004 processors to market quickly. Among these OEMs is Supermicro, which is supporting the new AMD CPUs in select towers and servers.

That includes Supermicro’s H13 MicroCloud system, a high-density, 3U rackmount system for the cloud. It has now been updated with additional performance offered by the AMD EPYC 4004.

Supermicro’s H13 MicroCloud retails for about $10K, making it more expensive than most entry-level servers. But unlike those less-expensive servers, the MicroCloud offers 8 single-processor nodes for applications requiring multiple discrete servers, such as e-commerce sites, code development, cloud gaming and content creation.

AMD says shipments of the new AMD EPYC 4004 Series processors, as well as of OEM systems powered by the new CPUs, are expected to begin during the first week of June. Pre-sales orders of the new processors, AMD adds, have already been strong.

Tech Explainer: Why the Rack is Now the Unit

Today’s rack scale solutions can include just about any standard data center component. They can also save your customers money, time and manpower.

Are your data center customers still installing single servers and storage devices instead of full-rack solutions? If so, they need to step up their game. Today, IT infrastructure management is shifting toward rack scale integrations. Increasingly, the rack is the unit.

A rack scale solution can include just about any standard data center component. A typical build combines servers, storage devices, network switches and other rack products like power-management and cooling systems. Some racks are loaded with the same type of servers, making optimization and maintenance easier.

With many organizations developing and deploying resource-intensive AI-enabled applications, opting for fully integrated turnkey solutions that help them become more productive faster makes sense. Supermicro is at the vanguard of this movement.

The Supermicro team is ready and well-equipped to design, assemble, test, configure and deploy rack scale solutions. These solutions are ideal for modern datacenter workloads, including AI, deep learning, big data and vSAN.

Why rack scale?

Rack scale solutions let your customers bypass the design, construction and testing of individual servers. Instead of forcing customers to spend precious time and money building, integrating and troubleshooting IT infrastructure, rack scale and cluster-level solutions arrive preconfigured and ready to run.

Supermicro advertises plug-and-play designs. That means your customers need only plug in and connect to their networks, power and optional liquid cooling. After that, it’s all about getting more productivity faster.

Deploying rack scale solutions could enable your customers to reduce or redeploy IT staff, help them optimize their multicloud deployments, and lower their environmental impact and operating costs.

Supermicro + AMD processors = lower costs

Every organization wants to save time and money. Your customers may also need to adhere to stringent environmental, social and governance (ESG) policies to reduce power consumption and battle climate change.

Opting for AMD silicon helps increase efficiency and lower costs. Supermicro’s rack scale solutions feature 4th generation AMD EPYC server processors. These CPUs are designed to shrink rack space and reduce power consumption in your customers’ data center.

AMD says its EPYC-series processors can:

  • Run resource-intensive workloads with fewer servers
  • Reduce operational and energy costs
  • Free up precious data center space and power, then re-allocate this capacity for new workloads and services

Combined with a liquid-cooling system, Supermicro’s AMD-powered rack scale solutions can help reduce your customer’s IT operating expenses by more than 40%.

More than just the hardware

The right rack scale solution is about more than just hardware. Your customers also need a well-designed, fully integrated solution that has been tested and certified before it leaves the factory.

Supermicro provides value-added services beyond individual components to create a rack scale solution greater than the sum of its parts.

You and your customers can collaborate with Supermicro product managers to determine the best platform and components. That includes selecting optimum power supplies and assessing network topology architecture and switches.

From there, Supermicro will optimize server, storage and switch placement at rack scale. Experienced hardware and software engineers will design, build and test the system. They’ll also install mission-critical software benchmarked to your customer’s requirements.

Finally, Supermicro performs strenuous burn-in tests and delivers thoroughly tested L12 clusters to your customer’s chosen site. It’s a one-stop shop that empowers your customers to maximize productivity from day one.

Supermicro, Vast collaborate to deliver turnkey AI storage at rack scale

Supermicro and Vast Data are jointly offering an AMD-based turnkey solution that promises to simplify and accelerate AI and data pipelines.

Supermicro and Vast Data are collaborating to deliver a turnkey, full-stack solution for creating and expanding AI deployments.

This joint solution is aimed at hyperscalers, cloud service providers (CSPs) and large, data-centric enterprises in fintech, adtech, media and entertainment, chip design and high-performance computing (HPC).

Applications that can benefit from the new joint offering include enterprise NAS and object storage; high-performance data ingestion; supercomputer data access; scalable data analysis; and scalable data processing.

Vast, founded in 2016, offers a software data platform that enterprises and CSPs use for data-intensive computing. The platform is based on a distributed systems architecture, called DASE (Disaggregated, Shared Everything), that allows a system to run read and write operations at any scale. Vast’s customers include Pixar, Verizon and Zoom.

By collaborating with Supermicro, Vast hopes to extend its market. Currently, Vast sells to infrastructure providers at a variety of scales. Some of its largest customers have built 400 petabyte storage systems, and a few are even discussing systems that would store up to 2 exabytes, according to John Mao, Vast’s VP of technology alliances.

Supermicro and Vast have engaged with many of the same CSPs separately, supporting various parts of the solution. By formalizing this collaboration, they hope to extend their reach to new customers while increasing their sell-through to current customers.

Vast is also looking to the Supermicro alliance to expand its global reach. While most of Vast’s customers today are U.S.-based, Supermicro operates in over 100 countries worldwide. Supermicro also has the infrastructure to integrate, test and ship 5,000 fully populated racks per month from its manufacturing plants in California, Netherlands, Malaysia and Taiwan.

There’s also a big difference in size. Where privately held Vast has about 800 employees, publicly traded Supermicro has more than 5,100.

Rack solution

Now Vast and Supermicro have developed a new converged system using Supermicro’s Hyper A+ servers with AMD EPYC 9004 processors. The solution combines Vast’s 2 separate server types in a single system.

This converged system is well suited to large service providers, where the typical Supermicro-powered Vast rack configuration will start at about 2PB, Mao adds.

Rack-scale configurations can cut costs by eliminating the need for single-box redundancy. This converged design makes the system more scalable and more cost-efficient.

Under the hood

One highlight of the joint project: It puts Vast’s DASE architecture on Supermicro’s industry-standard servers. Each server will have both the compute and storage functions of a Vast cluster.

At the same time, the architecture is disaggregated via a high-speed Ethernet NVMe fabric. This allows each node to access all drives in the cluster.

The Vast platform architecture is built from units the company calls EBoxes. Each EBox contains 2 kinds of storage servers in a container environment: CNodes (short for Compute Nodes) and DNodes (short for Data Nodes). In a typical EBox, one CNode interfaces with client applications and writes directly to two DNode containers.
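
Here’s a toy sketch of that EBox topology, just to make it concrete. (The class and method names are our own hypothetical shorthand, not Vast’s software interfaces.)

```python
# A toy model of the EBox layout described above: one client-facing
# CNode writing directly to two DNode containers. Names are hypothetical
# shorthand for illustration, not Vast's actual interfaces.
from dataclasses import dataclass, field

@dataclass
class DNode:
    """Data Node: owns the solid-state storage media."""
    name: str

@dataclass
class CNode:
    """Compute Node: interfaces with client applications."""
    name: str
    targets: list = field(default_factory=list)

    def write(self, data: bytes) -> None:
        # In the DASE design, any CNode can reach all drives over the
        # NVMe fabric; within an EBox it writes directly to two DNodes.
        for dnode in self.targets:
            print(f"{self.name} -> {dnode.name}: {len(data)} bytes")

ebox_cnode = CNode("cnode-1", [DNode("dnode-1"), DNode("dnode-2")])
ebox_cnode.write(b"example block")
```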

In this configuration, Supermicro’s storage servers can act as a hardware building block to scale Vast to hundreds of petabytes. It supports Vast’s requirement for multiple tiers of solid-state storage media, an approach that’s unique in the industry.

CPU to GPU

At the NAB Show, held recently in Las Vegas, Supermicro’s demos included storage servers, each powered by a single-socket AMD EPYC 9004 Series processor.

With up to 128 PCIe Gen 5 lanes, the AMD processor empowers the server to connect more SSDs via NVMe with a single CPU. The Supermicro storage server also lets users move data directly from storage to GPU memory, supporting Nvidia’s GPUDirect Storage protocol, which essentially bypasses the GPU cluster’s CPU by using RDMA.

If you or your customers are interested in the new Vast solution, get in touch with your local Supermicro sales rep or channel partner. Under the terms of the new partnership, Supermicro is acting as a Vast integrator and OEM. It’s also Vast’s only rack-scale partner.
