How Ahrefs speeds SEO services with huge compute, memory & storage

Ahrefs, a supplier of search engine optimization tools, needed more robust tech to serve its tens of thousands of customers and crawl billions of web pages daily. The solution: More than 600 Supermicro Hyper servers powered by AMD processors and loaded with huge memory and storage.

Wondering how to satisfy customers who need big—really big—compute and storage? Take a tip from Ahrefs Ltd.

This company, based in Singapore, is a 10-year-old provider of search engine optimization (SEO) tools.

Ahrefs has a web crawler that processes up to 8 billion pages a day. That makes Ahrefs one of the world’s biggest web crawlers, up there with Google and Bing, according to internet hub Cloudflare Radar.

What’s more, Ahrefs’ business has been booming. The company now has tens of thousands of users.

That’s good news. But it also meant that to serve these customers, Ahrefs needed more compute power and storage capacity. And not just a little more. A lot.

Ahrefs also realized that its current generation of servers and CPUs couldn’t meet this rising demand. Instead, the company needed something new and more powerful.

Gearing up

For Ahrefs, that something new is its recent order of more than 600 Supermicro servers. Each system is equipped with dual 4th Gen AMD EPYC 9004 Series processors, a whopping 1.5 TB of DDR5 memory, and a massive 120+ TB of storage.

More specifically, Ahrefs selected Supermicro’s AS-2125HS-TNR servers. They’re powered by dual AMD EPYC 9554 processors, each with 64 cores and 128 threads, running at a base clock speed of 3.1 GHz and an all-core boost speed of 3.75 GHz.

For Ahrefs’ configuration, each Supermicro server also contains eight 15.3 TB NVMe SSDs, for a total of 122 TB of storage. Each server also communicates with the Ahrefs data network via two 100 Gbps ports.

Did it work?

Yes. Ahrefs’ response times got faster, even as its volume increased. The company can now offer more services to more customers. And that means more revenue.

Ahrefs’ founder and CEO, Dmitry Gerasimenko, puts it this way: “Supermicro’s AMD-based servers were an ideal fit for our business.”

How about you? Have customers who need really big compute and storage? Tell them about Ahrefs.

 

How to help your customers invest in AI infrastructure

The right AI infrastructure can help your customers turn data into actionable information. But building and scaling that infrastructure can be challenging. Find out why—and how you can make it easier.

Get smarter about helping your customers create an AI infrastructure that turns their data into actionable information.

A new Supermicro white paper, Investing in AI Infrastructure, shows you how.

As the paper points out, creating an AI infrastructure is far from easy.

For one, there’s the risk of underinvesting. Market watcher IDC estimates that AI will soon represent 10% to 15% of the typical organization’s total IT infrastructure. Organizations that fall short here could also fall short on delivering critical information to the business.

Sure, your customers could use cloud-based AI to test and ramp up. But cloud costs can rise fast. As The Wall Street Journal recently reported, some CIOs have even established internal teams to oversee and control their cloud spending. That makes an on-prem AI data center a viable option.

“Every time you run a job on the cloud, you’re paying for it,” says Ashish Nadkarni, general manager of infrastructure systems, platforms and technologies at IDC. “Whereas on-premises, once you buy the infrastructure components, you can run applications multiple times.”

Some of those cloud costs come from data-transfer fees. First, data needs to be transferred into a cloud-based AI system; this is known as ingress. And once the AI’s work is done, you’ll want to transfer the new data somewhere else for storage or additional processing, a process known as egress.

Cloud providers typically charge 5 to 20 cents per gigabyte of egress. For casual users, that may be no big deal. But for an enterprise using massive amounts of AI data, it can add up quickly.
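
That arithmetic is easy to sketch. Here’s a quick back-of-the-envelope calculation in Python, using the 5-to-20-cents-per-GB range cited above; the data volumes are hypothetical:

```python
# Back-of-the-envelope cloud egress cost estimate.
# Rates reflect the 5-to-20-cents-per-GB range cited above.

def egress_cost(data_tb: float, rate_per_gb: float) -> float:
    """Return the egress fee in dollars for moving data_tb terabytes out."""
    return data_tb * 1_000 * rate_per_gb  # 1 TB = 1,000 GB (decimal)

for tb in (1, 100, 5_000):  # casual user -> enterprise AI pipeline
    low, high = egress_cost(tb, 0.05), egress_cost(tb, 0.20)
    print(f"{tb:>6,} TB: ${low:>12,.0f} - ${high:>12,.0f} per transfer")
```

At 1 TB, egress is pocket change; at 5,000 TB it runs from a quarter-million to a million dollars per transfer, which is exactly why heavy AI users scrutinize these fees.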

4 questions to get started

But before your customer can build an on-prem infrastructure, they’ll need to first determine their AI needs. You can help by gathering all stakeholders and asking 4 big questions:

  • What are the business challenges we’re trying to solve?
  • Which AI capabilities and capacities can deliver the solutions we’ll need?
  • What type of AI training will we need to deliver the right insights from our data?
  • What software will we need?

Keep your customer’s context in mind, too. That might include their industry; after all, a retailer has different needs than a manufacturer. It might also include their current technology: a company with extensive edge computing has different data needs than one without edge devices.

“It’s a matter of finding the right configuration that delivers optimal performance for the workloads,” says Michael McNerney, VP of marketing and network security at Supermicro.

Help often needed

One example of an application-optimized system for AI training is the Supermicro AS-8125GS-TNHR, which is powered by dual AMD EPYC 9004 Series processors. Another option is the line of Supermicro Universal GPU systems, which support AMD’s Instinct MI250 accelerators.

The Universal GPU systems’ modularized architecture helps standardize AI infrastructure design for scalability and power efficiency, even given the complex workload and workflow requirements enterprises have, such as AI, data analytics, visualization, simulation and digital twins.

Accelerators work alongside traditional CPUs to deliver greater computing power without slowing the system. They can also shave milliseconds off AI computations. While that may not sound like much, over time those milliseconds “add up to seconds, minutes, hours and days,” says Matt Kimball, a senior analyst at Moor Insights & Strategy.
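
Kimball’s math is easy to check. As an illustrative sketch (the per-operation saving and the daily volume here are hypothetical numbers, not benchmark results):

```python
# How milliseconds per operation compound at scale (illustrative numbers).
ms_saved_per_op = 2            # milliseconds shaved off each computation
ops_per_day = 1_000_000_000    # a billion AI computations per day

seconds_saved = ms_saved_per_op / 1_000 * ops_per_day
print(f"{seconds_saved:,.0f} seconds/day saved")               # 2,000,000
print(f"= {seconds_saved / 86_400:.1f} days of compute time")  # ~23.1 days
```

Two milliseconds per operation, at a billion operations a day, frees up more than three weeks of aggregate compute time every single day.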

Roll with partner power

To scale AI across an enterprise, you and your customers will likely need partners. Scaling workloads for critical tasks isn’t easy.

For one, there’s the challenge of getting the right memory, storage and networking capabilities to meet the new high-performance demands. For another, there’s the challenge of finding enough physical space, then providing the necessary electric power and cooling.

Tech suppliers including Supermicro are standing by to offer you agile, customizable and scalable AI architectures.

Learn more from the new Supermicro white paper: Investing in AI Infrastructure.

 

Do you know why 64 cores really matters?

In a recent test, Supermicro workstations and servers powered by 3rd gen AMD Ryzen Threadripper PRO processors ran engineering simulations nearly as fast as a dual-processor system, but needed only two-thirds as much power.

More cores per CPU sounds good, but what does it actually mean for your customers?

In the case of certain Supermicro workstations and servers powered by 3rd gen AMD Ryzen Threadripper PRO processors, it means running engineering simulations with dual-processor performance from a single-socket system, while consuming only about two-thirds as much power.

That’s according to tests recently conducted by MVConcept, a consulting firm that provides hardware and software optimizations. The firm tested two Supermicro systems, the AS-5014A-TT SuperWorkstation and AS-2114GT-DPNR server.

A solution brief based on MVConcept’s testing is now available from Supermicro.

Test setup

For these tests, the Supermicro server and workstation were both tested in two AMD configurations:

  • One with the AMD Ryzen Threadripper PRO 5995WX processor
  • The other with an older, 2nd gen AMD Ryzen Threadripper PRO 3995WX processor

In the tests, both AMD processors were used to run 32-core as well as 64-core operations.

The Supermicro systems were tested running Ansys Fluent, fluid simulation software from Ansys Inc. Fluent models fluid flow, heat, mass transfer and chemical reactions. Benchmarks for the testing included aircraft wing, oil rig and pump.

The results

Among the results: The Supermicro systems delivered nearly dual-CPU performance with a single processor, while also consuming less electricity.

What’s more, the 3rd generation AMD 5995WX CPU delivered significantly better performance than the 2nd generation AMD 3995WX.

Systems with larger caches saw the biggest performance improvements: a system with 256MB of L3 cache outperformed one with just 128MB.

BIOS settings proved especially important for getting optimal performance from the AMD Ryzen Threadripper PRO when running the tested applications. Specifically, Supermicro recommends using NPS=4 and SMT=OFF when running Ansys Fluent with AMD Ryzen Threadripper PRO. (NPS = NUMA [non-uniform memory access] nodes per socket; SMT = simultaneous multithreading.)
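
Once those BIOS settings are in place, they can be sanity-checked from the operating system. Here’s a minimal sketch for Linux, reading standard kernel sysfs paths; on a single-socket Threadripper PRO system, NPS=4 should surface as four NUMA nodes, and with SMT off each CPU should report a single thread sibling:

```python
# Quick Linux check that the NPS and SMT BIOS settings took effect.
# NPS=4 on a single-socket system should expose 4 NUMA nodes;
# with SMT off, each CPU's thread-siblings list contains only itself.
import glob

numa_nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
print(f"NUMA nodes: {len(numa_nodes)}")  # expect 4 with NPS=4

siblings = open(
    "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list"
).read().strip()
print(f"cpu0 thread siblings: {siblings}")  # a single CPU id means SMT is off
```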

Another useful feature is the Supermicro AS-2114GT-DPNR server’s two hot-pluggable nodes: one node can be used to pre-process the data, while the other runs Ansys Fluent.

Put it all together, and you get a powerful takeaway for your customers: These AMD-powered Supermicro systems deliver data-center power at both the desktop and the server rack, making them ideal for SMBs and enterprises alike.

Try before you buy with Supermicro’s H13 JumpStart remote access program

The Supermicro H13 JumpStart Remote Access program lets you and your customers test data-center workloads on Supermicro systems based on 4th Gen AMD EPYC 9004 Series processors. Even better, the program is free.

You and your customers can now try out systems based on 4th Gen AMD EPYC 9004 Series processors at no cost with the Supermicro remote access program.

Called H13 JumpStart, the free program offers remote access to Supermicro’s top-end H13 systems.

Supermicro’s H13 systems are designed for today’s advanced data-center workloads. They feature 4th Gen AMD EPYC 9004 Series processors with up to 96 Zen 4 cores per socket, DDR5 memory, PCIe 5.0, and support for Compute Express Link (CXL) 1.1+ peripherals.

The H13 JumpStart program lets you and your customers validate, test and benchmark workloads on either of two Supermicro systems:

  • Hyper AS-2025HS-TNR: Features dual AMD EPYC processors, 24 DIMMs, up to 3 accelerator cards, an AIOM network adapter, and 12 hot-swap NVMe/SAS/SATA drive bays.

  • CloudDC AS-2015CS-TNR: Features a single AMD processor, 12 DIMMs, 4 accelerator cards, dual AIOM network adapters, and a 240GB solid state drive.

Simple startup

Getting started with Supermicro’s H13 JumpStart program is simple. Just sign up with your name, email and a brief description of what you plan to do with the system.

Next, Supermicro will verify your information and your request. Assuming you qualify, you’ll receive a welcome email from Supermicro, and you’ll be scheduled to gain access to the JumpStart server.

Next, you’ll be given a unique username, password and URL to access your JumpStart account.

Then run your tests. Once you’re done, Supermicro will ask you to complete a quick survey to share your feedback on the program.

Other details

The JumpStart program does have a few limitations. One is the number of sessions you can have open at once. Currently, it’s limited to 1 VNC (virtual network computing), 1 SSH (secure shell), and 1 IPMI (intelligent platform management interface) session per user.

Also, the JumpStart test server is not directly addressable from the internet. However, the server can reach out to the internet to get files.

You should test with JumpStart using anonymized data only. That’s because the Supermicro server’s security policies may differ from those of your organization.

But rest assured, once you’re done with your JumpStart demo, the server storage is manually erased, the BIOS and firmware are reflashed, and the OS is re-installed with new credentials. So your data and personal information are completely removed.

Get started

Ready to get a jump-start with Supermicro’s H13 JumpStart Remote Access program? Apply now to secure access.

Want to learn more about Supermicro’s H13 system portfolio? Check out a 5-part video series featuring Linus Sebastian of Linus Tech Tips. He takes a deep dive into how these Supermicro systems run faster and greener. 

 

Research roundup: PICaaS rising, IT spending stays strong, new data-center components emerge

Do you know how the latest IT market research could help you and your business?

It’s time to consider performance intensive computing as a service. Get ready for a modest spending surge. And be on the lookout for new data-center components.

Those are takeaways from the latest in IT market research and analysis. And here’s your tech partner’s roundup.

Performance intensive computing: now as a service

If you don’t offer cloud-based performance intensive computing as a service, you might want to consider doing so. The market, already big, is growing fast.

Sales of performance intensive computing as a service (PICaaS) will rise from $22.3 billion worldwide in 2021 to $103 billion by 2027, predicts market watcher IDC. That’s a compound annual growth rate (CAGR) of nearly 28%.
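
If you want to double-check forecasts like this, the compound annual growth rate formula is simple. A quick sketch:

```python
# Compound annual growth rate: (end / start) ** (1 / years) - 1
start, end, years = 22.3, 103.0, 6   # $B in 2021 -> $B in 2027

cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ~29%, in the ballpark of IDC's stated figure;
                            # small gaps come from rounding and the base year
```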

With PICaaS, customers use public cloud services to run the mathematically intensive computations needed for AI, HPC, big data analytics, and engineering and technical applications.

Driving the market are two factors, IDC says. One, performance intensive computing is going mainstream and is increasingly mission critical. And two, a growing number of businesses define themselves as digital.

What can you do to get ready for this market? Among other tactics, IDC recommends that suppliers formulate an end-to-end bundled PICaaS offering, demonstrate a secure cloud infrastructure, and become trusted advisors of hybrid development models.

Strong IT spending — this year and next

What kind of year will 2023 shape up to be? If your customers are like most, pretty good. Overall IT spending will rise this year by 5.5%, reaching a grand total of $4.6 trillion, predicts analyst firm Gartner, and some segments will rise by much more.

But what about sales dips, tech layoffs and other financial issues? “Macroeconomic headwinds are not slowing digital transformation,” insists Gartner analyst John-David Lovelock. “IT spending will remain strong.”

On the hardware front, Gartner expects data center systems sales worldwide this year to rise by less than 4%. Next year looks better with a projected rise of about 6%.

IT services are in demand. Sales will rise by just over 9% this year, Gartner forecasts, and by about 10% next year.

Devices such as PCs and smartphones are a weak point, with sales projected to drop by nearly 5% this year after tumbling nearly 11% last year. Next year, sales should pick up, Gartner expects, rising an impressive 11%.

New components coming to customer data centers

Have you and your data-center customers spoken yet about three components—SmartNICs, data processing units (DPUs) and infrastructure processing units (IPUs)?

If not, you probably will soon, according to ABI Research. Demand for these components is being driven by two factors: specialized workloads such as AI, IoT and 5G; and the rise of cloud hyperscalers such as AWS, Azure and Google Cloud.

“Organizations are exploring the feasibility of running specific applications that require high processing power on public-cloud data centers to ensure business continuity,” says ABI analyst Yih-Khai Wong.

Big opportunities include networks, cloud platforms and security. For example, AMD’s Xilinx Alveo line of adaptable accelerator cards includes the industry’s first software-defined, hardware-accelerated SmartNIC.

To be sure, the shift is still in its early stages. But Wong says servers equipped by default with SmartNICs, DPUs or IPUs are coming “sooner rather than later.”

 

AMD-based servers support enterprise applications — and break OLTP records

AMD EPYC™ server processors are designed to help your data-center customers get their workloads done faster and with fewer computing resources.

AMD EPYC server processors offer a consistent set of features across a range of choices from 8 to 96 cores. This balanced set of resources found in AMD EPYC processors lets your customers right-size server configurations to fit their workloads.

What’s more, these AMD CPUs include models that offer high per-core performance optimized for frequency-sensitive and single-threaded workloads. This can help reduce the TCO for core-based software licenses.

AMD introduced the 4th Generation AMD EPYC processors in late 2022. The first of this generation are the AMD EPYC 9004 series CPUs. They’ve been designed to support performance and efficiency, help keep data secure, and use the latest industry features and architectures.

AMD continues to ship and support the previous 2nd and 3rd Generation AMD EPYC 7002 and 7003 series processors. These processors power servers that are now available from a long list of leading hardware suppliers, including Supermicro.

Record-breaking

Good as all that may sound, you and your customers still need hard evidence that AMD processors can truly speed up their enterprise applications. Well, a new independent test of AMD-based Supermicro servers has provided just that.

The test was performed by the Telecommunications Technology Association (TTA), an IT standardization association based in Seongnam, South Korea. The TTA tested several Supermicro database and web servers powered by 3rd Gen AMD EPYC 7343 processors.

The results: The Supermicro servers set a world record for performance by a non-cluster system of 507,802 transactions per minute (tpmC).

That test was conducted using the TPC Benchmark C (TPC-C), which measures a server’s online transaction processing (OLTP) performance. The tpmC metric measures how many new-order transactions a system can generate in a minute while executing business transactions under specific response-time requirements.
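
Conceptually, tpmC is a filtered throughput count. The sketch below illustrates the counting logic only; it is not the audited TPC-C benchmark kit, and the 5-second bound is used here simply as an illustrative response-time requirement:

```python
# Simplified illustration of a tpmC-style count (not the audited TPC-C kit):
# count new-order transactions that meet the response-time requirement.
def tpmc_style_score(transactions, max_response_s=5.0, window_minutes=1.0):
    """transactions: iterable of (kind, response_time_s) tuples."""
    ok = sum(
        1 for kind, rt in transactions
        if kind == "new_order" and rt <= max_response_s
    )
    return ok / window_minutes  # qualifying transactions per minute

sample = [("new_order", 0.8), ("payment", 0.3), ("new_order", 6.2),
          ("new_order", 1.1)]
print(tpmc_style_score(sample))  # 2.0 - only fast new-order txns count
```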

What’s more, when compared with servers based on the previous 2nd Gen AMD EPYC processors, the newer Supermicro servers were 33% faster, as shown in the chart below:

DATA: Telecommunications Technology Association

All that leads the TTA to conclude that Supermicro servers powered by the latest AMD processors “empower organizations to create deployments that deliver data insights faster than ever before.”


Note:

1. https://www.tpc.org/1809

 

For Greener Data Centers, Look to Energy-Efficient Components

Energy-efficient systems can help your customers lower their data-center costs while supporting a cleaner environment.

Creating a more energy-efficient data center isn’t only good for the environment; it’s also a great way for your customers to lower their total cost of ownership (TCO).

In many organizations, the IT department is the single biggest consumer of power. Data centers are filled with power-hungry components, including servers, storage devices, air conditioning and cooling systems.

Data centers collectively consume hundreds of terawatt-hours (TWh) of electricity per year, which works out to nearly 3% of total global electricity use, according to Supermicro. Looking ahead, that share is forecast to reach as high as 8% by 2030.

One important measure of data-center efficiency is Power Usage Effectiveness (PUE). It’s calculated by taking the total electricity consumed by a data center and dividing it by the electricity used by the center’s IT components. The difference between the two is how much electricity is being used for cooling, lighting and other non-IT components.

The lower a data center’s PUE, the better; the theoretical ideal is a PUE of 1.0. The average PUE worldwide last year was 1.55, says the Uptime Institute, a benchmarking organization. That marked a slight improvement over 2021, when the average PUE was 1.57.
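
The PUE calculation itself is one line of arithmetic. A minimal sketch with illustrative numbers:

```python
# PUE = total facility energy / IT equipment energy (lower is better).
total_facility_kwh = 1_550_000   # illustrative monthly figures
it_equipment_kwh = 1_000_000

pue = total_facility_kwh / it_equipment_kwh
overhead = total_facility_kwh - it_equipment_kwh
print(f"PUE: {pue:.2f}")                     # 1.55, last year's global average
print(f"Non-IT overhead: {overhead:,} kWh")  # cooling, lighting, power losses
```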

Costly power

All that power is expensive, too. Among the short list of ways your customers can lower that cost, moving to energy-efficient server CPUs is especially effective.

For example, AMD says that 11 servers based on its 4th gen AMD EPYC processors can use up to 29% less power per year than the 17 servers based on competitive CPUs required to handle the same workload volume. That can also help reduce an organization’s capital expenditures by up to 46%, according to AMD.

As that example shows, CPUs with more cores can also reduce power needs by handling the same workloads with fewer physical servers.

Yes, a high-core CPU typically consumes more power than one with fewer cores, especially when run at the same frequency. But by handling more workload volume, a high-core CPU lets your customer do the same or more work with fewer racks. That can also reduce the real estate footprint and lower the need for cooling.
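
The consolidation math is straightforward. Here’s a rough sketch using the server counts and power delta from AMD’s example above; the per-server energy figure is purely illustrative:

```python
# Consolidation arithmetic behind AMD's 11-vs-17-server example.
old_servers, new_servers = 17, 11

server_reduction = 1 - new_servers / old_servers
print(f"Servers eliminated: {server_reduction:.0%}")    # ~35% fewer boxes

# AMD cites up to 29% less power per year for the same workload volume;
# a hypothetical 1 MWh/yr per legacy server makes the saving concrete.
legacy_mwh = old_servers * 1.0
print(f"Energy saved: {legacy_mwh * 0.29:.1f} MWh/yr")  # ~4.9 MWh per year
```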

Greener tactics

Other tactics can contribute to a greener data center, too.

One approach involves what Supermicro calls a “disaggregated” server architecture. Essentially, this means that a server’s subsystems—including its CPU, memory and storage—can be upgraded without having to replace the entire chassis. For a double benefit, this lowers TCO while reducing E-waste.

Another approach involves designing servers that can share certain resources, such as power supplies and fans. This can lower power needs by up to 10%, Supermicro says.

Yet another approach is designing servers for maximum airflow, another Supermicro feature. This allows the CPU to operate at higher temperatures, reducing the need for air cooling.

It can also lower the load on a server’s fans. That’s a big deal, because a server’s fans can consume up to 15% of its total power.

Supermicro is also designing systems for liquid cooling. This allows a server’s fan to run at a lower speed, reducing its power needs. Liquid cooling can also lower the need for air conditioning, which in turn lowers PUE.

Liquid cooling functions much like a car’s radiator system. It’s basically a closed loop involving an external “chiller” that cools the liquid and a series of pipes. The liquid is pumped through one or more pipes that run over a server’s CPU and GPU. The heat from those components warms the liquid, and the now-hot liquid is sent back to the chiller to be cooled and recirculated.

Green vendors

Leading suppliers can help you help your customers go green.

AMD, for one, has pledged to deliver a 30x increase in the energy efficiency of its processors and accelerators by 2025. That should translate into a 97% reduction in energy use per computation.

Similarly, Supermicro is working hard to help customers create green data centers. The company participates in industry consortia focused on new cooling alternatives and is a leader in the Liquid Cooling Standing Working Group of The Green Grid, a membership organization that fosters energy-efficient data centers.

Supermicro also offers products using its disaggregated rack-scale design approach to offer higher efficiency and lower costs.

What are Your Server Customers Looking For? It Depends on Who They Are

While hyperscalers and enterprises both buy servers powered by the latest CPUs, their purchase decisions are based on very different criteria. Knowing who you’re selling to, and what they’re looking for, can make all the difference.

Think all buyers of servers powered by the latest-generation CPUs are looking for the same thing? Think again.
 
It pays to think of these customers as falling into one of two major groups. On the one hand are the so-called hyperscalers, those large providers of public cloud services. On the other are CIOs and other IT executives at large enterprises who are looking to improve their on-premises data centers. 
 
Customers in both groups are serious buyers of the latest, greatest servers. But their buying criteria? Two very different things.
 
Hyperscalers: TCO, x86, VM
 
When it comes to cutting-edge servers, hyperscalers including Amazon Web Services (AWS), Microsoft Azure and Google Cloud are attracted to the cost advantage.
 
As Mark Papermaster, chief technology officer at AMD, explained in a recent technology conference sponsored by Morgan Stanley, “For the hyperscalers, new server processors are an easy transition. Because they’re massive buyers, hyperscalers see the TCO [total cost of ownership] advantage.”
 
Hyperscalers also like the fact that most if not all new server CPUs still adhere to the x86 family of instruction-set architectures. “For their workloads,” Papermaster said, “it lifts and shifts.”
 
Big hyperscalers are also big implementers of containers and virtual machines, workloads that suit today’s high-density CPUs well. The higher the CPU density, the more VMs can be supported on a single server.
 
For example, AMD’s 4th gen EPYC processors (formerly code-named Genoa) pack in 96 cores, or 50% more than the previous generation. That kind of density suits hyperscalers well, because they have such extensive inventories of VMs.
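
The density argument comes down to simple division. A hypothetical sizing sketch (the 4-vCPU VM shape is illustrative; real capacity planning also weighs memory, I/O and oversubscription policy):

```python
# Rough VM-density sizing for a dual-socket, 96-core-per-CPU server.
cores_per_cpu, sockets, threads_per_core = 96, 2, 2
vcpus_per_vm = 4                       # illustrative VM shape

total_vcpus = cores_per_cpu * sockets * threads_per_core  # 384 hw threads
print(f"VMs per server: {total_vcpus // vcpus_per_vm}")   # 96 four-vCPU VMs
```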
 
Enterprise CIOs: different priorities
 
For CIOs and other enterprise IT executives, server priorities and buying criteria are quite different. These buyers are looking mainly for ease of migration, broad ecosystem support, robust security and energy efficiency (which can also be a component of TCO). 
 
CIOs also need to keep their CFOs and boards happy, so they’re also looking for a clear and easily explainable return on investment (ROI). They may also need to tie this calculation to their organization’s strategic goals. For example, if a company were looking to increase its market share, the CIO might want to explain how purchasing new servers could help achieve that goal. 
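
That ROI story can be framed in a few lines. A hypothetical sketch with made-up figures:

```python
# Basic ROI framing for a server refresh (all figures hypothetical).
hardware_cost = 500_000
annual_gain = 220_000    # power savings + license consolidation + new revenue
years = 3

roi = (annual_gain * years - hardware_cost) / hardware_cost
print(f"3-year ROI: {roi:.0%}")  # 32% return over the planning horizon
```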
 
One relatively new and increasingly important priority is energy efficiency. Enterprises increasingly need to demonstrate their support for “green” initiatives. One way a company can do that is by showing how its computing technology gets more done with less electric power.
 
Also, many data centers are already receiving as much electric power as they’re configured for. In other words, they can’t add power to get more work done. But they can add energy-efficient servers able to get more work done with the same or even less power than the systems they replace.
 
A third group, too
 
During his recent Morgan Stanley presentation, Papermaster of AMD also discussed a third group of server buyers: organizations with hybrid IT environments, both cloud and on-premises, that want the ability to move workloads back and forth. Essentially, this means mimicking the cloud in an on-prem environment.
 
Looking ahead, Papermaster discussed a forthcoming EPYC processor, code-named Bergamo, which he said is “right on track” to ship in this year’s first half. 
 
The new CPU will be aimed at cloud-native applications that need high levels of both throughput and per-socket performance. As previously announced, Bergamo will have up to 128 “Zen 4c” cores, and will come with the same software and security features as Genoa. 
 
“We listen to our customers,” Papermaster said, “and we see where workloads are going.” That’s a good practice for channel partners, too.
 
Research Roundup: Cloud Infrastructure, 9 Trends for Tech Providers, Euro IT Spending, RPA

Check out the latest analysis and forecasts from top IT market researchers.

Cloud infrastructure spending is growing, but at a slower pace. Nine trends will have a huge impact on tech providers. European business leaders say they won’t let even a recession stop them from spending on tech. And RPA is forecast for fast growth.

That’s some of the latest analysis and forecasts from top IT market researchers. Here’s your research roundup.

Cloud Infrastructure Spending: Rising, But More Slowly  

Spending on cloud infrastructure grew 23% year-on-year in the fourth quarter of 2022, for a worldwide total of $65.8 billion, according to market watcher Canalys.

Impressive as that growth may sound, it represents a slowdown from the previous quarter. In last year’s third quarter, cloud-infrastructure spending worldwide rose by 34% year-on-year, an 11-point difference.

Three main factors were behind the dip, Canalys says. First was rising public-cloud costs, fueled by inflation, which encouraged organizations to review, and in some cases reduce, their spending. Second was uncertainty over the economy. And a third was “repatriation,” the act of taking certain workloads from the public cloud and returning them to private or co-location data centers.

By supplier, the cloud infrastructure market remained dominated by three big names, according to Canalys. In Q4 2022, AWS led with a 32% market share, followed by Microsoft Azure (23%) and Google Cloud (10%). That left miscellaneous “others” with a collective 35% share.

9 Top Trends for Tech Providers

Nine trends will matter for tech providers through 2025, predicts research firm Gartner. These trends reflect 3 overarching themes: the increasing reliance of businesses on technology; new opportunities emerging through tech; and the impact of external macro forces. “The march of digitization continues even amidst disruption,” says Gartner researcher Rajesh Kandaswamy.

Here are Gartner’s top 9 trends for tech providers:

1. Democratization of Technology: Organizations are empowering non-IT workers to seek, implement and custom-fit their own tech.

2. Federated Enterprise Tech Buying: Buying decisions are increasingly made not by IT alone, but instead by representatives across the business.

3. Product-led Growth: This go-to-market strategy lets customers gain value via free product offers and interactive demos.

4. Co-innovation Ecosystems: Businesses and their tech providers collaborate to create unique, innovative solutions.

5. Digital Marketplaces: These help both technical and nontechnical buyers find, buy, implement and integrate technologies with ease.

6. Intelligent Applications: Advanced technologies such as generative AI create value and disrupt markets by learning, adapting and generating new ideas and outcomes.

7. Metaverse/Web3: Virtual environments are gaining traction as organizations look to create unique experiences, interactions and engagements.

8. Sustainability: Customers increasingly view sustainable products and practices as a “must have” rather than just a “nice to have.”

9. Techno-Nationalism: Selected regional markets are becoming more localized as new governmental policies aim for digital sovereignty.

Euro IT Spending: What Recession?

The European technology market is huge and growing.

Total IT spending in Europe will hit $1.2 trillion this year, predicts IDC. Looking ahead, the market intelligence firm expects that figure to top $1.4 trillion by 2026.

For the years 2021 to 2026, total IT spending in Europe will represent a compound annual growth rate (CAGR) of 5.4%, IDC expects. For this year alone, IDC predicts the year-on-year spending rise will be a somewhat lower 4.2%.

One remarkable aspect is that the forecasted spending rise comes as most European business leaders expect a recession. Nonetheless, IT spending will remain high, says IDC researcher Zsolt Simon, because European business leaders “regard technology investments as a means of gaining a competitive edge.”

By industry, banking remains Europe’s largest spender on IT, representing nearly 14% of all IT spending, says IDC. Looking ahead a few years, IDC expects the fastest-growing spender between now and 2026 will be professional services.

RPA Market Forecast: 40% Growth

Robotic process automation—software that makes it easy to build, deploy and manage software robots—is more than just a good idea. It’s also a booming market.

RPA sales worldwide will total $30.85 billion by 2030, representing a CAGR of nearly 40%, according to forecasts by Grand View Research.

New RPA sales will be driven primarily by customers looking to lower their operating costs, Grand View expects. But other goals for RPA include improved compliance and higher worker productivity.

In addition, the technology has gotten easier to use, even for complex processes. That should make RPA more attractive to potential customers, and more useful for them too.

 

Part 4: The Web3 and Blockchain FAQ

This is the last in a four-part series on blockchain’s many facets, including being the primary pillar of the emerging Web3.

Part 1: First There Was Blockchain  |  Part 2: Delving Deeper into Blockchain  |  Part 3: Web3 Emerging

No matter how much information an article offers, sometimes you just want a fast, complete answer to your burning question. Here are nine more answers. We hope the answers to these frequently asked questions, along with the other three parts of the series, address any nagging Web3 and blockchain questions you may have.

 

QUESTION: Is Web3 a methodology, a set of principles, a change in the way we do business, a trend, an idea, a philosophy, a fad?

 

ANSWER: I think Web3 is both philosophical and technical. Philosophical because awareness of and concern over data privacy are growing as a few large entities gain control of the lion’s share of data, prompting the desire for more decentralization. Technical because with Web3, processing and data storage move to the edge and are decentralized, and networking follows a peer-to-peer architecture. Hence the catchphrase “the edge is the new cloud.” I think Web3 is the new Internet, and it’s very disruptive. —Eric Frazier, senior solutions manager, Supermicro

 

 

QUESTION: What are the key benefits of Web3?

 

ANSWER:

1. Decentralization (a peer-to-peer network in which anyone can connect with anyone directly)

2. Trustless (no intermediary/middleman)

3. Permissionless (anyone can participate without authorization from a governing body)

—Jörg Roskowetz, director product management blockchain technology, AMD

 

 

QUESTION: How should the data center of an enterprise be equipped to handle blockchain?

 

ANSWER 1: A tier 3 data center with proper redundancy is highly recommended, as well as consistent bandwidth of 10 Gbps at minimum, with 100 Gbps recommended. —Michael Fair, chief revenue officer and longtime channel expert, PiKNiK

 

ANSWER 2: Typically, workstations with complementary GPU coprocessors. —Frazier

 

 

 

QUESTION: What are the benefits of NFTs?

 

ANSWER: There are several; here are three of the more significant benefits:

1. NFTs preserve the information necessary to support collection of royalties, even after multiple resales, with the proper smart contracts in place.

 

2. NFTs could lead to the development and growth of a completely new creator economy in which music, video, art and book creators control their own content sales and merchandising; they might even receive tokenized payments for fan interaction, avoiding any need to transfer ownership to platforms that publicize their content. The long line of business “partners” with their hands out awaiting their cut could be a thing of the past.

 

3. Inclusive growth: As NFTs bring content creators together into a shared market, a democratization of valuation will most likely occur that will benefit all participants.

—Frazier and Scot Finnie, managing editor, Performance-Intensive Computing

 

 

QUESTION: Does Blockchain or Web3 improve security?

 

ANSWER 1: First, Web3 user-to-platform interactions are confidential and anonymous, both in principle and in practice. This lets individuals realize their self-sovereignty and be assured of the security of their private information. Plus, Web3’s decentralized structure offers inherent security benefits because it eliminates the single point of failure. Blockchains also incorporate several built-in security features, such as cryptography, public and private keys, software-mediated consensus, contracts and identity controls. Bitcoin has not been hacked since its inception because the Bitcoin blockchain is constantly reviewed by the entire network. —Frazier

 

ANSWER 2: Blockchain-based data storage can defeat some types of ransomware, where the data itself is not sensitive but it is unique and irreplaceable. If your data storage system inherently gives you around-the-world redundancy that can survive flood, tornado, fire and so on, that’s increased data security. —Fair and Finnie

 

 

QUESTION: What are the advantages of storing enterprise data in a blockchain cloud storage service?

 

ANSWER: The value proposition for using a service like PiKNiK Web3 Cloud Storage is based on the following differences from traditional cloud storage providers:

1. Significantly lower costs

2. Immutability.  The blockchain monitors the data stored and reports back regularly to the customer verifying that the data has not been altered in any way. This is a free service; it’s part of the blockchain protocol.

3. Extra copies can be made anywhere in the world upon customer request. — Fair

 

 

QUESTION: What enterprise applications make good sense for organizations to implement with blockchain?

 

ANSWER: An InterPlanetary File System (IPFS) storage system, supply chain, logistics, IP protection and licensing. —Frazier and Roskowetz

 

 

QUESTION: What is the InterPlanetary File System and why does it matter?

 

ANSWER: The InterPlanetary File System was created by Juan Benet and was initially released as an alpha build in early 2015 (Wikipedia). Benet also founded Protocol Labs and is a Web3 advocate. It is believed that Benet was inspired in various aspects of the code by GitHub, BitTorrent and an MIT Distributed Hash Table (DHT). The DHT assigns an immutable hash that identifies content by name instead of by location. IPFS may be thought of as a replacement for the location-based addressing scheme that has underpinned the Web for 30 years: the HTTP protocol, written by Tim Berners-Lee. One key aspect of IPFS is that it works like BitTorrent, leveraging downloads by peers to preserve bandwidth. IPFS is a key piece of Web3 architecture. —Finnie
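
The core idea, addressing content by what it is rather than where it lives, is easy to demonstrate. In the minimal sketch below, a plain SHA-256 digest stands in for IPFS’s real content identifiers, which layer multihash encoding on top:

```python
# Content addressing in miniature: the address is derived from the data,
# so the same bytes get the same address no matter where they're stored.
import hashlib

def content_address(data: bytes) -> str:
    """Return a hash-based identifier for the content itself."""
    return hashlib.sha256(data).hexdigest()

page = b"<html>hello, decentralized web</html>"
print(content_address(page))        # identical on every node holding the data
print(content_address(page + b"!")) # any change yields a different address
```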

 

 

QUESTION: What are “dapps” (decentralized apps)?

 

ANSWER: Richard MacManus, writing for The New Stack, concluded that the problem with Web3 is dapps, short for decentralized apps. Apparently, backend coding for a dapp is very different from traditional app coding and focuses on communication with smart contracts. So what is a dapp? Dapps serve the same purpose as apps written for other platforms, except that they’re designed to run in a blockchain environment. For a recent list of the top 10 dapps, see Geekflare. —Finnie

 

 

Other Stories in this Series:

Part 1: First There Was Blockchain

Part 2: Delving Deeper into Blockchain

Part 3: Web3 Emerging

Part 4: The Web3 and Blockchain FAQ

 
