[wp_tech_share]

Happy New Year! This always marks an opportune time to look back and reflect on the previous year and make predictions for the upcoming one. Possibly the only thing certain about 2023 is that it will be filled with uncertainty. After two years of Covid-related lockdowns, the world appears to be fully open, with China lifting Covid restrictions in early 2023. Supply chain constraints, although still present, are easing. At the same time, the global economic outlook remains uncertain as central banks around the world raise interest rates in an attempt to reduce inflation. But before we dive into our 2023 outlook, let's review our 2022 predictions.

Last year, we made three predictions for 2022:

  • Plans to Reach Long-Term Data Center Sustainability Goals Begin to Materialize

This absolutely occurred, as sustainability became the number one topic for the data center industry in 2022. Energy efficiency, heat reuse, and water consumption were publicly discussed topics, with regulators and industry stakeholders all getting heavily involved. At Dell’Oro Group, we published a whitepaper on Achieving Sustainable Data Center Growth and identified sustainability as one of the leading drivers of growth in our July 2022 5-year data center physical infrastructure forecast. The topic of data center sustainability won’t be going away anytime soon.

  • Liquid Cooling Adoption Momentum Continues as POC Deployments Proliferate and Early Adopters Begin Larger Roll Outs

Momentum for liquid cooling did not merely continue; it accelerated, with revenue growing an estimated 54% to $259 million in 2022. Most importantly, the ecosystem of IT equipment manufacturers, system integrators, and data center end users is coming to understand the high-level value proposition and the need to plan for liquid cooling solutions in the near term. This means a growing pipeline of products and solutions designed from the start for liquid cooling is quickly on its way.

  • Supply Chain Resiliency and Integrated Solutions Drive Mergers, Acquisitions, and Partnerships

This prediction didn’t turn out as expected in 2022. Supply chain constraints and inflation outpaced nearly everyone’s expectations. This has led to rising interest rates and economic uncertainty, with DCPI vendors focusing on internal operations and execution. Legrand’s acquisition of USystems to add thermal management infrastructure to their portfolio came the closest to the M&A activity we expected to see more of. Where we did see significant activity in the M&A space was from private equity (PE) acquiring ownership in data center colocation providers to help finance their capital-intensive data center expansion plans.

Now, looking ahead, Dell’Oro Group predicts the following for data center physical infrastructure in 2023:

Prediction 1: There will be no data center physical infrastructure recession in 2023

As a result of increasing inflation throughout 2022, central banks around the world have been raising interest rates, leading to macroeconomic uncertainty in 2023. This resulted in tech-related layoffs in 2H22, which have continued into early 2023. However, these layoffs have so far been concentrated in the consumer-facing segments of these tech companies. A handful of data center construction projects have been canceled or rescoped, but we believe record-low data center vacancy rates and historically high DCPI vendor backlogs will ultimately drive market growth in 2023. Currently, our preliminary forecast for 2023 is an estimated 9% Y/Y DCPI revenue growth.

Prediction 2: Power availability will force a rethink of data center energy storage and on-site power generation architectures

Data center sustainability has so far largely focused on energy efficiency, reducing water usage, and enabling heat reuse. Power availability has been discussed to a certain extent, but we predict it will be discussed far more widely in 2023 as power constraints in key markets inhibit new data center construction. We predict this will alter data center power architectures in three ways.

The first change will be long-duration energy storage deployments, namely lithium-ion batteries with runtimes of 2 – 4 hours. Historically, backup energy storage for data centers has provided runtimes of only 5 – 10 minutes. However, with increasing renewable energy deployments and grid-interactive UPS capabilities, the benefits of long-duration energy storage are becoming clear.

Second, the conversation on hydrogen fuel cells will continue to evolve. The “EcoEdge PrimePower” (E2P2) project, led by a consortium of seven organizations including Equinix and Vertiv with funding from the Clean Hydrogen Partnership, is due for a major milestone and test in 2023. Vertiv is expected to release a proof-of-concept fuel cell solution, combining a 100 kW fuel cell module, a UPS, and lithium-ion batteries. The success, or lack thereof, of this project will have a major impact on the timeline and potential use of hydrogen fuel cells in the data center industry.

Lastly, a new entrant into the power availability conversation for data centers, small modular reactors (SMRs), with generation capacities of 50 – 300 MW, will garner increasing mind share. SMRs can theoretically generate an appropriate amount of reliable electricity for data centers in a relatively small footprint with zero scope 2 carbon emissions. This makes their development intriguing for the data center industry, where they could potentially evolve into the holy grail of sustainability-minded on-site power generation. We won’t see any deployments until the end of this decade at the earliest, but the data center industry will start laying the groundwork for what that will look like.

Prediction 3: A major hyperscaler will take the plunge with a 10+ MW immersion cooling deployment

Our third and final prediction for data center physical infrastructure is related to liquid cooling. We saw significant momentum for direct liquid (cold plate) and immersion cooling technologies in 2022. The one thing still missing for immersion cooling, and what we predict we’ll see in the second half of 2023, is the first public, large-scale immersion cooling deployment from a top hyperscale cloud service provider. This will serve as a breakthrough moment for immersion cooling and help set the stage for mainstream adoption in the years to come. This doesn’t mean that momentum in direct liquid cooling will slow, or that air cooling is going away anytime soon, but the transition to liquid cooling will certainly get underway.


Watch This Video:

What’s next for the Data Center Physical Infrastructure market in 2023?


The OCP Global Summit was back to an exclusively in-person event in 2022. The community was as excited as ever to get together in person, with 3,500+ people in attendance, an all-time attendance record. In this new blog, exclusively for OCP, Lucas Beran, Principal Analyst for the Data Center Physical Infrastructure market, will discuss the three key takeaways from the event.


 

At the end of April, Nokia, a fairly new entrant to the data center switch space, made the groundbreaking announcement that the company will be supplying its 7250 IXR networking gear to Microsoft, the third-largest Cloud Service Provider (SP).

As I noted in my 2022 prediction blog published earlier this year, I have been anticipating a fair number of new switch vendor insertions at the large hyperscalers in 2022, as the 400 Gbps upgrade cycle starts to materialize outside of Google and Amazon. Silicon diversity would be one of the major reasons for these potential changes in the vendor landscape, as these hyperscalers need to keep pricing pressure on Broadcom, the dominant merchant silicon supplier to date. Supply challenges further accelerated the need for silicon diversity. However, what is intriguing is that Nokia’s 7250 IXR is based on Broadcom’s merchant silicon, not Nokia’s FP5 proprietary chips. So what will Nokia bring to the table?

 

What’s in it for Microsoft?

Although Nokia is a fairly new entrant in the data center switch space, the company is among the leading vendors in the router market and in several other Telecom SP segments. Clearly, Nokia has significant experience in systems design, which – as we learned from the company’s spokesperson – allowed it to achieve power savings at a system level. As a reminder, as network speeds move to 400 Gbps and beyond, power consumption becomes one of the most constraining factors limiting what Cloud SPs can build and deploy in their data centers. In fact, Microsoft already faced this challenge with its 400 Gbps deployment, as it had to wait for Broadcom’s Jericho 2C+ chips, which consume less power than the prior-generation Jericho 2 parts.

Furthermore, Nokia has made significant contributions to the SONiC ecosystem. (SONiC is the open-source network operating system built by Microsoft that runs in its data center networks.) We view this Microsoft data center win as a reward for the company’s contribution. In fact, this quid pro quo relationship extends well beyond the data center win into several other areas. For example, Nokia is also working with Microsoft on developing 4G LTE and 5G private wireless for the enterprise segment. This collaboration brings together Nokia’s virtualized radio access network (vRAN) and multi-access edge cloud (MEC) with the Azure Private Edge platform.

Additionally, Nokia has the potential to leverage its coherent optics technology, which the firm obtained with its Elenion acquisition, to drive cost and power savings at a system level for data center interconnect (DCI) applications.

Last but not least, although Nokia’s 7250 IXR is built on Broadcom’s silicon, which does not satisfy the silicon diversity requirement, it will nonetheless provide Microsoft with another route to access Broadcom chips, which is critical in a supply-constrained environment.

 

Where will Nokia’s 7250 IXR be deployed?

The initial deployment of Nokia’s modular switches will occur in the spine, which Microsoft refers to as Tier 2, but may expand to DCI applications at a later stage. As a reminder, Microsoft has been deploying predominantly Arista in Spine/DCI but has also recently qualified Cisco (with its Silicon One-based 8000 series). Nokia will also supply fixed form factors for Top-of-Rack (ToR) applications. It is worth noting that Microsoft has always had a multi-vendor strategy for its ToR applications, where volume is high but the margin is thin. So far, the company has deployed a mix of Cisco, Dell, and Mellanox (Nvidia).

 

What does this mean for incumbent vendors?

While we view this announcement as a major win for Nokia and as validation of its competitive positioning in the data center switch market, we believe that Microsoft will strive to keep its existing suppliers happy and provide them with enough motivation to compete for its business. Our interviews revealed that Arista is expected to remain the preferred supplier for spine/DCI applications at Microsoft during the 400 Gbps upgrade cycle. Additionally, we expect Microsoft to go through major expansion and upgrade activities this year and that its data center spending will be strong enough to benefit all vendors – incumbents as well as new entrants.

For more details and insights on cloud service providers’ data center network design and a list of suppliers, please contact us at dgsales@delloro.com.


 

Data centers are the backbone of our digital lives, enabling the real-time processing and aggregation of data and transactions, as well as the seamless delivery of applications to both enterprises and their end customers. Data centers have been able to grow to support ever-increasing volumes of data and transaction processing thanks in large part to software-based automation and virtualization, allowing enterprises and hyperscalers alike to adapt quickly to changing workload volumes as well as physical infrastructure limitations.

Despite their phenomenal growth and innovation, the principles of which are being integrated into service provider networks, data centers of all sizes are about to undergo a significant expansion as they are tasked with processing blockchain, bitcoin, IoT, gigabit broadband, and 5G workloads. In our latest forecast, published earlier this month, we expect worldwide data center capex to reach $350 B by 2026, representing a projected five-year compound annual growth rate of 10%. We also forecast hyperscale cloud providers to double their data center spending over the next five years.
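As a quick sanity check on the forecast math (assuming simple compound annual growth; the implied base-year figure below is derived, not stated in the forecast):

```python
# Compound annual growth: future = base * (1 + rate) ** years.
# Working backward from the forecast endpoint ($350 B in 2026 at a 10% CAGR)
# gives the implied base-year spend; that base figure is derived, not stated.

def implied_base(future_value_b: float, cagr: float, years: int) -> float:
    """Base-year value implied by a future value and a compound growth rate."""
    return future_value_b / (1 + cagr) ** years

base_2021 = implied_base(350.0, 0.10, 5)  # $B
print(f"Implied 2021 capex: ${base_2021:.0f} B")
```

A 10% CAGR reaching $350 B in 2026 implies a base of roughly $217 B five years earlier, consistent in scale with the separate call that hyperscalers will double their spending over the period.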

Additionally, enterprises are becoming smarter about how to balance and incorporate their private clouds, public clouds, and on-premises infrastructure for the optimal and efficient processing of workloads and application requests. Similar to highly resilient service provider networks, enterprises are realizing that distributing workload processing allows them to scale faster with more redundancy. Despite the general trend toward migrating to the cloud, enterprises will continue to invest in on-premises infrastructure to handle workloads that involve sensitive data, as well as applications that are very latency-sensitive.

As application requests, change orders, equipment configuration changes, and other general troubleshooting and maintenance requests continue to increase, anticipating and managing the necessary changes in multi-cloud environments becomes exceedingly difficult. Throw in the need to quickly identify and troubleshoot network faults at the physical layer and you have a recipe for a maintenance nightmare and, more importantly, substantial revenue loss due to the cascading impact of fragmented networks that are only peripherally integrated.

Although automation and machine learning tools have been available for some time, they are often designed to automate application delivery within one of the multiple cloud environments, not across multiple clouds and multiple network layers. Automating IT processes across both physical and virtual environments, and across the underlying network infrastructure, compute, and storage resources, has been a challenge for some time. Each layer has its own distinct set of issues and requirements.

New network rollouts or service changes that require network configuration changes are typically very labor-intensive and frequently yield faults in the early stages of deployment that take significant man-hours to resolve.

Similarly, configuration changes sometimes result in redundant or mismatched operations due to the manual entry of these changes. Without a holistic approach to automation, there is no way to verify or prevent the introduction of conflicting network configurations.

Finally—and this is just as true of service provider networks as it is of large enterprises and hyperscale cloud providers—detecting network faults is often a time-consuming process, principally because network faults are often handled passively until they are located and resolved manually. Traditional alarm reporting followed by manual troubleshooting must give way to proactive and automatic network monitoring that quickly detects network faults and uses machine learning to rectify them without any manual intervention whatsoever.
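The proactive-monitoring idea can be illustrated with a simple statistical baseline: flag a telemetry sample that deviates sharply from its recent history rather than waiting for a hard alarm. The metric, numbers, and three-sigma threshold below are illustrative stand-ins; production platforms use far richer ML models:

```python
# Toy proactive anomaly check: flag a telemetry sample sitting more than
# `sigmas` standard deviations from its rolling baseline. The 3-sigma rule
# here is only the simplest stand-in for the ML-driven detection in the text.
from statistics import mean, stdev

def is_anomalous(history: list, sample: float, sigmas: float = 3.0) -> bool:
    """True if `sample` deviates sharply from the recent `history` baseline."""
    mu, sd = mean(history), stdev(history)
    return abs(sample - mu) > sigmas * max(sd, 1e-9)

link_errors = [2, 3, 1, 2, 4, 3, 2, 3]     # per-minute CRC errors, steady state
print(is_anomalous(link_errors, 3))         # normal sample -> False
print(is_anomalous(link_errors, 40))        # sudden spike -> True
```

The point of the sketch is the workflow change: the spike is flagged the moment it appears in telemetry, before any user-visible alarm fires, which is the precondition for automated remediation.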

 

Automating a Data Center’s Full Life Cycle

As the size and complexity of data centers continue to increase, and as workload and application changes multiply, the impact on the underlying network infrastructure can be difficult to predict. Various organizations, both within and outside the enterprise, have different requirements that must somehow be funneled into a common platform to prevent conflicting changes from the application delivery layer all the way down to the network infrastructure. These organizations can also have drastically different timeframes for the expected completion of changes, largely due to siloed management of different portions of the data center, as well as the different diagnostic and troubleshooting tools used by the network operations and IT infrastructure teams.

In addition to pushing on their equipment vendor and systems integrator partners to deliver platforms that solve these challenges, large enterprises also want platforms that give them the ability to automate the entire lifecycle of their networks. These platforms use AI and machine learning to build a thorough and evolving view of underlying network infrastructure to allow enterprises to:

    • Support automatic network planning and capacity upgrades by modeling how the addition of workloads will impact current and future server requirements as well as the need to add switching and routing capacity to support application delivery.
    • Implement network changes automatically, reducing the need for manual intervention and thereby reducing the possibility of errors.
    • Constantly provide detailed network monitoring at all layers and provide proactive fault location, detection, and resolution while limiting manual intervention.
    • Simplify the service and application provisioning process by providing a common interface that then translates requests into desired network changes.

Ultimately, one of the key goals of these platforms is to create a closed loop among network management, control, and analysis capabilities, so that changes in upper-layer services and applications can automatically drive defined changes in the underlying network infrastructure. For this to become a reality in increasingly complex data center network environments, these platforms must provide some critical functions, including:

    • Providing a unified data model and data lakes across multiple cloud environments and multi-vendor ecosystems
  • This function has been a long-standing goal of large enterprises and telecommunications service providers. Ending the swivel-chair approach to network management and delivering error-free network changes with minimal manual intervention are key functions of any data center automation platform.
    • Service orchestration across multiple, complex service flows
      • This function has also been highly sought-after by large enterprises and service providers alike. For service providers, SDN overlays were intended to add in these functions and capabilities into their networks. Deployments have yielded mixed, but generally favorable results. Nevertheless, the principles of SDN continue to proliferate into other areas of the network, largely due to the desire to streamline and automate the service provisioning process. The same can be said for large enterprises and data center providers.
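The closed-loop principle underlying both functions can be sketched as a reconciliation step that compares declared intent against observed network state and emits the corrective change set itself. All the setting names below are hypothetical, chosen only to illustrate the shape of the loop:

```python
# Minimal intent-driven closed loop: observed state is reconciled against
# declared intent, and the diff becomes the (automatically applied) change
# set. A real platform would validate, stage, and roll back changes; this
# shows only the skeleton of the reconciliation step.

def reconcile(intent: dict, observed: dict) -> dict:
    """Return the settings that must change to make observed match intent."""
    return {k: v for k, v in intent.items() if observed.get(k) != v}

intent   = {"spine.count": 8, "vlan10.mtu": 9000, "eth1.speed": "400G"}
observed = {"spine.count": 8, "vlan10.mtu": 1500}

change_set = reconcile(intent, observed)
print(change_set)  # {'vlan10.mtu': 9000, 'eth1.speed': '400G'}
```

Run continuously, this loop is what lets a service-layer change (a new intent) propagate down to the network infrastructure without a human translating it into device commands.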

Although these platforms are intended to serve as a common interface across multiple business units and network layers, their design and deployment can be modular and gradual. If a large enterprise wants to migrate to a more automated model, it can do so at a pace suited to the organization’s needs. Automation can be introduced first at the network infrastructure layer and then extended to the application layer. Over time, with AI and machine learning tools aggregating performance data across both network layers, correlations between application delivery changes and their impact on network infrastructure can be determined more quickly. Ultimately, service and network lifecycle management can be simplified and expanded to cover hybrid cloud or multi-vendor environments.

We believe that these holistic platforms, bridging the worlds of telecommunications service providers and large enterprise data centers, will play a key role in helping automate data center application delivery by providing a common window into the application delivery network as well as the underlying network infrastructure. The result will be more efficient use of network resources, a reduction in the time required to make manual configuration changes to the network, a reduction in the programming load for IT departments, and strict compliance with SLA guarantees to key end customers and application provider partners.