
The data center industry is estimated to have consumed 205 terawatt-hours (TWh), or ~1% of the world’s electricity consumption, in 2018. Other industry estimates peg that share higher, at up to ~2%. Despite these differing estimates, one thing is clear: the decade-old fear of runaway growth in data center energy consumption has proved unfounded. Hyperscale cloud service providers (CSPs) have largely managed that concern, with the help of industry vendors, through IT virtualization and higher utilization of power and cooling infrastructure. At the same time, enterprise data center operations, which have historically been less efficient, have increasingly transitioned to CSPs.

However, these estimates were made before the global COVID-19 pandemic, which saw the world embrace virtual collaboration, remote learning, and accelerated automation through artificial intelligence (AI) and machine learning (ML). As these trends materialized throughout 2020, the industry was (barely) able to meet demand, and questions resurfaced about managing future energy consumption. For this reason, data center sustainability has become the most pressing issue in the data center industry, and one in which data center physical infrastructure vendors believe they can play a critical role.

As part of Dell’Oro Group’s upcoming Data Center Physical Infrastructure program, we will focus on technologies that enable sustainable data center growth. Data center thermal management, which accounts for 30% to 40% of a data center’s annual energy consumption (second only to compute), is the logical starting place. Today, air-based thermal management infrastructure predominates. However, as rack power densities rise to support accelerated computing hardware (such as GPUs and FPGAs), the efficiency and capacity limits of air cooling are being reached. Liquids are a much more effective and efficient medium for transferring heat. For this reason, the data center industry is exploring different ways to safely bring liquids into the data center.
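
To put the effectiveness of liquids in perspective, here is a back-of-the-envelope comparison of volumetric heat capacity, that is, how much heat a given volume of coolant carries per degree of temperature rise. The property values are typical textbook figures, assumed for illustration rather than taken from any vendor specification:

```python
# Volumetric heat capacity (kJ per cubic meter per kelvin) = density * specific heat.
# Property values are typical room-temperature figures (assumed, not from the report).
coolants = {
    # name: (density kg/m^3, specific heat kJ/(kg*K))
    "air":         (1.2,    1.005),
    "mineral oil": (850.0,  1.9),
    "water":       (1000.0, 4.18),
}

air_capacity = coolants["air"][0] * coolants["air"][1]
for name, (rho, cp) in coolants.items():
    capacity = rho * cp
    print(f"{name:12s}: {capacity:7.0f} kJ/(m^3*K)  (~{capacity / air_capacity:,.0f}x air)")
```

By this measure, mineral oil carries on the order of a thousand times more heat per unit volume than air, which is why immersion can support far denser racks.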

That’s why, when I had an opportunity to see CGG’s High Performance Compute Center, I felt a level of nervousness and excitement before touring a data center that I hadn’t felt in some time. This was the first time I had been inside a liquid immersion-cooled facility, supported by Green Revolution Cooling’s (GRC) infrastructure. GRC is a recognized leader in immersion-cooling technology, alongside Asperitas, Submer, and other vendors. Visiting my first immersion-cooled facility felt more like a trip to Mars than a visit to the type of data center I’ve spent my entire career getting to know.

Although the data center industry treats liquid cooling as though its use for computing is new, it has actually been around for decades, dating back to liquid-cooled IBM mainframes. Immersion cooling seeks to solve a similar problem today – removing heat directly at the source – but through a different method. A coolant distribution unit (CDU) pumps a liquid – usually some kind of mineral oil – to a rack manifold, which fills the rack (sometimes referred to as a vat or tank) and circulates the liquid through it. Servers, which require some modification, are then vertically immersed in the liquid to capture and remove 100% of the generated heat. Right now, the big question the data center industry is asking is: how different does immersion cooling make my data center?
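
As a rough illustration of the loop described above, the flow a CDU must sustain follows from the steady-state heat balance Q = ṁ · cp · ΔT. A minimal sketch, assuming a 100 kW tank, typical mineral-oil properties, and a 10 °C coolant temperature rise; all inputs are illustrative, not CGG’s or GRC’s actual figures:

```python
# Coolant flow needed to remove a given heat load: Q = m_dot * c_p * dT.
# All inputs are illustrative assumptions, not vendor specifications.
heat_load_kw = 100.0       # tank heat load (kW = kJ/s)
cp_kj_per_kg_k = 1.9       # specific heat of a typical mineral oil
density_kg_per_m3 = 850.0  # density of a typical mineral oil
delta_t_k = 10.0           # coolant temperature rise across the tank

mass_flow = heat_load_kw / (cp_kj_per_kg_k * delta_t_k)  # kg/s
vol_flow_lps = mass_flow / density_kg_per_m3 * 1000      # liters/s
print(f"{mass_flow:.1f} kg/s  (~{vol_flow_lps:.1f} L/s of oil)")
# -> ~5.3 kg/s, or about 6 L/s: a modest pump duty for a CDU.
```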

CGG Doubles Compute Capacity with Immersion Cooling

Walking into the CGG High Performance Compute Center, any notion that I was headed to Mars was quickly dispelled. It looked like a conventional data center with a raised floor and traditional infrastructure, from the UPS down to the rack power distribution units (rPDUs). The big difference was the horizontal immersion racks as opposed to vertical ones. As I observed the room, the first thing I noticed was how quiet it was. CDU pumps produced the only noise. It was quiet enough to have a conversation with the person standing next to me. The horizontal immersion racks also created an open feeling, allowing me to see around the entire room.

However, a friendlier operating environment isn’t what drove CGG to adopt immersion cooling. The company had reached its limits of space, power, and cooling. To expand computing capacity, CGG needed more space and power or a new thermal-management solution. And the new thermal-management solution – immersion cooling – did not disappoint. In the same floor space and power footprint, CGG was able to double its computing capacity. Additionally, CGG reused a significant portion of its existing infrastructure while deploying immersion racks in scalable, 100 kW cooling-capacity increments. As a result, CGG experienced no downtime and only limited capital expenditures (CAPEX) during the transition to immersion cooling.

These benefits aren’t unique to CGG’s deployment of immersion cooling. In fact, they can be realized by many players in the data center industry struggling with space, power, or cooling constraints. To quantify the benefits: CAPEX for construction of a new immersion-cooled data center can be 20% lower than for a traditional air-cooled build. This is the result of eliminating certain infrastructure, such as chillers and air handlers, and of downsizing electrical infrastructure, such as UPSs, switchgear, and power distribution.

The case for immersion cooling becomes even more compelling when considering operational expenditures (OPEX). Immersion-cooling systems use less power because server fans, air handling units, and chilled water systems are removed. Lower power consumption for thermal management means reduced annual energy costs. Additionally, with fewer moving parts in an immersion-cooling solution, maintenance costs are also reduced. In total, immersion-cooling OPEX can decrease by up to 33% compared to a traditional air-cooled data center build. From a total cost of ownership (TCO) perspective over the 10-year life of a data center, it’s achievable for an immersion-cooled data center to cost half as much as a traditional air-cooled build.
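
To see how those figures can add up to half the TCO, here is a back-of-the-envelope sketch. The 20% CAPEX reduction, 33% OPEX reduction, and 10-year life come from the discussion above; the 50/50 split between CAPEX and lifetime OPEX and the doubling of compute in the same footprint (as in CGG’s retrofit) are assumptions for illustration:

```python
# Back-of-the-envelope 10-year TCO, in normalized units.
# The 20% CAPEX cut, 33% OPEX cut, and 10-year life come from the text;
# the 50/50 CAPEX-to-lifetime-OPEX split and the 2x compute-per-footprint
# gain are illustrative assumptions.
years = 10
capex, opex_per_year = 100.0, 10.0

air_tco = capex + years * opex_per_year                      # 200.0
imm_tco = 0.80 * capex + years * (1 - 0.33) * opex_per_year  # 147.0

print(f"facility TCO ratio:    {imm_tco / air_tco:.0%}")      # ~74%
print(f"per-compute TCO ratio: {imm_tco / 2 / air_tco:.0%}")  # ~37% with 2x compute
```

On the stated reductions alone, the facility costs about three-quarters as much; normalizing by the doubled compute capacity is what brings the cost per unit of compute to half or below.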

Immersion Cooling Brings Small Changes to Data Center Operations

So, what’s the catch? The human element of operations in the mission-critical data center industry can’t be overlooked. Data center uptime is measured by the number of nines (e.g., 99.9% vs. 99.9999% uptime), as downtime can translate into hundreds of thousands of dollars – or even millions – in lost revenue. Historically, this has led to slow adoption of new technologies. Early adopters are often driven by need, as is the case with liquid cooling for HPC. But with increased adoption of accelerated compute, many other companies are already struggling, or are expected to struggle, with the limits of air cooling in the near future.

During my visit to CGG’s High Performance Compute Center, I was most eager to learn about the “quirks” of immersion cooling. The biggest difference from air-cooled builds is in server maintenance. Servers have to be pulled out of the oil by hand or with a small overhead lift. They can then be laid across the tank while work is performed, either immediately or after a short period of drip-drying. After maintenance is complete, the server is simply immersed back into the rack.

Other operational differences that data center owners and operators must consider are:

  • Containment of the oil in which servers are immersed is top of mind. For CGG, this didn’t appear to be a problem. Different combinations of rack, row, and room containment are used to manage any dripping when removing servers. It’s definitely handy to keep a roll of oil-absorbent towels around, but no major spills have occurred.
  • Stickers imprinted with a server’s serial number can come loose during immersion. This seemed to be the biggest potential headache. If a sticker comes loose, it doesn’t damage the immersion-cooling system, thanks to the filtration system. However, a missing sticker can complicate asset management. Some immersion-ready servers already utilize a pull-tag system, which eliminates the issue. Development of oil-resistant stickers is also being explored.
  • Cable management isn’t more complex for immersion cooling, just different. CGG utilizes multiple generations of GRC immersion racks, which reflect the evolution of rPDU and network switch placement: these components have moved between dry space in the rack and mounts on the back of the tank. GRC’s latest immersion-cooling product, the ICEraQ 10, uses dry space in the top-rear of the rack for rPDUs, with networking switches mounted on the front behind a panel.
  • Lastly, beware of crickets. It turns out that crickets have a taste for the particular immersion oil GRC uses, so an open bay door may lead to an extra visitor. Just like a loose serial number sticker, there is no threat of damage – just an unexpected find when opening the rack lid.

Immersion Cooling Answers the Call for Sustainable Data Centers of the Future

The engineered benefits of immersion cooling can’t be denied: higher utilization of space and power, with lower CAPEX and OPEX relative to a traditional air-cooled facility. However, I didn’t need to visit an immersion-cooled facility to understand the cost savings. My biggest takeaway was the correction of my misconception that an immersion-cooled data center would be dramatically different from an air-cooled facility. It was familiar, like other data centers I have toured. The only difference in physical infrastructure was the rack itself, with IT infrastructure mounted vertically as opposed to horizontally. Immersion-ready servers are available today, with expanding partnerships between chip, server, and immersion vendors working on the next generation of compute. And while a few operational differences need to be planned for, the necessary adjustments are, to my surprise, relatively minor. So can immersion cooling be part of the solution that supports sustainable data centers of the future? After my visit to CGG’s High Performance Compute Center, I believe it just might be.

This November, Dell’Oro Group will launch a new Data Center Physical Infrastructure subscription program. As the program’s lead analyst, I will dig deeper into the market outlook, growth drivers, and the competitive landscape of the data center physical infrastructure market. I will quantify industry trends and developments, providing a timely, accurate, and detailed analysis. To learn more about Dell’Oro Group’s new Data Center Physical Infrastructure program, please contact us at dgsales@delloro.com.

Dell’Oro published an update to the Ethernet Controller & Adapter 5-Year Forecast report in July 2021. Revenue for the worldwide Ethernet controller and adapter market is projected to increase at a 4% compound annual growth rate (CAGR) from 2020 to 2025, reaching nearly $3.2 billion. The increase is partly driven by the migration to server access speeds of 100 Gbps and higher.
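
As a quick sanity check on those headline numbers, the implied 2020 base can be backed out of the 4% CAGR and the ~$3.2 billion 2025 endpoint (the 2020 figure below is derived, not quoted from the report):

```python
# Back out the implied 2020 revenue from the 2025 value and the CAGR.
cagr, years, revenue_2025 = 0.04, 5, 3.2e9
revenue_2020 = revenue_2025 / (1 + cagr) ** years
print(f"Implied 2020 revenue: ${revenue_2020 / 1e9:.2f} B")  # ~$2.63 B
```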

The ramp of 25 Gbps port shipments has been strong since 28 Gbps SerDes became available in 2016. 25 Gbps has already displaced 10 Gbps as the dominant speed in revenue, as it gains broad adoption across Cloud service providers (SPs) and high-end enterprises. However, we project that 100 and 200 Gbps ports will overtake 25 Gbps in revenue as early as 2023.

Below, we identify the market and technology factors likely to drive the adoption of next-generation server connectivity at 100 Gbps and beyond:

  • 50 Gbps ports, based on two 28 Gbps SerDes lanes, have seen mainstream deployment among some of the major Cloud SPs (the sketch after this list maps SerDes lanes to port speeds). However, with the exponential growth of network traffic and the proliferation of cloud computing, the Top 4 US Cloud SPs are demanding even higher server access speeds than the rest of the market. The availability of 56 Gbps SerDes since late 2018 has prompted some of the Top 4 US Cloud SPs to upgrade their networks to 400 Gbps, with upgrades of server network connectivity to 100 Gbps for general-purpose computing in progress.
  • Higher server access speeds of up to 200 Gbps, based on two lanes of 112 Gbps SerDes, could begin to ramp for general-purpose computing at the Top 4 US Cloud SPs following network upgrades to 800 Gbps as early as 2022.
  • The increase in demand for bandwidth-hungry AI applications will continue to push the boundaries of server connectivity. Today, 100 Gbps is commonly used to interconnect accelerated servers, while general-purpose servers are connected at 25 or 50 Gbps. As 100 Gbps becomes the standard connection for general-purpose computing at the major Cloud SPs over the next several years, accelerated servers may be connected at twice that data rate, at 200 Gbps.
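
The lane arithmetic behind these bullets is straightforward: a port’s speed is the number of SerDes lanes multiplied by the effective (post-encoding) data rate per lane. A minimal sketch using the standard Ethernet lane rates; the mapping is illustrative rather than an exhaustive list of port types:

```python
# Port speed = SerDes lanes x effective per-lane data rate.
# Effective rates (25/50/100 Gbps) are the standard Ethernet payload
# rates for 28G NRZ, 56G PAM4, and 112G PAM4 SerDes, respectively.
lane_rates_gbps = {"28G NRZ": 25, "56G PAM4": 50, "112G PAM4": 100}
for serdes, rate in lane_rates_gbps.items():
    for lanes in (1, 2, 4):
        print(f"{serdes}: {lanes} lane(s) -> {lanes * rate} GbE")
# e.g., 2 x 28G lanes -> 50 GbE; 2 x 112G lanes -> 200 GbE, as above.
```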

To learn more about the Ethernet Controller and Adapter market, or if you need to access the full report, please contact us at dgsales@delloro.com.

About the Report

The Dell’Oro Group Ethernet Controller and Adapter 5-Year Forecast Report provides a complete, in-depth analysis of the market with tables covering manufacturers’ revenue; average selling prices; and unit and port shipments by speed (1 Gbps, 10 Gbps, 25 Gbps, 40 Gbps, 50 Gbps, and 100 Gbps) for Ethernet and Fibre Channel Over Ethernet (FCoE) controllers and adapters. The report also covers Smart NIC and InfiniBand controllers and adapters. To purchase this report, please contact us at dgsales@delloro.com.

Dell’Oro published an update to the Data Center Capex 5-Year Forecast report in July 2021. Server spending is forecast to grow at an 11 percent compound annual growth rate over five years, comprising nearly half of data center capex by 2025.

The pandemic resulted in strong demand for computing and digital technologies due to a shift in enterprise and consumer behaviors. Current semiconductor foundry capacity is not adequate to meet the recent surge in global demand. The cost of servers and other data center equipment is projected to rise sharply in the near term, partly due to the global semiconductor shortage. The increase in server average selling prices (ASPs) could approach the double-digit levels observed in 2018, another period of tight supply and high demand. However, in the longer term, we anticipate that supply and demand could reach equilibrium and that technology transitions could drive market growth. We identify the following technology trends that shape our five-year forecast:

  • CPU Refresh Cycles: Intel and AMD both have aggressive roadmaps to introduce new platform refreshes as the processor race heats up. Both Intel Sapphire Rapids and AMD EPYC Genoa, expected in 2022, will pack more processor cores and memory channels and will support the latest interfaces, such as CXL, DDR5, and PCIe Gen 5, which could enable denser server form factors and new architectures.
  • Accelerated Computing: A new class of accelerated servers, densely packed with co-processors optimized for application-specific workloads such as artificial intelligence and machine learning, is emerging. Some Cloud service providers, such as Amazon and Google, have deployed accelerated servers using internally developed AI chips, while other Cloud service providers and enterprises have commonly deployed solutions based on GPUs and FPGAs. We estimate that the attach rate of accelerators on servers will grow to 13 percent by 2025.
  • Edge Computing: Certain applications—such as cloud gaming, autonomous driving, and industrial automation—are latency-sensitive, requiring Multi-Access Edge Compute, or MEC, nodes to be situated at the network edge, where sensors are located. Unlike cloud computing, which has been replacing enterprise data centers, edge computing creates new market opportunities for novel use cases.

With the evolution of CPU platforms and the proliferation of accelerated computing, we anticipate that data centers will be better optimized to process application-specific workloads with fewer, but more powerful and denser, servers, increasing the total available market through higher server ASPs. Edge computing, on the other hand, will increase the available market with greenfield deployments of servers at distributed edge locations. To access the full Data Center Capex report, please contact us at dgsales@delloro.com.

About the Report

Dell’Oro Group’s Data Center Capex 5-Year Forecast Report details the data center infrastructure capital expenditures of each of the ten largest Cloud service providers, as well as the Rest-of-Cloud, Telco, and Enterprise customer segments. Allocation of the data center infrastructure capex for servers, storage systems, and other auxiliary data center equipment is provided. The report also discusses the market and technology trends that can shape the forecast.

Cloud-Delivered Security to Grow 21 Percent CAGR and Hit $10 Billion by 2025

We just issued the latest edition of our 5-year forecast (2021-2025) for the Network Security and Data Center Appliance (NSDCA) market, which spans Firewalls, Secure Web Gateways (SWGs), Email Security, Application Delivery Controllers (ADCs), and Web Application Firewalls (WAFs). Nearly 18 months since the COVID-19 pandemic began, the worst of the market turbulence appears to be behind us. Increased vaccination rates – albeit not fast enough for some countries and regions – have led to an unwinding of lockdown mandates and boosted economic activity. In addition, economic stimuli from central governments have provided additional market tailwinds.

After an anemic 2020, in which revenue growth was just 3% year-over-year (Y/Y), we forecast a return to low double-digit growth in 2021 and 2022, followed by high single-digit growth through the end of our forecast window (2025). This slightly exceeds the historical average growth rate of 8% Y/Y, owing to the pent-up demand created during 2020, the recent economic stimuli, and the continued high priority placed on security, all of which create favorable market conditions.

On a form-factor basis, we believe that products sold in a cloud-delivered SaaS (Software-as-a-Service) form factor will grow at a 21% compound annual growth rate (CAGR), reaching nearly $10 B in 2025. In contrast, the roughly $12 B physical appliance market is anticipated to grow at nearly a 3% CAGR through 2025.
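
To make the two trajectories concrete, the sketch below projects both segments year by year. It treats the roughly $12 B appliance figure as the 2020 base and derives the SaaS 2020 base from the $10 B 2025 endpoint and the 21% CAGR; both are our assumptions for illustration:

```python
# Year-by-year projection from the headline figures above.
# The appliance 2020 base and the derived SaaS base are assumptions.
saas_2025, saas_cagr = 10.0, 0.21           # $B, reaches ~$10 B in 2025
appliance_2020, appliance_cagr = 12.0, 0.03
saas_2020 = saas_2025 / (1 + saas_cagr) ** 5  # ~$3.9 B implied base
for year in range(2020, 2026):
    t = year - 2020
    saas = saas_2020 * (1 + saas_cagr) ** t
    appliance = appliance_2020 * (1 + appliance_cagr) ** t
    print(f"{year}: SaaS ${saas:4.1f} B | appliance ${appliance:4.1f} B")
```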

We attribute the expected strong performance of the SaaS form factor to the following:

  • Elasticity: The elasticity of SaaS solutions–namely, the ease, swiftness, and scaling of deployments–is impossible to match with physical appliances.
  • Cloud-indigenous: As enterprises pivot to embrace cloud architectures and the Internet becomes an extension of the corporate network, SaaS-based solutions are better suited.
  • Nexus of Innovation: The elasticity and cloud-indigenous nature of SaaS-based solutions have afforded vendors the ability to innovate rapidly and offer new services to their customers. Examples include zero-trust network architectures and, more recently, the marriage of security and networking services in SASE solutions. (We have published an Advanced Research Report on SASE in which we analyze the intersections of SWGs, Firewalls, and SD-WAN. Please contact us if you are interested in procuring a copy.)
  • Economic: Many enterprises are choosing to move away from the traditional capital expenditure (CAPEX) and depreciation model associated with physical appliances toward the operational expenditure (OPEX) subscription model that typifies SaaS-based solutions.

Our report describes market dynamics by individual segment–including Firewalls, SWG, Email Security, ADC, and WAFs–and shows how each is expected to contribute to the overall SaaS-based revenue picture.  There will be clear winners and others that lag.

About the Report

The Dell’Oro Group Network Security & Data Center Appliance Market 5-Year Forecast Report offers a complete overview of the industry with tables covering manufacturers’ revenue, units shipped, and average selling prices for Application Delivery Controllers, WAN Optimization Appliances, and Network Security Appliances. Each of these markets is further segmented by Physical and Virtual technologies. The Network Security Appliance market is also segmented by Content Security, Firewall, IDS and IPS, and VPN and SSL. To purchase this report, please contact us by email at dgsales@delloro.com.


800 Gbps adoption expected to be faster than 400 Gbps, comprising more than 25% of data center switch ports by 2025

Since the onset of COVID-19, we have predicted that the data center switch market would be, for the most part, resilient to the effects of the pandemic and would quickly recover from its low single-digit revenue decline in 2020. We continue to believe that the data center Ethernet switch market will return to growth in 2021 and exceed its 2019 pre-pandemic revenue level.

Following are key takeaways from the July 2021 Five-Year Ethernet Switch Data Center forecast:

    • Our interviews with end-users and system and component vendors suggest that the pandemic has amplified the importance of the network and accelerated multi-year digital transformation projects. These trends are expected to bring major changes to data center networks and potentially generate additional market revenue.
    • Despite our optimism, our interviews with major vendors revealed that a number of them are already operating at full manufacturing capacity and that supply challenges will continue through the remainder of the year with a potentially more pronounced impact on market performance and the pricing environment. If this is true, our forecast may prove to be too high, as it doesn’t currently take into consideration the impact of these various supply issues.
    • Through our latest interviews with the large Cloud service providers (SPs), we have learned of a number of changes that may impact network architectures when they migrate to next-generation speeds. These changes will be driven by a limited power budget and new AI/ML applications, which may require different network topologies. These hyperscalers will make different choices in terms of network chips, switch radix, number of network tiers, and – ultimately – network speeds. We expect this diversity to increase when Cloud SPs build next-generation networks, as some will focus more on latency improvements while others will focus on power. Ultimately, however, all SPs will focus on cost reduction. Additional discussion about these possible changes and their associated effects may be found in our forecast report.
    • Optics have always played an important role in enabling speed migration on data center switches. With the transition to 400 Gbps and beyond, however, the role played by optics will become even more crucial for a number of reasons. First, because of their increased price, optics for speeds of 400 Gbps and higher are expected to make up about 60% to 70% of network spending (compared with less than 50% for speeds below 400 Gbps). For this reason, some switch vendors are planning to use the optics opportunity to capture a higher portion of network spending. Second, optics may displace some dense wavelength division multiplexing (DWDM) transport systems for certain Data Center Interconnect (DCI) use cases. Last, but not least, while pluggable (as opposed to embedded) optics are currently the form factor of choice, they may exhibit thermal and density issues as speeds approach 1.6 Tbps and higher. All of these possible changes in optics and their corresponding impact on the data center switch market are addressed in greater detail in our report.
    • We predict that 800 Gbps adoption will be quick, surpassing 400 Gbps ports in 2024 (see Figure: Data Center Switch Market Forecast, 400 Gbps vs. 800 Gbps Port Shipments). 800 Gbps deployments will be propelled by the availability of 100 Gbps SerDes and will not require an 800 GE MAC. As a reminder, our forecast reflects port-switch capacity, regardless of how the port is configured. We expect early 800 Gbps ports to be used in breakout mode, either as 8×100 Gbps or as 2×400 Gbps. (Breakout applications support many use cases, such as aggregation, shuffle, better fault tolerance, and bigger radix.) The anticipated rapid adoption of 800 Gbps will be propelled by: 1) the availability of 800 Gbps optics with a significantly lower cost per bit than two discrete 400 Gbps optics; and 2) a lower cost per bit at the system level, as 800 Gbps will allow consuming 25.6 Tbps chips in a 1U form factor with 32 ports of 800 Gbps. These systems will have a better cost per bit than their 400 Gbps equivalents, which require a 2U chassis to fit 64 ports; the sketch after this list works through the port math. Since economics drive adoption, we believe that 800 Gbps will be adopted more rapidly than 400 Gbps.
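
The system-level economics in the last bullet reduce to simple port math on a 25.6 Tbps switch chip. A minimal sketch using only the form factors cited above; no cost figures are assumed:

```python
# Ports per 25.6 Tbps switch chip at each speed, and the bandwidth density
# of the resulting systems from the text: 64 x 400 Gbps in 2U vs.
# 32 x 800 Gbps in 1U.
chip_capacity_gbps = 25_600
for port_gbps, rack_units in ((400, 2), (800, 1)):
    ports = chip_capacity_gbps // port_gbps
    density = chip_capacity_gbps / rack_units / 1000  # Tbps per rack unit
    print(f"{port_gbps} Gbps: {ports} ports in {rack_units}U -> {density:.1f} Tbps/RU")
# Same chip and aggregate bandwidth in half the space: the source of the
# system-level cost-per-bit advantage, before counting optics savings.
```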

To access the full report for details about revenue, units, pricing, speeds, regions, market segments, and more, please contact us at dgsales@delloro.com.

About the Report

The Dell’Oro Group Ethernet Switch – Data Center Five-Year Forecast Report provides a comprehensive overview of market trends, including tables covering manufacturers’ revenue, port shipments, and average selling prices for modular and fixed (managed and unmanaged) switches by port speed. We report on 1000 Mbps and the following Gbps port speeds: 10, 25, 40, 50, 100, 200, 400, and 800. We also provide forecasts by region and market segment, including Top-4 U.S. Cloud SPs, Top-3 Chinese Cloud SPs, Telco SPs, Rest of Cloud, Large Enterprises, and Rest of Enterprises.
