Artificial intelligence (AI) is having a profound impact on the data center industry. Much of this impact can be traced to OpenAI’s launch of ChatGPT in late 2022, which rapidly gained popularity for its remarkable ability to provide sophisticated, human-like responses to queries. As a result, generative AI, a subset of AI technology, became the focal point of discussions across industry events, earnings presentations, and vendor ecosystems in the first half of 2023. The excitement is warranted: generative AI has already driven tens of billions of dollars in investment, and it is forecast to lift data center capex to over $500 billion by 2027. However, the significant expansion of computing power required to train and deploy the large language models (LLMs) behind generative AI applications will demand architectural changes in data centers.

While the hardware required to support such AI applications is new to many, there is a segment of the data center industry that has already been deploying such infrastructure for years. This segment is often known as the high-performance computing (HPC) or supercomputing industry. Historically, this market segment has primarily been supported by governments and higher education to deploy some of the world’s most complex and sophisticated computer systems.

What is new about generative AI is that it is proliferating AI applications, and the infrastructure to support them, across the much wider enterprise and service provider markets. The HPC industry’s experience gives us an idea of what that infrastructure may start to look like.

Figure 1: AI Hardware Implications

AI Infrastructure Needs More Power and Liquid Cooling

To summarize the implications shown in Figure 1, AI workloads will require more computing power and higher networking speeds. This will lead to higher rack power densities, which have significant implications for Data Center Physical Infrastructure (DCPI). For facility power infrastructure, also referred to as grey space, architectural changes are expected to be limited. AI workloads should increase demand for backup power (UPS) and power distribution to the IT rack (cabinet PDUs and busway), but they won’t mandate any significant technology changes. Where AI infrastructure will have a transformational impact on DCPI is in a data center’s white space.

First, the substantial power consumption of AI IT hardware requires rack PDUs with higher power ratings. At these ratings, the costs associated with potential failures or inefficiencies can be high, which is expected to push end users toward intelligent rack PDUs that can remotely monitor and manage power consumption and environmental factors. These intelligent rack PDUs cost many times more than basic rack PDUs, which give an end user no ability to monitor or manage rack power distribution.
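
To illustrate, here is a minimal sketch of the kind of remote power monitoring an intelligent rack PDU enables, assuming SNMP access from Python via the pysnmp library. The management address and OID are hypothetical placeholders; a real deployment would use the OIDs published in the PDU vendor’s MIB.

```python
# A minimal sketch of polling an intelligent rack PDU for its active power
# draw over SNMP. The host and OID below are hypothetical placeholders;
# real OIDs come from the PDU vendor's MIB.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

PDU_HOST = "10.0.0.50"                    # hypothetical PDU management address
POWER_OID = "1.3.6.1.4.1.99999.1.1.1.0"   # placeholder OID for active power (W)

error_indication, error_status, _, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),            # SNMPv2c, read-only
        UdpTransportTarget((PDU_HOST, 161), timeout=2.0),
        ContextData(),
        ObjectType(ObjectIdentity(POWER_OID)),
    )
)

if error_indication or error_status:
    print(f"poll failed: {error_indication or error_status}")
else:
    for oid, value in var_binds:
        print(f"{oid} = {value} W")
```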

Even more transformative for data center architectures is the necessity of liquid cooling to manage the higher heat loads produced by the next-generation CPUs and GPUs that run AI workloads. Adoption of liquid cooling, both direct liquid cooling and immersion cooling, has been growing in the wider data center industry and is expected to accelerate alongside the deployment of AI infrastructure. However, given the historically long runway associated with adopting liquid cooling, we anticipate that the influence of generative AI on liquid cooling will be limited in the near term. It remains possible to deploy the current generation of IT infrastructure with air cooling, but at the expense of hardware utilization and efficiency.
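
Some rough arithmetic shows why air cooling runs out of headroom at AI rack densities. The sketch below applies Q = ṁ·cp·ΔT, under illustrative assumptions about rack power and coolant temperature rise, to compare the air and water flow needed to remove the same heat load:

```python
# Coolant flow required to remove a given heat load, from Q = m_dot * c_p * dT.
# The 40 kW rack power and delta-T values are illustrative assumptions.
RACK_POWER_W = 40_000  # illustrative AI rack heat load

# Air: density ~1.2 kg/m^3, specific heat ~1005 J/(kg*K), 15 K temperature rise
air_flow_m3s = RACK_POWER_W / (1.2 * 1005 * 15)
print(f"air:   {air_flow_m3s:.2f} m^3/s  (~{air_flow_m3s * 2119:,.0f} CFM)")

# Water: density ~1000 kg/m^3, specific heat ~4186 J/(kg*K), 10 K temperature rise
water_flow_m3s = RACK_POWER_W / (1000 * 4186 * 10)
print(f"water: {water_flow_m3s * 60_000:.0f} L/min")
```

Because water’s volumetric heat capacity is roughly 3,500 times that of air, a rack that would need thousands of CFM of airflow can be cooled with tens of liters of water per minute.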

To address this challenge, some end users are retrofitting their existing facilities with closed-loop, air-assisted liquid cooling systems. Such infrastructure can be a version of a rear door heat exchanger (RDHx) or direct liquid cooling that uses a liquid to capture the heat generated within the rack or server and reject it at the rear of the rack or server, directing it into a hot aisle. This design allows data center operators to leverage some advantages of liquid cooling without significant investments to redesign a facility. However, to achieve the desired efficiency of AI hardware at scale, purpose-built liquid-cooled facilities will be required. We expect the current interest in liquid cooling to start materializing in deployments by 2025, with liquid cooling revenues forecast to approach $2 billion by 2027.

Power Availability May Disrupt the AI Hype

Plans to incorporate AI workloads into future data center construction are already materializing. This was the primary reason for the recent upward revision to our five-year Data Center Physical Infrastructure market outlook, with revenue now forecast to grow at a 10% CAGR through 2027. But despite all the prospective market growth AI workloads are expected to generate for the data center industry, some notable factors could slow that growth. At the top of that list is power availability. The COVID-19 pandemic accelerated the pace of digitalization, spurring a wave of new data center construction. However, as that demand materialized, supply chains struggled to keep up, resulting in DCPI lead times that stretched beyond a year at their peak. Now, as supply chain constraints ease, DCPI vendors are working through elevated backlogs and starting to reduce lead times.

Yet demand for AI workloads is forming another wave of growth for the data center industry. This double shot of growth has created a discrepancy between the industry’s growing energy needs and the pace at which utilities can supply power to the desired locations. Consequently, data center service providers are exploring a “Bring Your Own Power” model as a potential solution. While the feasibility of this model is still being determined, data center providers are thirsty for an innovative approach to support their long-term growth strategies, with the surge in AI workloads being a central driver.

As the need for more DCPI is balanced against available power, one thing is clear: AI is ushering in a new era for DCPI. In this era, DCPI will not only play a critical role in enabling data center growth, but will also define performance and cost, and help drive progress toward sustainability. This is a distinct shift from DCPI’s historical role, particularly compared to nearly a decade ago, when DCPI was almost an afterthought.

With this tidal wave of AI growth quickly approaching, it’s critical to address DCPI requirements within your AI strategy. Failing to do so might result in AI IT hardware with nowhere to get plugged in.


Growing interest among operators in using PON technologies to offer enterprise customers an alternative to traditional Ethernet services is increasing the number of 25GS-PON-capable OLT ports deployed in service provider networks. Because of this increase in total 25GS-ready ports, as well as the consensus that a growing percentage of those ports will be used to deliver enterprise and leased-line services, we have increased our forecasts for 25GS-PON equipment revenue (both OLT ports and ONTs).

In our most recent forecast, published in July, we raised our projection of cumulative 25GS-PON equipment revenue between 2022 and 2025 from $315M to $588M worldwide, with the majority of revenue coming from the North American and Western European markets. While that increase is significant by itself, it’s important to bear in mind that cumulative XGS-PON equipment spending during that same period will easily push $7.7B. But XGS-PON will be the dominant technology across residential FTTH networks, whereas 25GS-PON will be used strategically by operators for high-end residential services, enterprises, campus environments, access network aggregation, and wholesale connections.

Through the end of 1Q23, a total of 550K 25Gbps-capable OLT ports had been delivered to the market, largely via combo cards and optics that can support 2.5 Gbps GPON, XGS-PON, and 25GS-PON from the same hardware and using the same ODN. If we assume that service providers purchase an average of 100-200K 25GS-capable OLT ports every quarter, there will be more than 1M 25GS-capable OLT ports by the end of 2023. Continuing that incremental increase through 2025 yields over 2 million 25GS-capable OLT ports purchased by service providers. Further, let’s assume that a low single-digit percentage of those total ports is turned up to deliver enterprise services. The potential net result is anywhere from 500K to 700K OLT ports in service delivering enterprise, wholesale, and mobile transport services.
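
As a sanity check on that arithmetic, here is a minimal sketch of the projection under the stated assumptions (550K ports delivered through 1Q23, plus 100-200K ports purchased per quarter); the enterprise take-rate is left out, since it is an estimate layered on top of these totals:

```python
# Cumulative 25GS-capable OLT port projection under the stated assumptions.
BASE_1Q23 = 550_000    # ports delivered through the end of 1Q23
Q_TO_END_2023 = 3      # 2Q23 through 4Q23
Q_TO_END_2025 = 11     # 2Q23 through 4Q25

for label, per_quarter in (("low", 100_000), ("high", 200_000)):
    end_2023 = BASE_1Q23 + Q_TO_END_2023 * per_quarter
    end_2025 = BASE_1Q23 + Q_TO_END_2025 * per_quarter
    print(f"{label}: end of 2023 ~{end_2023:,} ports; end of 2025 ~{end_2025:,} ports")
```

The high end of that range crosses 1M ports in 2023 and approaches 2.8M by 2025, consistent with the round figures above.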

That is the relatively modest strategy behind 25GS-PON: to finally expand the applicability of PON technologies beyond residential networks. Though this has been discussed by vendors and operators for years, we are finally seeing many operators earmark PON as a network-flattening technology across their residential, enterprise, mobile transport, and wholesale networks. There certainly have been instances of operators using GPON for mobile backhaul and business-class Internet access, but those use cases have been relatively limited. The combination of XGS-PON and 25GS-PON is really the first to give operators the flexibility they require to address many customers and applications across shared infrastructure. While some operators envision sharing an ODN across these use cases, others prefer to separate their ODNs because of concerns around security and significantly different SLAs. Nevertheless, PON technologies beyond XGS-PON are already central components of a larger discussion around simplifying access and edge network connectivity.

Though the ITU has determined that single-channel 50G PON, as defined in its G.hsp.50pmd specification, is the next-generation technology it will move forward with, the expanding use cases for PON, combined with requirements for speeds beyond what XGS-PON can provide, have opened the door for 25GS-PON as a potentially important tool in operators’ toolboxes. The current strength in fiber buildouts and the need to address new use cases today have resulted in a list of operators who simply can’t wait for 50G PON to be fully standardized, tested, and productized. As such, other industry standards groups, including the Broadband Forum, are working with 25GS-PON and looking at developing testing and interoperability standards for the technology.

While standards bodies have traditionally defined which technologies get adopted and when, there are certainly cases where operators have placed their thumbs on the scale in favor of a preferred option. These choices don’t generally go against what the standards bodies recommend or are working toward. Instead, they satisfy a more immediate internal requirement that doesn’t mesh with the standardization, certification, and product availability timeline defined by the standards bodies and participating equipment suppliers.

Larger operators, including AT&T, BT Openreach, Comcast, and Deutsche Telekom, have also become far more comfortable over the last few years defining standards and pushing them through other industry organizations, such as ONF and the Broadband Forum. These operators know they have the scale, market potential, and, most importantly, internal technology and product development engineering teams to drive standards and thereby influence the product roadmaps of their incumbent equipment suppliers.

And that appears to be what is happening with 25GS-PON. The growing list of service providers taking part in the 25GS-PON MSA shares a general consensus around PON technology choices: use GPON and XGS-PON today for the bulk of residential FTTH deployments, then add 25GS-PON, using the same equipment and ODN, where it makes strategic sense.

This strategy is no different from other access technology strategies and deployment models seen in the past. From ADSL to ADSL2+, VDSL to G.fast, and GPON to XGS-PON, broadband access networks are in a constant state of upgrade. It just so happens that they are now being extended to support other use cases and other end customers. The PON market, as well, has always been one offering different technology options to suit each operator’s unique use case requirements and competitive dynamics. That flexibility is proving to be particularly beneficial in today’s hypercompetitive broadband environment, in which each operator might have a different starting point when it comes to fiber deployments, but likely has similar goals when it comes to subscriber acquisition and revenue generation. In this environment, many operators have clearly said that they simply can’t wait on a promising technology when they need to establish their market presence today. And so, the vendor ecosystem has responded again with options that can steer them down a path to success.


Since 2020, a change in network usage patterns has imposed new requirements on IT infrastructure. Enterprises, educational institutions, and governments have experienced a seismic shift in the way they operate. Some organizations now have an entirely remote workforce. Other businesses have hybrid models, with a variety of work-from-home and work-in-the-office permutations. Even companies with exclusively on-site employees have enabled new video applications. Videoconferencing improves employee efficiency but also swamps the network with traffic, exposing network performance problems.

Defining the Future of Campus Networks

Amidst these profound changes in work patterns, enterprises are renewing their strategic IT plans. Companies must ensure that investments in their communications infrastructure support their current work patterns, but also that they are on a path to meet their future needs. Luckily, while enterprises are focusing on understanding today’s requirements, IEEE committees are playing a foundational role in developing IT standards for the future.

The IEEE 802 standards committee is responsible for the evolution of local, metropolitan, and other area networks. It works primarily at the two lowest layers of the OSI reference model (the data link and physical layers) and refers to the IETF’s work for the definition of the upper layers.

For example, the evolution of wireless LAN protocols, as defined by the 802.11 WLAN working group, has been addressing organizations’ hunger for more wireless bandwidth in campus networks. With each successive 802.11 version, enhancements to modulation and coding schemes have increased spectral efficiency and lowered interference. Each WLAN standard has increased the maximum theoretical link rate, with Wi-Fi 7’s maximum rate more than 75 times that of Wi-Fi 4, as shown below.

Figure: Maximum theoretical link rate by Wi-Fi generation
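The original chart is not reproduced here, but its comparison can be approximated from the commonly cited maximum theoretical PHY rates of each Wi-Fi generation. The values below are widely quoted approximations rather than figures taken from the chart itself:

```python
# Commonly cited maximum theoretical PHY rates per Wi-Fi generation.
max_phy_rate_gbps = {
    "Wi-Fi 4 (802.11n)":  0.6,   # 600 Mbps
    "Wi-Fi 5 (802.11ac)": 6.9,
    "Wi-Fi 6 (802.11ax)": 9.6,
    "Wi-Fi 7 (802.11be)": 46.0,
}

baseline = max_phy_rate_gbps["Wi-Fi 4 (802.11n)"]
for standard, rate in max_phy_rate_gbps.items():
    print(f"{standard}: {rate:>5.1f} Gbps ({rate / baseline:.0f}x Wi-Fi 4)")
```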

However, the IEEE 802.11 organization focuses on more than increasing throughput. Made up of a multitude of discussion groups, study groups, and more formal working groups, the IEEE is working to improve the IoT (Internet of Things) functionality, reliability, latency, power consumption, and security of the LAN. Enterprises committed to transforming their networks should consider all of these new capabilities.

Organizations Begin Their Network Transformations

To meet the dramatic shift in employee work behaviors, companies are rethinking the optimal use of office space. In its 2022 Occupancy Benchmarking Program, CBRE (a global leader in commercial real estate services and investments) found that 87% of the commercial real estate occupiers surveyed worldwide wanted to optimize their real estate portfolios. In the survey, occupiers identified the need to invest in technology that integrates the physical and virtual work experience.

Many different enterprise verticals are investing in IT infrastructure to meet new requirements. For instance:

  • Multinational banks with high volumes of video conference traffic.
  • Municipal governments with wireless-first, smart city roadmaps.
  • Real estate owners and operators providing high-end WLAN coverage to their tenants in dense urban environments.
  • Universities transitioning to a Wi-Fi-only model for their students and staff and preparing for immersive learning by means of AR/VR applications.
  • Manufacturers interested in integrating WLAN in their operations, requiring low-latency and deterministic connectivity.
  • Retail operations revolutionizing processes such as self-checkout, inventory management, and product labeling.

From our discussions with systems integrators, manufacturers, service providers and enterprises, we have identified five key trends that will reshape the enterprise LAN over the next three to five years.

1. A Wi-Fi First strategy

Prior to 2020, many IT departments worked with a standard metric of “number of Ethernet ports per desk”. For companies with employees working from home or in a hybrid model, this metric is no longer valid.

Wi-Fi first implies the deployment of low-density Access Points (APs) to provide connectivity in areas where there had previously been Ethernet ports, such as dorm rooms or low-density cubicles. Wi-Fi first can also involve covering common areas with high-density, high-performance APs to accommodate surges in traffic, such as in conference rooms or stadiums. Finally, a Wi-Fi first strategy often involves providing WLAN signals in new areas that had never had connectivity before; for example, urban centers, company patios, or school gymnasiums.

In addition to ensuring that the WLAN is delivering high bandwidth with low interference, an enterprise must ensure that the network backbone can support the traffic. Organizations’ strategic IT plans must include a provision for the growing bandwidth of WLAN uplink ports.

Most enterprise APs shipped today are equipped with a 1 Gbps port. However, APs supporting the latest standards are capable of higher data transfer rates; they can support 2.5 Gbps, 5 Gbps or even 10 Gbps interfaces. As Wi-Fi 7 is adopted in the market, we expect 10 Gbps ports to grow considerably, allowing higher bandwidth applications to operate in the LAN.

2. A Smarter Network Means Efficiency and Automation

With new demands on the network, organizations need a better understanding of how their facilities are being used. A wide range of applications and services are available to provide insights into meeting room occupancy, environmental readings, and the location of assets.

To enable these insights, enterprises are integrating more and more “things”, instead of just “people”, onto their LANs. The IoT can involve wired devices, such as security cameras and monitors for video conference rooms. The IoT can also rely on wireless devices, such as occupancy sensors, electronic labels, or environmental sensors.

Some devices, such as video cameras or VR headsets, can increase LAN traffic considerably. Beyond traffic, organizations also need to consider the growing need for Power over Ethernet (PoE) ports on their campus switches. These ports are required to deliver more power to high-performance APs, as well as to devices such as cameras. We expect that the percentage of switch ports that support PoE will continue to rise as the demand for high-end devices grows.
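
As a rough illustration of the budgeting exercise this implies, the sketch below totals worst-case PoE draw for a hypothetical device mix against an illustrative switch power budget. The device counts and budget are assumptions; the per-class limits are the standard 802.3af/at/bt PSE maximums.

```python
# Worst-case PoE draw for a hypothetical device mix vs. a switch PoE budget.
PSE_LIMIT_W = {
    "802.3af": 15.4,     # Type 1
    "802.3at": 30.0,     # Type 2 (PoE+)
    "802.3bt-t3": 60.0,  # Type 3
    "802.3bt-t4": 90.0,  # Type 4
}

# Hypothetical device mix on a 48-port access switch.
devices = [
    ("Wi-Fi 7 AP",         "802.3bt-t3",  8),
    ("PTZ camera",         "802.3at",    12),
    ("IoT sensor gateway", "802.3af",    20),
]

SWITCH_POE_BUDGET_W = 740  # illustrative shared PoE budget

draw = sum(PSE_LIMIT_W[poe_class] * count for _, poe_class, count in devices)
print(f"worst-case draw: {draw:.0f} W against a {SWITCH_POE_BUDGET_W} W budget")
print("over budget" if draw > SWITCH_POE_BUDGET_W else "within budget")
```

In this hypothetical mix, the high-power APs and cameras alone exceed the shared budget, which is why PoE capacity is becoming a first-order criterion in campus switch selection.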

In addition to feeding applications with data to improve enterprise efficiency, the next generation of campus technology allows for the automation of network management. AI-Ops refers to features that use advanced analytics to simplify network operations, helping to filter alarms, predict network performance issues, or even automatically suggest and apply fixes to network problems.

The head of IT of one organization with which we spoke was amazed that activating AI-Ops features in the campus LAN uncovered existing network configuration problems that were previously undetected; these problems had been affecting quality of service for years. In addition to improving the user experience, AI-Ops reduced the number of trouble tickets by 95%.

3. Emphasis on Sustainability

Enterprises concerned with the environment are analyzing every step in their value chains to eliminate waste, decrease dependence on non-renewable resources, and reduce power consumption.

Initiatives that environmentally conscious enterprises are taking in their LANs include:

  • Configuring Energy Efficient Ethernet (EEE) on switch ports, which moves ports to a low-power state when they are not carrying traffic.
  • Replacing high-capacity copper cabling with fiber. Fiber-optic Ethernet links can support 10 Gbps and above, and they can span longer distances with lower losses.
  • Flattening the network hierarchy and reducing the number of switches in the network.
  • Purchasing equipment made of recycled materials and packaged in a sustainable manner.
  • Moving to commercial models (such as Campus Network as a Service) that incorporate the re-purposing of old IT equipment when it is replaced.

4. Low-Latency Communications

WLAN revenues generated from sales to manufacturing companies grew by more than $500 million in 2022, an increase that exceeded the growth in any other vertical that we track. Industries that adopt wireless infrastructure for their industrial processes often need low-latency, deterministic communications.

In November 2018, the IEEE 802.11 Real Time Applications Topic Interest Group (RTA-TIG) published a report outlining the usage models and technical requirements of an array of real-time applications. The report cites a wide range of applications for industrial systems. Applications categorized as “Class B”, including AR/VR and remote human-machine interaction, had a latency bound requirement of between 1 and 10 ms, with “latency bound” defined as the worst-case one-way latency measured at the application layer.

Other verticals, apart from manufacturing, will also require low-latency capabilities. For instance, VR or AR applications relying on interactive video are relevant to logistics, education, and retail verticals.

As low-latency applications become more common, deploying Wi-Fi 7 will be an important initiative for enterprises. A study at Virginia Tech showed that Wi-Fi 7, with its inclusion of Multi-Link Operation (MLO), lowers communication latency by allowing devices to operate in multiple bands simultaneously. Enterprises can also benefit from Wi-Fi 7’s ability to support a diversity of channel widths. By judiciously assigning certain channels to latency-sensitive applications, enterprises will be able to lower latency for the users who are most sensitive to this parameter.

In addition to upgrading to Wi-Fi 7, enterprises may further lower latency by investing in local computing infrastructure to avoid processing data from latency-sensitive applications in the cloud.

5. A Network That Prioritizes Experience

In its spring 2023 survey of office occupiers, CBRE determined that the average utilization rate of office space in Asia Pacific was 65% and that, in North America and Europe, it was below 60%. These low utilization rates reflect hybrid work patterns, which are the main reason the quantity of video traffic on the LAN has exploded. Employees now take videoconferencing capabilities for granted in their daily interactions with colleagues and customers.

This reliance on videoconferencing puts a spotlight on network performance. A user of a popular videoconference application can require up to 3.3 Mbps of bandwidth for a meeting with 6 participants and content sharing. As the number of concurrent videoconferences grows, aggregate bandwidth demand grows accordingly, and network congestion becomes apparent, impeding employees’ ability to communicate effectively. Now that doing business deals over videoconference is a regular occurrence, a dip in video quality can affect a company’s revenues.
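
A quick calculation shows how that per-user figure compounds. Using the 3.3 Mbps value cited above, with illustrative concurrency levels:

```python
# Aggregate LAN load from concurrent videoconference users. The per-user
# figure comes from the text; the concurrency levels are illustrative.
PER_USER_MBPS = 3.3  # meeting with 6 participants and content sharing

for concurrent_users in (50, 200, 500):
    aggregate_gbps = concurrent_users * PER_USER_MBPS / 1000
    print(f"{concurrent_users:>3} concurrent users: ~{aggregate_gbps:.2f} Gbps")
```

At a few hundred concurrent participants, video alone approaches or exceeds the capacity of a 1 Gbps uplink, one reason for the move to multi-gigabit links discussed below.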

To ensure that employees can rely on high-quality videoconferencing, enterprises are adding capacity to their networks, but they are also taking other approaches. IT departments are collecting data from end-user devices, videoconference applications, and network operations platforms, and using machine learning to identify the sources of network problems and to suggest resolutions. Networking equipment schedulers can also be enhanced to optimize video streams or to improve performance for certain groups of users, for specific applications, or for special events. Enhancements to support the high bandwidth of today’s video applications will lay the groundwork for the next generation of applications using very high resolution and volumetric video.

Campus Networks Must Be Ready to Support Future Applications

An organization’s strategic IT plan will cover the five themes discussed above to varying degrees, depending on the different use cases and priorities. The need to increase bandwidth will be a common element of all the plans.

Although 1 Gbps will remain the speed of the majority of campus switch ports shipped over the next few years, we predict growth in higher-speed ports in the LAN. This push toward higher-capacity links will be driven by the need to connect branch offices at high speeds, by the elevated traffic generated by campus applications, and by the deployment of Wi-Fi 7 APs.

To build an IT strategic plan that will stand the test of time, enterprises must consider that their network traffic patterns will evolve along with their mode of work, whether it be mainly work-from-home, hybrid, or fully on-premises. The IEEE has laid a foundation of next-generation campus IT functionality to meet the objectives of an organization’s IT plan, such as providing greater visibility into the usage of resources, improving the efficiency of workers, and increasing the sustainability of operations. Underlying all requirements is the need for greater bandwidth to the branch, in the LAN, and directly to end users. By moving to 10 Gbps in the campus, enterprises are taking an important step in readying their networks for the future.


As analysts, we are increasingly asked to speculate on how Artificial Intelligence will impact various segments of enterprise and service provider networks. My colleague, Stefan Pongratz, began our collective responses to these requests with an excellent piece on how AI will impact mobile RAN investments.

For my piece on how AI will impact broadband networks, I thought I would start by posing the question to ChatGPT, the generative AI platform, to see what it would suggest as AI’s impact on broadband networks and services. My specific query was “How will artificial intelligence impact broadband networks and services?” Here are the verbatim results:

Artificial Intelligence (AI) is poised to have a significant impact on broadband networks and services in several ways. Some of the key impacts include:

  • Network Optimization: AI can be used to optimize the performance of broadband networks by analyzing data in real-time and making adjustments to network configurations. This can lead to better utilization of network resources, reduced congestion, and improved overall network efficiency.
  • Predictive Maintenance: AI-powered analytics can help predict potential network failures or issues before they happen. By analyzing data from various network components, AI can identify patterns that indicate impending problems, allowing for proactive maintenance and reducing downtime.
  • Enhanced Customer Service: AI can be employed in customer service applications to provide personalized support, answer customer queries, and troubleshoot common issues. AI chatbots can handle a significant portion of customer interactions, improving response times and overall customer satisfaction.
  • Quality of Service (QoS) Improvement: AI can monitor network performance and prioritize certain types of traffic based on user demands. This can ensure that critical services, such as video conferencing or online gaming receive higher priority and a better quality of service.
  • Network Security: AI can play a crucial role in identifying and mitigating cybersecurity threats on broadband networks. By analyzing vast amounts of network data, AI can detect unusual patterns, potential attacks, and vulnerabilities, helping to enhance network security.
  • Network Traffic Management: AI can optimize network traffic routing, especially in scenarios with dynamic traffic patterns. It can analyze real-time data to redirect traffic through the most efficient paths, reducing latency and improving user experience.
  • Content Optimization: AI can analyze user behavior and preferences to personalize content delivery. By understanding individual preferences, AI can recommend content, advertisements, and services that are more likely to resonate with specific users.
  • Network Planning and Expansion: AI can assist in planning and expanding broadband networks by analyzing demographic and usage data. This can help service providers determine where to invest in network infrastructure, ensuring that resources are allocated efficiently.
  • Resource Allocation: AI can help allocate network resources based on demand and usage patterns, dynamically adjusting bandwidth to accommodate peak periods and preventing network congestion.
  • Network Virtualization: AI can be utilized in software-defined networking (SDN) and network function virtualization (NFV) to optimize the allocation of virtual resources, making networks more flexible, scalable, and cost-effective.

While AI offers numerous benefits to broadband networks and services, it also presents challenges, such as ensuring data privacy, addressing ethical concerns, and maintaining transparency in decision-making algorithms. As AI continues to evolve, its impact on broadband networks and services is likely to become even more profound.

This might be dangerous to say, but I couldn’t have said it any better myself. All of the network impacts ChatGPT identified are definitely aligned with what we hear from broadband service providers around the world. This is especially true when it comes to proactive network and infrastructure management.

We are already seeing a small but growing list of operators using machine learning tools as a first step in improving the overall reliability of their broadband access networks. Comprehensive, active monitoring of fiber broadband networks and optical assets has generally been reserved for long-haul and metro Ethernet networks. In broadband access networks, fiber monitoring has generally been limited to identifying physical fiber cuts or bends and then rolling a truck so a technician can use a handheld OTDR or PON power meter to locate the physical issue. Broadband network operators have always struggled to balance cost and reliability, which is why they have typically relied on reactive toolsets rather than proactive ones.

But with the increased investment levels and long-term strategic significance of fiber broadband networks for both enterprise and residential applications, service providers are increasingly introducing machine learning and AI platforms to help them anticipate and correct network issues before they are impactful. Self-healing broadband networks are the goal.
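
As a concrete, if simplified, illustration of what proactive monitoring can look like, the sketch below flags gradual degradation in simulated ONT receive-power readings using a rolling z-score. Production systems would apply far more sophisticated models to live telemetry, but the principle of catching drift before a hard failure is the same:

```python
# Flag gradual optical power degradation on a PON port with a rolling z-score,
# well before a hard failure triggers an alarm. Readings here are simulated;
# a real deployment would stream telemetry from the OLT/ONTs.
import statistics

def degradation_alerts(rx_power_dbm, window=24, threshold=3.0):
    """Yield indices where a reading deviates sharply from its recent baseline."""
    for i in range(window, len(rx_power_dbm)):
        baseline = rx_power_dbm[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # avoid divide-by-zero
        if abs(rx_power_dbm[i] - mean) / stdev > threshold:
            yield i, rx_power_dbm[i]

# Simulated hourly ONT receive power: stable near -20 dBm, then a slow drift
# downward as a connector degrades.
readings = [-20.0 + 0.05 * (i % 3) for i in range(72)]
readings += [-20.0 - 0.15 * i for i in range(1, 25)]

for idx, value in degradation_alerts(readings):
    print(f"hour {idx}: {value:.2f} dBm deviates from recent baseline")
```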

Beyond ChatGPT’s fairly astute responses, there are also other impacts that AI will have on broadband network spending and services:

AI Will Result in Additional Subsidies to Expand Broadband Accessibility and Affordability

COVID-19 drove the first of a two-part wave of governments understanding the need for their citizens to have access to broadband, and in many cases subsidizing both the expansion of broadband networks to reach previously unserved locations and the affordability of those services, so that citizens could learn, work, and engage in commerce from home.

AI, particularly generative AI, is the second part of that wave, and it will keep governments investing in the broadband networks and services of the future. Somewhat lost amid all the speculation about how transformative generative AI will be to GDP, as well as to how individuals interact with the Internet and each other, is the fact that no government, and no service provider for that matter, wants to be known as the entity that left its citizens or its subscribers behind.

Therefore, we expect legislators in many countries will push for additional investments to be made to expand the availability and affordability of broadband services. Along those lines, AI tools will prove very useful in the critical task of mapping and identifying locations and communities that lack necessary broadband speeds. In the US, for example, AI tools are being used throughout the BEAD (Broadband Equity Access and Deployment) process to get the most accurate determination of broadband availability at the census block level to start. Ultimately, these datasets can be further parsed so that availability and performance can be determined at a per-street level. The goal, of course, is to ensure that the capital is used as efficiently as possible to eliminate broadband deserts. But AI tools will eventually help governments and service providers determine where their speeds and service levels might not be evolving quickly enough to support the needs of their communities and ensure that a broadband divide doesn’t become an AI divide.

The Combination of AI and the Metaverse Will Drive Increasing Traffic Requirements

The metaverse is often cited as a reason why service providers need to deploy fiber networks despite today’s applications and content generally not taxing those connections. Though the metaverse will ultimately have an impact on broadband service requirements in both enterprise and residential networks, it is the combination of generative AI and the metaverse that will really be a catalyst for speed growth and continued latency and reliability improvements.

In gaming, VR, and AR applications, the combination of generative AI and the metaverse will dramatically improve how users interact with their environments. The ability to use natural language to create new worlds or to navigate those worlds while also being able to request statistics about those environments in real-time will result in a whole new universe of content creators, and game and application designers. Their ability to successfully create and interact with their 3D and immersive environments will depend largely on their connectivity.

Obviously, the data centers running the real-time engines powering these immersive environments will experience the biggest demand. But there is expected to be some distribution of processing at the device level and at the edge of networks, which means that broadband capacity and throughput will also have to scale up based on users’ requirements.

The use of natural language to search, shop, and interact online, as well as to control in-home or in-building IoT sensors, will have a significant impact on overall traffic growth. Where online searches used to be fairly static requests for particular URLs, using natural language to make similar requests, while a convenience for users, requires significantly more language-model processing and broadband connections that can support high downstream and upstream speeds.

And it goes without saying that securing these interactions will be critical, which will also introduce additional bandwidth requirements as well as SLAs and service tiers that match subscribers’ levels of risk tolerance.