
With fiber deployments accelerating around the world and operators announcing additional fiber expansion projects on a seemingly daily basis, there is no question that the competition for broadband subscribers and revenue is intensifying faster than some operators would prefer. Because of that intensification, and because of the time and cost required for fiber network deployments, operators are increasingly using a range of technologies in their fiber networks. System vendors have made this easier by adopting combo optics and combo cards that can support a range of technologies, from 2.5G GPON to 25G PON, and potentially beyond. Equipment vendors have heard the call from their operator customers, who need every tool available to them to succeed in a highly competitive environment.

Although operators are, for the most part, still in the early stages of deploying gigabit and multi-gigabit services using XGS-PON, their fiber expansions are opening up additional opportunities: applications and newly addressable customers that already require speeds beyond what XGS-PON can provide. For example, large enterprises and campus environments, which have typically been served by point-to-point Ethernet connections, are increasingly being passed by PON ODNs (optical distribution networks), especially those enterprises adjacent to residential neighborhoods.

Though the ITU (International Telecommunication Union) has determined that single-channel 50G PON, as defined in its G.hsp.50pmd specification, is the next-generation technology it will move forward with, the expanding set of use cases for PON, combined with their need for speeds beyond what XGS-PON can provide, has opened the door for 25G PON as an important tool in operators’ toolboxes. The current strength in fiber buildouts and the need to address new use cases today have produced a list of operators that simply can’t wait for 50G PON to be fully standardized, tested, and productized.

In today’s hypercompetitive broadband market, timing and the availability of the right technology tools are everything. Although 50G PON provides a tremendous theoretical boost in speeds, the timeline for its availability is still an open question. China Mobile is on record saying that it will begin limited deployments of 50G PON in 2023. However, the CAICT (China Academy of Information and Communications Technology) has stated that it believes mass-market deployments of 50G PON won’t occur until the second half of this decade. With XGS-PON deployments just hitting their stride as of last year, and a typical deployment cycle of around 5-7 years for each new technology, the CAICT’s estimate seems more realistic. Even if we split the difference, the market is still looking at 2025 as the earliest point at which 50G PON sees meaningful deployments for residential applications.

One of the biggest challenges for all 50G PON component suppliers and equipment vendors to overcome is the increased optical power budget required. Additionally, the proposed integration of DSPs (digital signal processors) is a significant change, as they have not been required in PON technologies before. Incorporating DSPs theoretically allows for the use of lower-cost 25G optics, which are widely available and mature. DSPs also allow for the support of both OOK (on-off keying) and OFDMA (orthogonal frequency division multiple access). This support is critical for operators because it allows them to re-use their existing ODNs and avoid significant and costly changes that could impact thousands of subscribers.
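
To see why the power budget matters, consider the basic planning arithmetic: every kilometer of fiber and every splitter stage eats into the loss budget the optics must close, and a higher line rate generally demands better receiver sensitivity or higher launch power to close the same budget. The sketch below uses illustrative planning values only; the attenuation, splitter excess, connector loss, and the assumed 29 dB budget are assumptions for illustration, not figures from any 50G PON specification.

```python
# Rough ODN loss check against an assumed PON optical power budget.
# All values are illustrative planning assumptions, not specification limits.
import math

def odn_loss_db(fiber_km, split_ratio, atten_db_per_km=0.35,
                splitter_excess_db=1.0, connector_loss_db=1.5):
    """Approximate end-to-end ODN loss: fiber + splitter + excess + connectors."""
    splitter_db = 10 * math.log10(split_ratio) + splitter_excess_db
    return fiber_km * atten_db_per_km + splitter_db + connector_loss_db

budget_db = 29.0  # assumed loss budget for illustration
loss = odn_loss_db(fiber_km=20, split_ratio=32)
print(f"ODN loss ~{loss:.1f} dB, margin ~{budget_db - loss:.1f} dB")  # ~24.6 dB, ~4.4 dB margin
```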

DSP-enhanced PON technologies are already being put through their paces, with China Mobile having demonstrated transmission rates of 41G downstream and 16G upstream in a hybrid environment using a 50G PON ONT as well as a 10G PON ONT. Meanwhile, Nokia has demonstrated 100G PON in conjunction with Vodafone at a lab in Germany. Both trials occurred in 2021 and more proof-of-concept work is expected throughout this year.

This brings us back to 25G PON. The traditional path to wide-scale deployment runs through one of the primary standards bodies (the ITU or IEEE), but that avenue was closed to Nokia and the other component suppliers and service providers that wanted to see both 25G PON and 50G PON standardized through the ITU and, in the meantime, to accelerate the availability of 25G PON to bridge the gap between today’s 10G technologies and tomorrow’s 50G and 100G options. So, this collection of vendors and operators organized the 25GS-PON MSA (Multi-Source Agreement) group to develop specifications, define interoperability, and generally evolve the technology outside the traditional standards organizations. The group’s members include AT&T, Chorus, Chunghwa Telecom, Cox Communications, NBN, Opticomm, and Proximus: service providers with the collective buying power to make the R&D effort worthwhile for the growing list of component and equipment vendors who are also members.

CableLabs, which focuses on developing standards and technologies on behalf of its cable operator members, is also a member of the MSA. Just like their telco counterparts, cable operators are trying to determine their bandwidth requirements in residential networks over the next few years, so having a choice among technology options is important. But unlike telcos, cable operators must also determine whether they will satisfy these future bandwidth requirements with DOCSIS 4.0 and their existing coax plant or with fiber. In either case, 25G PON is being examined both as a residential technology beyond current 10G DPoE (DOCSIS Provisioning over EPON) options and as a potential aggregation technology for remote PHY and remote MACPHY nodes.

CableLabs is also working on its own initiatives, including single-wavelength 100G Coherent PON, which is seen as an ideal long-term option for cable operators that have wide ranges of fiber span lengths (up to 80 km) and need split ratios more akin to today’s service group sizes of 200-500 homes passed per node. Nevertheless, the timeline for 100G Coherent PON, like that of 50G PON, is still being determined.

 

Expanding use cases for PON driving the need for 25G

Beyond the uncertain timing of 50G PON and the desire for technology choice, one of the primary reasons for the short-term demand for 25G PON is simply the desire to use PON in applications that go well beyond traditional residential broadband access. It is in these applications that 25G PON will see the most deployments, particularly within the next 2-3 years.

Enterprise services have typically been delivered over point-to-point Ethernet connections. But as operators expand their PON ODNs to support residential and small-to-medium business applications, 25G PON can be used to deliver symmetric 10G connections, comparable to or better than what enterprises are accustomed to. Because 25G PON has been designed to coexist with both GPON and XGS-PON, service providers have the flexibility of using the same OLT to deliver both high- and low-SLA traffic, or they can split that traffic and customer base across multiple OLTs. Either way, the existing ODN remains intact.

Additionally, service providers are interested in 25G PON for their 5G transport networks, particularly for small cell transport. Though LTE networks never drove volume deployments of PON equipment for backhaul, there is now broader consensus that the PON technology options available today provide the bandwidth (symmetric 10G) and meet the latency requirements necessary to support 5G services and their corresponding SLAs.

 

Clear Upgrade Path

Though standards bodies have traditionally defined which technologies get adopted and when, there are certainly cases where operators have placed their thumbs on the scales in favor of a preferred option. These choices don’t generally go against what the standards bodies recommend or are working towards. Instead, they satisfy a more immediate internal requirement that doesn’t mesh with the standardization, certification, and product availability timeline defined by the standards bodies and participating equipment suppliers.

Larger operators, including AT&T, BT Openreach, Comcast, and Deutsche Telekom, have also become far more comfortable over the last few years defining standards and pushing them through other industry organizations, such as the ONF and the Broadband Forum. These operators know they have the scale and market potential to drive standards and thereby influence the product roadmaps of their incumbent equipment suppliers. If an incumbent drags its feet, there are always other vendors waiting in the wings, along with the threat of moving to completely virtualized, white-box solutions that would reduce the revenue opportunity for that supplier.

And that’s what appears to be happening with 25G PON. Service providers that are part of the MSA are certainly voting with their pocketbooks. Nokia, for its part, has made things quite simple for these operators: Use GPON and XGS-PON today for the bulk of your residential FTTH deployments, and then add in 25G PON using the same equipment and ODN where it makes strategic sense. Nokia does indeed seem to be seeding the market, having reported a cumulative total of 200k 25G-ready PON OLT ports through 3Q21, with a bigger jump expected in the fourth quarter.

Nokia realizes it must make hay now while the timeline around 50G PON remains in flux and demonstrations of its performance in labs remain limited.

But the PON market has always been one offering different technology options to suit each operator’s unique use case requirements and competitive dynamics. That flexibility is proving to be particularly beneficial in today’s hypercompetitive broadband environment, in which each operator might have a different starting point when it comes to fiber deployments, but likely has similar goals when it comes to subscriber acquisition and revenue generation. In this environment, many operators have clearly said that they simply can’t wait on a promising technology when they need to establish their market presence today. And so, the vendor ecosystem has responded again with options that can steer them down a path to success.


 

Data centers are the backbone of our digital lives, enabling the real-time processing and aggregation of data and transactions, as well as the seamless delivery of applications to both enterprises and their end customers. Data centers have been able to grow to support ever-increasing volumes of data and transaction processing thanks in large part to software-based automation and virtualization, which allow enterprises and hyperscalers alike to adapt quickly to changing workload volumes as well as physical infrastructure limitations.

Data centers have already seen phenomenal growth and innovation, and the principles behind that innovation are being integrated into service provider networks. Even so, data centers of all sizes are about to undergo a significant expansion as they are tasked with processing blockchain, bitcoin, IoT, gigabit broadband, and 5G workloads. In our latest forecast, published earlier this month, we expect worldwide data center capex to reach $350 B by 2026, representing a projected five-year growth rate of 10%. We also forecast that hyperscale cloud providers will double their data center spending over the next five years.
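
As a quick sanity check on that arithmetic, assuming the 10% figure is a compound annual growth rate over the 2021-2026 window (the baseline below is implied by the math, not a published number):

```python
# Back-of-the-envelope check: what 2021 baseline does $350 B in 2026 imply
# at a 10% compound annual growth rate? (Illustrative arithmetic only.)
target_2026_b = 350
cagr = 0.10
implied_2021_b = target_2026_b / (1 + cagr) ** 5
print(f"Implied 2021 baseline: ~${implied_2021_b:.0f} B")  # ~$217 B
```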

Additionally, enterprises are becoming smarter about how to balance and incorporate their private clouds, public clouds, and on-premises clouds for the most efficient processing of workloads and application requests. Much like highly resilient service provider networks, enterprises are realizing that distributing workload processing allows them to scale faster and with more redundancy. Despite the general trend toward migrating to the cloud, enterprises will continue to invest in on-premises infrastructure to handle workloads that involve sensitive data, as well as applications that are highly latency-sensitive.

As application requests, change orders, equipment configuration changes, and other general troubleshooting and maintenance requests continue to increase, anticipating and managing the necessary changes in multi-cloud environments becomes exceedingly difficult. Throw in the need to quickly identify and troubleshoot network faults at the physical layer and you have a recipe for a maintenance nightmare and, more importantly, substantial revenue loss due to the cascading impact of fragmented networks that are only peripherally integrated.

Although automation and machine learning tools have been available for some time, they are often designed to automate application delivery within a single cloud environment, not across multiple clouds and multiple network layers. Automating IT processes across both physical and virtual environments, and across the underlying network infrastructure, compute, and storage resources, has long been a challenge. Each layer has its own distinct set of issues and requirements.

New network rollouts or service changes that require network configuration changes are typically very labor-intensive, and they frequently yield faults in the early stages of deployment that take significant man-hours to resolve.

Similarly, configuration changes sometimes result in redundant or mismatched operations due to the manual entry of these changes. Without a holistic approach to automation, there is no way to verify or prevent the introduction of conflicting network configurations.

Finally, and this is just as true of service provider networks as it is of large enterprises and hyperscale cloud providers, detecting network faults is often a time-consuming process, principally because faults are handled passively until they are located and resolved manually. Traditional alarm reporting followed by manual troubleshooting must give way to proactive, automatic network monitoring that quickly detects faults and uses machine learning to rectify them without any manual intervention.
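
As a simple illustration of what proactive detection can look like at the telemetry layer, the sketch below flags interface error counts that deviate sharply from recent history using a rolling z-score. The window, threshold, and sample data are hypothetical; production platforms use far richer models, but the idea is the same: detect automatically and trigger remediation rather than wait for an alarm to be investigated.

```python
# Minimal anomaly detection on interface telemetry using a rolling z-score.
# Window size, threshold, and sample data are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=12, threshold=3.0):
    """Yield (index, value) for samples that deviate sharply from recent history."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Hypothetical per-minute CRC error counts on a single switch port.
crc_errors = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3, 2, 2, 45, 3, 2]
for idx, val in detect_anomalies(crc_errors):
    print(f"minute {idx}: {val} CRC errors looks anomalous -> trigger remediation")
```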

 

Automating a Data Center’s Full Life Cycle

As the size and complexity of data centers continue to increase and as workload and application changes accelerate, the impact on the underlying network infrastructure can be difficult to predict. Various organizations both within and outside the enterprise have different requirements, all of which must somehow be funneled into a common platform to prevent conflicting changes anywhere from the application delivery layer down to the network infrastructure. These organizations can also have drastically different timeframes for the expected completion of changes, largely due to siloed management of different portions of the data center, as well as the different diagnostic and troubleshooting tools in use by the network operations and IT infrastructure teams.

In addition to pushing their equipment vendor and systems integrator partners to deliver platforms that solve these challenges, large enterprises also want platforms that give them the ability to automate the entire lifecycle of their networks. These platforms use AI and machine learning to build a thorough and evolving view of the underlying network infrastructure, allowing enterprises to:

    • Support automatic network planning and capacity upgrades by modeling how the addition of workloads will impact current and future server requirements, as well as the need to add switching and routing capacity to support application delivery (a simple sizing sketch follows this list).
    • Implement network changes automatically, reducing the need for manual intervention and thereby reducing the possibility of errors.
    • Continuously provide detailed network monitoring at all layers, along with proactive fault detection, location, and resolution, while limiting manual intervention.
    • Simplify the service and application provisioning process by providing a common interface that then translates requests into desired network changes.

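As a rough illustration of the planning bullet above, once workload growth has been translated into server counts, the network-side sizing largely reduces to arithmetic. The port counts, NIC speed, and oversubscription target below are hypothetical assumptions, not recommendations.

```python
# Hypothetical leaf-layer sizing: translate added servers into switch and uplink needs.
# Port counts, NIC speed, and the oversubscription target are assumptions.
import math

def size_leaf_layer(new_servers, nic_gbps=25, server_ports_per_leaf=48,
                    uplink_gbps=100, uplinks_per_leaf=6, max_oversub=3.0):
    leaves = math.ceil(new_servers / server_ports_per_leaf)
    oversub = (server_ports_per_leaf * nic_gbps) / (uplinks_per_leaf * uplink_gbps)
    return leaves, oversub, oversub <= max_oversub

leaves, oversub, ok = size_leaf_layer(new_servers=400)
print(f"{leaves} leaf switches, {oversub:.1f}:1 oversubscription, within target: {ok}")
```
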
Ultimately, one of the key goals of these platforms is to create a closed loop between network management, control, and analysis capabilities so that changes in upper-layer services and applications can automatically drive defined changes in the underlying network infrastructure. For this to become a reality in increasingly complex data center network environments, these platforms must provide some critical functions, including:

    • A unified data model and data lakes across multiple cloud environments and multi-vendor ecosystems
      • This function has been a long-standing goal of large enterprises and telecommunications service providers. Ending the swivel-chair approach to network management and delivering error-free network changes with minimal manual intervention are key functions of any data center automation platform.
    • Service orchestration across multiple, complex service flows
      • This function has also been highly sought after by large enterprises and service providers alike. For service providers, SDN overlays were intended to add these functions and capabilities to their networks. Deployments have yielded mixed but generally favorable results. Nevertheless, the principles of SDN continue to proliferate into other areas of the network, largely due to the desire to streamline and automate the service provisioning process. The same can be said for large enterprises and data center providers.

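Conceptually, that closed loop is a reconcile cycle: derive the intended network state from the service and application layer, observe the actual state, and push only the difference. The sketch below is a vendor-neutral skeleton; the intent model and the apply/remove callbacks are placeholders for whatever southbound interface a given platform exposes.

```python
# Skeleton of a closed-loop reconcile cycle: intent -> observe -> diff -> apply.
# The intent model and the callbacks are placeholders for illustration only.
from typing import Callable, Dict

Intent = Dict[str, Dict]  # e.g. {"vlan-200": {"vni": 10200, "members": ["leaf1", "leaf2"]}}

def reconcile(intended: Intent, observed: Intent,
              apply_change: Callable[[str, Dict], None],
              remove_stale: Callable[[str], None]) -> None:
    """Push only the differences between intended and observed network state."""
    for key, desired in intended.items():
        if observed.get(key) != desired:
            apply_change(key, desired)   # create missing config or correct drift
    for key in observed:
        if key not in intended:
            remove_stale(key)            # clean up config with no owning intent

# Hypothetical usage; real callbacks would wrap a controller's southbound API.
intended = {"vlan-200": {"vni": 10200, "members": ["leaf1", "leaf2"]}}
observed = {"vlan-200": {"vni": 10200, "members": ["leaf1"]},
            "vlan-999": {"vni": 10999, "members": []}}
reconcile(intended, observed,
          apply_change=lambda k, v: print(f"apply {k} -> {v}"),
          remove_stale=lambda k: print(f"remove stale {k}"))
```
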
Although these platforms are intended to serve as a common interface across multiple business units and network layers, their design and deployment can be modular and gradual. If a large enterprise wants to migrate to a more automated model, it can do so at a pace suited to the organization’s needs. Automation can be introduced first at the network infrastructure layer and then extended to the application layer. Over time, with AI and machine learning tools aggregating performance data across both layers, correlations between application delivery changes and their impact on network infrastructure can be determined more quickly. Ultimately, service and network lifecycle management can be simplified and expanded to cover hybrid cloud and multi-vendor environments.

We believe these holistic platforms, which bridge the worlds of telecommunications service providers and large enterprise data centers, will play a key role in automating data center application delivery by providing a common window into the application delivery network as well as the underlying network infrastructure. The result will be more efficient use of network resources, a reduction in the time required to make manual configuration changes to the network, a reduction in the programming load for IT departments, and strict compliance with SLA guarantees to key end customers and application provider partners.