
OFC 2022 was held in San Diego, California, and the show drew a large number of active participants, filling the exhibit halls after two years of mostly virtual attendance. To accommodate those unable to attend in person, OFC also held many virtual sessions, and I was one of the remote attendees. While different from attending in person, my virtual experience was a good one; that is, I learned a lot. Here are four of the things I learned and found the most interesting at OFC 2022.

The top of my list was the announcement by EFFECT Photonics that the company was buying the coherent DSP and FEC technology from Viasat. This combination brings together the most valuable components in any coherent transponder: an InP-based photonic integrated circuit that includes a high-performance tunable laser, modulator, amplifier, and receiver all on one chip, along with the high-speed digital electronics. The only items that EFFECT Photonics will need to source when producing coherent pluggable optics in the future are the TIA and driver. To put this in perspective, one of the key attributes of both Acacia’s and Inphi’s value was having all of these technologies in-house.

The second thing I learned during OFC was the volume of coherent DSPs shipped by Cisco (Acacia), but maybe more importantly, how fast shipments are ramping for the company’s newest coherent DSP (Greylock), which is used for 400 Gbps pluggable optics. During OFC, Cisco announced that cumulative shipments of the company’s 600 Gbps-capable DSP (Pico) had reached 100k by port volume, which converts to 50k DSP chips since each DSP supports two ports. The Pico DSP is primarily used for metro and long-haul spans that require the best-performing optics. Cisco also announced that cumulative shipments of Greylock stood at 50k, with nearly half shipped in the most recent fiscal quarter and most sold in a 400ZR QSFP-DD; the remainder is used in 400 Gbps CFP2-DCO modules. This is a very fast ramp for Greylock, considering it was introduced over a year after Pico.
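
For those keeping score at home, the port-to-chip math is simple; here is a back-of-envelope sketch in Python using only the figures Cisco cited:

```python
# Convert announced port volumes to DSP chip counts.
# Each Pico DSP supports two ports (figure cited above).
pico_ports = 100_000
ports_per_dsp = 2
pico_dsps = pico_ports // ports_per_dsp
print(f"Pico DSP chips shipped: {pico_dsps:,}")  # 50,000
```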

The third item of focus at OFC this year was what comes after 400ZR. While there was talk about the progress of 400ZR and the possibility of 800ZR in a few years, I felt the discussions were more about 400ZR+. It seems 400ZR+ will continue to be a marketing term and not a standard. That is to say, companies were announcing better-performing 400ZR+ compared to competitors, and as you know, performance-based product differentiation usually translates to non-standard implementations. However, one thing the vendors have in common is that they are producing 400ZR+ in a QSFP-DD plug. I had originally thought that 400ZR+ would generally be used in a CFP2 package due to thermal requirements, but many of these companies have solved that problem and can deliver better-performing 400 Gbps with just an extra watt or two of power. Of course, this makes me wonder (out loud): would operators be willing to forgo standards-based 400ZR with a 120 km limit for a non-standards-based 400ZR+ with span limits that could exceed 600 km, if it also comes in a QSFP-DD and consumes only a couple watts more? I think we all know the direction Windstream chose with its partnership with II-VI (400ZR+ in QSFP-DD to enable the use of a ROADM line system).
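
To make the tradeoff concrete, here is a small, purely illustrative tabulation in Python. The reach and power figures are the round numbers mentioned above, not vendor specifications:

```python
# Illustrative operator tradeoff: standard 400ZR vs. vendor-specific 400ZR+.
# Reach and power deltas are the round figures discussed above (assumptions).
options = [
    {"name": "400ZR (standard)",      "form": "QSFP-DD", "reach_km": 120,
     "extra_w": 0, "interop": "multi-vendor"},
    {"name": "400ZR+ (non-standard)", "form": "QSFP-DD", "reach_km": 600,
     "extra_w": 2, "interop": "single-vendor"},
]
for o in options:
    print(f'{o["name"]:24} {o["form"]:8} {o["reach_km"]:>4} km '
          f'+{o["extra_w"]} W  {o["interop"]}')
```

Framed this way, the extra reach for a couple of watts looks hard to turn down, as long as the operator can live with a single-vendor link.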

The last item I want to mention is the excellent tutorials and classes that OFC holds every year, where people volunteer their time to share what they do and to present informational sessions. I wasn’t able to watch all of the sessions that OFC recorded and made available to virtual participants, but the ones I watched were all well done and the presenters did an excellent job. One in particular that stood out for me was a session by Alexander Nikolaidis of Meta (Facebook) called Building a Global Content Provider Network at Scale. He gave a really good overview of how the company thinks through building and scaling a large content delivery backbone, walking the audience through the choices and trade-offs that are considered. Interestingly, many of these choices and trade-offs are similar to those made by the largest telecom operators. So, at the end of the day, the challenges and choices for scaling a large backbone network aren’t that different whether it’s a large Internet content provider or a tier-one communication service provider.


 

Huawei Loses Some Ground — Still Leads $100 B Telecom Equipment Market

We just wrapped up the 4Q21 reporting period for all the Telecommunications Infrastructure programs covered at Dell’Oro Group, including Broadband Access, Microwave & Optical Transport, Mobile Core Network (MCN), Radio Access Network (RAN), and SP Router & Switch. The data contained in these reports suggests that total year-over-year (Y/Y) revenue growth slowed to 2% in the fourth quarter; however, this was not enough to derail full-year trends.

Preliminary estimates suggest the overall telecom equipment market advanced 7% in 2021, recording a fourth consecutive year of growth, underpinned by surging wireless revenues and healthy demand for wireline-related equipment spurred on by double-digit growth both in RAN and Broadband Access. Total worldwide telecom equipment revenues approached $100 B, up more than 20% since 2017.

In addition to challenging comparisons, we attribute the weaker momentum in the fourth quarter to external factors including COVID-19 restrictions and supply chain disruptions.

The analysis contained in these reports suggests the collective global share of the leading suppliers remained relatively stable between 2020 and 2021, with the top seven vendors comprising around 80% of the total market.

 

2021 Worldwide Telecom Equipment Revenue

 

Ongoing efforts by the US government to curb the use of Huawei’s equipment are impacting the company’s position outside of China. Even so, Huawei continued to lead the global market, underscoring its grip on the Chinese market, the depth of its telecom portfolio, and its resiliency with existing footprints.

 

2021 Telecom Equipment Revenue by Region

 

Initial readings suggest the playing field is more even outside of China, with Ericsson and Nokia essentially tied at 20% and Huawei accounting for around 18% of the market.

The relative growth rates have been revised upward for 2022 to reflect new supply chain and capex data. Still, global telecom equipment growth is expected to moderate from 7% in 2021 to 4% in 2022.

2021 Telecom Equipment Revenue, Excluding China

 

Risks are broadly balanced. In addition to the direct and indirect impact of the war in Ukraine and the broader implications across Europe and the world, the industry is still contending with COVID-19 restrictions and supply chain disruptions. At the same time, wireless capex is expected to surge in the US this year.

Open RAN ended 2021 on a solid footing. Preliminary estimates suggest that total Open RAN revenues—including O-RAN and OpenRAN radios and baseband—more than doubled for the full year 2021, ending at a much higher level than had been expected going into the year. Adoption has been mixed, however. In this blog, we review three Open RAN-related topics: (1) a recap of 2021, (2) Mobile World Congress (MWC) takeaways, and (3) expectations for 2022.

2021 Recap

Looking back to the outlook we outlined a year ago, full-year Open RAN revenues accelerated at a faster pace than we originally expected. This gap between the forecast and the actual ramp is primarily the result of higher prices. LTE and 5G macro volumes were fairly consistent with expectations, but the revenue per Open RAN base station was higher than we modeled going into 2021, especially with regard to brownfield networks. Asymmetric investment patterns between the radio and the baseband also contributed to the divergence, though this is expected to normalize as deployments increase. In addition, we underestimated the 5G price points of some of the configurations in both the Japanese and US markets.

Not surprisingly, the Asia-Pacific (APAC) region dominated the Open RAN market in 2021, supported by large-scale greenfield OpenRAN and brownfield O-RAN deployments in Japan.

From a technology perspective, LTE dominated the revenue mix initially but 5G NR is now powering the majority of investments, reflecting progress both in APAC and North America.

Source: NTT DoCoMo

Mobile World Congress (MWC) Barcelona 2022

Open RAN revenues are coming in ahead of schedule, bolstering the narrative that operators want open interfaces. Meanwhile, the technology, especially with some of the non-traditional or non-top-5 RAN suppliers, has perhaps not advanced at the same pace. This, taken together with the fact that the bulk of the share movements in the RAN market is confined to traditional suppliers, is raising some concerns about the technology gap between the traditional RAN suppliers and the emerging ones. A preliminary assessment of the Open RAN-related radio and baseband system, component, and partnership announcements at MWC 2022 suggests this was a mixed bag, with some suppliers announcing major portfolio enhancements.

Among the announcements that stood out most was the one relating to Mavenir’s OpenBeam radio platform. After focusing initially on software and vRAN, Mavenir decided the best way to accelerate the O-RAN ecosystem is to expand its own scope to include a broad radio portfolio. The recently announced OpenBeam family includes multiple O-RAN 7.2 macro and micro radio products supporting mmWave, sub-6 GHz Massive MIMO, and sub-6 GHz non-Massive MIMO.

Source: Mavenir

NEC announced a major expansion of its O-RAN portfolio, adding 18 new O-RUs, covering both Massive MIMO and non-Massive MIMO (4T4R, 8T8R, 32T32R, 64T64R). NEC also recently announced its intention to acquire Blue Danube.

Another major announcement was Rakuten Symphony’s entry into the Massive MIMO radio market. Rakuten Symphony is working with Qualcomm, with the objective of having a commercial Massive MIMO product ready by the end of 2023.

Fujitsu also announced multiple enhancements to its macro and small cell Open RAN portfolio including new mid-band O-RAN compliant Massive MIMO radios with 2022 availability.

Recent Massive MIMO announcements should help to dispel the premise that the O-RAN architecture is not ideal for wide-band sub-6 GHz Massive MIMO deployments. We are still catching up on briefings, so it is possible that we missed some updates. But for now, we believe there are six non-top-5 RAN suppliers with commercial or upcoming O-RAN sub-6 GHz Massive MIMO products: Airspan, Fujitsu, Mavenir, NEC, Rakuten Symphony, and Saankhya Labs.

Putting things into the appropriate perspective, we estimate that there are more than 20 suppliers with commercial or pending O-RAN radio products, most prominently: Acceleran*, Airspan, Askey*, Baicells*, Benetel*, BLiNQ*, Blue Danube, Comba, CommScope*, Corning*, Ericsson, Fairwaves, Fujitsu, JMA*, KMW, Mavenir, MTI, NEC, Nokia, Parallel Wireless, Rakuten Symphony, Saankhya Labs, Samsung, STL, and Verana Networks* (with the asterisk at the end of a name indicating small cell only).

The asymmetric progress between basic and advanced radios can be partially attributed to the power, energy, and capex tradeoffs between typical GPP architectures and highly optimized basebands using dedicated silicon. As we discussed in a recent vRAN blog, both traditional and new macro baseband component suppliers—including Marvell, Intel, Qualcomm, and Xilinx—announced new solutions and partnerships at the MWC Barcelona 2022 event, promising to close the gap. Dell and Marvell’s new Open RAN accelerator card offers performance parity with traditional RAN systems, while Qualcomm and HPE have announced a new accelerator card that will allegedly reduce operator TCO by 60%.

2022 Outlook

Encouraged by the current state of the market, we have revised our Open RAN outlook upward for 2022, to reflect the higher baseline. After more than doubling in 2021, the relative growth rates are expected to slow somewhat, as more challenging comparisons with some of the larger deployments weigh on the market. Even with the upward short-term adjustments, we are not making any changes at this time to the long-term forecast. Open RAN is still projected to approach 15% of total RAN by 2026.

In summary, although operators want greater openness in the RAN, there is still much work ahead to realize the broader Open RAN vision, including not just open interfaces but also improved supplier diversity. Recent Open RAN activities—taken together with the MWC announcements—will help to ameliorate some of these concerns about the technology readiness, though clearly not all. Nonetheless, MWC was a step in the right direction. The continued transition from PowerPoint to trials and live networks over the next year should yield a fuller picture.


With fiber deployments accelerating around the world and with operators seemingly on a daily basis announcing additional fiber expansion projects, there is no question that the competition for broadband subscribers and revenue is intensifying faster than some operators would prefer. Because of that intensification and because of the time and cost required for fiber network deployments, operators are increasingly using a range of technologies for their fiber networks. System vendors have made this easier by adopting combo optics and combo cards that can support a range of technologies, from 2.5G GPON to 25G PON, and potentially beyond. Equipment vendors have heard the call from their operator customers that they need to have every tool available to them to succeed in a highly-competitive environment.

Although operators are, for the most part, still in the early stages of deploying gigabit and multi-gigabit services using XGS-PON, their fiber expansions are opening up additional opportunities for applications and new addressable customers that already require speeds beyond what XGS-PON can provide. For example, large enterprises and campus environments, which have typically been served by point-to-point Ethernet connections, are increasingly being passed by PON ODNs, especially those enterprises that are adjacent to residential neighborhoods.

Though the ITU (International Telecommunication Union) has determined that single-channel 50G PON, as defined in its G.hsp.50pmd specification, is the next-generation technology it will move forward with, the expanding use cases for PON, combined with their requirements for speeds beyond what XGS-PON can provide, have opened the door for 25G PON as an important tool in operators’ toolboxes. The current strength in fiber buildouts and the need to address new use cases today have resulted in a list of operators who simply can’t wait for 50G PON to be fully standardized, tested, and productized.

In today’s hypercompetitive broadband market, timing and the availability of the right technology tools are everything. Although 50G PON provides a tremendous theoretical boost in speeds, the timeline for its availability is still an open question. China Mobile is on the record saying that it will begin limited deployments of 50G PON beginning in 2023. However, the CAICT (China Academy of Information and Communications Technology) has stated that it believes mass-market deployments of 50G PON won’t occur until the second half of this decade. With XG-PON deployments just hitting their stride as of last year and a typical deployment cycle of around 5-7 years for each new technology, the CAICT’s estimate seems to be more realistic. Even if we split the difference, the market is still looking at 2025 as the earliest point at which 50G PON sees meaningful deployments for residential applications.

One of the biggest challenges to overcome for all 50G PON component suppliers and equipment vendors is the increased optical power budget required. Additionally, the proposed integration of DSPs (digital signal processors) is a significant change, as they have not been required in PON technologies before. Incorporating DSPs theoretically allows for the use of lower-cost 25G optics, which are widely available and mature. DSPs also allow for the support of both OOK (on-off keying) and OFDMA (Orthogonal Frequency Division Multiple Access). This support is critical for operators, as it allows them to re-use their existing ODN (Optical Distribution Network) and avoid significant and costly changes that could impact thousands of subscribers.
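
To see why the power budget is the sticking point, consider a rough loss-budget check. The sketch below uses textbook approximations (my assumptions, not figures from any standard): roughly 0.35 dB/km of fiber loss and an ideal 1:N splitter loss of 10·log10(N) dB, plus a little excess loss per split stage:

```python
import math

def odn_loss_db(fiber_km: float, split_ratio: int,
                fiber_db_per_km: float = 0.35,     # typical O-band loss (assumption)
                excess_db_per_stage: float = 0.5) -> float:
    """Approximate end-to-end ODN loss: fiber plus power splitter."""
    stages = math.log2(split_ratio)                # 1:64 -> 6 split stages
    splitter_db = 10 * math.log10(split_ratio) + excess_db_per_stage * stages
    return fiber_km * fiber_db_per_km + splitter_db

# Example: 20 km reach with a 1:64 split
print(f"{odn_loss_db(20, 64):.1f} dB")  # ~28 dB of ODN loss
```

Every dB of receiver sensitivity lost to a higher line rate has to be clawed back somewhere (transmit power, FEC gain, or DSP equalization) for the same ODN to keep working, which is exactly why reusing mature 25G optics with a DSP is attractive.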

DSP-enhanced PON technologies are already being put through their paces, with China Mobile having demonstrated transmission rates of 41G downstream and 16G upstream in a hybrid environment using a 50G PON ONT as well as a 10G PON ONT. Meanwhile, Nokia has demonstrated 100G PON in conjunction with Vodafone at a lab in Germany. Both trials occurred in 2021 and more proof-of-concept work is expected throughout this year.

This brings us back to 25G PON. The traditional method of developing a technology for wide-scale deployment is to work through one of the primary standards bodies (the ITU or IEEE). That avenue, however, was closed to Nokia and the other component suppliers and service providers who were interested in seeing both 25G PON and 50G PON standardized through the ITU, as well as in accelerating the availability of 25G PON technologies to bridge the gap between today’s 10G technologies and tomorrow’s 50G and 100G options. So, this collection of vendors and operators organized the 25GS-PON MSA (Multi-Source Agreement) to develop standards, define interoperability, and generally help to evolve the technology outside the traditional standards organizations. The group’s members include AT&T, Chorus, Chunghwa Telecom, Cox Communications, NBN, Opticomm, and Proximus—service providers with the collective buying power to make the R&D effort worthwhile for the growing list of component and equipment vendors who are also members.

CableLabs, which focuses on developing standards and technologies on behalf of its cable operator members, is also a member of the MSA. Just like their telco counterparts, cable operators are trying to determine their bandwidth requirements in residential networks over the next few years, so having a choice among technology options is important. But unlike telcos, cable operators also have to determine whether they will satisfy these future bandwidth requirements with DOCSIS 4.0 and their existing coax plant or with fiber. In both cases, 25G PON is being examined both as a residential technology beyond current 10G DPoE (DOCSIS Provisioning over EPON) options and as a potential aggregation technology for both remote PHY and remote MACPHY nodes.

CableLabs is also working on its own initiatives, including single-wavelength 100G Coherent PON, which is seen as an ideal long-term option for cable operators who have wide ranges of fiber span lengths (up to 80 km) and need split ratios more akin to today’s service group sizes of 200-500 homes passed per node. Nevertheless, the timeline for 100G Coherent PON, like that of 50G PON, is still being determined.

 

Expanding use cases for PON driving the need for 25G

Beyond the uncertain timing of 50G PON, as well as the desire for technology choice, one of the primary reasons for the short-term demand for 25G PON is simply the desire to use PON in applications that go well beyond traditional residential broadband access. It is actually in these applications where 25G PON will see the most deployments, particularly within the next 2-3 years.

Enterprise services have typically been delivered over point-to-point Ethernet connections. But as operators expand their PON ODNs to support residential and small-medium business applications, 25G PON can be implemented to deliver symmetric 10G connections, comparable to or better than what enterprises are accustomed to. Because 25G PON has been designed to co-exist with both GPON and XGS-PON, service providers have the flexibility of using the same OLT to deliver both high- and low-SLA traffic, or they can split that traffic and customer base across multiple OLTs. Either way, the existing ODN remains intact.

Additionally, service providers are also interested in 25G PON for their 5G transport networks, particularly in the case of small cell transport. Though LTE networks never resulted in volume deployments of PON equipment to support backhaul, there is growing consensus that the PON technology options available now provide the bandwidth (symmetric 10G) along with the latency necessary to support 5G services and their corresponding SLAs.

 

Clear Upgrade Path

Though standards bodies have traditionally defined which technologies get adopted and when, there are certainly cases where operators have placed their thumbs on the scales in favor of a preferred option. These choices don’t generally go against what the standards bodies recommend or are working towards. Instead, they satisfy a more immediate internal requirement that doesn’t mesh with the proposed standardization, certification, and product availability timeline defined by the standards bodies and participating equipment suppliers.

Larger operators, including AT&T, BT Openreach, Comcast, and Deutsche Telekom, have also become far more comfortable over the last few years defining standards and pushing them through other industry organizations, such as ONF and the Broadband Forum. These operators know they have the scale and market potential to drive standards and thereby influence the product roadmaps of their incumbent equipment suppliers. There are always others waiting in the wings, as well as the threat of moving to completely virtualized, white-box solutions that would reduce the revenue opportunity for said vendors.

And that’s what appears to be happening with 25G PON. Service providers that are part of the MSA are certainly voting with their pocketbooks. Nokia, for its part, has made things quite simple for these operators: use GPON and XGS-PON today for the bulk of your residential FTTH deployments, and then add in 25G PON using the same equipment and ODN where it makes strategic sense. Nokia does indeed seem to be seeding the market, having reported a cumulative total of 200k 25G-ready PON OLT ports through 3Q21, with a bigger jump expected in the fourth quarter.

Nokia realizes it must make hay now while the timeline around 50G PON remains in flux and demonstrations of its performance in labs remain limited.

But the PON market has always been one of offering different technology options to suit each operator’s unique use case requirements and competitive dynamics. That flexibility is proving to be particularly beneficial in today’s hypercompetitive broadband environment, in which each operator might have a different starting point when it comes to fiber deployments, but likely has similar goals when it comes to subscriber acquisition and revenue generation. In this environment, many operators have clearly said that they simply can’t wait on a promising technology when they need to establish their market presence today. And so, the vendor ecosystem has responded again with options that can steer them down a path to success.


 

Data centers are the backbone of our digital lives, enabling the real-time processing and aggregation of data and transactions, as well as the seamless delivery of applications to both enterprises and their end customers. Data centers have been able to grow to support ever-increasing volumes of data and transaction processing thanks in large part to software-based automation and virtualization, allowing enterprises and hyperscalers alike to adapt quickly to changing workload volumes as well as physical infrastructure limitations.

Despite their phenomenal growth and innovation, the principles of which are being integrated into service provider networks, data centers of all sizes are about to undergo a significant expansion as they are tasked with processing blockchain, bitcoin, IoT, gigabit broadband, and 5G workloads. In our latest forecast, published earlier this month, we expect worldwide data center capex to reach $350 B by 2026, representing a five-year projected growth rate of 10%. We also forecast that hyperscale cloud providers will double their data center spending over the next five years.
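
As a quick back-of-envelope check (my own arithmetic, not a figure from the report), those two numbers together imply a 2021 baseline of roughly $215-220 B:

```python
# Work backward from the forecast: $350 B in 2026 at a 10% five-year CAGR.
target_2026_b = 350
cagr = 0.10
implied_2021_b = target_2026_b / (1 + cagr) ** 5
print(f"Implied 2021 data center capex: ~${implied_2021_b:.0f} B")  # ~$217 B
```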

Additionally, enterprises are all becoming smarter about how to balance and incorporate their private clouds, public clouds, and on-premises clouds for the most efficient processing of workloads and application requests. Similar to highly resilient service provider networks, enterprises are realizing that distributing workload processing allows them to scale faster and with more redundancy. Despite the general trend toward migrating to the cloud, enterprises will continue to invest in on-premises infrastructure to handle workloads that involve sensitive data, as well as applications that are very latency-sensitive.

As application requests, change orders, equipment configuration changes, and other general troubleshooting and maintenance requests continue to increase, anticipating and managing the necessary changes in multi-cloud environments becomes exceedingly difficult. Throw in the need to quickly identify and troubleshoot network faults at the physical layer and you have a recipe for a maintenance nightmare and, more importantly, substantial revenue loss due to the cascading impact of fragmented networks that are only peripherally integrated.

Although automation and machine learning tools have been available for some time, they are often designed to automate application delivery within one of the multiple cloud environments, not across multiple clouds and multiple network layers. Automating IT processes across both physical and virtual environments, and across the underlying network infrastructure, compute, and storage resources, has been a challenge for some time. Each layer has its own distinct set of issues and requirements.

New network rollouts or service changes resulting in network configuration changes are typically very labor-intensive and frequently yield faults in the early stages of deployment that require significant man-hours to resolve.

Similarly, configuration changes sometimes result in redundant or mismatched operations due to the manual entry of these changes. Without a holistic approach to automation, there is no way to verify or prevent the introduction of conflicting network configurations.

Finally—and this is just as true of service provider networks as it is of large enterprises and hyperscale cloud providers—detecting network faults is often a time-consuming process, principally because network faults are often handled passively until they are located and resolved manually. Traditional alarm reporting followed by manual troubleshooting must give way to proactive and automatic network monitoring that quickly detects network faults and uses machine learning to rectify them without any manual intervention whatsoever.
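
As a minimal sketch of what that proactive monitoring can look like, consider flagging an interface whose error rate suddenly deviates from its own recent baseline. The rolling z-score below is a simple statistical stand-in for the machine-learning models described above, and the counters and thresholds are illustrative assumptions, not any vendor’s API:

```python
from collections import deque
from statistics import mean, stdev

class ErrorRateMonitor:
    """Flag anomalous interface error rates against a rolling baseline."""
    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.samples = deque(maxlen=window)   # recent error-rate samples
        self.z_threshold = z_threshold

    def observe(self, errors_per_sec: float) -> bool:
        """Return True if this sample looks anomalous vs. the baseline."""
        anomalous = False
        if len(self.samples) >= 30:           # wait for a usable baseline
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and (errors_per_sec - mu) / sigma > self.z_threshold:
                anomalous = True              # raise an event proactively
        self.samples.append(errors_per_sec)
        return anomalous

monitor = ErrorRateMonitor()
for sample in [0.1, 0.2, 0.1, 0.15] * 10 + [9.0]:   # sudden error burst
    if monitor.observe(sample):
        print("Anomaly detected: trigger automated remediation")
```

A real platform would feed detections like this into an automated remediation workflow rather than an alarm queue awaiting manual triage.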

 

Automating a Data Center’s Full Life Cycle

As the size and complexity of data centers continue to increase, and as workload and application changes accelerate, the impact on the underlying network infrastructure can be difficult to predict. Various organizations, both within and outside the enterprise, have different requirements that must all somehow be funneled into a common platform to prevent conflicting changes anywhere from the application delivery layer down to the network infrastructure. These organizations can also have drastically different timeframes for the expected completion of changes, largely due to siloed management of different portions of the data center, as well as the different diagnostic and troubleshooting tools in use by the network operations and IT infrastructure teams.

In addition to pushing their equipment vendor and systems integrator partners to deliver platforms that solve these challenges, large enterprises also want platforms that give them the ability to automate the entire lifecycle of their networks. These platforms use AI and machine learning to build a thorough and evolving view of the underlying network infrastructure, allowing enterprises to:

    • Support automatic network planning and capacity upgrades by modeling how the addition of workloads will impact current and future server requirements as well as the need to add switching and routing capacity to support application delivery.
    • Implement network changes automatically, reducing the need for manual intervention and thereby reducing the possibility of errors.
    • Continuously provide detailed network monitoring at all layers, with proactive fault detection, location, and resolution that limits manual intervention.
    • Simplify the service and application provisioning process by providing a common interface that then translates requests into desired network changes.

Ultimately, one of the key goals of these platforms is to create a closed loop among network management, control, and analysis capabilities, so that changes in the upper-layer services and applications can automatically drive defined changes in the underlying network infrastructure (a minimal sketch of such a loop follows the list below). For this to become a reality in increasingly complex data center network environments, these platforms must provide some critical functions, including:

    • Providing a unified data model and data lakes across multiple cloud environments and multi-vendor ecosystems
      • This function has been a long-standing goal of large enterprises and telecommunications service providers for years. Ending the swivel-chair approach to network management and delivering error-free network changes with minimal manual intervention are key functions of any data center automation platform.
    • Service orchestration across multiple, complex service flows
      • This function has also been highly sought-after by large enterprises and service providers alike. For service providers, SDN overlays were intended to add these functions and capabilities to their networks. Deployments have yielded mixed, but generally favorable, results. Nevertheless, the principles of SDN continue to proliferate into other areas of the network, largely due to the desire to streamline and automate the service provisioning process. The same can be said for large enterprises and data center providers.
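
Here is the minimal closed-loop sketch promised above: translate a service request into desired network state, diff it against the observed state, and push only the needed changes. All names and structures are illustrative assumptions, not any particular platform’s data model:

```python
from dataclasses import dataclass, field

@dataclass
class NetworkState:
    """A toy slice of network state: just the VLANs present."""
    vlans: set = field(default_factory=set)

def translate_intent(app_request: dict) -> NetworkState:
    """Map an application-layer request to the desired underlay state."""
    return NetworkState(vlans=set(app_request.get("segments", [])))

def reconcile(desired: NetworkState, observed: NetworkState) -> list:
    """Compute the changes needed to converge observed state to desired."""
    adds = [f"add vlan {v}" for v in sorted(desired.vlans - observed.vlans)]
    removes = [f"remove vlan {v}" for v in sorted(observed.vlans - desired.vlans)]
    return adds + removes

observed = NetworkState(vlans={10, 20})
desired = translate_intent({"segments": [10, 30]})
for change in reconcile(desired, observed):
    print(change)   # "add vlan 30", then "remove vlan 20"
```

In a real platform this loop runs continuously, with telemetry and analysis feeding the observed state back in, so that verifying a change took effect, and did not conflict with anything else, is part of the same cycle.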

Although these platforms are intended to serve as a common interface across multiple business units and network layers, their design and deployment can be modular and gradual. If a large enterprise wants to migrate to a more automated model, it can do so at a pace suited to the organization’s needs. The introduction of automation can happen first at the network infrastructure layer and then be extended to the application layer. Over time, with AI and machine learning tools aggregating performance data across both layers, correlations between application delivery changes and their impact on network infrastructure can be determined more quickly. Ultimately, service and network lifecycle management can be simplified and expanded to cover hybrid cloud and multi-vendor environments.

We believe that these holistic platforms, bridging the worlds of telecommunications service providers and large enterprise data centers, will play a key role in helping automate data center application delivery by providing a common window into the application delivery network as well as the underlying network infrastructure. The result will be more efficient use of network resources, a reduction in the time required to make manual configuration changes to the network, a reduction in the programming load for IT departments, and strict compliance with SLA guarantees to key end customers and application provider partners.