
In March, I attended the 2019 Open Compute Project (OCP) Global Summit at the San Jose Convention Center. The event continues to grow, drawing 3,600 participants this year, including a broad representation of the vendors and end users who make up the OCP community. We continue to see innovation in OCP-based designs for the server rack across hyperscale Cloud, edge computing, and enterprise environments.

Following are my three key takeaways on server network connectivity:

1.  The OCP NIC (Network Interface Card) 3.0 specification continues to evolve and is Smart NIC-ready.

The OCP NIC 3.0 specification addresses shortcomings of the OCP NIC 2.0 specification in the areas of thermal and mechanical profile, connector placement, and board space. Key members, including Broadcom, Facebook, Intel, and Mellanox, contributed to the 3.0 development process. As it currently stands, the OCP NIC 3.0 specification defines two form factors: SFF (small form factor) and LFF (large form factor). The LFF form factor is designed to accommodate accelerated processors, such as an ARM SoC or an FPGA, for Smart NIC applications.

A Smart NIC designed for OCP is a wise future-proofing strategy. In Dell’Oro Group’s January 2019 Controller and Adapter Market 5-Year Forecast report, I projected that Smart NICs will become a $500 M market by 2023, representing 20 percent of the total controller and adapter market. Furthermore, most of the early adopters of Smart NICs are hyperscale and telecom data centers, which are also expected to widely deploy OCP-based designs within the server rack.
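
For context, those two projections together imply the size of the overall controller and adapter market. A quick sketch of the arithmetic (in Python; the inputs are the forecast figures quoted above):

```python
# Implied total-market arithmetic from the forecast figures above.
smart_nic_revenue_2023 = 500e6  # projected Smart NIC revenue by 2023, in USD
smart_nic_share = 0.20          # projected share of the total controller and adapter market

total_market_2023 = smart_nic_revenue_2023 / smart_nic_share
print(f"Implied total controller and adapter market by 2023: ${total_market_2023 / 1e9:.1f} B")
# -> Implied total controller and adapter market by 2023: $2.5 B
```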

2.  The introduction of 56 Gbps PAM-4 NICs enables server connectivity to 400 Gbps networks.

Another important development is the availability of Ethernet adapter products with 56 Gbps PAM-4 SerDes lanes from Broadcom (NetXtreme), Intel (800 Series, code-named Columbiaville), and Mellanox (ConnectX-6). All are available in the OCP 3.0 form factor. The SerDes lane transition from 28 Gbps NRZ to 56 Gbps PAM-4 will enable Ethernet connectivity up to 100 GbE (based on two SerDes lanes) or 200 GbE (based on four SerDes lanes). We see strong demand for server connectivity at 100 GbE and higher speeds, especially from Tier 1 Cloud service providers, as this segment transitions to 400 GbE networking at the top-of-rack (ToR) switch over the next one to two years. (See Dell’Oro’s press release, “Cloud Service Providers Drove Demand Volatility of High-Speed Network Adapters.”)
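
To make the lane math concrete, here is a minimal sketch in Python. It assumes the usual Ethernet convention that a 28 Gbps NRZ lane carries 25 Gbps of payload and a 56 Gbps PAM-4 lane carries 50 Gbps once encoding and FEC overhead are accounted for:

```python
# Usable payload per SerDes lane under the common Ethernet convention:
# NRZ signals 1 bit per symbol, while PAM-4 signals 2 bits per symbol
# at a similar baud rate, roughly doubling the per-lane data rate.
LANE_USABLE_GBPS = {
    "28G NRZ": 25,
    "56G PAM-4": 50,
}

def port_speed_gbe(lane_type: str, lanes: int) -> int:
    """Usable Ethernet port speed (GbE) for a given lane type and lane count."""
    return LANE_USABLE_GBPS[lane_type] * lanes

for lanes in (1, 2, 4, 8):
    print(f"{lanes} x 56 Gbps PAM-4 lanes -> {port_speed_gbe('56G PAM-4', lanes)} GbE")
# 1 lane  ->  50 GbE
# 2 lanes -> 100 GbE (the NIC speeds discussed above)
# 4 lanes -> 200 GbE
# 8 lanes -> 400 GbE (the ToR switch side)
```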

3.  Multi-host NICs have the potential to streamline and densify server connectivity.

It is exciting to see multi-host NICs gaining additional support from vendors. This technology can streamline the network by reducing ToR connections while enabling a dense compute rack architecture. Mellanox was first to market with multi-host NICs for Yosemite servers, providing 50 Gbps Ethernet connectivity shared across four server nodes. At OCP, both Broadcom and Netronome announced network adapter products supporting multi-host connectivity for the Yosemite platform. Broadcom’s announcements are based on the NetXtreme series with the Thor chipset, which provides single- and multi-host connectivity at up to 200 GbE with a PAM-4 solution. Netronome’s solution, the Agilio CX, is also a Smart NIC and provides connectivity up to 50 GbE.
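
As a rough illustration of how multi-host NICs reduce ToR connections, consider the sketch below. The sled and node counts are illustrative assumptions for the sake of the arithmetic, not figures from any vendor; the four-nodes-per-sled layout mirrors the Yosemite design mentioned above:

```python
# Hypothetical rack of multi-node sleds (four nodes per sled, as in
# Yosemite). All counts here are illustrative assumptions.
SLEDS_PER_RACK = 12
NODES_PER_SLED = 4

# Traditional design: every server node has its own NIC and ToR cable.
cables_per_node_nics = SLEDS_PER_RACK * NODES_PER_SLED  # 48 ToR connections

# Multi-host design: one NIC per sled fans out to all four nodes,
# so each sled needs only a single ToR connection.
cables_multi_host = SLEDS_PER_RACK                      # 12 ToR connections

print(f"Per-node NICs:   {cables_per_node_nics} ToR connections")
print(f"Multi-host NICs: {cables_multi_host} ToR connections "
      f"({cables_per_node_nics // cables_multi_host}x reduction)")
```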

I believe that OCP will continue to grow in strength as the industry transitions from off-the-shelf equipment to open designs optimized for end-users’ technical and cost-of-ownership requirements.