In the first blog in this series we talked about programmable fabrics and their use cases. In this blog we’ll look at what a programmable fabric actually looks like.
The following diagram shows the high-level architecture of a programmable fabric:
The programmable fabric can be broken down into two main layers, the control plane and the data plane.
Control Plane Layer
The control plane layer is responsible for configuring and managing the data plane and is normally deployed more centrally, e.g., one instance per PoP or region.
The control plane is normally divided into three separate domains (Fabric, Telemetry, and Configuration & Management) so that they can scale independently. In a small-scale implementation, however, they could all be provided by a single software controller.
1. Fabric Controller
The Fabric Controller controls the loading and programming of the data plane pipeline, using the P4Runtime interface to communicate with the data plane’s programmable forwarding engine (PFE), as shown in the diagram below.
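As a rough illustration, the sketch below shows how a controller might push a compiled P4 pipeline to a data plane node over P4Runtime. It is a minimal sketch rather than a real Fabric Controller: it assumes the open-source p4runtime and grpcio Python packages, a Stratum-style agent on a placeholder address (10.0.0.1:9559), and compiler artifacts named p4info.txt and pipeline.bin.

```python
# Minimal sketch (not a production controller): push a compiled P4 pipeline
# to a data plane node with the P4Runtime SetForwardingPipelineConfig RPC.
import grpc
from google.protobuf import text_format
from p4.v1 import p4runtime_pb2, p4runtime_pb2_grpc
from p4.config.v1 import p4info_pb2

# Placeholder address; Stratum-style agents commonly expose P4Runtime on 9559.
channel = grpc.insecure_channel("10.0.0.1:9559")
stub = p4runtime_pb2_grpc.P4RuntimeStub(channel)

# Compiler outputs: the P4Info (table/action metadata) and the target-specific
# binary configuration for the PFE. File names are assumptions.
p4info = p4info_pb2.P4Info()
with open("p4info.txt") as f:
    text_format.Parse(f.read(), p4info)
with open("pipeline.bin", "rb") as f:
    device_config = f.read()

request = p4runtime_pb2.SetForwardingPipelineConfigRequest()
request.device_id = 1
# A real controller would first open a StreamChannel and win mastership
# arbitration with this election_id before the request is accepted.
request.election_id.low = 1
request.action = p4runtime_pb2.SetForwardingPipelineConfigRequest.VERIFY_AND_COMMIT
request.config.p4info.CopyFrom(p4info)
request.config.p4_device_config = device_config

stub.SetForwardingPipelineConfig(request)
```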
There will be a number of controller applications, or “network functions”, that talk to the Fabric Controller to control various aspects of the programmable fabric.
The Fabric Management applications manage the setup and configuration of the underlying network fabric. They can also be thought of as a number of virtualised switch and router network functions that provide the underlying network fabric using the programmable fabric.
The Fabric Management applications rely on user plane functionality being implemented in the P4 pipeline in the PFE.
The NF control plane uses a CUPS (Control and User Plane Separation) methodology: the control plane portion of a network function is implemented in the control plane layer, while its user plane functions are pushed down into the data plane node described below.
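To make the CUPS split concrete, here is a hedged sketch of a control plane NF (think of a BNG-c) writing per-subscriber state into the user plane as a P4 table entry. The table, match field and action referred to in the comments are hypothetical, and the numeric IDs would normally be resolved from the pipeline’s P4Info rather than hard-coded.

```python
# Illustrative only: a control plane NF inserting one subscriber session into
# the user plane as a P4Runtime table entry. All IDs below are hypothetical.
from p4.v1 import p4runtime_pb2

def session_insert_request(device_id, table_id, action_id, subscriber_ip):
    """Build a WriteRequest that inserts one (hypothetical) session entry."""
    request = p4runtime_pb2.WriteRequest()
    request.device_id = device_id
    request.election_id.low = 1
    update = request.updates.add()
    update.type = p4runtime_pb2.Update.INSERT
    entry = update.entity.table_entry
    entry.table_id = table_id                   # e.g. a "sessions" table
    match = entry.match.add()
    match.field_id = 1                          # e.g. hdr.ipv4.src_addr
    match.exact.value = bytes(map(int, subscriber_ip.split(".")))
    entry.action.action.action_id = action_id   # e.g. "set_session"
    return request

# Using the stub from the previous sketch (IDs are made up):
# stub.Write(session_insert_request(1, 0x0212ab34, 0x0198cd56, "10.1.2.3"))
```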
2. Telemetry Controller
The Telemetry Controller allows applications (e.g. Fault Management) to collect telemetry from the network elements in the programmable fabric using the programmable fabric’s gNMI streaming interface. It is expected that other applications will use techniques such as machine learning to make more intelligent decisions and feed control-loop information back into the Fabric Controller applications, enabling pre-emptive service reconfiguration and repair as we move towards autonomous networks.
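A rough sketch of what such a telemetry application could look like, using the pygnmi package to stream OpenConfig interface counters from a data plane node. The target address, credentials, interface name and sample interval are placeholders, and pygnmi call signatures vary somewhat between versions.

```python
# Rough sketch: stream interface counters over gNMI with pygnmi.
# Target, credentials and interface name are placeholders; 9339 is the
# commonly used gNMI port.
from pygnmi.client import gNMIclient, telemetryParser

subscribe = {
    "subscription": [
        {
            "path": "openconfig-interfaces:interfaces/interface[name=Ethernet1]/state/counters",
            "mode": "sample",
            "sample_interval": 10_000_000_000,  # nanoseconds (10 s)
        }
    ],
    "mode": "stream",
    "encoding": "proto",
}

with gNMIclient(target=("10.0.0.1", 9339), username="admin",
                password="admin", insecure=True) as gc:
    for raw_update in gc.subscribe(subscribe=subscribe):
        # Each parsed update could be fed to fault management or an ML loop.
        print(telemetryParser(raw_update))
```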
3. Configuration and Management Controller
The Configuration and Management Controller will provide applications with common northbound interfaces and models for the configuration and management of the programmable fabric.
The OpenConfig group provides a set of network data models that allow network functions to be managed using a common set of tools and protocols. The gNMI and gNOI interfaces use the OpenConfig models to give efficient access for configuring and managing the network functions in the programmable fabric.
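As an illustration, a configuration application might push an OpenConfig-modelled change through gNMI roughly as follows. This again assumes pygnmi and a placeholder target; the interface name and description are made up, while the path follows the public openconfig-interfaces model.

```python
# Sketch of a gNMI Set using an OpenConfig path; target and values are
# placeholders.
from pygnmi.client import gNMIclient

update = [
    (
        "openconfig-interfaces:interfaces/interface[name=Ethernet1]/config",
        {"description": "uplink to spine-1", "enabled": True},
    )
]

with gNMIclient(target=("10.0.0.1", 9339), username="admin",
                password="admin", insecure=True) as gc:
    print(gc.set(update=update))
```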
Data Plane Layer
The data plane does the bulk of the network traffic forwarding, sending only exception or control packets up to the control plane for processing (e.g. the DHCP packets for a new IPoE session being punted to a BNG-c). While the data plane would normally be thought of as a standalone network switch, it could also be a SmartNIC in a compute server, allowing the programmable fabric to be extended up into the server (e.g. using P4 to define a pipeline in an FPGA SmartNIC).
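This punting of exception packets maps onto the P4Runtime StreamChannel: the data plane sends them to its controller as PacketIn messages. A hedged sketch, with the same placeholder address, device_id and election_id assumptions as the earlier examples:

```python
# Hedged sketch: receive punted packets as P4Runtime PacketIn messages on the
# bidirectional StreamChannel.
import queue
import grpc
from p4.v1 import p4runtime_pb2, p4runtime_pb2_grpc

channel = grpc.insecure_channel("10.0.0.1:9559")
stub = p4runtime_pb2_grpc.P4RuntimeStub(channel)
outbound = queue.Queue()

def requests():
    # The first stream message claims mastership via arbitration.
    arb = p4runtime_pb2.StreamMessageRequest()
    arb.arbitration.device_id = 1
    arb.arbitration.election_id.low = 1
    yield arb
    while True:
        # Keeps the stream open; PacketOut messages pushed onto `outbound`
        # would be sent down to the data plane here.
        yield outbound.get()

for response in stub.StreamChannel(requests()):
    if response.HasField("packet"):
        # e.g. a DHCP Discover punted by a BNG user plane; a real BNG-c would
        # parse the payload and set up the IPoE session at this point.
        print(f"PacketIn received: {len(response.packet.payload)} bytes")
```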
The data plane is normally made up of several components:
1. Data Plane Node (DPN): the hardware that houses the data plane forwarding function (i.e. all the components below). This could be a standalone network switch with a PFE such as Intel/Barefoot’s Tofino chip, or a compute server with a P4-based SmartNIC such as Intel’s PAC N3000.
2. Data Plane Agent (DP-Agent): provides the standardised northbound data plane interfaces (i.e. P4Runtime, gNMI and gNOI) that allow the control plane network functions to communicate with the data plane; a short gNOI sketch follows this list. An example implementation of the DP-Agent is the ONF’s Stratum project.
3. Network Function user plane (NF-u): the user plane portions of network functions can be defined in the programmable pipeline (e.g. using P4) and then loaded into the PFE to process packets. These functions are programmed by their control plane counterparts (e.g. BNG-c, UPF-c, Fabric Manager-c) so that the bulk of the traffic is handled in the PFE without needing to go up to the control plane for processing.
4. Programmable Forwarding Engine (PFE): the actual hardware that does the packet forwarding. Examples include a P4-programmable switch ASIC such as Intel/Barefoot’s Tofino, or an FPGA-based SmartNIC that uses P4 to define the packet forwarding pipeline.
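For completeness, here is a minimal sketch of calling a gNOI RPC on the DP-Agent, using System.Time as a simple liveness check. It assumes Python stubs generated with grpcio-tools from the public gnoi/system/system.proto (the gnoi.system module path below follows that layout) and a placeholder target address.

```python
# Minimal sketch: call gNOI System.Time on the DP-Agent as a liveness check.
# Assumes stubs generated from gnoi/system/system.proto with grpcio-tools;
# the module path and target address are assumptions.
import grpc
from gnoi.system import system_pb2, system_pb2_grpc

channel = grpc.insecure_channel("10.0.0.1:9339")
stub = system_pb2_grpc.SystemStub(channel)

response = stub.Time(system_pb2.TimeRequest())
print(f"DP-Agent time (ns since epoch): {response.time}")
```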
Dell Technologies is committed to driving disaggregation and innovation through open architectures and the competitiveness this brings to our customers’ networks. The high-level architecture described in this blog is in line with the Open Networking Foundation’s Stratum and NG-SDN projects and provides open building blocks that allow telecommunication providers to build open, scalable and cost-effective edge solutions.