The Path Computation Element (PCE) is a core component of a Software-Defined Networking (SDN) system. It can compute optimal paths for traffic across a network and can also update the paths to reflect changes in the network or traffic demands. PCE was developed to derive paths for MPLS Label Switched Paths (LSPs), which are supplied to the head end of the LSP using the Path Computation Element Communication Protocol (PCEP).¶
SDN has a broader applicability than signaled MPLS traffic-engineered (TE) networks, and the PCE may be used to determine paths in a range of use cases including static LSPs, segment routing (SR), Service Function Chaining (SFC), and most forms of a routed or switched network. It is, therefore, reasonable to consider PCEP as a control protocol for use in these environments to allow the PCE to be fully enabled as a central controller.¶
A PCE as a Central Controller (PCECC) can simplify the processing of a distributed control plane by blending it with elements of SDN and without necessarily completely replacing it. This document describes general considerations for PCECC deployment and examines its applicability and benefits, as well as its challenges and limitations, through a number of use cases. PCEP extensions required for stateful PCE usage are covered in separate documents.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 8 September 2022.¶
Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The Path Computation Element (PCE) [RFC4655] was developed to offload the path computation function from routers in an MPLS traffic-engineered (TE) network. It can compute optimal paths for traffic across a network and can also update the paths to reflect changes in the network or traffic demands. Since then, the role and function of the PCE have grown to cover a number of other uses (such as GMPLS [RFC7025]) and to allow delegated control [RFC8231] and PCE-initiated use of network resources [RFC8281].¶
According to [RFC7399], Software-Defined Networking (SDN) refers to a separation between the control elements and the forwarding components so that software running in a centralized system, called a controller, can act to program the devices in the network to behave in specific ways. A required element in an SDN architecture is a component that plans how the network resources will be used and how the devices will be programmed. It is possible to view this component as performing specific computations to place traffic flows within the network given knowledge of the availability of network resources, how other forwarding devices are programmed, and the way that other flows are routed. This is the function and purpose of a PCE, and the way that a PCE integrates into a wider network control system (including an SDN system) is presented in [RFC7491].¶
[RFC8283] introduces the architecture for the PCE as a central controller as an extension to the architecture described in [RFC4655] and assumes the continued use of PCEP as the protocol used between the PCE and PCC. [RFC8283] further examines the motivations and applicability for PCEP as a Southbound Interface (SBI) and introduces the implications for the protocol.¶
[RFC9050] introduces the procedures and extensions for PCEP to support the PCECC architecture [RFC8283].¶
This draft describes various other use cases for the PCECC architecture.¶
The following terminology is used in this document.¶
In the following sections, several use cases are described, showcasing scenarios that benefit from the deployment of PCECC.¶
As per [RFC8283], in some cases, the PCE-based controller can take responsibility for managing some part of the MPLS label space for each of the routers that it controls, and it may take wider responsibility for partitioning the label space for each router and allocating different parts for different uses, communicating the ranges to the router using PCEP.¶
[RFC9050] describes a mode where LSPs are provisioned as explicit label instructions at each hop on the end-to-end path. Each router along the path must be told what label forwarding instructions to program and what resources to reserve. The controller uses PCEP to communicate with each router along the path of the end-to-end LSP. For this to work, the PCE-based controller will take responsibility for managing some part of the MPLS label space for each of the routers that it controls. An extension to PCEP could be defined to allow a PCC to inform the PCE of such a label space to control. (See [I-D.li-pce-controlled-id-space] for a possible PCEP extension to support advertisement of the MPLS label space to the PCE to control.)¶
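The following is a minimal, purely illustrative sketch (in Python, with invented names and structures, not the PCEP encodings defined in the referenced documents) of the kind of bookkeeping a PCECC might keep: recording the label range delegated by each PCC and allocating per-hop labels for an LSP from those ranges.¶

   # Illustrative sketch only: a hypothetical view of how a PCECC might
   # track the MPLS label space delegated by each PCC and allocate
   # per-hop labels.  Names are invented for illustration.

   class LabelSpaceManager:
       def __init__(self):
           self.ranges = {}      # node-id -> (first_label, last_label)
           self.next_free = {}   # node-id -> next unallocated label

       def delegate(self, node, first, last):
           """Record the label range a PCC delegated to the controller."""
           self.ranges[node] = (first, last)
           self.next_free[node] = first

       def allocate(self, node):
           """Allocate one label from the range delegated by 'node'."""
           first, last = self.ranges[node]
           label = self.next_free[node]
           if label > last:
               raise RuntimeError(f"label space exhausted on {node}")
           self.next_free[node] = label + 1
           return label

   # Allocate per-hop labels for an LSP whose path is R1 -> R2 -> R3.
   mgr = LabelSpaceManager()
   for node in ("R1", "R2", "R3"):
       mgr.delegate(node, 17000, 18000)   # assumed delegated range

   path = ["R1", "R2", "R3"]
   # Each downstream node supplies the label its upstream neighbour uses.
   hop_labels = {node: mgr.allocate(node) for node in path[1:]}
   print(hop_labels)   # e.g. {'R2': 17000, 'R3': 17000}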
[RFC8664] specifies extensions to PCEP that allow a stateful PCE to compute, update, or initiate SR-TE paths. [I-D.ietf-pce-pcep-extension-pce-controller-sr] describes the mechanism for the PCECC to allocate and provision the node/prefix/adjacency label (SID) via PCEP. To make such allocations, the PCE needs to be aware of the label space from the Segment Routing Global Block (SRGB) or Segment Routing Local Block (SRLB) [RFC8402] of the node that it controls. A mechanism for a PCC to inform the PCE of such a label space to control is needed within PCEP. The full SRGB/SRLB of a node could also be learned via existing IGP or BGP-LS mechanisms.¶
Further, there have been various proposals for Global Labels in MPLS. The PCECC architecture could be used as a means to learn the label space of nodes, and could also be used to determine and provision the global label range.¶
Optionally, the PCECC could determine the shared MPLS global label range for the network.¶
Segment Routing (SR) leverages the source routing paradigm. Using SR, a source node steers a packet through a path without relying on hop-by-hop signaling protocols such as LDP or RSVP-TE. Each path is specified as an ordered list of instructions called "segments". Each segment is an instruction to route the packet to a specific place in the network, or to perform a specific service on the packet. A database of segments can be distributed through the network using a routing protocol (such as IS-IS or OSPF) or by any other means. PCEP (and PCECC) could be one such means.¶
[RFC8664] specifies the SR-specific PCEP extensions. The PCECC may further use PCEP for SR SID (Segment Identifier) distribution to the SR nodes (PCCs) with some benefits. Having the PCECC allocate and maintain the SIDs for the nodes and adjacencies in the network, and further distribute them to the SR nodes directly via the PCEP session, has some advantages over configuration on each SR node and flooding via the IGP, especially in an SDN environment.¶
When the PCECC is used for the distribution of the node SID and adjacency SID, the node SID is allocated from the SRGB of the node, and the adjacency SID is allocated from the SRLB of the node, as described in [I-D.ietf-pce-pcep-extension-pce-controller-sr].¶
[RFC8355] identifies various protection and resiliency use cases for SR. Path protection lets the ingress node be in charge of the failure recovery (used for SR-TE). Protection can also be performed by the node adjacent to the failed component, commonly referred to as local protection or fast-reroute (FRR) techniques. In the case of PCECC, the protection paths can be pre-computed and set up by the PCE.¶
The following example illustrates the use case where the node SIDs and adjacency SIDs are allocated by the PCECC.¶
                192.0.2.1/32
                +----------+
                | R1(1001) |
                +----------+
                      |
                +----------+
                | R2(1002) |  192.0.2.2/32
                +----------+
               *      |   *  *
              *       |    *    *
             *   link1|     *      *
   192.0.2.4/32       |link2 *       *   192.0.2.5/32
   +-----------+  9001|       *      +-----------+
   | R4(1004)  |      |        *     | R5(1005)  |
   +-----------+      |         *    +-----------+
         *            |     9003 *         +
           *          |           *        +
             *        |           *        +
             +-----------+      +-----------+
   192.0.2.3/32| R3(1003) |     | R6(1006)  |192.0.2.6/32
             +-----------+      +-----------+
                   |
             +-----------+
             | R8(1008)  |  192.0.2.8/32
             +-----------+¶
Each node (PCC) is allocated a node SID by the PCECC. The PCECC needs to update the label map of each node to all the nodes in the domain. On receiving the label map, each node (PCC) uses its local routing information to determine the next hop and downloads the label forwarding instructions accordingly. The forwarding behavior and the end result are the same as for an IGP-based node SID in SR. Thus, from anywhere in the domain, it enforces the ECMP-aware shortest-path forwarding of the packet towards the related node.¶
For each adjacency in the network, the PCECC can allocate an Adj-SID. The PCECC sends a PCInitiate message to update the label map of each adjacency to the corresponding nodes in the domain. Each node (PCC) downloads the label forwarding instructions accordingly. The forwarding behavior and the end result are similar to the IGP-based Adj-SID in SR.¶
These mechanisms are described in [I-D.ietf-pce-pcep-extension-pce-controller-sr].¶
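As a purely illustrative sketch (not the encoding defined in that document, and with assumed next-hop values), the following shows how a node such as R2 might combine the node-SID label map received from the PCECC with its own local routing information to derive the label forwarding entries it installs.¶

   # Illustrative sketch only (hypothetical data structures): deriving
   # the MPLS forwarding entries a node installs after receiving the
   # node-SID label map from the PCECC.

   # Node-SID map distributed by the PCECC (node -> SID), per the figure.
   node_sid = {"R1": 1001, "R2": 1002, "R3": 1003, "R4": 1004,
               "R5": 1005, "R6": 1006, "R8": 1008}

   # Local routing information on R2: destination node -> IGP next hops.
   # (Assumed values; in practice this comes from the node's own SPF.
   #  Multiple next hops would yield ECMP entries.)
   r2_next_hops = {"R1": {"R1"}, "R3": {"R3"}, "R4": {"R4"},
                   "R5": {"R5"}, "R6": {"R5"}, "R8": {"R3"}}

   def derive_fib(local_node, node_sid, next_hops):
       """Build in-label -> [(out-label, next-hop), ...] for node SIDs."""
       fib = {}
       for dest, sid in node_sid.items():
           if dest == local_node:
               continue
           # Swap to the same node SID towards every next hop
           # (pop when the next hop is the destination - PHP not shown).
           fib[sid] = [(sid, nh) for nh in sorted(next_hops[dest])]
       return fib

   print(derive_fib("R2", node_sid, r2_next_hops)[1008])
   # -> [(1008, 'R3')] : shortest-path forwarding towards R8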
In this mode of the solution, the PCECC just needs to allocate the node SIDs (without calculating an explicit path for the SR path). The ingress of the forwarding path just needs to encapsulate the destination node SID on top of the packet. All the intermediate nodes will forward the packet based on the destination node SID. It is similar to an LDP LSP.¶
R1 may send a packet to R8 simply by pushing an SR header with segment list {1008} (the node SID for R8). The path would be based on the routing/next-hop calculation on the routers.¶
SR-TE paths may not follow an IGP SPT. Such paths may be chosen by a PCECC and provisioned on the ingress node of the SR-TE path. The SR header consists of a list of SIDs (or MPLS labels). The header has all the necessary information so that the packets can be guided from the ingress node to the egress node of the path; hence, there is no need for any signaling protocol. For the case where a strict traffic-engineered path is needed, all the adjacency SIDs are stacked; otherwise, a combination of node SIDs and adjacency SIDs can be used for the SR-TE paths.¶
Note that bandwidth reservations are only maintained at the controller and guaranteed through the enforcement of bandwidth admission control. In the RSVP-TE LSP case, by contrast, the control-plane signaling also reserves link bandwidth at each hop of the path.¶
The SR traffic-engineered path examples are explained below:¶
Note that the node SID for each node is allocated from the SRGB, and the adjacency SIDs for each link are allocated from the SRLB of each node.¶
Example 1:¶
R1 may send a packet P1 to R8 simply by pushing an SR header with segment list {1008}. Based on the best path, it could be: R1-R2-R3-R8.¶
Example 2:¶
R1 may send a packet P2 to R8 by pushing an SR header with segment list {1002, 9001, 1008}. The path should be: R1-R2-link1-R3-R8.¶
Example 3:¶
R1 may send a packet P3 to R8 via R4 by pushing an SR header with segment list {1004, 1008}. The path could be: R1-R2-R4-R3-R8.¶
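A small, purely illustrative sketch of how the segment lists in Examples 1-3 resolve to hop-by-hop paths over the figure's topology; the shortest-path choices listed are assumptions made for the example, not the output of a real SPF computation.¶

   # Purely illustrative: resolving the segment lists from Examples 1-3.

   node_by_sid = {1001: "R1", 1002: "R2", 1003: "R3", 1004: "R4",
                  1005: "R5", 1006: "R6", 1008: "R8"}
   # Adjacency SIDs allocated from the SRLB of the advertising node.
   adj_by_sid = {9001: ("R2", "R3"),   # link1
                 9003: ("R2", "R3")}   # link2

   # Assumed shortest paths between the nodes used in the examples.
   shortest = {("R1", "R8"): ["R1", "R2", "R3", "R8"],
               ("R1", "R2"): ["R1", "R2"],
               ("R1", "R4"): ["R1", "R2", "R4"],
               ("R3", "R8"): ["R3", "R8"],
               ("R4", "R8"): ["R4", "R3", "R8"]}

   def resolve(ingress, segment_list):
       """Expand a segment list into the node sequence traversed."""
       path, current = [ingress], ingress
       for sid in segment_list:
           if sid in adj_by_sid:                  # adjacency segment
               src, dst = adj_by_sid[sid]
               assert src == current, "adj SID must be reached first"
               path.append(dst)
           else:                                  # node (prefix) segment
               hops = shortest[(current, node_by_sid[sid])]
               path.extend(hops[1:])
           current = path[-1]
       return path

   print(resolve("R1", [1008]))              # Example 1: R1-R2-R3-R8
   print(resolve("R1", [1002, 9001, 1008]))  # Example 2: via link1
   print(resolve("R1", [1004, 1008]))        # Example 3: R1-R2-R4-R3-R8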
The local protection examples for SR-TE paths are explained below:¶
Example 4: local link protection:¶
Example 5: local node protection:¶
[RFC8402] defines the Segment Routing architecture, in which an SR Policy is used to steer packets from a node through an ordered list of segments. The SR Policy could be configured on the headend or instantiated by an SR controller. The SR architecture does not restrict how the controller programs the network. The options are the Network Configuration Protocol (NETCONF), PCEP, and BGP. An SR Policy can be based on either the SR-MPLS or the SRv6 data plane.¶
The SR Policy architecture is described in [I-D.ietf-spring-segment-routing-policy]. An SR Policy is a framework that enables the instantiation of an ordered list of segments on a node for implementing a source routing policy for the steering of traffic for a specific purpose (e.g., for a specific SLA) from that node.¶
An SR Policy is identified through the tuple <headend, color, endpoint>. In the context of a specific headend, one may identify an SR Policy by the <color, endpoint> tuple.¶
The headend is the node where the policy is instantiated/implemented. The endpoint indicates the destination of the policy. The color is a 32-bit numerical value that associates the SR Policy with an intent or objective.¶
An SR Policy should have one or more candidate paths. A candidate path is the unit for signaling of an SR Policy to a headend via protocol extensions such as [I-D.ietf-pce-segment-routing-policy-cp] or BGP SR Policy [I-D.ietf-idr-segment-routing-te-policy]. Each candidate path must have one or more Segment-Lists. A Segment-List represents a specific source-routed path to send traffic from the headend to the endpoint of the corresponding SR Policy.¶
A candidate path is either dynamic, explicit, or composite. For the PCECC use case, a candidate path should be either dynamic (i.e., the PCE provides it according to a specific optimization objective) or composite (a composite candidate path construct enables the combination of SR Policies, each with explicit candidate paths and/or dynamic candidate paths with potentially different optimization objectives and constraints).¶
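The following sketch is an informal data model (invented field names, not a protocol encoding) loosely following the SR Policy constructs described above: a policy identified by <headend, color, endpoint>, holding candidate paths, each with one or more segment lists.¶

   # Illustrative data model only; field names are informal.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class SegmentList:
       segments: List[int]      # SIDs (MPLS labels here; could be SRv6)
       weight: int = 1          # load balancing across segment lists

   @dataclass
   class CandidatePath:
       preference: int          # highest valid preference becomes active
       origin: str              # e.g. "PCEP" (informal)
       segment_lists: List[SegmentList] = field(default_factory=list)

   @dataclass
   class SRPolicy:
       headend: str
       color: int               # 32-bit value associating an intent
       endpoint: str
       candidate_paths: List[CandidatePath] = field(default_factory=list)

       def active_path(self):
           return max(self.candidate_paths, key=lambda cp: cp.preference)

   policy = SRPolicy(
       headend="R1", color=100, endpoint="192.0.2.8",
       candidate_paths=[
           CandidatePath(preference=200, origin="PCEP",
                         segment_lists=[SegmentList([1002, 9001, 1008])]),
           CandidatePath(preference=100, origin="PCEP",
                         segment_lists=[SegmentList([1008])]),
       ])
   print(policy.active_path().segment_lists[0].segments)
   # -> [1002, 9001, 1008]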
[I-D.ietf-pce-segment-routing-policy-cp] defines a new ASSOCIATION type that binds previously separate LSPs in PCEP (candidate paths) into a common SR Policy hierarchy. This is applicable in the PCECC scenario as well.¶
Further, one could also use the PCECC mechanism directly to create an SR Policy container at the PCC by defining a new CCI for it. The advantage of that approach would be to allow an SR Policy to be created without signaling candidate paths.¶
Section 3.2 discusses the case of SR paths via the PCECC. Although those cases provide simplicity and scalability, there are existing traffic-engineered path functionalities, such as bandwidth guarantees and monitoring, for which an SR-based solution is complex. There are also cases where the depth of the label stack is an issue for existing deployments and certain vendors.¶
To address these issues, the PCECC architecture also supports the TE LSP functionalities. To achieve this, the existing PCEP can be used to communicate between the PCECC and the nodes along the path. This is similar to static LSPs, where LSPs can be provisioned as explicit label instructions at each hop on the end-to-end path. Each router along the path must be told what label-forwarding instructions to program and what resources to reserve. The PCE-based controller keeps a view of the network and determines the paths of the end-to-end LSPs, and the controller uses PCEP to communicate with each router along the path of the end-to-end LSP.¶
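The sketch below illustrates, in hypothetical controller-side bookkeeping only (the actual CCI encodings are defined in [RFC9050]), the per-hop label cross-connect instructions a PCECC might compute for an end-to-end LSP: one push/swap/pop instruction per router on the path.¶

   # Illustrative sketch: per-hop label cross-connects for one LSP.
   def build_crossconnects(path, allocate_label):
       """path: list of (router, outgoing_interface) from ingress to
       egress; returns one label instruction per router."""
       # The downstream router allocates (via the controller) the label
       # that its upstream neighbour will use.
       labels = {router: allocate_label(router) for router, _ in path[1:]}
       entries = {}
       ingress, ingress_if = path[0]
       entries[ingress] = {"op": "push",
                           "out_label": labels[path[1][0]],
                           "out_if": ingress_if}
       for i, (router, out_if) in enumerate(path[1:-1], start=1):
           entries[router] = {"op": "swap",
                              "in_label": labels[router],
                              "out_label": labels[path[i + 1][0]],
                              "out_if": out_if}
       egress = path[-1][0]
       entries[egress] = {"op": "pop", "in_label": labels[egress]}
       return entries

   next_label = iter(range(16001, 16100))   # assumed controller pool
   xc = build_crossconnects(
       [("R1", "if1"), ("R2", "if2"), ("R3", "if3"), ("R8", None)],
       allocate_label=lambda r: next(next_label))
   for router, entry in xc.items():
       print(router, entry)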
Very often, many service providers use TE tunnels for solving issues with non-deterministic paths in their networks. One example of such an application is the usage of TE in the mobile backhaul (MBH). Consider the following topology:¶
      TE1 -------------->
   +---------+    +--------+    +--------+    +--------+    +------+   +---+
   | Access  |----| Access |----| AGG 1  |----| AGG N-1|----|Core 1|---|SR1|
   | SubNode1|    | Node 1 |    +--------+    +--------+    +------+   +---+
   +---------+    +--------+        |             |            ^
       |              |             |             |            |
    Access         Access       AGG Ring 1        |            |
    SubRing 1      Ring 1           |             |            |
       |              |             |             |            |
   +---------+    +--------+    +--------+        |            |
   | Access  |    | Access |    | AGG 2  |        |            |
   | SubNode2|    | Node 2 |    +--------+        |            |
   +---------+    +--------+        |             |            |
       |              |             |             |            |
       |              |             |             |            |
       |              |             |  +----TE2---|-+          |
   +---------+    +--------+    +--------+    +--------+    +------+   +---+
   | Access  |    | Access |----| AGG 3  |----| AGG N  |----|Core N|---|SRn|
   | SubNodeN|----| Node N |    +--------+    +--------+    +------+   +---+
   +---------+    +--------+¶
This MBH architecture uses L2 access rings and sub-rings. L3 starts at the aggregation layer. For the sake of simplicity, the figure shows only one access sub-ring, one access ring, and one aggregation ring (AGG1...AGGN), connected by Nx10GE interfaces. The aggregation domain runs its own IGP. There are two egress routers (AGG N-1, AGG N) that are connected to the core domain via L2 interfaces. The core also has connections to service routers. RSVP-TE LSPs are used for MPLS transport inside the ring. There could be at least 2 tunnels (one way) from each AGG router to the egress AGG routers. There are also many L2 access rings connected to the AGG routers.¶
Service deployment is made by means of either L2VPNs (VPLS) or L3VPNs. Those services use MPLS TE as transport towards the egress AGG routers. TE tunnels could also be used as transport towards service routers in the case of a seamless MPLS-based architecture in the future.¶
There is a need to solve the following tasks:¶
Since other tasks are already considered by other PCECC use cases, the focus in this section is on the load balancing (LB) task. The LB task could be solved by means of the PCECC in the following way:¶
There are various signaling options for establishing an Inter-AS TE LSP: contiguous TE LSPs [RFC5151], stitched TE LSPs [RFC5150], and nested TE LSPs [RFC4206].¶
Requirements for PCE-based Inter-AS setup [RFC5376] describe the approach and the PCEP functionality that are needed for establishing Inter-AS TE LSPs.¶
[RFC5376] also gives the Inter-AS and Intra-AS PCE Reference Model, which is provided below in shortened form for the sake of simplicity.¶
PCECCs belonging to different domains can cooperate to set up an inter-AS TE LSP. The stateful H-PCE [RFC8751] mechanism could also be used to first establish per-domain PCECC LSPs. These could be stitched together to form an inter-AS TE LSP as described in [I-D.ietf-pce-stateful-interdomain].¶
For the sake of simplicity, hereafter the focus is on a simplified Inter-AS case where both AS1 and AS2 belong to the same service provider administration. In that case, the Inter-AS and Intra-AS PCEs could be combined into one single PCE if the scalability and performance of such a combined PCE are enough for handling all path computation requests and setup. The PCE would require interfaces (PCEP and BGP-LS) to both domains. PCECC redundancy mechanisms are described in [RFC8283]. Thus, routers in AS1 and AS2 (PCCs) can send PCEP messages towards the same PCECC.¶
In the PCECC Inter-AS TE scenario where the service provider controls both domains (AS1 and AS2), each of them has its own IGP and MPLS transport. There is a need to set up Inter-AS LSPs for transporting different services on top of them (voice, L3VPN, etc.). Inter-AS links with different capacities exist in several regions. The task is not only to provision those Inter-AS LSPs with given constraints but also to calculate the path and pre-set up the backup Inter-AS LSPs that will be used if the primary LSP fails.¶
As per Figure 4, LSP1 from R1 to R3 goes via ASBR1 and ASBR3 and is the primary Inter-AS LSP. LSP2 from R1 to R3, which goes via ASBR5 and ASBR6, is the backup one. In addition, there could also be a bypass LSP set up to protect against an ASBR or inter-AS link failure.¶
After the addition of PCECC functionality to the PCE (SDN controller), the PCECC-based Inter-AS TE model SHOULD follow the PCECC use case for TE LSPs as well as the requirements of [RFC5376], with the following details:¶
The multicast LSPs can be set up via the RSVP-TE P2MP or mLDP protocols. The setup of these LSPs may require manual configuration and complex signaling when protection is considered. By using the PCECC solution, the multicast LSPs can be computed and set up through a centralized controller that has the full picture of the topology and bandwidth usage for each link. It not only reduces the complex configuration compared with distributed RSVP-TE P2MP or mLDP signaling, but it can also compute the disjoint primary and secondary P2MP paths efficiently.¶
It is assumed that the PCECC is aware of the label space it controls for all nodes and makes allocations accordingly.¶
                 +----------+
                 |    R1    |   Root node of the multicast LSP
                 +----------+
                      |6000
                 +----------+
                 |    R2    |   Transit Node (branch)
                 +----------+
                *           *
           9001*             *9002
              *               *
   +-----------+               +-----------+
   |    R4     |               |    R5     |   Transit Nodes
   +-----------+               +-----------+
         *                           +
     9003*                           +9004
          *                          +
        +-----------+     +-----------+
        |    R3     |     |    R6     |   Leaf Node
        +-----------+     +-----------+
            9005|
        +-----------+
        |    R8     |   Leaf Node
        +-----------+¶
The P2MP examples are explained here, where R1 is the root and R8 and R6 are the leaves.¶
The packet forwarding involves:¶
In this section, we describe the end-to-end managed path protection service, as well as the local protection with operational management in the PCECC network, for the P2MP/MP2MP LSP.¶
An end-to-end protection principle can be applied for computing backup P2MP or MP2MP LSPs. During computation of the primary multicast trees, the PCECC server may also take the computation of a secondary tree into consideration. A PCE may compute the primary and backup P2MP (or MP2MP) LSPs together or sequentially.¶
            +----+    +----+
            | R1 |----| R11|        Root node of LSP
            +----+    +----+
              /          +
           10/            +20
            /              +
      +----------+    +-----------+
      |    R2    |    |    R3     |  Transit Node
      +----------+    +-----------+
         |     \         +      +
         |      \        +      +
       10|     10\     +20      20+
         |        \    +          +
         |         \  +           +
         |        +  \            +
     +-----------+    +-----------+
     |    R4     |    |    R5     |  Leaf Nodes
     +-----------+    +-----------+  (Downstream LSR)¶
In the example above, when the PCECC sets up the primary multicast tree from the root node R1 to the leaves, which is R1->R2->{R4, R5}, at the same time it can set up the backup tree, which is R1->R11->R3->{R4, R5}. Both the primary forwarding tree and the secondary forwarding tree will be downloaded to the routers along the primary path and the secondary path. The traffic will be forwarded through the R1->R2->{R4, R5} path normally, and when a node in the primary tree fails (say R2), the root node R1 will switch the flow to the backup tree, which is R1->R11->R3->{R4, R5}. By using the PCECC, the path computation and forwarding path download can all be done without the complex signaling used in P2MP RSVP-TE or mLDP.¶
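A minimal, hypothetical sketch of this behavior (invented structures, not a protocol mechanism): the primary and backup trees are represented as per-node replication branches, and the root switches the flow to the backup tree when a failure on the active tree is reported.¶

   # Illustrative sketch only: end-to-end P2MP protection switchover.
   primary = {"R1": ["R2"], "R2": ["R4", "R5"]}           # R1->R2->{R4,R5}
   backup  = {"R1": ["R11"], "R11": ["R3"],
              "R3": ["R4", "R5"]}                          # R1->R11->R3->{R4,R5}

   class RootNode:
       def __init__(self, primary_tree, backup_tree):
           self.active = primary_tree
           self.backup = backup_tree

       def node_failed(self, node):
           """Switch to the backup tree if the failed node is on the
           active tree (end-to-end protection triggered at the root)."""
           on_active = (node in self.active or
                        any(node in v for v in self.active.values()))
           if on_active:
               self.active = self.backup
           return self.active

   r1 = RootNode(primary, backup)
   print(r1.node_failed("R2"))   # R2 fails -> traffic follows backup tree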
In this section we describe the local protection service in the PCECC network for the P2MP/MP2MP LSP.¶
While the PCECC sets up the primary multicast tree, it can also build the backup LSP among the PLR, the protected node, and the MPs (the downstream nodes of the protected node). In cases where the number of downstream nodes is large, this mechanism can avoid unnecessary packet duplication on the PLR and protect the network from traffic congestion risk.¶
          +------------+
          |     R1     |             Root Node
          +------------+
                .
                .
                .
          +------------+             Point of Local Repair/
          |    R10     |             Switchover Point
          +------------+             (Upstream LSR)
              /      +
           10/        +20
            /          +
      +----------+    +-----------+
      |   R20    |    |    R30    |  Protected Node
      +----------+    +-----------+
         |     \         +      +
       10|     10\     +20      20+
         |        \    +          +
         |         \  +           +
         |        +  \            +
     +-----------+    +-----------+
     |    R40    |    |    R50    |  Merge Point
     +-----------+    +-----------+  (Downstream LSR)
          .                 .
          .                 .¶
In the example above, when the PCECC sets up the primary multicast path around the PLR node R10 to protect node R20, which is R10->R20->{R40, R50}, at the same time it can set up the backup path R10->R30->{R40, R50}. Both the primary forwarding path and the secondary bypass forwarding path will be downloaded to the routers along the primary path and the secondary bypass path. The traffic will be forwarded through the R10->R20->{R40, R50} path normally, and when there is a node failure for node R20, the PLR node R10 will switch the flow to the backup path, which is R10->R30->{R40, R50}. By using the PCECC, the path computation and forwarding path download can all be done without the complex signaling used in P2MP RSVP-TE or mLDP.¶
As described in [RFC8283], traffic classification is an important part of traffic engineering. It is the process of looking at a packet to determine how it should be treated as it is forwarded through the network. It applies in many scenarios including MPLS traffic engineering (where it determines what traffic is forwarded onto which LSPs); segment routing (where it is used to select which set of forwarding instructions to add to a packet); and SFC (where it indicates along which service function path a packet should be forwarded). In conjunction with traffic engineering, traffic classification is an important enabler for load balancing. Traffic classification is closely linked to the computational elements of planning for the network functions just listed because it determines how traffic load is balanced and distributed through the network. Therefore, selecting what traffic classification should be performed by a router is an important part of the work done by a PCECC.¶
Instructions can be passed from the controller to the routers using PCEP. These instructions tell the routers how to map traffic to paths or connections. Refer to [RFC9168].¶
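The following is an informal sketch, in the spirit of the flow-to-path binding described above (invented field names, not the PCEP flow specification encoding of [RFC9168]), of a classification instruction that binds a flow description to a previously installed path.¶

   # Informal sketch only: a flow description bound to a path.
   from ipaddress import ip_address, ip_network

   classification_rules = [
       {"match": {"dst_prefix": ip_network("198.51.100.0/24"), "dscp": 46},
        "bind_to": {"plsp_id": 100}},   # e.g. a low-latency SR-TE path
       {"match": {"dst_prefix": ip_network("198.51.100.0/24")},
        "bind_to": {"plsp_id": 200}},   # default path for the same prefix
   ]

   def classify(dst_ip, dscp):
       """Return the path binding of the first matching rule."""
       for rule in classification_rules:
           m = rule["match"]
           if dst_ip in m["dst_prefix"] and m.get("dscp", dscp) == dscp:
               return rule["bind_to"]
       return None

   print(classify(ip_address("198.51.100.7"), dscp=46))  # {'plsp_id': 100}
   print(classify(ip_address("198.51.100.7"), dscp=0))   # {'plsp_id': 200}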
Along with traffic classification, there are a few more questions that need to be considered once the path is set up:¶
These are out of scope of this document.¶
As per [RFC8402], with Segment Routing (SR), a node steers a packet through an ordered list of instructions, called segments. Segment Routing can be applied to the IPv6 architecture with the Segment Routing Header (SRH) [RFC8754]. A segment is encoded as an IPv6 address. An ordered list of segments is encoded as an ordered list of IPv6 addresses in the routing header. The active segment is indicated by the Destination Address of the packet. Upon completion of a segment, a pointer in the new routing header is incremented and indicates the next segment.¶
As per [RFC8754], an SRv6 Segment is a 128-bit value. "SRv6 SID", or simply "SID", is often used as a shorter reference for "SRv6 Segment". An illustration is provided in [RFC8986], where an SRv6 SID is represented as LOC:FUNCT.¶
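As a purely illustrative sketch of the LOC:FUNCT structure (the locator/function lengths and the locator value are assumptions made for the example; real deployments choose their own):¶

   # Illustrative sketch only: composing an SRv6 SID as LOC:FUNCT,
   # assuming a 64-bit locator and a 16-bit function value.
   from ipaddress import IPv6Address, IPv6Network

   def make_sid(locator: IPv6Network, funct: int,
                locator_bits: int = 64) -> IPv6Address:
       """Place 'funct' immediately after the locator bits of the SID."""
       shift = 128 - locator_bits - 16        # 16-bit FUNCT assumed here
       return IPv6Address(int(locator.network_address) | (funct << shift))

   # Assumed locator derived from the figure's addressing plan for R8.
   end_sid = make_sid(IPv6Network("2001:db8:0:8::/64"), funct=0x0001)
   print(end_sid)    # 2001:db8:0:8:1::  (an illustrative SID at R8)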
[I-D.ietf-pce-segment-routing-ipv6] extends [RFC8664] to support SR for the IPv6 data plane. Further, a PCECC could be extended to support SRv6 SID allocation and distribution.¶
                2001:db8::1
                +----------+
                |    R1    |
                +----------+
                      |
                +----------+
                |    R2    |  2001:db8::2
                +----------+
               *      |   *  *
              *       |    *    *
             *   link1|     *      *
   2001:db8::4        |link2 *       *   2001:db8::5
   +-----------+      |       *      +-----------+
   |    R4     |      |        *     |    R5     |
   +-----------+      |         *    +-----------+
         *            |          *         +
           *          |           *        +
             *        |           *        +
             +-----------+      +-----------+
   2001:db8::3|    R3     |     |    R6     |2001:db8::6
             +-----------+      +-----------+
                   |
             +-----------+
             |    R8     |  2001:db8::8
             +-----------+¶
In this case, the PCECC could assign the SRv6 SIDs (in the form of IPv6 addresses) to be used for the nodes and adjacencies. Later, an SRv6 path in the form of a list of SRv6 SIDs could be used at the ingress. Some examples:¶
Service Function Chaining (SFC) is described in [RFC7665]. It is the process of directing traffic in a network such that it passes through specific hardware devices or virtual machines (known as service function nodes) that can perform particular desired functions on the traffic. The set of functions to be performed and the order in which they are to be performed is known as a service function chain. The chain is enhanced with the locations at which the service functions are to be performed to derive a Service Function Path (SFP). Each packet is marked as belonging to a specific SFP, and that marking lets each successive service function node know which functions to perform and to which service function node to send the packet next. To operate an SFC network, the service function nodes must be configured to understand the packet markings, and the edge nodes must be told how to mark packets entering the network. Additionally, it may be necessary to establish tunnels between service function nodes to carry the traffic. Planning an SFC network requires load balancing between service function nodes and traffic engineering across the network that connects them. As per [RFC8283], these are operations that can be performed by a PCE-based controller, and that controller can use PCEP to program the network and install the service function chains and any required tunnels.¶
A possible mechanism would be to add support for an SFC-based central control instruction that would be able to instruct the following to each SFF along the SFP:¶
The PCECC can play the role of setting the traffic classification rules at the classifier to impose the NSH, as well as downloading the forwarding instructions to each SFF along the way so that they can process the NSH and forward accordingly. Instructions to the service classifier handle the context header, metadata, etc.¶
It is also possible to support SFC with SR, in conjunction with or without the NSH, as described in [I-D.ietf-spring-nsh-sr] and [I-D.ietf-spring-sr-service-programming]. The PCECC technique can also be used for service-function-related segments and SR service policies.¶
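The sketch below illustrates, with invented entries and names (not a protocol encoding), the kind of NSH forwarding state a controller could download to each SFF: entries keyed by (Service Path Identifier, Service Index) that either hand the packet to the local service function or forward it to the next SFF.¶

   # Informal sketch only: NSH forwarding state downloaded to an SFF.
   # SFP 10: classifier -> SFF1(firewall) -> SFF2(nat) -> egress
   sff1_table = {
       (10, 255): {"action": "deliver-to-sf", "sf": "firewall1"},
       (10, 254): {"action": "forward", "next_sff": "SFF2",
                   "transport": "tunnel-12"},
   }

   def sff_forward(table, spi, si):
       """Look up the NSH (SPI, SI) and return the forwarding action."""
       entry = table.get((spi, si))
       if entry is None:
           return {"action": "drop"}    # no state for this service path
       return entry

   print(sff_forward(sff1_table, 10, 255))  # hand packet to the local SF
   print(sff_forward(sff1_table, 10, 254))  # SI decremented; go to SFF2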
[RFC8735] describes the scenarios and suggestions for the "Centrally Control Dynamic Routing (CCDR)" architecture, which integrates the merits of traditional distributed protocols (IGP/BGP) and the power of centrally controlled technologies (PCE/SDN) to provide one feasible traffic engineering solution in various complex scenarios for the service provider. [RFC8821] defines the framework for CCDR traffic engineering within a native IP network, using a Dual/Multi-BGP session strategy and the CCDR architecture. The PCEP protocol can be used to transfer the key parameters between the PCE and the underlying network devices (PCCs) using the PCECC technique. The central control instructions from the PCECC identify which prefix should be advertised on which BGP session.¶
Bit Index Explicit Replication (BIER) [RFC8279] defines an architecture where all intended multicast receivers are encoded as a bitmask in the multicast packet header within different encapsulations. A router that receives such a packet will forward the packet based on the bit position in the packet header towards the receiver(s) following a precomputed tree for each of the bits in the packet. Each receiver is represented by a unique bit in the bitmask.¶
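A minimal, illustrative sketch of this bit-position-based replication (the Bit Index Forwarding Table below is an assumed, precomputed example, not derived from a real topology): for each bit set in the packet's BitString, one copy is sent towards the neighbor whose forwarding bitmask covers that bit, and the copy's BitString is ANDed with that mask so each receiver gets exactly one copy.¶

   # Illustrative sketch only of BIER-style forwarding.
   bift = [  # (neighbour, F-BM); bit i represents the BFR with BFR-id i+1
       ("nbr-A", 0b0011),     # BFR-ids 1 and 2 are reached via nbr-A
       ("nbr-B", 0b1100),     # BFR-ids 3 and 4 are reached via nbr-B
   ]

   def bier_forward(bitstring):
       copies = []
       remaining = bitstring
       for neighbour, fbm in bift:
           if remaining & fbm:
               copies.append((neighbour, remaining & fbm))
               remaining &= ~fbm       # those receivers are now covered
       return copies

   # Packet destined to BFR-ids 1, 3 and 4 (bits 0, 2 and 3 set).
   print(bier_forward(0b1101))
   # -> [('nbr-A', 1), ('nbr-B', 12)]  i.e. BitStrings 0b0001 and 0b1100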
BIER-TE [I-D.ietf-bier-te-arch] shares architecture and packet formats with BIER. BIER-TE forwards and replicates packets based on a BitString in the packet header, but every BitPosition of the BitString of a BIER-TE packet indicates one or more adjacencies. A BIER-TE path can be derived by a PCE and used at the ingress as described in [I-D.chen-pce-bier].¶
Further, the PCECC mechanism could be used for the allocation of bits for the BIER router for BIER, as well as for the adjacencies for BIER-TE. A PCECC-based controller can use PCEP to instruct the BIER-capable routers about the meaning of the bits as well as the other fields needed for BIER encapsulation. The PCECC could be used to program the BIER router with various parameters used in the BIER encapsulation, such as the BIER sub-domain ID, BFR-id, and BIER encapsulation, for both node and adjacency.¶
This document does not require any action from IANA.¶
[RFC8283] describes how the security considerations for a PCE-based controller are little different from those for any other PCE system. That is, the operation relies heavily on the use and security of PCEP, so due consideration should be given to the security features discussed in [RFC5440] and the additional mechanisms described in [RFC8253]. It further lists the vulnerability of a central controller architecture, such as a central point of failure, denial of service, and a focus for interception and modification of messages sent to individual Network Elements (NEs).¶
As per [RFC9050], the use of Transport Layer Security (TLS) in PCEP is recommended, as it provides support for peer authentication, message encryption, and integrity. It further provides mechanisms for associating peer identities with different levels of access and/or authoritativeness via an attribute in X.509 certificates or a local policy with a specific accept-list of X.509 certificates. This can be used to check the authority for the PCECC operations.¶
It is expected that each new document that is produced for a specific use case will also include considerations of the security impacts of the use of a PCE-based central controller on the network type and services being managed.¶
We would like to thank Adrian Farrel, Aijun Wang, Robert Tao, Changjiang Yan, Tieying Huang, Sergio Belotti, Dieter Beller, Andrey Elperin and Evgeniy Brodskiy for their useful comments and suggestions.¶
This section lists some more advanced use cases of PCECC that were discussed and could be worked on in the future.¶
One of the main advantages of the PCECC solution is that it is naturally backward compatible, since the PCE server itself can function as a proxy node of the MPLS network for all the new nodes that may no longer support the signaling protocols.¶
As illustrated in the following example, the current network could migrate to a fully PCECC-controlled network gradually by replacing the legacy nodes. During the migration, the legacy nodes still need to signal using the existing MPLS protocols such as LDP and RSVP-TE, and the new nodes set up their portion of the forwarding path through the PCECC directly. With the PCECC functioning as the proxy of these new nodes, MPLS signaling can be propagated through the network as normal.¶
The example described in this section is based on the network configuration illustrated in the following figure:¶
   +------------------------------------------------------------------+
   |                            PCE DOMAIN                            |
   |  +-----------------------------------------------------+         |
   |  |                        PCECC                        |         |
   |  +-----------------------------------------------------+         |
   |        ^                  ^               ^         ^            |
   |        | PCEP             |         PCEP  |         |            |
   |        V                  V               V         V            |
   | +--------+   +--------+   +--------+   +--------+   +--------+   |
   | | NODE 1 |   | NODE 2 |   | NODE 3 |   | NODE 4 |   | NODE 5 |   |
   | |        |...|        |...|        |...|        |...|        |   |
   | | Legacy |if1| Legacy |if2| Legacy |if3| PCECC  |if4| PCECC  |   |
   | | Node   |   | Node   |   | Node   |   | Enabled|   | Enabled|   |
   | +--------+   +--------+   +--------+   +--------+   +--------+   |
   |                                                                  |
   +------------------------------------------------------------------+

        Example: PCECC Initiated LSP Setup In the Network Migration¶
In this example, there are five nodes for the TE LSP from the head end (Node1) to the tail end (Node5), where Node4 and Node5 are centrally controlled and the other nodes are legacy nodes.¶
As described in [RFC8283], various network services may be offered over a network. These include protection services, Virtual Private Network (VPN) services (such as Layer 3 VPNs [RFC4364] or Ethernet VPNs [RFC7432]), and Pseudowires [RFC3985]. Delivering services over a network in an optimal way requires coordination in the way that network resources are allocated to support the services. A PCE-based central controller can consider the whole network and all components of a service at once when planning how to deliver the service. It can then use PCEP to manage the network resources and to install the necessary associations between those resources.¶
In the case of L3VPN, VPN labels can be assigned and distributed via PCEP by the PCECC among the PE routers instead of using the BGP protocol.¶
The example described in this section is based on the network configuration illustrated in the following figure:¶
               +-------------------------------------------+
               |                PCE DOMAIN                 |
               |   +-----------------------------------+   |
               |   |               PCECC               |   |
               |   +-----------------------------------+   |
               |      ^              ^             ^       |
               | PWE3/L3VPN    PCEP  |LSP   PWE3/L3VPN     |
               |      |PCEP          |             |PCEP   |
               |      V              V             V       |
   +--------+  |  +--------+    +--------+    +--------+   |  +--------+
   |   CE   |  |  |  PE1   |    | NODE x |    |  PE2   |   |  |   CE   |
   |        |.....|        |....|        |....|        |......|        |
   | Legacy |  if1| PCECC  |if2 | PCECC  |if3 | PCECC  |if4 |  | Legacy |
   |  Node  |  |  | Enabled|    | Enabled|    | Enabled|   |  |  Node  |
   +--------+  |  +--------+    +--------+    +--------+   |  +--------+
               |                                           |
               +-------------------------------------------+

                 Example: Using PCECC for L3VPN and PWE3¶
In the case of PWE3, instead of using the LDP signaling protocol, the label and port pairs assigned to each pseudowire can be assigned through the PCECC among the PE routers, and the corresponding forwarding entries will be distributed to each PE router through the extended PCEP protocol and PCECC mechanism.¶
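A hypothetical sketch of this central service-label assignment (invented structures and label values; the actual PCEP extensions are out of scope here): the controller assigns the L3VPN labels per PE and the pseudowire label/port pairs per PE pair, and would then download the resulting entries to each PE.¶

   # Illustrative bookkeeping only: a PCECC assigning service labels
   # centrally instead of having them signalled by BGP (L3VPN) or
   # LDP (PWE3).
   from itertools import count

   service_labels = count(700000)   # assumed label block for services

   def assign_l3vpn_labels(pe_routers, vrf):
       """Assign one VPN label per PE for a VRF."""
       return {pe: {"vrf": vrf, "vpn_label": next(service_labels)}
               for pe in pe_routers}

   def assign_pw(pe1, pe2, pw_id, attachment_ports):
       """Assign a label/port pair for each end of a pseudowire.  Each PE
       is told its attachment port, the label it expects to receive, and
       the label it should impose towards the remote PE."""
       label_at = {pe1: next(service_labels), pe2: next(service_labels)}
       return {
           pe: {"pw_id": pw_id,
                "local_port": attachment_ports[pe],
                "incoming_label": label_at[pe],
                "outgoing_label": label_at[peer]}
           for pe, peer in ((pe1, pe2), (pe2, pe1))
       }

   print(assign_l3vpn_labels(["PE1", "PE2"], vrf="CUSTOMER-A"))
   print(assign_pw("PE1", "PE2", pw_id=42,
                   attachment_ports={"PE1": "if1", "PE2": "if4"}))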
[I-D.cbrt-pce-stateful-local-protection] describes the need for the PCE to maintain and associate the local protection paths for an RSVP-TE LSP. Local protection requires the setup of a bypass at the PLR. This bypass can be PCC-initiated and delegated, or PCE-initiated. In either case, the PLR MUST maintain a PCEP session to the PCE. The bypass LSPs need to be mapped to the primary LSP. This could be done locally at the PLR based on a local policy, but there is a need for the PCE to do the mapping as well to exert greater control.¶
This mapping can be done via PCECC procedures, where the PCE could instruct the PLR about the mapping and identify the primary LSP for which the bypass should be used.¶
The MapReduce model of distributed computation in computing clusters is widely deployed. In the Hadoop 1.0 architecture, MapReduce operates on big data in the Hadoop Distributed File System (HDFS), where the NameNode has knowledge of the resources of the cluster and of where the actual data (chunks) for a particular task are located (i.e., on which DataNode). Each chunk of data (64 MB or more) should have three copies saved on different DataNodes based on their proximity.¶
The proximity level is currently allocated semi-manually and is based on Rack IDs (the assumption is that closer data is better because of access speed/lower latency).¶
The JobTracker node is responsible for computation tasks and scheduling across DataNodes and also has rack awareness. Currently, the transport protocols between the NameNode/JobTracker and the DataNodes are based on IP unicast. This has simplicity as an advantage but has numerous drawbacks related to its flat approach.¶
It is clear that we should go beyond one DC for Hadoop cluster creation and move towards distributed clusters. In that case, we need to handle performance and latency issues. Latency depends on the speed of light in fiber links and also on the latency introduced by the intermediate devices in between. The latter is closely correlated with network device architecture and performance. The current performance of NPU-based routers should be enough for creating distributed Hadoop clusters with predictable latency. The performance of software-based routers (mainly as VNFs), together with additional hardware features such as DPDK, is promising but requires additional research and testing.¶
The main question is how to create a simple but effective architecture for a distributed Hadoop cluster.¶
There is research [MAP-REDUCE] which shows how the usage of a multicast tree could improve the speed of resource or cluster-member discovery inside the cluster, as well as increase redundancy in communications between cluster nodes.¶
Is traditional IP-based multicast enough for that? We doubt it, because it requires an additional control plane (IGMP, PIM) and a lot of signaling, which is not suitable for high-performance computations that are very sensitive to latency.¶
P2MP TE tunnels look much more suitable as a potential solution for the creation of multicast-based communications between the NameNode as root and the DataNodes as leaves inside the cluster. Obviously, these P2MP tunnels should be dynamically created and torn down (with no manual intervention). Here, the PCECC comes into play with the main objective of creating the optimal topology for each particular MapReduce computation request and also creating P2MP tunnels with the needed parameters such as bandwidth and delay.¶
This solution would require the use of MPLS label-based forwarding inside the cluster. The usage of label-based forwarding inside a DC was proposed by Yandex [MPLS-DC]. Technically, it is already possible because MPLS on switches is already supported by some vendors; MPLS also exists on Linux and OVS.¶
                                +--------+
                                |  APP   |
                                +--------+
                                    | NBI (REST API,...)
                                    |
                 PCEP          +----------+     REST API
              +----------------|  PCECC   |---------------+
   +---------+                 |          |                |
   | Client  |-----------------|          |                |
   +---------+                 +----------+                |
       |                        |  PCEP  |                 |
       |       REST API         |        |                 |
       |                        |        |                 |
       |                 +-------------+      +----------+ |
       |                 | Job Tracker |      | NameNode |-+
       |                 +-------------+      +----------+
       |
       |------+--------P2MP TE--------+------------------|
              |                       |                  |
       +----------+            +----------+       +----------+
       | DataNode1|            | DataNode2|       | DataNodeN|
       |TaskTracker|           |TaskTracker| .... |TaskTracker|
       +----------+            +----------+       +----------+¶
Communication between the JobTracker, the NameNode, and the PCECC can be done via a REST API directly or via a cluster manager such as Mesos.¶
Phase 1: Distributed cluster resource discovery. During this phase, the JobTracker and NameNode SHOULD identify and find available DataNodes according to the computing request from the application (APP). The NameNode SHOULD query the PCECC about available DataNodes; the NameNode MAY provide additional constraints to the PCECC such as topological proximity and redundancy level.¶
The PCECC SHOULD analyze the topology of the distributed cluster and perform constraint-based path calculation from the client towards the most suitable DataNodes. The PCECC SHOULD reply to the NameNode with the list of the most suitable DataNodes and their resource capabilities. A topology discovery mechanism for the PCECC will be added later to this framework.¶
Phase 2: The PCECC SHOULD create a P2MP LSP from the client towards those DataNodes by means of PCEP messages, following the previously calculated path.¶
Phase 3: The NameNode SHOULD send this information to the client, and the PCECC informs the client about the optimal P2MP path towards the DataNodes via a PCEP message.¶
Phase 4: The client sends data blocks to those DataNodes for writing via the created P2MP tunnel.¶
When this task is finished, the P2MP tunnel can be torn down.¶
Following authors contributed text for this document and should be considered as co-authors:¶

   Luyuan Fang
   Expedia, Inc.
   United States of America
   Email: luyuanf@gmail.com

   Chao Zhou
   HPE
   Email: chaozhou_us@yahoo.com

   Boris Zhang
   Telus Communications
   Email: Boris.zhang@telus.com

   Artem Rachitskiy
   Mobile TeleSystems JLLC
   Nezavisimosti ave., 95
   220043, Minsk
   Belarus
   Email: arachitskiy@mts.by

   Anton Gulida
   LLC "Lifetech"
   Krasnoarmeyskaya str., 24
   220030, Minsk
   Belarus
   Email: anton.gulida@life.com.by¶