SDN/NFV Transport Network and Computing Platform for 5G/IoT Services

The ADRENALINE Testbed® is an SDN/NFV packet/optical transport network and edge/core cloud platform for end-to-end 5G and IoT services, designed and developed by the CTTC Optical Networks and Systems Department for experimental research on high-performance, large-scale intelligent optical transport networks. It allows researchers, system vendors and operators to evaluate experimentally, under conditions close to production systems, all aspects of cloud computing in distributed environments with multiple geographically dispersed data centers, while jointly managing storage, computing and networking resources.

Figure: Overview of the ADRENALINE Testbed v4.0.

The main deployment of the ADRENALINE Testbed (shown in the figure above) has been designed to capture several observations: new IoT and 5G services require the orchestration of heterogeneous resources in placement-constrained environments; network operators will deploy data centers of varying sizes in different locations, interconnected by transport networks; and a common operational methodology should be adopted regardless of data-center size. The different parts of the testbed represent the main network segments of an end-to-end scenario, and its main components are:

A fixed/flexi-grid DWDM core network with white-box ROADM/OXC nodes and software-defined optical transmission (SDOT) technologies to deploy sliceable bandwidth-variable transceivers (S-BVTs) and programmable optical systems (EOS platform). The optical core network includes a photonic mesh network with 4 nodes (2 ROADMs and 2 OXCs) and 5 bidirectional amplified DWDM optical links of up to 150 km (610 km of G.652 and G.655 optical fiber deployed in total). The S-BVT implements flexible, programmable (SDN-enabled) optical transmission systems based on modular transceiver architectures. Key advanced functionalities can be enabled and tested to fulfill the dynamic requirements and flexibility challenges of future optical networks, including spectral manipulation and rate/distance adaptability for optimal spectrum/resource usage, as each transceiver module is capable of generating a multi-format, variable rate/distance data flow (slice). From the control-plane perspective, the optical core network follows the SDN paradigm. The photonic mesh network nodes (i.e., ROADMs and OXCs) are controlled with an active stateful PCE (AS-PCE) on top of a distributed GMPLS control plane for path computation and provisioning. The AS-PCE acts as the single interfacing element towards the T-SDN orchestrator. By means of SDN agents, the S-BVTs can be programmed and (re)configured to adaptively transmit over a suitable optical network path. In addition, advanced (self-)performance monitoring is available on demand in the platform.
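As an illustration, the following Python sketch shows how an S-BVT slice might be programmed through its SDN agent. The REST endpoint, data model and parameter names are hypothetical, chosen only to mirror the flexi-grid parameters described above; the actual agent interface in the testbed may differ.

```python
import requests

# Hypothetical S-BVT SDN-agent endpoint and data model; the real agent
# interface in the ADRENALINE testbed may use a different schema.
SBVT_AGENT = "http://sbvt-agent.example:8080/restconf/config/sbvt:transceiver/slice/1"

slice_config = {
    "slice": {
        "slice-id": 1,
        "central-frequency-thz": 193.4,   # DWDM channel on the flexi-grid
        "slot-width-ghz": 37.5,           # flexi-grid slot (n x 6.25 GHz)
        "modulation-format": "DP-QPSK",   # format matched to the path's reach
        "symbol-rate-gbaud": 25,
    }
}

resp = requests.put(SBVT_AGENT, json=slice_config,
                    auth=("admin", "admin"), timeout=10)
resp.raise_for_status()
print("S-BVT slice programmed:", resp.status_code)
```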

A packet transport network for the edge (access) and metro segments, providing traffic aggregation and switching of Ethernet flows with QoS, and alien-wavelength transport towards the optical core network. It is based on cost-effective OpenFlow switches deployed on COTS hardware using Open vSwitch (OVS). There are a total of ten OpenFlow switches distributed across the edge (access) and metro (aggregation) network segments. The edge packet transport network is composed of four edge nodes (providing external connectivity to radio equipment such as base stations and IoT access gateways) and two OpenFlow switches located in the central offices (COs). The edge nodes are lightweight industrial servers based on Intel Next Unit of Computing (NUC) devices, since they have to fit in cell-site or street cabinets. The metro packet transport network is composed of four OpenFlow switches. The two nodes connected to the optical core network are OVS-based packet switches equipped with 10 Gb/s tunable XFP transponders carried as alien wavelengths. Both the edge and metro segments are controlled with two OpenDaylight (ODL) SDN controllers using OpenFlow.
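For example, a QoS forwarding rule could be pushed to one of these switches through the classic OpenDaylight RESTCONF flow-programming interface, along the lines of the sketch below; the node, table, port and address values are illustrative, and the exact endpoint depends on the ODL release deployed.

```python
import requests

# Classic OpenDaylight RESTCONF flow-programming endpoint; node and
# port identifiers here are illustrative, not the testbed's actual ones.
ODL = "http://odl-controller:8181"
URL = (ODL + "/restconf/config/opendaylight-inventory:nodes/"
       "node/openflow:1/table/0/flow/100")

flow = {
    "flow-node-inventory:flow": [{
        "id": "100",
        "table_id": 0,
        "priority": 200,
        "match": {
            # Match IPv4 traffic towards an example subnet
            "ethernet-match": {"ethernet-type": {"type": 0x0800}},
            "ipv4-destination": "10.10.10.0/24",
        },
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{
                "order": 0,
                "output-action": {"output-node-connector": "2"},
            }]},
        }]},
    }],
}

resp = requests.put(URL, json=flow, auth=("admin", "admin"), timeout=10)
resp.raise_for_status()
print("Flow installed:", resp.status_code)
```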

A distributed core and edge cloud platform for the deployment of VNFs and virtualized application functions (VAFs). The core cloud infrastructure is composed of a core-DC with high-performance computing (HPC) servers and an intra-DC packet network with alien-wavelength transport towards the optical core network. The edge cloud infrastructure is composed of micro-DCs in the edge nodes and small-DCs in the COs. In total, the distributed core and edge cloud platform comprises one core-DC, two small-DCs, and four micro-DCs, leveraging virtual machine (VM) and container-based technologies to offer the appropriate compute resources at each network location. Specifically, VM-centric host virtualization, extensively studied in the scope of large data centers, is used for the core-DC and small-DCs, while container-based technology, less secure but lightweight, is used for the micro-DCs. The core-DC is composed of three compute nodes (HPC servers with a hypervisor to deploy and run VMs), and each small-DC has one compute node. The four micro-DCs are integrated in the edge nodes, together with the OpenFlow switch. The intra-DC packet network of the core-DC is composed of four OpenFlow switches, also deployed on COTS hardware with OVS. Two of the four OpenFlow switches are equipped with a 10 Gb/s tunable XFP transponder connecting to the optical core network as alien wavelengths. The four OpenFlow switches are controlled by an SDN controller running ODL, responsible for intra-DC network connectivity. The distributed cloud computing platform is controlled using three OpenStack controller nodes: one for the compute nodes (computing, image and networking services) of the core-DC, another for the two compute nodes of the small-DCs, and the last for the four compute nodes of the micro-DCs.
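As an illustration of how a tenant VM could be requested from one of these OpenStack controllers, the following sketch uses the openstacksdk client; the cloud entry, image, flavor and network names are placeholders, not the testbed's actual configuration.

```python
import openstack

# Connect to a VIM (e.g. the core-DC OpenStack controller); the cloud
# entry "core-dc" must exist in clouds.yaml, and all names below are
# placeholders for illustration only.
conn = openstack.connect(cloud="core-dc")

image = conn.image.find_image("ubuntu-20.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("tenant-net")

# Boot a VM that could host a VNF on one of the HPC compute nodes.
server = conn.compute.create_server(
    name="vnf-instance-1",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print("VM state:", server.status)
```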

An SDN/NFV control and orchestration system providing global orchestration of the multi-layer (packet/optical) network resources and the distributed cloud infrastructure resources. It is composed of a cloud orchestrator, a transport SDN orchestrator, and an NFV network service platform (i.e., NFV orchestrator and VNF managers). The cloud orchestrator sits on top of the multiple DC controllers to deploy general cloud services across the distributed DC infrastructure (micro, small, core) resources for multiple tenants. Specifically, the cloud orchestrator drives the creation/migration/deletion of VMs/containers (computing service), the storage of disk images (image service), and the management of the VM/container network interfaces (networking service) on the required DCs for each tenant. The transport SDN orchestrator acts as a unified transport network operating system (or controller of controllers) that allows the control, at a higher level of abstraction, of heterogeneous network technologies, regardless of the specific control-plane technology employed in each domain, through the use of the common Transport API. The Transport API abstracts a set of control-plane functions used by an SDN controller. This abstraction enables network virtualization, that is, partitioning the physical infrastructure to dynamically create, modify or delete multiple co-existing virtual tenant networks (VTNs), independent of the underlying transport technology and network protocols. The SDN orchestrator is also responsible for presenting to the tenants an abstracted topology of each VTN (i.e., network discovery) and for enabling the control of the virtual network resources allocated to each VTN as if they were real. The architecture is based on Application-Based Network Operations (ABNO). The ETSI NFV Management and Orchestration (MANO) architectural framework identifies three functional blocks: the Virtualized Infrastructure Manager (VIM), the NFV Orchestrator (NFVO), and the VNF Manager (VNFM). The VIM is responsible for controlling and managing the NFVI virtualized compute, storage and networking resources (e.g., an OpenStack controller). The VNFM is responsible for the lifecycle management (i.e., creation, configuration, and removal) of VNF instances running on top of virtual machines or containers. Finally, the NFVO has two main responsibilities: the orchestration of NFVI resources across multiple VIMs (resource orchestration) and the lifecycle management of network services (network service orchestration). Open Source MANO (OSM) and SONATA are used as NFVOs. The NFVO is also responsible for the dynamic lifecycle management (provisioning, modification and deletion) of network slices. Each network slice is composed of virtual resources (VTN, computing and storage, and VNFs) that exist in parallel, isolated, for different tenants (e.g., vertical industries, virtual operators). The NFVO provides per-tenant programmability of the network slices and exposes an abstracted view of each slice's virtual resources to its tenant.
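To make the Transport API interaction concrete, the sketch below issues a TAPI connectivity-service request to the transport SDN orchestrator, the kind of call a tenant or the NFVO could use to set up a VTN link. The RESTCONF path, service-interface-point UUIDs and capacity encoding are illustrative and depend on the TAPI version actually exposed by the orchestrator.

```python
import requests
import uuid

# Illustrative transport SDN orchestrator endpoint exposing the ONF
# Transport API (TAPI); the path and SIP UUIDs below are placeholders.
ORCH = "http://t-sdn-orchestrator:8080"
URL = (ORCH + "/restconf/data/tapi-common:context/"
       "tapi-connectivity:connectivity-context")

service = {
    "tapi-connectivity:connectivity-service": [{
        "uuid": str(uuid.uuid4()),
        # Endpoints reference service-interface-points discovered from
        # the abstracted VTN topology (placeholder UUIDs here).
        "end-point": [
            {"local-id": "ep0",
             "service-interface-point": {
                 "service-interface-point-uuid": "sip-edge-1"}},
            {"local-id": "ep1",
             "service-interface-point": {
                 "service-interface-point-uuid": "sip-core-dc"}},
        ],
        "requested-capacity": {
            "total-size": {"value": 10, "unit": "GBPS"}},
    }],
}

resp = requests.post(URL, json=service, auth=("admin", "admin"), timeout=10)
resp.raise_for_status()
print("Connectivity service requested:", resp.status_code)
```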

The testbed is prepared for interconnection with other demonstrators, either external facilities (such as the 5GBarcelona initiative, https://5gbarcelona.org/) or additional internal CTTC testbed facilities, such as those providing wireless HetNet and backhaul capabilities (the EXTREME Testbed® and the LENA LTE-EPC protocol stack emulator) and wireless sensor networks.