Just as the evolution of the smartphone was enabled by the transition from 3G to 4G, use cases for connected devices will evolve with 5G. Connected appliances, connected cars, connected factories and even connected cities will each require the low latency and localized processing power of 5G to function. For communications service providers (CSPs), meeting the challenges of 5G isn’t simply a matter of building bigger networks. Instead, they will need to break down their big networks into smaller, disaggregated components that sit closer to these connected devices.
In this new network model, the edge becomes increasingly important. Unlike the core-centric network model of the past, 5G networks will require much of the heavy lifting of processing and analytics to be performed closer to the user, running on a very small footprint. In the case of smart factories, the edge could be multi-access edge computing (MEC) within a hosted private 5G network. For online gaming, this could mean adding compute capacity to the edge of the telecom network. In both cases, CSPs will need to deploy these edge resources quickly and cost-effectively if they hope to monetize 5G opportunities.
A purpose-built, cloud-native platform for the edge
Container-based cloud platforms are the logical path forward for 5G networks. Containers provide a highly scalable, easier-to-manage foundation for disaggregating network functions. Software is decoupled from hardware by packaging network functions as containerized network functions (CNFs), orchestrated as scalable building blocks by Kubernetes. Standards-based tools allow these components to be remotely configured and managed from a central point of control. Yet, even within cloud environments, edge applications have unique considerations for security, performance, durability, and management. Adding to the complexity, an edge cloud still comprises many servers, only they are distributed across a wide geographic area rather than all sitting in a single data center.
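To make the CNF-on-Kubernetes model concrete, here is a minimal sketch that uses the Kubernetes Python client to declare a network function as a Deployment. The image name, namespace and resource sizes are placeholders rather than anything from StarlingX or a specific CNF vendor; in practice, CNFs are usually packaged and delivered as Helm charts or operators rather than created directly from a script.

```python
from kubernetes import client, config

# Load cluster credentials (use load_incluster_config() when running inside the cluster).
config.load_kube_config()

# Declare a hypothetical user-plane CNF as a Kubernetes Deployment.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="upf-cnf", labels={"app": "upf-cnf"}),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "upf-cnf"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "upf-cnf"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="upf",
                        image="registry.example.com/upf:1.0",  # placeholder CNF image
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "2", "memory": "4Gi"},
                            limits={"cpu": "2", "memory": "4Gi"},
                        ),
                    )
                ]
            ),
        ),
    ),
)

# Create the Deployment in a placeholder namespace; Kubernetes then schedules and scales it.
client.AppsV1Api().create_namespaced_deployment(namespace="cnf-demo", body=deployment)
```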
Because the edge has unique requirements, StarlingX was built for the edge first. StarlingX is an open-source, telco-grade cloud environment optimized for distributed edge applications, and it is gaining industry momentum. It is based on Kubernetes, OpenStack and other open-source cloud technologies, and it is designed specifically to meet the low-latency and small-footprint requirements of the edge. In fact, StarlingX is the foundational cloud software used in the real-time platform built into the Infrastructure (INF) project of the O-RAN Alliance, enabling software developers to develop, test and run O-RAN-compliant RAN deployments.
Wind River Studio is a commercial implementation of StarlingX that includes a complete cloud platform, automation and orchestration tools, and real-time analytics. Vodafone recently announced that it has selected Wind River as a key supplier for its Open RAN rollout, starting in the UK. With Wind River Studio, CSPs can safely and quickly deploy an edge cloud environment to support use cases such as MEC, IoT and vRAN on a proven, fully supported commercial platform. Earlier this month, Dell Technologies announced that it has jointly developed a reference architecture with Wind River for CSPs, providing a complete, end-to-end edge solution featuring Wind River Studio, Dell Technologies PowerEdge servers and PowerSwitch networking. This reference architecture delivers everything CSPs need to quickly deploy, scale and manage resources anywhere in their 5G network.
Achieving lower costs and less complexity
There are many benefits to deploying edge services with StarlingX, but let’s start with the big one: cost. In a recent report, Enterprise Strategy Group (ESG) found that moving to a disaggregated vRAN infrastructure running Wind River Studio’s commercial distribution of StarlingX saved CSPs up to 75% per node versus traditional appliance-based RAN systems. The savings were slightly lower (67%) for dual-redundancy nodes, but still impressive.
StarlingX reduces operational overhead by providing single-pane-of-glass management and zero-touch provisioning for the entire edge infrastructure. CSPs can manage, configure and automate upgrades and rollbacks of edge servers through standards-based Redfish APIs. This greatly accelerates deployment, enabling CSPs to take a site from bare metal to fully functioning edge capabilities from day one. In developing their reference architecture, Dell Technologies and Wind River validated that server provisioning and management tasks could be automated using these Redfish APIs, which are embedded in the integrated Dell Remote Access Controller (iDRAC) included with every Dell PowerEdge server. This gives operators a commercially supported solution that automates the deployment and management of nodes out to the far edge of the network.
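As a rough illustration of how this kind of provisioning automation works, the sketch below queries a server’s Redfish service and issues a standard reset action from Python. The BMC address and credentials are placeholders, and this is generic DMTF Redfish usage, not the actual Wind River or Dell orchestration code.

```python
import requests
from requests.auth import HTTPBasicAuth

BMC = "https://10.0.0.10"                   # placeholder iDRAC/BMC address
AUTH = HTTPBasicAuth("admin", "password")   # placeholder credentials

# Discover the managed systems exposed by the Redfish service.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems["Members"]:
    system_url = f"{BMC}{member['@odata.id']}"
    system = requests.get(system_url, auth=AUTH, verify=False).json()
    print(system["Id"], system.get("PowerState"), system.get("Model"))

    # Power-cycle the node as part of provisioning, using the standard Redfish reset action.
    requests.post(
        f"{system_url}/Actions/ComputerSystem.Reset",
        json={"ResetType": "ForceRestart"},
        auth=AUTH,
        verify=False,
    )
```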
StarlingX also generates real-time network analytics using machine learning tools, which can be used to optimize network configuration and performance, improve capacity planning and even enable new services.
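As a simple illustration of the capacity-planning idea, the sketch below fits a trend to hypothetical per-node utilization samples and projects demand a day ahead. The data, threshold and method are invented for the example and do not represent StarlingX’s actual analytics pipeline.

```python
import numpy as np

# Hypothetical hourly CPU-utilization samples (%) from one edge node; in a real
# deployment these would come from the platform's telemetry collectors.
samples = np.array([42, 45, 44, 48, 51, 50, 55, 58, 57, 61, 64, 63], dtype=float)
hours = np.arange(len(samples))

# Fit a simple linear trend and project utilization 24 hours ahead.
slope, intercept = np.polyfit(hours, samples, deg=1)
future_hour = hours[-1] + 24
forecast = slope * future_hour + intercept

print(f"Current utilization: {samples[-1]:.0f}%")
print(f"Projected utilization in 24h: {forecast:.0f}%")
if forecast > 80:  # illustrative capacity threshold
    print("Recommend scheduling additional capacity at this edge site.")
```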
StarlingX is an open-source cloud platform built to meet the latency, cost, reliability and management requirements of the far edge. It has an active community, and contributors are continuously evolving the platform to meet the requirements of new edge use cases. Scaling from a single node to thousands of nodes, StarlingX is an advanced edge solution that is being adopted and supported by industry leaders such as Vodafone, Wind River and Dell Technologies.
A flood of 5G applications is just around the corner. Give yourself the edge you need to capitalize on 5G revenue opportunities by taking a closer look at StarlingX. If you are already using StarlingX, the community would love to hear from you: please take the StarlingX user community survey.
Source: delltechnologies.com