Resource Intelligence Platform diagram

Virtualized infrastructures are becoming increasingly critical in CSPs' deployments as mobile networks grow and expand from core to edge, and as 5G implementations begin. In contrast to generic IT workloads, NFV has stringent KPIs, such as deterministic performance, high throughput, and low latency, which often force operators to use the underlying infrastructure in inflexible and inefficient ways. For instance, dataplane VNFs keep the servers where they run constantly in a high power state, as if they were always operating at peak demand. Similarly, operators typically deploy critical functions in isolation, reserving a large portion of server resources upfront to prevent contention from other services and the resulting Service Level Objective (SLO) violations. In this way, however, a large portion of the infrastructure remains unutilized.

Intracom Telecom's NFV Resource Intelligence addresses the above challenges by employing AI to realize autonomous service assurance. It decides the ideal distribution and configuration of resources fully automatically, in a closed-loop fashion, and dynamically, under any traffic or colocation condition. In this way, SLOs are always maintained and resources are used cost-efficiently, solving a highly complex challenge that goes beyond human expertise. Intracom Telecom's NFV Resource Intelligence guarantees optimal execution of the virtualized Network Services, and optimal utilization of the infrastructure where they are running.

Energy Consumption icon

Reduces the energy consumption of DPDK-based packet-processing VNFs by throttling their power according to their actual traffic load

Slicing icon

Protects high-priority services from "noisy neighbors" by carefully slicing and isolating critical hardware resources leveraging advanced hardware technologies

Configuration icon

Automatically discovers optimal resource configurations that deliver certain performance levels for one or more VNFs, specified by the user

Infrastructure Utilization icon

Increases infrastructure utilization by enabling denser VNF placement (colocation) without introducing contention or SLO violations

Autonomous service icon

Realizes autonomous service assurance leveraging AI and closed-loop control.

Observability icon

Delivers maximum observability both for the VNFs and the platform, by exposing rich telemetry to the user via customized dashboards

Multiple logos of KVM platforms

Supports multiple types of VNFs: KVM virtual machines, native Linux applications, Docker containers, Kubernetes pods

Energy optimization for dataplane VNFs

Dataplane VNFs (e.g. those based on DPDK) have stringent KPIs, like low latency, zero packet loss, and high throughput. One of the techniques used to achieve them is polling, which keeps the servers where the VNFs run constantly in a high power state, as if they were operating at peak demand. The reason is that polling keeps the VNFs' CPUs constantly busy, running at the highest frequency all the time, even when there is zero traffic to process, and thus consuming the maximum possible power. For light, off-peak traffic periods this amounts to overprovisioning, because the VNFs could perfectly well operate at lower frequencies, consuming less power, without suffering packet drops or increased latencies.
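A back-of-envelope estimate shows why lowering the frequency matters so much. A minimal sketch, assuming the common first-order model for dynamic CPU power, P_dyn = C · V² · f, with supply voltage scaling roughly linearly with frequency (so P_dyn scales roughly with f³); real savings depend on the voltage/frequency table of the actual CPU, and the example frequencies below are illustrative, not taken from the experiments described here.

```python
# Back-of-envelope estimate of dynamic CPU power at a reduced frequency.
# Assumption: P_dyn = C * V^2 * f, with V roughly proportional to f,
# hence P_dyn ~ f^3. Static/uncore power is ignored in this sketch.

def relative_dynamic_power(f_target_ghz: float, f_max_ghz: float) -> float:
    """Dynamic power at f_target as a fraction of dynamic power at f_max."""
    return (f_target_ghz / f_max_ghz) ** 3

# Running a polling core at 1.2 GHz instead of 2.4 GHz (illustrative values):
print(round(relative_dynamic_power(1.2, 2.4), 3))  # 0.125
```

Under this simple model, halving the frequency cuts dynamic power to roughly an eighth, which is why idling a polling core at full speed is so wasteful.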

Note that existing middleware, like the Linux OS, fails to dynamically adjust the CPU frequencies of DPDK VNFs, as it is oblivious to the actual load a DPDK VNF carries at any time -- DPDK VNFs appear to the OS as always busy, at 100% CPU utilization.

Intracom Telecom's Resource Intelligence Platform features mechanisms that leverage exactly this knowledge, adapting CPU frequencies to the load a DPDK VNF has at any time, while always guaranteeing zero packet drops. This is realized with a fully automated, AI-powered closed-loop mechanism. In this way, the infrastructure operates at lower power during off-peak or light-traffic periods, without compromising VNF performance.
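The control loop described above can be sketched as follows. This is a minimal illustration, not the product's actual algorithm: it assumes a per-core "busy ratio" metric (the fraction of poll iterations that actually returned packets), which a DPDK application can derive from its poll counters, and all thresholds and frequencies are made-up values.

```python
# Sketch of a closed-loop frequency controller for a polling (DPDK-style)
# core. The busy ratio -- the fraction of poll iterations that returned
# packets -- is a load signal the OS cannot see, because a polling core
# looks 100% busy either way. Thresholds and P-states are illustrative.

AVAILABLE_KHZ = [1_200_000, 1_800_000, 2_400_000]  # assumed P-state table

def pick_frequency(busy_ratio: float) -> int:
    """Map the measured busy ratio to a target CPU frequency (kHz).

    Headroom is kept deliberately: the core steps down only when it is
    mostly idle-polling, so traffic bursts do not cause packet drops.
    """
    if busy_ratio < 0.2:        # mostly empty polls -> lowest frequency
        return AVAILABLE_KHZ[0]
    if busy_ratio < 0.6:        # moderate load -> middle frequency
        return AVAILABLE_KHZ[1]
    return AVAILABLE_KHZ[2]     # near saturation -> full speed

def apply_frequency(core: int, khz: int) -> None:
    """Apply via the Linux cpufreq sysfs interface
    (requires the 'userspace' governor and root privileges)."""
    path = f"/sys/devices/system/cpu/cpu{core}/cpufreq/scaling_setspeed"
    with open(path, "w") as f:
        f.write(str(khz))

# One control-loop tick during a simulated light-traffic period:
print(pick_frequency(0.05))  # 1200000
```

In practice such a controller runs periodically per core, and the AI component would tune the thresholds (or replace them entirely) so that the drop-free guarantee holds across traffic patterns.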

In experiments conducted with a prototype of the dataplane nodes of the Mobile Core (vEPC) of a Greek Tier-1 CSP, Intracom Telecom's Resource Intelligence Platform achieved a 14% reduction in the total server power consumed over a 24-hour period, without causing any SLO violation at all. In the best case (i.e. period of lightest, non-zero traffic during the night), the power reduction was 35%.

Energy Optimization Diagram

Edge colocation without performance compromises

In centralized data centers, CSPs have traditionally been using their infrastructure in conservative ways in order to meet the strict requirements of NFV services. As an example, latency-critical functions are often placed in isolation, reserving a large portion of server resources upfront, in order to prevent other services from using them and potentially introducing contention and SLO violations. In this way, however, a large portion of the infrastructure remains unutilized.

The transition from the centralized telco cloud to the distributed edge cloud will make things more complicated. It is expected that RAN and Core services will be running on the same edge servers, together with OTT applications (e.g. streaming services) and new types of edge services. This increased workload diversity introduces uncertainty about how all these applications will coexist on common platforms.

Using isolated deployments as a means to avoid contention implies a rather wasteful use of resources, which are obviously not abundant in the edge cloud. Therefore, colocation will be the only viable way forward for operators, but the challenge is how to ensure that these applications do not interfere with each other.

Intracom Telecom’s Resource Intelligence Platform increases the density of network functions running on an edge server, in a way that does not compromise their performance. It employs advanced server mechanisms to slice shared server resources in a fine-grained fashion and assign them to each network function individually, for its exclusive use. This eliminates contention and allows much denser workload placement. The Resource Intelligence Platform features AI algorithms that identify and allocate the ideal amount of resources for every network function, so each enjoys performance levels close or identical to those of its standalone execution.
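One widely used server mechanism for this kind of fine-grained slicing is last-level-cache partitioning (Intel RDT / Cache Allocation Technology), exposed on Linux through the resctrl filesystem. The sketch below shows how an exclusive cache slice could be expressed; the group name, way counts, and cache ID are illustrative assumptions, not details of the product.

```python
# Sketch: carving out exclusive L3-cache ways for a latency-critical VNF
# via the Linux resctrl interface (Intel RDT / CAT). Group names, way
# counts, and the cache ID below are illustrative assumptions.

def ways_mask(num_ways: int, offset: int = 0) -> int:
    """Contiguous bitmask of cache ways, as CAT requires."""
    return ((1 << num_ways) - 1) << offset

def l3_schemata(cache_id: int, mask: int) -> str:
    """One line of a resctrl 'schemata' file for the L3 resource."""
    return f"L3:{cache_id}={mask:x}"

# Give the critical VNF 8 exclusive ways and best-effort services the
# remaining 3 ways of an (assumed) 11-way L3 cache on cache ID 0:
print(l3_schemata(0, ways_mask(8)))     # L3:0=ff
print(l3_schemata(0, ways_mask(3, 8)))  # L3:0=700

# Applying it would look roughly like (root required):
#   mkdir /sys/fs/resctrl/vnf_critical
#   echo "L3:0=ff" > /sys/fs/resctrl/vnf_critical/schemata
#   echo <pid>     > /sys/fs/resctrl/vnf_critical/tasks
```

Similar per-function partitioning exists for memory bandwidth (MBA) and CPU pinning; the AI layer's job, as described above, is to pick the slice sizes so that each function keeps its standalone-level performance.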

In experiments conducted with a virtualized router, a virtualized forwarder (both accepting traffic through an Open vSwitch), and a couple of additional best-effort edge services, all colocated on the same edge server, Intracom Telecom’s solution automatically discovered and applied a resource allocation for all services such that the latency of both VNFs was reduced to that of their standalone execution (i.e. from 240-250 usec in the default case to 170-180 usec).

Edge colocation Diagram