Use Cases

Barefoot Networks uses programmability to enable enterprises, mega-scale data centers, and telco providers to introduce new functions and features without compromising performance.

Programmability allows customers to:

  • Enhance visibility through new diagnostics, in-band per-packet telemetry, and OAM
  • Achieve the design intent through real-time measurements and the detection and correction of anomalies
  • Increase network reliability by removing unused features and protocols, simplifying network operations, and focusing network resources on what is important
  • Accelerate innovation by enabling new applications on any network

World of P4 Advanced Apps

P4 allows customers to implement a new class of high-performance data-plane applications and deploy them on Tofino with unprecedented agility. Such applications can be taken from a pre-built library provided by Barefoot Networks or written from scratch in P4 and compiled. Barefoot calls this the “World of Advanced Apps,” and the diagram below illustrates some of the main use cases for which such applications can be developed.

Use cases

Advanced Network Telemetry

  • 1 “Which path did my packet take?”
  • 2 “Which rules did my packet follow?”
  • 3 “How long did it queue at each switch?”
  • 4 “Who did it share the queues with?”

With its Advanced Network Telemetry, Barefoot Tofino can answer all four questions for the first time. At full line rate. With nanosecond accuracy.

Data center networks have evolved over the years in both scale and complexity. Large data center networks serving highly volatile workloads are deployed using multi-tier Clos topologies with multiple paths between endpoints to scale bandwidth. In addition, modern data centers leverage network virtualization to automate L2 and L3 topologies with a rich set of L4-L7 services that can be built in seconds to connect physical servers, virtual machines, and containers.

In such a dynamic networking environment, traditional network monitoring techniques like SNMP, which fetch state from individual network elements through the control plane, are either too restrictive or too slow. Similarly, NetFlow and synthetic probes are not accurate enough to detect issues caused by short-lived events or microbursts that can have a serious impact on services and applications. Traffic mirroring and physical TAPs are also infeasible due to the sheer volume of traffic and the lack of metadata and history, which makes tracking and correlating events impossible, especially at large scale. Last but not least, the upfront cost and TCO of a Network Packet Broker (NPB) solution can be considerable.

“In-band Network Telemetry” (INT) enables collection of end-to-end, real-time state information directly in the datapath. A source end-point embeds instructions in packets listing the types of network state to be collected from the network elements. Each network element inserts the requested network state in the packet as it traverses the network. A P4 program can be used as a natural way to express the kind of packet header parsing and modifications required for INT.

Collection of data can now occur on the actual traffic, giving operators the ability to observe and collect real-time, end-to-end network state across virtual and physical networks. This opens limitless possibilities for monitoring the data center, allowing the network team to capture and describe transient issues that arise from performance bottlenecks, network failures, or configuration errors, and to answer the four ground-truth questions that network operators face today.
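
As a rough illustration of the mechanism, the Python sketch below models the INT concept only: a source embeds telemetry instructions, each hop appends the requested state, and a sink extracts the per-hop records. It is not the INT wire format or a P4 program, and all field names and values are hypothetical.

```python
# A minimal Python model of the INT idea (not the INT wire format or a P4 program):
# the source lists the state it wants, every hop appends that state, and the sink
# strips the accumulated per-hop records before delivering the payload.

INT_FIELDS = ("switch_id", "ingress_ts_ns", "egress_ts_ns", "queue_depth")  # hypothetical names

def int_source(payload, requested=INT_FIELDS):
    """Embed the telemetry instructions in the packet at the source end-point."""
    return {"int_instructions": list(requested), "int_stack": [], "payload": payload}

def int_transit(packet, switch_state):
    """Each network element pushes only the metadata the source asked for."""
    record = {field: switch_state[field] for field in packet["int_instructions"]}
    packet["int_stack"].append(record)
    return packet

def int_sink(packet):
    """Extract the per-hop telemetry and hand the original payload onward."""
    return packet["int_stack"], packet["payload"]

# Example: one packet crossing two hypothetical switches.
pkt = int_source(b"dns query")
pkt = int_transit(pkt, {"switch_id": 1, "ingress_ts_ns": 100, "egress_ts_ns": 2400, "queue_depth": 12})
pkt = int_transit(pkt, {"switch_id": 7, "ingress_ts_ns": 9000, "egress_ts_ns": 9050, "queue_depth": 0})
hops, payload = int_sink(pkt)
for hop in hops:
    # path (switch_id), per-hop latency (egress - ingress), and queue occupancy
    print(hop["switch_id"], hop["egress_ts_ns"] - hop["ingress_ts_ns"], hop["queue_depth"])
```

Each per-hop record carries the same kind of metadata listed later for the AT&T deployment (hop ID, queue size, ingress and egress timestamps), which is what lets an operator reconstruct the path, per-hop latency, and queue sharing for every monitored packet.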

Advanced Network Telemetry use-case: AT&T

AT&T leads innovation in the service provider space. In the AT&T vision, network transformation is the key element, and can be realized through the disaggregation and standardization of networking components in an open architecture.

Tofino's programmability allowed AT&T to fulfill two key criteria for the project:

First, time to market. Full programmability meant that AT&T controlled its own destiny in writing its new data plane, without the risk of relying on a fixed-function silicon vendor to do the data plane programming work.

Second, programmability provided the foundation on which Inband Network Telemetry could be built. A major service provider like AT&T requires precise control over the types of traffic and types of events it monitors on the network. At this point, only the programmable Tofino chip offers the ability to monitor, at line rate, any packet header the operator chooses. Just as important, Tofino gives the operator the ability to immediately add handling for new header types in the future.

By fulfilling these two criteria, Barefoot Networks Tofino and Barefoot Networks Inband Network Telemetry play a key role in this transformation at AT&T, creating a network infrastructure on which AT&T can offer major new services.

Programmability and adaptation help companies like AT&T keep up with the pace of innovation and make technology transitions economically feasible (see ONS Keynote: The Next Generation of Network Software).

Here is a customer quote:

“AT&T put [Tofino] into working production in less than three months. If this were like a medical conference, I would be talking about how we’re moving from x-rays to MRIs. That’s how big a deal this is.”

Andre Fuetsch - President AT&T Labs and Chief Technology Officer at AT&T

AT&T Production Deployment

AT&T Production: Barefoot Components

  • Forwarding Plane

    • switch.p4
    • L2/L3/MPLS features
    • In-Band Network Telemetry (INT)
    • 6.5 Tb/s Tofino - P4 Programmable
  • System

    • Edge-core Wedge 100BF-65X
    • SnapRoute FlexSwitch
  • INT Metadata

    • Queue size
    • Hop ID
    • Ingress Timestamp
    • Egress Timestamp

AT&T use-case in the News

  • AT&T White Box a Disruptive Force
  • AT&T Runs Open Source White Box Switch in its Live Network

Layer 4 Load Balancer

Dramatic growth in data traffic has led to increased use of network service appliances and servers in data centers. Network switches and routers have evolved to support multi-terabit capacity; however, appliance and server capacity remains limited to a few tens of gigabits at best, far below network throughput. To compensate, network operators end up allocating considerable resources to hardware and software load balancing for scale-out. A scaled-out L4 load-balancing architecture is today a mainstream design choice in modern on-premises and hybrid data centers. The result is unnecessary infrastructure cost and the burden of life-cycle management for the load-balancing infrastructure.

Barefoot Networks layer-4 load balancing, powered by the Tofino chip and P4 programmability, bridges the performance gap between multi-terabit switches and gigabit servers and appliances. Until now, bridging this gap has required operators to deploy large numbers of load-balancer appliances. With Tofino, load balancing can be done inside the switch, providing multi-terabit traffic distribution for layer 3 and 4 services and applications. The Barefoot solution supports multiple load-balancing mechanisms, resilient hashing, and flexible allocation of hardware resources to load-balance millions of connections.

Tofino Load-Balancer Architecture

Layer-4 load balancing is essentially a mapping function from a connection (i.e., the source IP address, the virtual IP (VIP) address, the protocol type, and the L4 port numbers) to a server DIP (direct IP). A DIP pool is managed for each VIP, that is, for each data-center service, and the VIP-to-DIP-pool mappings are maintained in a dedicated VIP table.
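
As a rough sketch of that mapping function, the Python example below hashes a connection's 5-tuple over the DIP pool configured for its VIP. It is a model under assumed names and addresses, not Barefoot's implementation.

```python
import hashlib

# Hypothetical VIP table: each VIP (a data-center service) maps to its DIP pool.
VIP_TABLE = {
    ("203.0.113.10", 80): ["10.0.0.1", "10.0.0.2", "10.0.0.3"],
}

def five_tuple_hash(src_ip, src_port, vip, vip_port, proto):
    """Deterministic hash of the connection's 5-tuple."""
    key = f"{src_ip}:{src_port}:{vip}:{vip_port}:{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def select_dip(src_ip, src_port, vip, vip_port, proto):
    """Map a connection to a DIP by hashing its 5-tuple over the VIP's DIP pool."""
    pool = VIP_TABLE[(vip, vip_port)]
    return pool[five_tuple_hash(src_ip, src_port, vip, vip_port, proto) % len(pool)]

print(select_dip("198.51.100.7", 55123, "203.0.113.10", 80, "tcp"))
```

Note that this naive modulo mapping reshuffles existing connections whenever the pool size changes, which is exactly the per-connection-affinity problem discussed below.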

Hairpinning traffic through a dedicated appliance or a service rack introduces sub-optimal traffic forwarding. In such deployments, physical and virtual load balancers often struggle with performance as demand increases (see Figure 1). Enabling and scaling out load-balancing resources in a multi-tenant data center is especially challenging.

Figure 1

With Tofino, a large number of software-based load-balancer servers can be replaced by a single Tofino-based switch, reducing the cost of load balancing by multiple orders of magnitude, with a distributed architecture and an optimized traffic path (see Figure 2).

Figure 2

A good load balancer implementation must always map a connection to the same server, even if the pool of servers changes or if the load is spread differently across the pool. This fundamental property of a load-balancer is called per-connection affinity. The challenge is that a data-center load balancer must keep track of millions of connections simultaneously.

Until recently, it was not possible to implement a load balancer with per-connection affinity on a commercial, off-the-shelf switching ASIC, because high-performance switching ASICs typically cannot maintain L4 connection state with a connection-affinity guarantee. The Barefoot Networks Tofino ASIC provides the resources and primitives to guarantee connection affinity even in the presence of concurrent DIP pool changes and millions of connections, while allowing for low latency and terabit speeds.
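
One common way to provide this guarantee, shown here as a hedged Python sketch rather than the scheme Tofino's tables and registers actually implement, is to pin each connection to the DIP chosen on its first packet in a connection table, so that later DIP-pool changes affect only new connections.

```python
import hashlib

def stable_hash(conn):
    """Deterministic hash of a 5-tuple."""
    return int.from_bytes(hashlib.sha256(repr(conn).encode()).digest()[:8], "big")

# 5-tuple -> DIP, analogous to a connection table kept in the data plane.
connection_table = {}

def lookup_or_assign(conn, pool):
    """Pin each connection to the DIP chosen on its first packet."""
    dip = connection_table.get(conn)
    if dip is None or dip not in pool:           # new connection, or its DIP was removed
        dip = pool[stable_hash(conn) % len(pool)]
        connection_table[conn] = dip
    return dip

pool = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
conn = ("198.51.100.7", 55123, "203.0.113.10", 80, "tcp")
first = lookup_or_assign(conn, pool)
pool.append("10.0.0.4")                          # the DIP pool changes
assert lookup_or_assign(conn, pool) == first     # the existing connection keeps its DIP
```

The hard part, as the text notes, is doing this for millions of simultaneous connections at terabit speed, which is what Tofino's on-chip resources and primitives are designed to support.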

Benefits of a Tofino-Based Load-Balancer

  • Multi-terabit design for the world’s most demanding environments
  • Embedded endpoint health checks with zero runtime reliance on the control plane
  • Guaranteed sub-second detection of pool-member failure and traffic redirection
  • Flexible model powered by P4 programmability allows you to support multiple additional use-cases on top of load balancing, such as DDoS mitigation and DNS caching

Tofino Load-Balancer Use Cases

  • Scaling specialized web services such as SSL accelerators, HTTP compression, and others
  • Scaling of security services such as intrusion prevention systems, intrusion detection systems, web application firewalls, and more
  • High performance video distribution / caching

In-Network DDoS Detection

DDoS (distributed denial of service) attacks make a service, machine, or network inaccessible to legitimate users by overloading their resources (network, CPU, memory, and so on) using requests generated from distributed sources.

DDoS attacks can cost companies millions of dollars by taking critical services down and leading to further attacks such as data breaches. The complexity and scale of the DDoS threat grows each year; attacks can originate from IoT devices, routers, and the cloud, and can be directed at organizations, government agencies, and infrastructure services such as DNS.

DDoS detection is inherently hard because the traffic pattern looks similar to legitimate packets, so signature-based detection is not effective. Requests come from many source IPs (either real or spoofed), so filtering based on rate or source does not work either. A distinct characteristic of such attacks is the presence of many connections from many source IPs, each sending one or a few packets.

Figure 1 describes a typical DDoS detection and mitigation solution. The solution has a number of out-of-band DDoS detection appliances that monitor the traffic and redirect suspicious flows to a stateful firewall. In such a design, DDoS detection cannot monitor all of the incoming network traffic because of scale and cost, so the operator must set static mirroring rules on the edge routers to mirror only a portion of the traffic. For example, an operator who wanted to protect the DNS service would mirror the DNS UDP traffic toward the DDoS detection boxes.

Consequently, with heavily distributed and complex attacks, the infrastructure must either scale out to examine terabits of traffic per second and several million connections, or switch to a more selective mode in which it monitors a much smaller fraction of the traffic with lower accuracy.

Figure 1

Barefoot Networks Tofino can calculate the number of unique connections (that is, 5-tuple flows) crossing each Tofino-based device using a method called Approximate Cardinality Counters (ACC). Two approaches to DDoS detection are possible:

  • 1 The control plane can periodically fetch state from the hardware and estimate the number of unique connections over a specified time interval (control-plane polling). If the estimated number of unique connections exceeds a predefined threshold, the control plane can flag a DDoS attack and mirror a copy of the traffic to the distributed firewall, as illustrated in Figure 2.
  • 2 To improve detection speed without imposing overhead on the control plane, a Tofino implementation can also estimate the count and compare it against a threshold directly in the data plane. In this case, the data plane signals the control plane when such a detection occurs (data-plane push). A sketch of the underlying counting idea follows Figure 2.

Figure 2
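
To make the counting idea concrete, here is a hedged Python sketch of one standard probabilistic cardinality estimator (a HyperLogLog-style register array) checked against a threshold. It shows how unique 5-tuples can be estimated in a few kilobytes of fixed state, but it is not Barefoot's ACC algorithm, and the threshold and simulated traffic are made up.

```python
import hashlib

# Probabilistic connection counting (HyperLogLog-style registers): estimate the
# number of unique 5-tuples in a small, fixed amount of state and compare it
# against a threshold. Illustration only; not Barefoot's ACC algorithm.

M = 1 << 12                        # 4096 registers
registers = [0] * M
ALPHA = 0.7213 / (1 + 1.079 / M)   # standard HyperLogLog bias-correction constant

def observe(five_tuple):
    """Fold one packet's 5-tuple into the register array."""
    h = int.from_bytes(hashlib.sha256(repr(five_tuple).encode()).digest()[:8], "big")
    idx = h & (M - 1)                        # low 12 bits pick a register
    rest = h >> 12                           # remaining 52 bits
    rank = 52 - rest.bit_length() + 1        # position of the leftmost 1-bit
    registers[idx] = max(registers[idx], rank)

def estimate():
    """Estimate the number of distinct 5-tuples seen so far."""
    return ALPHA * M * M / sum(2.0 ** -r for r in registers)

THRESHOLD = 50_000                  # hypothetical "normal" unique-connection count

# Simulate an attack: many distinct source IP/port combinations hitting one DNS VIP.
for i in range(80_000):
    observe(("198.51.100.%d" % (i % 250), 1024 + i, "203.0.113.10", 53, "udp"))

if estimate() > THRESHOLD:
    print("possible DDoS: ~%d unique connections" % estimate())
```

On Tofino the hashing and registers live in the data plane; the threshold comparison can then be done there as well (data-plane push) or by the control plane reading the state periodically (control-plane polling), as described above.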

Benefits of In-Network DDoS detection

  • A Tofino implementation guarantees high scalability and line-rate performance under any type of attack with minimal consumption of on-chip memory and resources.
  • In-network DDoS detection can be implemented in Tofino with high accuracy and a negligible probability of false positives.
  • P4 programmability gives customers the flexibility to customize DDoS detection methods and mitigation actions.
  • Granular statistics allow customers to quickly identify which applications and services are under attack.
  • When compared with a DDoS solution using NetFlow, a Tofino-based approach is multiple orders of magnitude faster in detecting a DDoS attack (tens of milliseconds vs. tens of seconds).