Network traffic is exploding. Loads that were once heavy for a large enterprise or communications company are now the norm. Only a decade ago, a few hundred gigabits per data center was considered spectacular; today it is just another day at the office, with networks handling millions of concurrent connections while carriers move petabytes of data across continents daily. Specialized hardware designs have emerged to handle all of this.
At the heart of today's network infrastructure is the network controller, a single chip that translates routing-protocol decisions into the actual flow of packets across the network. But the network controller is more than supporting hardware for software-based routing: it fundamentally alters how data moves through the network.
The Hardware Foundation of Modern Routing
General-purpose CPUs excel at following a straight line, processing one thing at a time, and that serves many applications well. For network processing, however, the CPU becomes a bottleneck: thousands of packets arrive simultaneously, and a sequential processor simply cannot keep up. The network controller chip is optimized for exactly this workload. Its specialized silicon architecture provides many simultaneous processing units that handle many packets at the same time. One unit might be parsing packet headers, another performing a table lookup, and another applying quality-of-service policy.
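The division of labor described above can be illustrated in software. The sketch below is a loose analogy, not controller firmware: the packet format, routing table, and worker pool are all invented for the example, and a thread pool merely stands in for the chip's dedicated parallel units.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical routing table for illustration only.
ROUTES = {"10.0.0.0/8": "port1", "192.168.0.0/16": "port2"}

def parse_header(packet):
    # Assume a toy "dst=<prefix>" string format for this sketch.
    return packet.split("=", 1)[1]

def lookup(prefix):
    # Unknown destinations are dropped in this simplified model.
    return ROUTES.get(prefix, "drop")

def process(packet):
    # Each "processing unit" parses a header, then does a table lookup.
    return lookup(parse_header(packet))

packets = ["dst=10.0.0.0/8", "dst=192.168.0.0/16", "dst=172.16.0.0/12"]

# A software stand-in for parallel silicon: many packets in flight at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, packets))

print(results)  # ['port1', 'port2', 'drop']
```

In real silicon the parallelism is spatial (separate hardware blocks), not thread-based, but the effect is the same: no packet waits for an unrelated packet to finish.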
Because software routing carries so much overhead and would be inordinately slow if executed sequentially, controller designers implement it in parallel silicon.
For those coming from traditional computing, the memory setup on most controllers can be confusing. General-purpose computers combine fast cache with slower main memory and storage, and the cache is optimized for temporal locality, keeping recently used data close, to maximize performance. Network chips, by contrast, use specialized content-addressable memory (CAM) structures that return routing lookups in a single operation, with no need to search sequentially through hundreds or thousands of entries.
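A CAM lookup can be modeled in software, though only the result, not the speed: real CAM/TCAM compares every entry in parallel in one cycle. The table and interface names below are assumptions for this sketch; the longest-prefix-match behavior is the standard TCAM use case in routing.

```python
import ipaddress

# Hypothetical forwarding table: prefix -> egress interface.
TABLE = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"):   "eth0",  # default route
}

def longest_prefix_match(dst):
    # TCAM returns all matching prefixes at once; the most specific
    # (longest) prefix wins. Here we emulate that selection.
    addr = ipaddress.ip_address(dst)
    candidates = [net for net in TABLE if addr in net]
    best = max(candidates, key=lambda net: net.prefixlen)
    return TABLE[best]

print(longest_prefix_match("10.1.2.3"))  # eth2 (most specific match wins)
print(longest_prefix_match("8.8.8.8"))   # eth0 (falls through to default)
```

The software loop here is O(n) over the table; the point of CAM hardware is that the same answer comes back in constant time regardless of table size.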
In many next-generation network infrastructure scenarios, the controllers on servers, storage devices, and virtual machines (VMs) must inspect every packet to apply thousands of access control list (ACL) entries, enforce multiple traffic-shaping policies, and maintain detailed flow statistics. Standard general-purpose processors cannot do this at wire speed; specialized network controllers are built for these tasks.
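ACL evaluation is typically first-match: rules are checked in order and the first hit determines the action, with an implicit deny at the end. The rules and field names below are invented for illustration; real ACLs match on many more fields (protocol, source port, flags, and so on).

```python
import ipaddress

# Hypothetical first-match ACL; None means "any port".
ACL = [
    {"src": "10.0.0.0/8", "dport": 22,   "action": "deny"},
    {"src": "0.0.0.0/0",  "dport": None, "action": "permit"},
]

def acl_lookup(src, dport):
    # Walk rules top-down; first matching rule decides the action.
    for rule in ACL:
        src_match = ipaddress.ip_address(src) in ipaddress.ip_network(rule["src"])
        port_match = rule["dport"] in (None, dport)
        if src_match and port_match:
            return rule["action"]
    return "deny"  # implicit deny if nothing matches

print(acl_lookup("10.1.1.1", 22))   # deny: SSH from internal range blocked
print(acl_lookup("10.1.1.1", 443))  # permit: falls through to catch-all
```

In hardware, this ordered search is exactly what a TCAM collapses into a single parallel lookup, which is why thousands of ACL entries can be applied at wire speed.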
Packet Processing Pipeline Architecture
Modern network controllers process packets in a pipeline, breaking packet handling into discrete operations such as switching, policing, and error handling and distributing them across multiple stages. To achieve high throughput at low latency, designers keep each stage simple so that many packets can occupy different stages simultaneously and a new packet can enter the pipeline nearly every cycle.
The ingress logic on the switch silicon first verifies every packet received on the physical interfaces. Corrupted packets are dropped immediately so they do not waste downstream resources. Valid packets are then routed to classification engines, which inspect them to select the correct policy. The silicon decides how to handle each packet quickly by performing multiple lookups in parallel: destination address against routing tables, source address against security policies, and traffic patterns against QoS service requirements.
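The ingress flow above, verify then classify, can be sketched as two pipeline stages. This is a toy model: the packet structure is invented, CRC32 stands in for whatever integrity check the silicon actually uses, and the forwarding table is a placeholder.

```python
import zlib

def make_packet(dst, payload):
    # Attach a CRC so the receiver can detect corruption.
    return {"dst": dst, "payload": payload, "crc": zlib.crc32(payload)}

def validate(pkt):
    # Stage 1: drop corrupted packets before they consume later stages.
    return zlib.crc32(pkt["payload"]) == pkt["crc"]

ROUTES = {"10.0.0.1": "port3"}  # hypothetical forwarding table

def classify_and_forward(pkt):
    # Stage 2: the chip's parallel lookups collapse here into one query.
    return ROUTES.get(pkt["dst"], "drop")

good = make_packet("10.0.0.1", b"hello")
bad = make_packet("10.0.0.1", b"hello")
bad["payload"] = b"corrupt"  # simulate bit errors on the wire

results = [classify_and_forward(p) for p in (good, bad) if validate(p)]
print(results)  # only the intact packet reaches forwarding
```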
As packets make their way through the pipeline, effective buffer management becomes crucial to optimal network operation. Many controllers support multi-level queuing, which prevents head-of-line blocking and distributes bandwidth fairly among traffic classes. The Broadcom BCM88284 controller supports three priority levels per port with up to eight queues per priority level, for a total of 24 queues per port.
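The simplest multi-level discipline is strict priority: higher-priority queues always drain first. The sketch below mirrors the three-priority, eight-queue layout quoted above, but the scheduler itself is illustrative; real controllers combine strict priority with weighted round-robin and other mechanisms to keep low-priority queues from starving.

```python
from collections import deque

NUM_PRIORITIES, QUEUES_PER_PRIORITY = 3, 8  # 24 queues, as cited above

# One set of queues for a single hypothetical port.
queues = [[deque() for _ in range(QUEUES_PER_PRIORITY)]
          for _ in range(NUM_PRIORITIES)]

def enqueue(pkt, priority, queue_id):
    queues[priority][queue_id].append(pkt)

def dequeue():
    # Strict priority: level 0 always drains before level 1, and so on.
    for level in queues:
        for q in level:
            if q:
                return q.popleft()
    return None  # all queues empty

enqueue("best-effort", priority=2, queue_id=0)
enqueue("voice", priority=0, queue_id=0)
print(dequeue())  # 'voice' leaves first despite arriving second
```

Because each traffic class has its own queue, a full best-effort queue cannot block a voice packet behind it, which is exactly the head-of-line-blocking problem multi-level queuing solves.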
All modifications to packet structure and all scheduling are performed at egress, where traffic shaping constrains output-link utilization to pre-configured rates and priority queuing isolates high-priority traffic from best-effort traffic.
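Egress shaping is commonly built on a token bucket: tokens accumulate at the configured rate, and a packet is transmitted only if enough tokens are available. The rates and sizes below are arbitrary example values, and this single-bucket model omits the queuing that a real shaper would do with non-conforming packets.

```python
class TokenBucket:
    """Minimal token-bucket shaper (illustrative, not vendor logic)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8      # refill rate in bytes per second
        self.burst = burst_bytes      # bucket capacity
        self.tokens = burst_bytes     # start with a full burst allowance

    def tick(self, elapsed_s):
        # Refill tokens as time passes, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed_s)

    def send(self, size_bytes):
        # Transmit only if the packet fits in the available tokens.
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False  # non-conforming: real hardware would queue or drop

bucket = TokenBucket(rate_bps=8_000, burst_bytes=1_500)  # 1 kB/s, 1.5 kB burst
print(bucket.send(1_500))  # True: fits within the initial burst
print(bucket.send(100))    # False: bucket is drained
bucket.tick(1.0)           # one second of credit = 1,000 bytes
print(bucket.send(100))    # True: refilled tokens cover the packet
```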
Managing Network Scale and Complexity
Large enterprise networks are typically segmented into hundreds of VLANs, may run multiple routing protocols, and enforce complex policies for inter-domain communication. Managing that complexity in software alone would demand substantial additional computing resources; a modern network controller absorbs much of the burden in hardware, managing multiple virtual networks with separate forwarding tables on a single chip.
For service providers, the answer to multi-play services is multi-tenancy: fully isolated or shared functions that deliver individual network services to each customer from a common infrastructure. In addition, traffic can be intelligently load-distributed across multiple available physical or logical paths at any point on the Silva Labs silicon, averting congestion on any single link.
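Multipath load distribution is usually done by hashing a flow's identifying fields so that all packets of one flow take the same path (avoiding reordering) while different flows spread across the links. The link names and hash choice below are assumptions for this sketch; real silicon uses fast hardware hashes rather than SHA-256.

```python
import hashlib

LINKS = ["link0", "link1", "link2", "link3"]  # hypothetical parallel paths

def pick_link(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    # Hash the flow 5-tuple and map it onto the available links.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return LINKS[digest[0] % len(LINKS)]

# Same flow -> same link every time, so packets are never reordered.
a = pick_link("10.0.0.1", "10.0.0.2", 40000, 443)
b = pick_link("10.0.0.1", "10.0.0.2", 40000, 443)
print(a == b)  # True
```

Because the mapping is deterministic per flow but effectively random across flows, aggregate traffic balances across all four links without any per-packet state.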
Power Efficiency in High-performance Systems
In data centers, the cost of powering equipment over its lifetime can exceed its acquisition cost, so power management has become an important feature of network controller design. To optimize power utilization, many controllers dynamically adjust voltage based on real-time traffic profiles. Some auto-negotiating ports can also reduce their interface speed during low-traffic periods, consuming 30-40% less power than a fixed-speed port operating at full rate.
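A speed-downshift decision can be expressed as a simple policy: pick the slowest speed that still leaves headroom over the offered load. The speed tiers, relative power figures, and headroom factor below are all illustrative assumptions, not vendor data.

```python
# (speed in Mb/s, relative power draw) -- hypothetical example values.
SPEEDS = [(100_000, 1.00), (40_000, 0.75), (10_000, 0.60)]

def choose_speed(offered_mbps, headroom=1.25):
    # Pick the slowest speed that still leaves 25% headroom over load.
    for mbps, power in sorted(SPEEDS):
        if mbps >= offered_mbps * headroom:
            return mbps, power
    return SPEEDS[0]  # nothing fits: fall back to full speed

speed, power = choose_speed(6_000)
print(speed, power)  # 10000 0.6 -- ~40% power saved at low traffic
```

Real ports renegotiate speed with the link partner, which takes time, so production policies also add hysteresis to avoid flapping between rates.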
Integration With Software-defined Networking
Hardware support for rapidly updating forwarding tables on distributed devices such as switches is of growing importance to SDN controllers. Network chips now support not only fast installation of forwarding instructions, as in earlier SDN deployments, but also centrally controlled application programming interfaces (APIs) that let the control plane perform real-time, fine-grained processing of packets as they traverse the network. Pipeline architectures can be tailored to particular applications, and future SDN devices may add specialized parsing for proprietary or unmanaged services such as video distribution, along with advanced security features and policies that traditional networking hardware does not support today.
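The controller-to-switch contract can be sketched as a priority-ordered flow table in the spirit of OpenFlow-style programming. The `FlowTable` class, its methods, and the match/action vocabulary are invented for this illustration; they are not a real switch API.

```python
class FlowTable:
    """Toy model of a centrally programmed forwarding table."""

    def __init__(self):
        self.rules = []  # (priority, match, action); highest priority wins

    def install(self, priority, match, action):
        # The SDN controller pushes rules down; hardware keeps them sorted.
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])

    def apply(self, packet):
        # First rule whose match fields all agree with the packet wins.
        for _, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"  # table miss escalates upstream

table = FlowTable()
table.install(10,  {"dst": "10.0.0.5"},               "output:port7")
table.install(100, {"dst": "10.0.0.5", "dport": 22},  "drop")

print(table.apply({"dst": "10.0.0.5", "dport": 22}))  # drop (more specific)
print(table.apply({"dst": "10.0.0.5", "dport": 80}))  # output:port7
print(table.apply({"dst": "10.9.9.9"}))               # send_to_controller
```

The table-miss path is what gives the central controller visibility: unknown flows are punted up, a policy decision is made once, and a rule is installed so subsequent packets are handled entirely in hardware.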
Our hybrid network infrastructure gives our large enterprise customers complex, high-capacity networks while allowing agile, cost-effective deployment of services and applications. Networking hardware has come a long way since the simple days of “store-and-forward.”
John Giddings is an expert in app reviews and guides, helping parents and families understand and use digital tools easily. He writes clear, step-by-step articles on apps like ParentPay, showing how to make payments, stay organized, and get the most out of technology. John’s goal is to make complicated apps simple and safe for everyone to use.