Ethernet Switching with Virtual Output Queuing


Imagine a building with three floors: a lobby, a second floor, and a third floor.

This building has an elevator [Think: An Ethernet Switch] that services people [Frames] between the three floors [Ports].

However let’s also pretend that this is a fairly busy little building, and since there’s just one elevator car [Fabric Bus], people entering the lobby are having to queue to get into the elevator while it’s busy transporting earlier-arriving people to their desired floors. So regardless of the floor people wish to reach, they’re being held up in an elevator line in the lobby.

At this point some of the people now entering the lobby will see this impasse of a line and decide to turn away (take the stairs, leave for the pub, etc). Regardless of the floor that newly arriving people wish to reach, they’re being discouraged [prevented] from even queuing in the lobby for the elevator because of the time it will take to service the folks already ahead of them [Head of Line], let alone themselves. This little building is so busy that this situation happens quite regularly, maybe even constantly, and is producing some inefficient results for the building’s tenants and visitors alike.
So to fix this, what if we increased the number of elevators in the building from one to three? This would add two more working elevator cars that could service the line of folks while the others are in transit between floors. To take it a step further, perhaps we could even install a private, VIP-only elevator?

What Are We Solving For?


The basic problem being solved by installing these new elevators is eliminating the long lobby elevator queue [Port Input Queue] and making sure that people ultimately reach their desired floors [Port Output Queue] without just leaving instead [Frame Discards].

Okay, enough with the analogies, you’ve suffered enough. The solution for our busy little building roughly translates to using Virtual Output Queues in Ethernet switching, and if you’ve been involved in building network architectures for high-replication, heavily east-west-chatting applications then the term might sound familiar. And though the name might seem relatively new, the problem it’s trying to solve is by no means new, having existed in the operational basis of Ethernet bridging for decades.

The purpose of Virtual Output Queues (“VoQ”) can be summed up quite simply: a technology that was derived to solve Head-of-Line blocking in Ethernet switching.

To better understand the value of VoQ in our datacenter switches let’s first refresh our understanding of Ethernet switching basics.

Store & Forward - Stores the full Ethernet frame into memory buffers before forwarding
Cut-Through - Processes only enough Ethernet frame header data (usually only src/dst MAC) to forward

In S&F switching you have an input queue and an output queue per port. This means that when port 1 sends to port 2, frames are moved from port 1’s input queue to port 2’s output queue.
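To make the per-port queue model concrete, here’s a minimal sketch in Python. The `Port` class and `switch_frame` function are purely illustrative names, not any vendor’s actual data structures; the point is just that each port owns both queues and a frame crosses from one port’s input side to another’s output side.

```python
from collections import deque

# Illustrative model: each port in a store-and-forward switch owns
# both an input queue and an output queue of fixed depth.
class Port:
    def __init__(self, name, depth=4):
        self.name = name
        self.input_q = deque(maxlen=depth)   # frames waiting to be switched
        self.output_q = deque(maxlen=depth)  # frames waiting to be transmitted

def switch_frame(src: Port, dst: Port) -> bool:
    """Move one frame from src's input queue to dst's output queue."""
    if not src.input_q:
        return False                          # nothing to switch
    if len(dst.output_q) >= dst.output_q.maxlen:
        return False                          # output full: frame stays at the input
    dst.output_q.append(src.input_q.popleft())
    return True

p1, p2 = Port("port1"), Port("port2")
p1.input_q.append({"dst": "port2", "payload": "hello"})
switch_frame(p1, p2)      # the frame now sits in port 2's output queue
print(len(p2.output_q))   # 1
```

Note the second `return False`: when the output queue is full, the frame simply stays at the head of the input queue, which is exactly the condition that sets up head-of-line blocking below.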

In CT you have only output queues per port. The [on-chip] memory allocation that was previously used for input queuing is now used just for output.

Side note: At one time there was much debate as to which was better for general deployment. Ultimately S&F became commonplace since the performance-to-feature ratio in S&F was more worthwhile than CT. (The latency advantage of cut-through was nominal, something close to 1 microsecond, and came at the cost of a handful of full-frame processing capabilities.)

S&F however is susceptible to a phenomenon known as head-of-line blocking due to its requirement of per-port input queuing, which is a nasty little problem to have in high-replication compute clusters.

Head of Line Blocking


Head of Line (HOL) blocking occurs when the frame at the front of a port’s input queue can’t be forwarded because its output port is busy, stalling every frame queued behind it regardless of destination.

Going back to our previous example, if port 2’s output queue fills up, the backlog propagates into port 1’s input queue. Once that input queue is full, any new frame arriving at port 1 is dropped, regardless of its output port. The input queue (port 1) is full because there’s port contention on the output interface (port 2), so even frames destined for uncontended ports are dropped. This is the head-of-line blocking phenomenon, which by its nature affects only S&F switching.

HOL blocking can also severely impact the performance of switches: the classic analysis of an input-queued crossbar under uniform random traffic shows that HOL blocking caps maximum throughput at roughly 58.6% (2 − √2) of line rate.
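The blocking behavior is easy to demonstrate with a toy simulation (illustrative only, not any real scheduler): a single FIFO input queue feeds two output queues, the head frame targets the congested port, and the frames behind it, destined for a completely idle port, never move.

```python
from collections import deque

# One FIFO input queue; port 2's output queue is full, port 3's is empty.
input_q = deque([
    {"dst": 2},  # head-of-line frame, blocked: port 2 is congested
    {"dst": 3},  # could go out immediately, but it's stuck behind the head
    {"dst": 3},
])
output_q = {2: deque(["f"] * 4, maxlen=4), 3: deque(maxlen=4)}

forwarded = 0
for _ in range(10):  # ten scheduling cycles
    if not input_q:
        break
    head = input_q[0]
    q = output_q[head["dst"]]
    if len(q) < q.maxlen:
        q.append(input_q.popleft())
        forwarded += 1
    # else: the head frame blocks everything queued behind it (HOL blocking)

print(forwarded)  # 0 -- the port-3 frames never move
```

Even after ten cycles, nothing is forwarded: the uncongested port sits idle while its traffic waits behind a frame it has nothing to do with.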

Cut-through switching has no input buffering and is therefore not susceptible to HOL blocking. Port 2’s output queue can still fill up and contention drops can still occur, but frames for other, unaffected ports will not be dropped.

Contention Drops you say?


Port contention simply means that one or more of a switch’s ports has a completely full output queue at a given time and is therefore forced to drop frames on the floor. The queue is full because the port is unable to deliver frames fast enough to keep up with the demand of input ports on the same switch competing to use it for output.

Port contention or congestion is probably occurring whenever an interface’s statistics show a non-zero count of output queue drops.

Virtual Output Queuing and How it Helps


The name is a bit of a misnomer: virtual output queuing is actually an input-queuing strategy for switches to mitigate HOL drops.

The magic of the VoQ solution is giving each input port a separate queue for every output port, so frames destined for a non-congested port are never stuck behind frames for a congested one. The term “Virtual Output” is used because each queue represents an output port’s queue but physically resides on the input side.

The Virtual Output Queuing system roughly works as follows:
  1. Each port informs an "arbiter" (sup card, fabric scheduler, etc.) of the output-queue space it has available. 
  2. Each input port then creates an extra input queue (VoQ) for each output queue reported in the previous step. 
  3. When a frame is to be switched between ports, the input port queues the frame in the VoQ for its destination port, then requests permission to transmit the frame over the crossbar to the output port. 
  4. If the output queue is available (not full), the arbiter grants the request from the input port and deducts one slot from that output queue's available space; the frame is then transmitted. 
  5. Once the output port transmits the frame, it notifies the arbiter that the egress queue slot is available again.
The basic result is that frames are now much less likely to be dropped on the floor due to a full input queue, and the HOL blocking problem is solved.
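The steps above can be sketched in a few dozen lines of Python. This is a hedged, simplified model: the `Arbiter` class, its credit counters, and the class names are all illustrative inventions, not any vendor’s actual scheduler, but the request/grant/release flow mirrors steps 1 through 5.

```python
from collections import deque

class Arbiter:
    def __init__(self, ports, depth=4):
        # Step 1: each output port advertises its queue space as credits.
        self.credits = {p: depth for p in ports}

    def request(self, out_port) -> bool:
        # Step 4: grant only if the output queue has room, then deduct a slot.
        if self.credits[out_port] > 0:
            self.credits[out_port] -= 1
            return True
        return False

    def release(self, out_port):
        # Step 5: the output port transmitted a frame; the slot is free again.
        self.credits[out_port] += 1

class InputPort:
    def __init__(self, out_ports):
        # Step 2: one virtual output queue per output port.
        self.voq = {p: deque() for p in out_ports}

    def enqueue(self, frame):
        # Step 3: queue by destination so one busy port can't block the rest.
        self.voq[frame["dst"]].append(frame)

    def schedule(self, arbiter):
        sent = []
        for out_port, q in self.voq.items():
            if q and arbiter.request(out_port):
                sent.append(q.popleft())  # transmit across the crossbar
        return sent

ports = [1, 2, 3]
arb = Arbiter(ports, depth=1)       # tiny queues to force contention
ingress = InputPort(ports)
ingress.enqueue({"dst": 2, "seq": 0})   # port 2 will be congested
ingress.enqueue({"dst": 2, "seq": 1})
ingress.enqueue({"dst": 3, "seq": 2})   # port 3 is free

first = ingress.schedule(arb)
# One frame for port 2 and one for port 3 go out; the second port-2
# frame waits in its own VoQ without blocking anything else.
print([f["dst"] for f in first])  # [2, 3]
```

Contrast this with the single-FIFO case earlier: the frame for port 3 goes out in the very first scheduling pass, even though port 2’s traffic is backed up.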

The above process is presented in a simple manner but can actually be a quite complex operation. Some hardware vendors (Cisco) use complex scheduling routines to combine Virtual Output Queuing with CoS, multiplying the number of input queues created in step 2 (one VoQ per output port per class of service).


Thanks for reading.
