Showing posts from 2018

Putting Virtual Networking into the Fast Lane

Here's a write-up on some of the dirty details that I've learned over the last year or so while building an NFV (network function virtualization) platform with the goal of virtualizing edge devices at scale. If you've ever wondered how to get usable, scalable performance out of virtualized networking drivers and appliances in production, then hopefully you'll find this useful.

While deploying specialized, purpose-built network hardware might still make sense for organizations that require a certain level of scale and performance (read: layers 2-3), I'd like to explore for a moment the possibilities opened up by the proliferation of x86-based cloud platforms, namely: virtual network appliances, and their potential to replace the increasingly less-sexy sight of unwieldy hardware appliances racked and stacked in your edge cabs.

What must system owners consider before making a transition to virtual appliances for services such as firewall security, VPN and loa…

What is SD-WAN and will it Replace MPLS?

I've noticed quite a lot of confusion in the networking realm over the last few years, even among experienced networking professionals, as to what exactly SD-WAN is and for which use cases one might consider using it. Well, here's my take on hopefully clearing some things up.

First things first...
How SD-WAN compares to traditional MPLS L3VPN
They're both managed VPN services; it's mostly a difference of who's performing the encapsulation and doing the management. SD-WAN offers true CE-to-CE flow encryption, whereas MPLS traffic isn't encrypted at all and is encapsulated/decapsulated on the upstream PE routers at each site. SD-WAN needs this encryption since it relies on the Internet as its backbone, whereas MPLS traffic is contained within a service provider's VRF.

Bottom line: Carriers are maddeningly slow and expensive, and the SD-WAN market wouldn't have been created at all if it weren't to give a giant middle finger to that.
SD-WAN technology isn't standardized;…

Let's Build a Datacenter Network

It's quite common to hear of companies these days planning to migrate some or all of their infrastructure into third-party cloud providers such as AWS. However, for some organizations it still makes good sense to build physical, on-premises data centers to either augment that cloud workload presence or supplant it entirely. Today I'm going to pretend I'm working for one of those companies and come up with a network design to build out, just to get the juices flowing.

Be forewarned: brief this article is not, though I have glossed over a few details here and there for some brevity. Really, I wanted to illustrate some of the decisions that go into the process for those unaccustomed or otherwise curious.
The Challenge
Let's say a startup has hired me to design a data center network for their existing co-lo space that will be used to host all of their services. All that I've been given so far are four 42RU, dual-power cabinets in the datacenter cage, and two upstream I…

My 100Gb Spine

As a sort of engineering art form, no two computer network designs are ever really exactly alike, and in that spirit of variety today I’m going to play some designer make-believe. I’m going to focus on building a new high capacity, high performance and future-proofed IP underlay that should hopefully satisfy even the most performance-demanding customer applications.

For that I’m going to build a leaf-spine fabric that supports at most a 2:1 oversubscription ratio, handles both 10Gb and 25Gb node connectivity, and does it all without breaking the bank on capex or power-and-cooling costs. These imaginary business requirements include a scale goal of connecting 1,700 1RU nodes on day one, with a business stretch goal of 5,000 before ever needing to think about a redesign.
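As a sanity check, the oversubscription and scale math above can be sketched in a few lines. The leaf port profile here (48x25G downlinks, 6x100G uplinks) and the 36-port 100G spine are my own illustrative assumptions, not figures taken from the post:

```python
# Back-of-the-envelope leaf-spine capacity check.
# Hypothetical port counts: 48x25G + 6x100G leaves, 36x100G spines.

LEAF_DOWN_PORTS = 48   # node-facing 25G ports per leaf
LEAF_DOWN_SPEED = 25   # Gb/s
LEAF_UP_PORTS = 6      # 100G fabric uplinks per leaf (one per spine)
LEAF_UP_SPEED = 100    # Gb/s
SPINE_PORTS = 36       # 100G ports per spine switch

downlink_bw = LEAF_DOWN_PORTS * LEAF_DOWN_SPEED  # 1200 Gb/s toward nodes
uplink_bw = LEAF_UP_PORTS * LEAF_UP_SPEED        # 600 Gb/s toward spines
oversub = downlink_bw / uplink_bw                # 2.0 -> meets the 2:1 goal

# With one uplink from every leaf to every spine, the spine's port count
# caps the leaf count, and the leaf count caps the node count.
max_leaves = SPINE_PORTS                         # 36 leaves
max_nodes = max_leaves * LEAF_DOWN_PORTS         # 1728 nodes

print(f"oversubscription {oversub:.1f}:1, up to {max_nodes} nodes")
```

Under those assumed port counts, a single tier of 36-port spines lands just above the 1,700-node day-one target; reaching the 5,000-node stretch goal would mean denser spines or an additional fabric tier.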

Given that info we should be good to start dreaming and digging through some vendor data sheets. So let’s go shopping!
The Spine
Cisco Nexus 9236C

I primarily chose this switch because of its incredible…

Configuring Cisco ASA for Route-Based VPN

Here I'll attempt to give an overview of Cisco ASA's implementation of the static virtual tunnel interface (aka "SVTI", or "VTI" for short), also known more simply as "route-based VPN", and how to configure it on Cisco ASA firewalls.

One benefit of using VTI is that it does away with the painful requirement of configuring all of those joyless static crypto map access-lists, meaning you no longer have to manually maintain every possible local-to-remote prefix security association. IPsec VPN deployments ultimately become easier, and with BGP you also satisfy the HA requirements of public cloud providers such as AWS and GCP.
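To make that concrete, here's a minimal sketch of what an SVTI with an IPsec profile looks like on the ASA. The interface number, addresses, and proposal/profile names are placeholders of my own, not values from the post:

```
! IKEv2 IPsec proposal and profile (names are illustrative)
crypto ipsec ikev2 ipsec-proposal AES256-SHA256
 protocol esp encryption aes-256
 protocol esp integrity sha-256
crypto ipsec profile VTI-PROFILE
 set ikev2 ipsec-proposal AES256-SHA256
!
! Static virtual tunnel interface; routes (static or BGP-learned)
! pointed at this interface select traffic for encryption.
interface Tunnel1
 nameif vti-peer
 ip address 169.254.10.1 255.255.255.252
 tunnel source interface outside
 tunnel destination 203.0.113.10
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile VTI-PROFILE
```

Note there's no crypto map ACL anywhere: anything routed out Tunnel1 gets encrypted, which is exactly what makes running BGP over the tunnel so clean.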
Guidelines
Below is a snapshot of guidelines for using SVTI specific to the ASA platform (keep in mind that SVTI is not an ASA-specific or even Cisco-specific technology; each device will have a different implementation):
You can use dynamic or static routes for traffic over the tunnel interface
The MTU for VTIs is automatically set, accordin…