Up and Rawring with TRex: Cisco's Open Traffic Generator
Network performance testing is hard. Historically it has required expensive equipment [cost-prohibitive for anything short of a billion-dollar enterprise] starting at tens of thousands of dollars for what amounts to an overly complex traffic-simulation appliance, and then practically a professional-level certification just to operate it. Sound familiar at all?
Personally, I must admit that I've largely scoffed at the entire concept of simulated traffic testing, noble as it may be. I'd conceded that, as an engineer at a relatively budget-conscious organization, the only way to properly test something on the network was to simply put it into production, then watch it from a safe distance whilst wearing a white lab coat and holding a pen and clipboard. Either it would perform admirably, or promptly catch fire and explode, indicating a yank-and-forget...or a step upgrade. Easy enough. That said, it's just not every day that a big-name vendor offers a super useful tool completely free of charge. Cisco, of all companies, has a little-known software project that may change the attitude of someone like me toward performance testing.
While Cisco's end game is presumably to eventually tie TRex to, and therefore sell more, UCS hardware, at the moment the software remains freely available.
So...what is TRex exactly? Well, without cheaply copying and pasting directly from their very neatly organized manual, TRex is an open-source L4-7 network load tester playing the Joker to Batman products such as Ixia and Spirent Avalanche. It runs on regular x86 servers (with modern CPUs and NICs) and manages to achieve line-rate Tx/Rx through its "innovative" DPDK-enabled NIC drivers. Read: no specialized hardware required.
TRex uses a host or VM to act as both sender and receiver of flows by designating and pairing off physical or virtual NICs appropriately. Then, by simply replaying [and rewriting the headers of] regular supplied or user-created pcap files, TRex is able to transmit packets as fast as you'd like, up to line rate. It supports testing clusters of DUTs (devices under test) as well as just a single DUT. The former is quite important for me, since I needed a way to test over-subscription models for a multi-tenant NFV cloud that I'm working on at the moment.
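Conceptually, the replay-and-rewrite step works something like the following sketch. This is illustrative Python, not TRex code: the idea is that payloads and ports come from a template flow in the pcap, while each generated flow gets stamped with its own client/server addresses (the 16.0.0.0/8 and 48.0.0.0/8 bases below are TRex's default stateful client and server ranges).

```python
import ipaddress

def rewrite_flow(template: dict, flow_id: int,
                 client_base: str = "16.0.0.1",
                 server_base: str = "48.0.0.1") -> dict:
    """Illustrative only: copy a template flow taken from a pcap and
    stamp it with per-flow client/server IPs, the way a generator
    rewrites headers while replaying the same payload."""
    pkt = dict(template)  # payload and ports come from the pcap template
    pkt["src_ip"] = str(ipaddress.IPv4Address(client_base) + flow_id)
    pkt["dst_ip"] = str(ipaddress.IPv4Address(server_base) + flow_id)
    return pkt

# Three flows generated from one captured template
flows = [rewrite_flow({"sport": 1024, "dport": 80, "payload": b"GET /"}, i)
         for i in range(3)]
```

The real implementation does this at millions of packets per second in DPDK poll-mode drivers, but the per-flow header rewrite is the core trick that lets one small pcap represent an arbitrary number of concurrent flows.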
Other good news is that the project is rapidly maturing and has very friendly, active developers working on it (Hello Ido and Hanoh!). In fact, in just the few months that I've been working with it there have been several releases containing very useful features and bug fixes addressing my own very specific use cases, which I called out to the core devs via their Google group.
After reading through all of the setup documentation, I personally managed to get up and running with TRex within a day or two. As mentioned, several feature enhancements have come out since then that make configuring and running TRex even easier. Here I'll outline the basic components, to hopefully shave an hour or two off your own dive in. So, without further fuss:
Up & Running with TRex: The Quick and Dirty Way
|TRex: Logical Layout for Single DUT Test|
|TRex: Multi-DUT Testing Layout - A bit more practical|
Hardware and OS
After you've verified that your hardware is modern enough to utilize DPDK, install a 64-bit Linux operating system of your choice. I've personally used, and had success with, the x86_64 editions of both Ubuntu 14.04 and CentOS 7.
As root, download the latest TRex package and place it somewhere on your system (/opt/trex/ is just an example, but works fine):
$ mkdir /opt/trex && cd /opt/trex
$ wget --no-cache http://trex-tgn.cisco.com/trex/release/latest
$ tar -xzvf latest
You'll now need to locate the DPDK-capable physical NICs in your system and capture their hardware IDs to place in the TRex configuration files later. The following is example output from my system (produced with TRex's bundled helper, run from the TRex directory as 'sudo ./dpdk_setup_ports.py -s'), with a 2x10GbE Intel NIC and a 4x1GbE Broadcom NIC installed. A 1GbE port is only being utilized to manage the system (i.e. control plane), and the 10GbE ports are strictly for test data (data plane).
Network devices using DPDK-compatible driver
0000:04:00.0 'Ethernet Controller 10-Gigabit X540-AT2' drv=igb_uio unused=ixgbe,vfio-pci,uio_pci_generic
0000:04:00.1 'Ethernet Controller 10-Gigabit X540-AT2' drv=igb_uio unused=ixgbe,vfio-pci,uio_pci_generic
Network devices using kernel driver
0000:02:00.0 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=em1 drv=tg3 unused=igb_uio,vfio-pci,uio_pci_generic
0000:02:00.1 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=em2 drv=tg3 unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:02:00.2 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=em3 drv=tg3 unused=igb_uio,vfio-pci,uio_pci_generic
0000:02:00.3 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=em4 drv=tg3 unused=igb_uio,vfio-pci,uio_pci_generic
Other network devices
Alternatively, run the same command with the '-i' flag instead, and TRex will step you through an interactive setup that generates your primary TRex config file.
Relevant Configuration Files
General TRex configuration file. The most important variables here are the 'dest_mac' entries: TRex will use the MAC addresses given here to rewrite generated packets toward their destination. TRex does not currently have a full TCP/IP stack.
Full info on configuration options available here: https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_configuration_yaml_parameter_of_cfg_option
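As an illustration, a minimal /etc/trex_cfg.yaml for the two X540 ports from the listing above might look like this. The PCI interface IDs come from that listing; the MAC addresses are placeholders you'd replace with your DUT's and your NICs' real values:

```yaml
- port_limit  : 2
  version     : 2
  interfaces  : ["04:00.0", "04:00.1"]   # PCI IDs of the DPDK-bound 10GbE ports
  port_info   :
    - dest_mac : "00:de:ad:be:ef:01"     # placeholder: DUT's client-facing MAC
      src_mac  : "00:ca:fe:00:00:01"     # placeholder: this port's MAC
    - dest_mac : "00:de:ad:be:ef:02"     # placeholder: DUT's server-facing MAC
      src_mac  : "00:ca:fe:00:00:02"     # placeholder: this port's MAC
```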
Perhaps the most practical use of TRex: this is actually an optional configuration file, useful for testing a single device or multiple devices within a single test. If this file is specified as a command-line argument, the destination MAC addresses defined here override those in /etc/trex_cfg.yaml. Read more about 'client clustering' here: https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_client_clustering_configuration
Here's an example configuration file that I use to test against 8 devices.
vlan: true
groups:
-  ip_start  : 184.108.40.206
   ip_end    : 220.127.116.11
   initiator :
     vlan     : 3630
     next_hop : 18.104.22.168
     src_ip   : 22.214.171.124
   responder :
     vlan     : 3658
     next_hop : 126.96.36.199
     src_ip   : 188.8.131.52
   count     : 8
Note the following configuration elements:
- TRex has baked-in dot1q frame tagging capability; my server's 10GbE NICs are actually dot1q trunk links to their switchports. This allows me to change the VLAN in which I want to run tests without touching the switchport configuration.
- The 'vlan' flag is a boolean and must be specified as true or false.
- The 'groups' list, in YAML format, specifies a DUT 'cluster'.
- The client prefixes ('ip_start' & 'ip_end') are repeated here for overriding purposes, so that you can point different groups/clusters at different pools, should you decide to define more than one.
- Details for the NIC used for generating client-side flows.
- 'next_hop' is the starting (or lone) IP address of our DUT cluster.
- 'src_ip' is the IP address that TRex itself will assume.
- Details for the NIC used for generating server-side flows.
- Same logic as with 'initiator'
- The number of DUT devices in the group/cluster. If >1, TRex will round-robin flows across incremented IP addresses, beginning with the value of 'next_hop'.
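The round-robin behavior described in that last note can be sketched in a few lines of illustrative Python (again, not TRex code, just the distribution logic under the stated assumptions):

```python
import ipaddress

def cluster_next_hops(next_hop: str, count: int, flows: int) -> list:
    """Distribute `flows` flows round-robin across a DUT cluster of
    `count` devices whose addresses start at `next_hop` and increment."""
    duts = [str(ipaddress.IPv4Address(next_hop) + i) for i in range(count)]
    return [duts[f % count] for f in range(flows)]

# With count=2, flows alternate between the first two DUT addresses
hops = cluster_next_hops("10.1.1.1", 2, 4)
```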
DUT Configuration Bootstrapping
Newer versions of TRex are capable of resolving MAC addresses through ARP, so you no longer need to manually bootstrap images with static ARP information. You will, however, need to configure IP routes for the TRex client (16.0.0.0/8) and server (48.0.0.0/8) prefixes so that traffic can flow appropriately.
Here is an example configuration required for ASAv:
route outside 16.0.0.0 255.0.0.0 <trex-initiator-src-ip>
route inside 48.0.0.0 255.0.0.0 <trex-responder-src-ip>
And for IOS-XE:
ip route 16.0.0.0 255.0.0.0 <trex-initiator-src-ip>
ip route 48.0.0.0 255.0.0.0 <trex-responder-src-ip>
You get the idea :)
Simple Test Examples
Once a DUT or multiple DUTs have been defined via the above configuration files, you can start TRex with the following example commands.
Use the 'Command Line' section of the TRex manual to tweak certain things, like traffic rate, etc.
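One flag worth internalizing before the examples below: '-m' is a linear multiplier on the baseline rate defined inside the traffic profile YAML, and '-d' sets the test duration. A trivial mental model (the baseline figures here are hypothetical, not the actual rates of the bundled profiles):

```python
def scaled_rate_gbps(baseline_gbps: float, m: float) -> float:
    """'-m' linearly scales the profile's baseline rate."""
    return baseline_gbps * m

# e.g. a hypothetical 0.25 Gbps baseline profile run with -m 4
rate = scaled_rate_gbps(0.25, 4)
```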
Large UDP, 2.4Gbps
root@/opt/trex/v2.14/# ./t-rex-64 -f cap2/imix_1518.yaml -m 25 -d 0
root@/opt/trex/v2.14/# ./t-rex-64 -f cap2/imix_1518.yaml -m 25 -d 0 --client_cfg my_cfg.yaml
Mixed UDP, 1Gbps, 100K Flows
root@/opt/trex/v2.14/# ./t-rex-64 -f cap2/imix_fast_1g_100k_flows.yaml -m 1 -d 0
root@/opt/trex/v2.14/# ./t-rex-64 -f cap2/imix_fast_1g_100k_flows.yaml -m 1 -d 0 --client_cfg my_cfg.yaml
SFR Mix (Stateful Profile), 1Gbps
root@/opt/trex/v2.14/# ./t-rex-64 -f avl/sfr_delay_10_1g.yaml -m 1 -d 0
root@/opt/trex/v2.14/# ./t-rex-64 -f avl/sfr_delay_10_1g.yaml -m 1 -d 0 --client_cfg my_cfg.yaml
Put all of the above together, and here is an example of what it would look like being executed at the command line: