Rigorously testing a network device or distributed service requires complex, realistic network test environments. Linux Traffic Control (tc) with Network Emulation (netem) provides the building blocks to create an impairment node that simulates such networks.

This three-part series describes how an impairment node can be set up using Linux Traffic Control. In this first blog post, Linux Traffic Control and its queuing disciplines are introduced. The second part will show which traffic control configurations are available to impair traffic and how to use them. The third and last part will describe how to get an impairment node up and running!

Emulation by elimination

How does your product operate in a realistic (or high-latency) environment? Well, let’s find out.

  • Maybe we can use the production network, which obviously provides a realistic latency environment? “Out of the question!”, responds the operations department.
  • Perhaps we could engineer a purpose-built network with the desired characteristics? “Out of budget!”, yells your project manager.

This leaves us with emulation: simulating an (arbitrarily complex) network by configuring the desired network impairments in software.

Linux to the rescue

Network nodes (such as IP routers) often run a Linux-based operating system. The Linux kernel offers a native framework for routing, bridging, firewalling, address translation and much more.

Before a packet leaves the output interface, it passes through Linux Traffic Control (tc). This component is a powerful tool for scheduling, shaping, classifying and prioritizing traffic.

The basic component of Linux Traffic Control is the queuing discipline (qdisc). The simplest implementation of a qdisc is first in, first out (FIFO). Other queuing disciplines include the Token Bucket Filter (TBF), which shapes traffic to conform to a configured output rate and burstiness.
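
For instance, a TBF qdisc could be attached to an interface to limit its output rate. A minimal sketch, assuming an interface named eth0 and purely illustrative values:

$ tc qdisc add dev eth0 root tbf rate 1mbit burst 5kb latency 70ms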

The network emulation (netem) project adds queuing disciplines that emulate wide area network properties such as latency, jitter, loss, duplication, corruption and reordering.
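
As a small preview (the second part of this series covers the available impairments in detail), a netem qdisc that adds delay, jitter and random loss could look as follows; the interface name and values are again purely illustrative:

$ tc qdisc add dev eth0 root netem delay 100ms 20ms loss 0.5%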

A first glance

Suppose we have an impairment node (any Linux-based device) that manipulates traffic between its two Ethernet interfaces eth0 and eth1 to simulate a wide area network. An extra interface (e.g. eth2) should be available to configure the device out-of-band.
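
One common way to place such a node transparently in the traffic path is to bridge eth0 and eth1 (the complete node setup is the topic of the third part of this series). A minimal sketch using iproute2, where the bridge name br0 is arbitrary:

$ ip link add name br0 type bridge
$ ip link set dev eth0 master br0
$ ip link set dev eth1 master br0
$ ip link set dev br0 up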

We can check the default queuing disciplines and traffic classes using the tc command (see its man page):

$ tc qdisc show
qdisc pfifo_fast 0: dev eth0 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth1 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: dev eth2 root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
$ tc class show
<no output>

The default traffic control configuration consists of a single queuing discipline, pfifo_fast (man page), and contains no user-defined traffic classes. This queuing discipline works more or less like a FIFO, but looks at the IP ToS bits to prioritize certain packets. Each line in the output above should be read as follows:

Interface eth0 has a queuing discipline pfifo_fast with label 0: attached to the root of its qdisc tree. This qdisc classifies and prioritizes all outgoing packets by mapping their 4-bit IP ToS value (i.e. the 16 listed values) to one of the three priority bands 0, 1 or 2. Traffic in band 0 is always served first, then band 1 is emptied of pending traffic, before moving on to band 2. Within a band, packets are sent in FIFO order.

Queuerarchy

Each interface has a qdisc hierarchy. The first qdisc is attached to the root label and subsequent qdiscs specify the label of their parent.

Some example hierarchies are shown in the picture below, together with the conventional labels used.

Adding a qdisc to the root of an interface (using tc qdisc add) actually replaces the default configuration shown above. Deleting the root qdisc (using tc qdisc del) removes the complete hierarchy and restores the default.

$ tc qdisc add dev eth0 root handle 1: my_qdisc <args>
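
Deleting the root qdisc again restores the default pfifo_fast configuration:

$ tc qdisc del dev eth0 root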

Queuing disciplines may also be chained, with traffic flowing through each of them in turn, by specifying the label of their predecessor as parent.

$ tc qdisc add dev eth0 root handle 1: my_qdisc1 <args>
$ tc qdisc add dev eth0 parent 1: handle 2: my_qdisc2 <args>
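
A concrete (purely illustrative) instance of such a chain combines a netem delay qdisc with a TBF shaper, so that packets are both delayed and rate-limited on their way out:

$ tc qdisc add dev eth0 root handle 1: netem delay 100ms
$ tc qdisc add dev eth0 parent 1: handle 2: tbf rate 1mbit burst 5kb latency 70ms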

Classful queuing disciplines divide their traffic into classes. Each traffic class can be handled in a specific way, through its own child qdisc.

$ tc qdisc add dev eth0 root handle 1: classful_qdisc <args>
$ tc class add dev eth0 parent 1: classid 1:1 myclass <args>
$ tc class add dev eth0 parent 1: classid 1:2 myclass <args>
$ tc qdisc add dev eth0 parent 1:2 handle 20: my_qdisc2 <args>
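
To make this concrete, an (illustrative) HTB root qdisc could split traffic into two rate-limited classes, with the second class additionally delayed by a netem child qdisc:

$ tc qdisc add dev eth0 root handle 1: htb default 1
$ tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
$ tc class add dev eth0 parent 1: classid 1:2 htb rate 10mbit
$ tc qdisc add dev eth0 parent 1:2 handle 20: netem delay 50ms

Filters (not shown) would be needed to steer specific traffic into class 1:2; with "default 1", unclassified traffic goes to class 1:1.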

Configuring traffic impairments

To impair traffic leaving an interface eth0, we simply overwrite the default qdisc with our own impairment qdisc hierarchy. Which impairment configurations are available and how they can be configured is described in the second part of this series!

