When we released a new version of our ByteBlower traffic generator a few years ago, we started getting user reports that TCP traffic flows mysteriously “stopped” after a short time. Some users saw the traffic stop after 10 to 30 seconds; others saw the TCP flows stop immediately. For those users the issue was consistently reproducible, yet we could not reproduce it in our own test setup.
After some investigation we found two things:
- the problem occurred when the TCP connection was initiated from behind a NAT
- ByteBlower was making use of the TCP “half-close” feature
What is TCP Half-Close?
TCP allows you to close each direction of the connection independently. A TCP connection is considered to be half-closed when it’s closed in one direction and still open in the other direction. It allows an application to say: “I am done sending data, so send a FIN to the other end, but I still want to receive data from the other end, until it sends me a FIN.”
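To make this concrete, here is a minimal Python sketch on a loopback connection (our own illustration, not ByteBlower code): after the client calls shutdown(SHUT_WR), the server reads EOF, yet data can still flow in the other direction.

```python
# Minimal half-close demo on a loopback TCP connection.
import socket

listener = socket.create_server(("127.0.0.1", 0))
client = socket.create_connection(listener.getsockname())
server, _ = listener.accept()

client.shutdown(socket.SHUT_WR)  # "I am done sending": a FIN goes out
eof = server.recv(1024)          # b'' -- the client's FIN is read as EOF
server.sendall(b"still open")    # the other direction still works
reply = client.recv(1024)        # the half-closed client can still receive
print(eof, reply)

for s in (client, server, listener):
    s.close()
```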
TCP half-close is sometimes used to emulate Unix-style I/O redirection over TCP streams, using the FIN message as an EOF marker. The book TCP/IP Illustrated, Vol. 1 provides a nice example of this:
ssh hostname sort < datafile
Here the “ssh” program starts the “sort” program on the remote host, with its own standard input redirected from a local file named “datafile”. ssh forwards each line of that file over the TCP stream to sort’s standard input. When ssh reaches the end of datafile, it half-closes the TCP connection: the FIN is delivered as EOF to the standard input of the remote sort program. Note that sort cannot produce any output until it has received all of its input, so it relies on TCP half-close: ssh must signal EOF while keeping the connection open to receive the sorted result.
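The same pattern can be sketched in Python over a loopback connection (a toy stand-in for ssh and sort of our own making, not the real programs): the “server” sorts everything it receives, and the client half-closes to deliver EOF while keeping the return direction open for the result.

```python
# Sketch of the ssh/sort pattern over a loopback TCP connection.
import socket
import threading

def sort_server(listener):
    """Accept one connection, read until EOF, send back the sorted lines."""
    conn, _ = listener.accept()
    with conn:
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:              # the client's FIN reached us: EOF
                break
            chunks.append(data)
        lines = b"".join(chunks).splitlines()
        conn.sendall(b"\n".join(sorted(lines)) + b"\n")

def sort_remote(lines):
    """Send lines to the sort server, half-close, and read the result."""
    listener = socket.create_server(("127.0.0.1", 0))
    t = threading.Thread(target=sort_server, args=(listener,))
    t.start()
    with socket.create_connection(listener.getsockname()) as s:
        s.sendall(b"\n".join(lines) + b"\n")
        s.shutdown(socket.SHUT_WR)    # half-close: send FIN, keep receiving
        result = []
        while True:
            data = s.recv(4096)
            if not data:              # server finished and closed
                break
            result.append(data)
    t.join()
    listener.close()
    return b"".join(result).splitlines()

print(sort_remote([b"pear", b"apple", b"orange"]))  # [b'apple', b'orange', b'pear']
```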
What’s the problem?
We suspect that some NAT devices, when they see a FIN message, delete the corresponding NAT entry after a short timeout even if no FIN has been seen from the other side. The result is that TCP connections in a half-closed state stop working after a while.
I should mention that this behavior is not allowed by RFC 5382, which states: “The closing phase begins when both endpoints have terminated their half of the connection by sending a FIN packet.” (The RFC does allow an idle-timeout for inactive connections, but only after two hours and four minutes. That rule did not apply in our situation because our TCP flow had only been running for a few seconds and it wasn’t idle.) So maybe this was just a bug in the NAT device implementation.
But it’s not only NAT devices that can cause problems. Many firewalls also implement a TCP half-close timeout. These timeouts can be very short: Cisco decreased the minimum half-close timeout to 30 seconds to provide better DoS protection.
And even the Linux operating system implements a half-close timeout, with a default value of 60 seconds. The man page for the net.ipv4.tcp_fin_timeout setting says the following:
This specifies how many seconds to wait for a final FIN packet before the socket is forcibly closed. This is strictly a violation of the TCP specification, but required to prevent denial-of-service attacks.
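On a Linux machine you can inspect (and, as root, change) this value through sysctl; a quick check, assuming a typical distribution:

```shell
# Read the current FIN timeout (in seconds); 60 is the usual default.
sysctl net.ipv4.tcp_fin_timeout

# The same value is exposed via procfs:
cat /proc/sys/net/ipv4/tcp_fin_timeout
```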
Many systems have introduced a timeout mechanism that restricts how long a TCP connection can remain in the half-closed state. Preventing denial-of-service attacks seems to be the most common reason. In other cases, like the NAT example, the reason may simply be a bug in the implementation.
As a result, TCP half-close cannot be used reliably, except perhaps for a short duration: for example, to flush outstanding packets or to send a protocol “goodbye” message.
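If you do use half-close briefly, it is worth bounding that phase with a deadline, so that a middlebox that has already dropped the mapping cannot hang the connection forever. A minimal Python sketch (the close_gracefully helper and its two-second default are our own illustration, not ByteBlower code):

```python
# Bounded half-close: send our FIN, drain whatever the peer still has
# in flight, but give up after a short deadline.
import socket

def close_gracefully(sock, deadline=2.0):
    sock.shutdown(socket.SHUT_WR)   # we are done sending
    sock.settimeout(deadline)
    try:
        while sock.recv(4096):      # drain until the peer's FIN (b'')
            pass
    except OSError:                 # timeout or reset: stop waiting
        pass
    finally:
        sock.close()
```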
In the end we updated ByteBlower to no longer make use of the TCP half-close state. We now only send a FIN from each side once the flow has completely finished. After this version was released, users confirmed that the issue had disappeared.