Adam Warski

18 Sep 2012

Unexpected problems with Apache and mod_rewrite under high load

distributed
java
testing

In one of the projects we are currently working on, we have a fairly typical setup: one server (Apache with mod_rewrite) proxying traffic to backend servers.

We also have some automated performance/stress tests. The whole system worked fine with around 250 requests/second hitting the server (HTTP GET/POST calls). However, when we increased that to 700 requests/second, after some time we started getting 503 responses.

As it turned out, these requests never reached the backend servers, but the Apache error logs contained entries such as:

Cannot assign requested address: attempt to connect to (...) failed

Googling for a while revealed that this may be because the OS (Ubuntu in this case) wasn’t able to allocate new ports. Each TCP client-server connection gets assigned a new ephemeral port on the client side, which typically comes from the range 32768 to 61000. After a request completes, even if both sides properly close the TCP connection, the port is freed and reusable only after about 4 minutes. That’s because the connection is put in the TIME_WAIT state, and is discarded only after the associated timeout passes.

As a TCP client-server connection is uniquely identified by the (client IP, client port, server IP, server port) tuple, in the default setup the server can only handle about 30k requests in 4 minutes from a single client (61000 - 32768 gives roughly 28k ephemeral ports; the server’s address is fixed and well-known, so only the client port can change).
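To make the ephemeral port mechanics concrete, here is a small self-contained Java sketch (the class name and loop count are made up for illustration): it starts a throwaway local server, opens a few client connections to it, and prints the client-side port each connection got. Each connection is bound to a fresh port, and once closed that port lingers in TIME_WAIT for a while.

import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class EphemeralPorts {
    public static void main(String[] args) throws Exception {
        // Throwaway local server, just so the client sockets have something to connect to.
        final ServerSocket server = new ServerSocket(0);
        Thread acceptor = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        server.accept(); // keep the accepted connections open
                    }
                } catch (Exception e) {
                    // server socket closed, stop accepting
                }
            }
        });
        acceptor.setDaemon(true);
        acceptor.start();

        // Each outgoing connection gets its own ephemeral port on the client side.
        List<Socket> sockets = new ArrayList<Socket>();
        for (int i = 0; i < 5; i++) {
            Socket s = new Socket("localhost", server.getLocalPort());
            System.out.println("connection " + i + " uses local port " + s.getLocalPort());
            sockets.add(s);
        }

        // Closing puts the client-side port into TIME_WAIT; it is not immediately reusable.
        for (Socket s : sockets) {
            s.close();
        }
        server.close();
    }
}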

The next step was checking whether this was indeed a problem with allocating ports. To get a rough number of open connections, during the tests we simply ran:

netstat -p tcp | wc -l

We also added greps on the client’s or backend’s IP, to get a count of the connections between client<->proxy and proxy<->backend. It turned out that while there was a constant pool of connections between the client and the proxy (so HTTP keepalive was working properly; also see below), a new connection was established for each request between the proxy and the backend!

So in our case the client side of the TCP connection was the proxy and the server side was the backend, and the limit on the number of connections applied to the proxy<->backend pair, even though originally the requests could have come from various clients. Hence our whole setup was limited to 30k requests per 4 minutes per backend server.

Of course the next step was to find out which side was initiating the closing of the connections. For that we used:

tcpdump src [proxy ip] and dst [backend ip]

and directed single requests at the server. The flow clearly showed that Apache was closing the connections.

Why? That was a very good question. Our Apache+mod_rewrite configuration was really simple:

<VirtualHost *:80>
  RewriteEngine On
  ProxyPreserveHost On
  RewriteRule ^(.*)$ http://[backend ip]:8080$1 [P,L]
</VirtualHost>

For lack of other leads, I asked on ServerFault, which turned out to be a very good idea. I quickly got the answer that mod_rewrite does not do connection pooling. I couldn’t find any mention of this in the docs, and I think it’s pretty important, especially for systems under high load.

The solution was also very simple: use mod_proxy instead. Changing the above config to:

<VirtualHost *:80>
  ProxyPreserveHost On
  ProxyPass / http://[backend ip]:8080/
</VirtualHost>

made our tests finally pass under the ~700 requests/second load: with an explicit ProxyPass, mod_proxy sets up a dedicated worker for the backend and reuses its connections across requests, instead of opening a new one per request.

As a side note, we also made sure that the test agent sending the requests (it was one machine) used HTTP keepalive, which allows a single TCP connection to be reused for multiple HTTP requests. As it turns out, if you are using Java’s URLConnection this isn’t that straightforward (for simplicity we didn’t use Apache HttpClient here): you need to adjust the http.maxConnections system property and not use the .connect() or .close() methods on URLConnection.
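For illustration, a minimal sketch of the client side, assuming a hypothetical http://localhost:8080/ping endpoint and made-up numbers: the key points are raising http.maxConnections (the JDK keeps only 5 idle connections per destination by default) and reading each response body to the end, so the JDK can return the underlying socket to its keep-alive pool.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveClient {
    public static void main(String[] args) throws Exception {
        // Let the JDK cache more idle connections per destination than the default of 5
        // (the value here is arbitrary).
        System.setProperty("http.maxConnections", "50");
        // Keep-alive is on by default; set it explicitly just to be clear.
        System.setProperty("http.keepAlive", "true");

        URL url = new URL("http://localhost:8080/ping"); // hypothetical test endpoint

        byte[] buffer = new byte[8192];
        for (int i = 0; i < 1000; i++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            InputStream in = conn.getInputStream();
            try {
                // Read the whole body so the socket can go back to the keep-alive pool.
                while (in.read(buffer) != -1) {
                    // discard
                }
            } finally {
                in.close();
            }
            // Note: no conn.disconnect() here; that may close the underlying socket
            // instead of returning it for reuse.
        }
    }
}

You can verify the effect the same way as above with netstat: the count of client<->proxy connections should stay roughly constant during a run.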

Adam
