My Varnish is leaking memory
2010-11-30

Every so often, we get bug reports about Varnish leaking memory. People tell Varnish to use 20 gigabytes for cache, then discover the process is eating 30 gigabytes of memory and get confused about what’s going on. So, let’s take a look.

First, a little bit of history. Varnish 2.0 had a fixed per-object workspace which was used both for header manipulation in vcl_fetch and for storing the headers of the object when vcl_fetch was done. The default size of this workspace was 8k. If we assume an average object size of 20k, that makes almost a third of the store pure overhead.
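
To make the arithmetic explicit: with an 8k workspace attached to every 20k object, the overhead fraction of the store is

$$\frac{8\,\mathrm{kB}}{20\,\mathrm{kB} + 8\,\mathrm{kB}} \approx 0.29,$$

or almost a third.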

With 2.1, this changed. First, vcl_fetch no longer has obj; it only has beresp, the backend response. At the end of vcl_fetch, the headers and other relevant bits of the backend response are copied into an object. This means we no longer have a fixed overhead; we use what we need. Of course, we’re still subject to malloc’s whims when it comes to page sizes and how it actually allocates memory.
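
As a minimal sketch of what this looks like in 2.1 VCL (the header names here are purely illustrative):

```
sub vcl_fetch {
    # In 2.0 these manipulations would have gone through obj;
    # in 2.1 we work on beresp, and only what survives here is
    # copied into the stored object at the end of vcl_fetch.
    remove beresp.http.Set-Cookie;
    set beresp.http.X-Origin = "backend-1";
}
```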

Less overhead means more objects in the store. More objects in the store means, everything else being equal, more overhead outside the store (for the hash buckets or critbit tree and other structs). This is where lots of people get confused, since what they see is just Varnish consuming more memory. When moving from 2.0 to 2.1, people should lower their cache size. How much depends on the number of objects they have, but with many small objects, a significant reduction may be needed. For a machine dedicated to Varnish, we usually recommend setting the cache size to 70-75% of the machine’s memory.
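
As a hypothetical example, on a dedicated machine with 32GB of RAM, that rule of thumb gives a malloc cache of around 22GB:

```
# ~70% of 32GB goes to the cache; the rest is left for
# per-object overhead, thread stacks and the OS.
varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,22G
```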

A reasonable question to ask at this point is what all this overhead is being used for. Part of it is per-thread overhead. Linux has a 10MB stack size by default, but luckily most of it is never allocated, so it only counts against virtual, not resident memory. In addition, the hash algorithm has its own overhead, and the headers of an object are stored in the object itself, not in the stevedore (the object store). Last, but by no means least, we usually see an overhead of around 1k per object, though I have seen it go somewhat above 2k. This doesn’t sound like much, but on servers with 10 million objects, 1k of overhead means 10 gigabytes of total overhead, leading to the confusion I talked about at the start.
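
Tying this back to the confused user at the start: assuming a 20GB cache holding 10 million objects at roughly 1k of overhead each, the expected process size is

$$20\,\mathrm{GB} + 10^{7} \times 1\,\mathrm{kB} \approx 30\,\mathrm{GB},$$

which is exactly the “leak” being reported.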
