Add -c and -r options:
- If the former is specified, fetcher will go into a loop after having
traversed the entire tree, and continuously re-fetch all known URLs.
- The latter is not yet implemented, but the idea is to assign a random
probability to each URL based on an inverse-exponential (or similar)
distribution, and re-fetch URLs at random according to this frequency
(see the sketch after this list). This will help simulate a "short
head, long tail" scenario.
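
For illustration only, a minimal C sketch of how the -r sampling could
work; LAMBDA, pick_url() and the surrounding scaffolding are
assumptions for this example, not actual fetcher code:

    /* Compile with: cc sketch.c -lm */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define LAMBDA 0.5  /* decay rate; larger means a shorter "head" */

    /* Pick an index in [0, n) with P(i) proportional to exp(-LAMBDA * i). */
    static int
    pick_url(int n)
    {
        int i;

        do {
            /* inverse-CDF sample from an exponential distribution */
            i = (int)(-log(1.0 - drand48()) / LAMBDA);
        } while (i >= n);  /* reject samples beyond the known URLs */
        return (i);
    }

    int
    main(void)
    {
        int hits[10] = { 0 };
        int i;

        srand48(42);
        for (i = 0; i < 100000; i++)
            hits[pick_url(10)]++;
        for (i = 0; i < 10; i++)
            printf("url %2d: %6d fetches\n", i, hits[i]);
        return (0);
    }

With ten URLs and LAMBDA = 0.5, url 0 receives roughly 40% of the
fetches and url 9 well under 1%, which is the intended short-head,
long-tail shape.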
Some restructuring.
Add a comment about a possible improvement which would help work around
bugs in certain commonly used data sets (e.g. the Apache httpd manual)
that can result in an infinite set of URLs which in reality map to a
fairly large but finite set of pages; see the sketch below.
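
One hypothetical shape of that improvement (the hashing scheme and all
names here are assumptions, not the fetcher's actual code): hash each
fetched body and skip link extraction for bodies already seen, so a
traversal over an infinite URL set that maps to a finite page set
still terminates.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_PAGES 65536  /* assumed bound on distinct pages */

    static uint64_t seen[MAX_PAGES];
    static size_t nseen;

    /* 64-bit FNV-1a hash of a page body */
    static uint64_t
    body_hash(const char *body, size_t len)
    {
        uint64_t h = 14695981039346656037ULL;
        size_t i;

        for (i = 0; i < len; i++) {
            h ^= (unsigned char)body[i];
            h *= 1099511628211ULL;
        }
        return (h);
    }

    /* Return 1 if this body is new (and record it), 0 if seen before. */
    static int
    body_is_new(const char *body, size_t len)
    {
        uint64_t h = body_hash(body, len);
        size_t i;

        for (i = 0; i < nseen; i++)
            if (seen[i] == h)
                return (0);
        if (nseen < MAX_PAGES)
            seen[nseen++] = h;
        return (1);
    }

    int
    main(void)
    {
        const char *page = "<html>same content, different URL</html>";

        printf("%d\n", body_is_new(page, strlen(page)));  /* 1: new */
        printf("%d\n", body_is_new(page, strlen(page)));  /* 0: dup */
        return (0);
    }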
git-svn-id: svn+ssh://projects.linpro.no/svn/varnish/trunk@2374 d4fa192b-c00b-0410-8231-f00ffab90ce4