of zero from serializing access to an object with very draconian
backend cache-control headers.
We could get by even with a one-second TTL, but following our general
"there is a reason people put Varnish there in the first place" logic,
we use the default_ttl parameter (default: 120 s) for this value.
If another value is desired, it can be set in vcl_fetch, even though it
looks somewhat counter-intuitive:
sub vcl_fetch {
    if (obj.http.set-cookie) {
        set obj.ttl = 10s;
        pass;
    }
}
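
For a site-wide change rather than a per-object override, the parameter
itself can be tuned instead; a minimal sketch, assuming a management
interface on localhost:6082 (that address is only an example):

# at startup (other options omitted)
varnishd -p default_ttl=120

# or at runtime through the management CLI
varnishadm -T localhost:6082 param.set default_ttl 120
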
Fixes #425
git-svn-id: svn+ssh://projects.linpro.no/svn/varnish/trunk@3537 d4fa192b-c00b-0410-8231-f00ffab90ce4
 		return (0);
 	case VCL_RET_PASS:
 		sp->obj->pass = 1;
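+		/* Make sure the pass object lives for at least default_ttl,
+		 * so requests arriving meanwhile do not serialize behind it. */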
+		if (sp->obj->ttl - sp->t_req < params->default_ttl)
+			sp->obj->ttl = sp->t_req + params->default_ttl;
 		break;
 	case VCL_RET_DELIVER:
 		break;
--- /dev/null
+# $Id$
+
+test "check late pass stalling"
+
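+# All responses carry Set-Cookie and a past Expires, so the default VCL
+# passes in vcl_fetch.  The pass objects must still get default_ttl,
+# letting the second and third requests hit them (cache_hitpass == 2)
+# instead of stalling behind the first.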
+server s1 {
+ rxreq
+ txresp \
+ -hdr "Set-Cookie: foo=bar" \
+ -hdr "Expires: Thu, 19 Nov 1981 08:52:00 GMT" \
+ -body "1111\n"
+ rxreq
+ txresp \
+ -hdr "Set-Cookie: foo=bar" \
+ -hdr "Expires: Thu, 19 Nov 1981 08:52:00 GMT" \
+ -body "22222n"
+ rxreq
+ txresp \
+ -hdr "Set-Cookie: foo=bar" \
+ -hdr "Expires: Thu, 19 Nov 1981 08:52:00 GMT" \
+ -body "33333n"
+} -start
+
+varnish v1 -vcl+backend { } -start
+
+client c1 {
+ txreq
+ rxresp
+ txreq
+ rxresp
+ txreq
+ rxresp
+} -run
+
+varnish v1 -expect cache_hitpass == 2