This is a how-to article on setting up Varnish Cache on CentOS 6.3 64-bit as a single-server web application accelerator (a caching HTTP reverse proxy). Varnish Cache can increase the speed of your web application by a factor of 300 to 1000 when used properly in the right architecture.
First, install the Varnish build dependencies:
yum install -y gcc
yum install -y make
yum install -y automake
yum install -y autoconf
yum install -y libtool
yum install -y ncurses-devel
yum install -y libxslt
yum install -y groff
yum install -y pcre-devel
yum install -y pkgconfig
yum install -y libedit
yum install -y libedit-devel
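If you prefer, the same dependencies can be installed with a single command (same package list as above):
yum install -y gcc make automake autoconf libtool ncurses-devel libxslt groff pcre-devel pkgconfig libedit libedit-devel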
Get the Varnish source code (at the time of this blog, Varnish 3.0.5 is the stable release):
wget http://repo.varnish-cache.org/source/varnish-3.0.5.tar.gz
Extract and compile Varnish:
gunzip -c varnish-3.0.5.tar.gz | tar -xvf -
cd varnish-3.0.5
./autogen.sh
./configure
make
make check
make install
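To confirm the build installed correctly, you can ask varnishd for its version (the -V flag prints the version and exits):
/usr/local/sbin/varnishd -V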
Your Varnish configuration file will be located at /usr/local/etc/varnish/default.vcl
Below is an example default.vcl that configures Varnish to cache only what you explicitly allow rather than everything else. The configuration is commented to explain the settings:
default.vcl
## Your backend http application server info as well as health checking
backend website {
.host = "app.server.hostname"; # backend http application server or vip
.port = "80"; # port your backend application listens on
.probe = { # health check setting
.request =
"GET / HTTP/1.0" # what to health check
"Host: app.server.hostname" # what host to health check
"Connection: close"; # make sure to close the connection of the check
.interval = 5s; # how long to wait between polls
.window = 5; # how many of the latest polls to consider when judging health
.threshold = 2; # how many of the .window polls must be good to be healthy
.timeout = 1s; # how fast the probe must finish
.expected_response = 200; # expected http response of the health check
}
}
## Example: cache only URLs beginning with /xyz and not everything else
sub vcl_recv {
remove req.http.X-Forwarded-For; # rebuild X-Forwarded-For so the real client ip
set req.http.X-Forwarded-For = client.ip; # is passed through to the backend
# cache only URLs beginning with /xyz/ that are not static assets (jpg, css, js, etc.); pass everything else
if (req.url !~ "^/xyz/" || req.url ~ "\.(jpg|jpeg|css|js|png)$") {
return(pass);
}
unset req.http.Cookie; # strip cookies so these requests can be cached
set req.grace = 1h; # accept objects that are up to 1 hour old in case of backend failure
return(lookup);
}
sub vcl_fetch {
set beresp.do_esi = true; # enable esi includes
set beresp.ttl = 2m; # default time to live when the backend headers do not set one explicitly
set beresp.grace = 1h; # deliver objects that are up to 1 hour old in case of backend failure
return(deliver);
}
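Before starting Varnish, you can have varnishd compile the VCL and exit, which is a quick way to catch syntax errors (-C prints the generated C code if the VCL is valid):
/usr/local/sbin/varnishd -C -f /usr/local/etc/varnish/default.vcl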
Now you can start Varnish with the command below. In this case it uses the default.vcl above, allocates 3 GB of memory for the cache, binds the administrative interface to localhost on port 2000, and listens for requests on port 80:
/usr/local/sbin/varnishd -f /usr/local/etc/varnish/default.vcl -s malloc,3G -T 127.0.0.1:2000 -a 0.0.0.0:80
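To verify Varnish is answering on port 80, a quick header check with curl works; a response containing a Via: 1.1 varnish and an X-Varnish header means the request was served through Varnish:
curl -I http://127.0.0.1/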
Some other commands to help you tune and debug Varnish are:
# show varnish statistics
varnishstat
# show varnish logs
varnishlog
# clear all varnish cache
varnishadm "ban req.url ~ /"
To test it out, take the URL of your application site and substitute the hostname with your new Varnish server; on the second load it should be a lot faster, serving data from cache. Use Firebug or the varnishstat command to confirm your cache hits vs. misses. Remember, in this configuration only the things you explicitly state are cached, and those are where you will see the performance increase.
Example:
http://app.server.hostname/xyz/my/app/is/awesome.php
vs
http://varnish.server.hostname/xyz/my/app/is/awesome.php (should be faster on the second load)
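You can also confirm a cache hit from the command line by requesting the same URL twice; on the second request the Age header should be greater than zero and the X-Varnish header should contain two transaction IDs:
curl -I http://varnish.server.hostname/xyz/my/app/is/awesome.php
curl -I http://varnish.server.hostname/xyz/my/app/is/awesome.php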
Hope you find this helpful.