
Varnish HTTP accelerator nears 2.0 release

I’ve long been an advocate of origin HTTP caching and acceleration for large websites, something I alluded to in the post Performance Tuning and Optimization of High-Traffic Websites, which I wrote almost eleven months ago. In the early, heady days of the World Wide Web, many vendors like CacheFlow (later Blue Coat) and Nortel made HTTP caching appliances, but there are almost no such vendors in the marketplace now. I still believe there is a sound technical reason for an origin website architecture with HTTP accelerators deployed in front of it, and I’m happy to see that one recent entrant into this space, the Varnish HTTP Accelerator, is nearing a stable 2.0 release. In this post, I’ll elaborate on why I think HTTP caching solutions went the way of the dodo and why they should come back, using the feature set and stated goals of the Varnish project as evidence.

The origin of HTTP caching appliances

Way back in Web 0.7-land (say, between 1996 and 2000), most websites — even “major” websites like the CBC’s — ran on a single webserver. Most site content was static, but much of the server’s CPU time was consumed repeatedly processing requests for infrequently-changing site assets like images. As such, many sites implemented transparent caching of these objects using HTTP accelerators, thereby offloading the effort onto purpose-built appliances like the CacheFlow, which ran a stripped-down microkernel operating system called CacheOS. Some sites also segregated image serving onto entirely separate subdomains, e.g. images.foobar.com, a practice that continues today on some high-traffic sites like Wikipedia.

Decline and fall

Around 2001-2002, most vendors of caching appliances seemed to either exit this business or disappear entirely. Why? I believe a number of factors were at play:

  • The rise of load-balanced web server environments reduced the number of customers requiring HTTP caching. (According to CBC.ca’s 10th anniversary site, we started using Arrowpoint CSS load balancers in October 2000.)
  • Content-delivery networks such as Akamai (founded in 1998) began to impact the requirement to run a large origin infrastructure, by offering some caching capabilities as part of their core design.
  • The dot-com implosion of 2000-2001 meant that many manufacturers began to exit this business for more lucrative markets, partly because many of their customers had failed as businesses.

The end result is that few companies remain specifically in this business. Several firms continue to make “application accelerators” that optimize bandwidth consumption, but dedicated origin HTTP caching products have largely disappeared from the commercial market.

Advocating a renaissance

Does this mean HTTP acceleration by reverse proxy is dead? Hardly. I think businesses rely too heavily on CDNs to isolate origin faults from the end-user. Let’s remember what a CDN is good for: accelerating the last mile to the user. The fact that a CDN has some caching and fault-masking properties is not an excuse to build an unreliable origin, just as the fact that corn starch has some fire-fighting properties (I keep a box near my stove) is no reason to start a grease fire.

The central objective when engineering website infrastructure should always be to implement technologies that protect the origin. Not only should the origin be able to withstand a high level of traffic, but it should also remain as available as possible even if back-end applications fail. This implies some kind of segmentation (we call it “sandboxing”) so that critical portions of the origin are isolated from not-so-critical portions. In many cases, the critical portions will not even be dynamic components but simply static pages.

In front of such an origin should reside an HTTP accelerator as a reverse proxy. This accelerator can either be placed between the firewall and the virtual IP of the load balancer, or multiple accelerators can be placed behind the load balancer in front of the actual “real” servers.

“Sandboxing” can cause some integration problems, however. Cross-sandbox server-side includes will not work, which limits re-usability of site components. This is where a feature-rich accelerator can help. An accelerator supporting the Edge Side Includes (ESI) standard can do the page integration at the caching layer, just prior to delivery to the end-user.
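
To make that concrete, here is a minimal sketch of edge-side reassembly, written in Python purely for illustration; it is not how Varnish itself does it (Varnish is written in C and driven by its VCL configuration). The idea is that the cached page is a template containing ESI include tags, and the accelerator splices in independently cached fragments just before delivery to the end-user. The fragment paths and contents below are invented.

    # Conceptual sketch of ESI page reassembly at the caching layer.
    # Not Varnish's implementation; names and fragments are invented.
    import re

    ESI_INCLUDE = re.compile(r'<esi:include\s+src="([^"]+)"\s*/>')

    # Hypothetical fragment cache; in a real accelerator these would be
    # fetched from the origin sandboxes and cached with independent TTLs.
    fragment_cache = {
        "/fragments/header": "<div>site-wide header</div>",
        "/fragments/most-read": "<ul><li>Top story</li></ul>",
    }

    def assemble(template_html: str) -> str:
        """Replace every ESI include tag with its cached fragment."""
        return ESI_INCLUDE.sub(
            lambda m: fragment_cache.get(m.group(1), ""), template_html)

    page = ('<html><body>'
            '<esi:include src="/fragments/header" />'
            '<p>Article body (cached separately, with a longer TTL)</p>'
            '<esi:include src="/fragments/most-read" />'
            '</body></html>')
    print(assemble(page))

The point of doing this at the edge rather than in the application is that each fragment can come from a different sandbox and fail, expire, or refresh independently, without forcing the whole page to be regenerated by the origin.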

And that brings me to why I am so excited about the Varnish HTTP accelerator’s 2.0 release. Version 2.0 will support a sensible subset of ESI, thereby enabling edge-side page reassembly. Varnish also has many other valuable features that increase the robustness of the origin and isolate faults from the end user; here are some of my favourites:

  • “Dirty” caching of objects (returning objects from cache if the origin is unavailable; see the sketch after this list)
  • Manual cache management capabilities (i.e. administrator-driven cache eviction)
  • Configurable backend health polling with easy-to-read backend status scoreboard
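
To illustrate the first of these, here is a rough sketch of the serve-stale-on-error idea, again in Python with invented names, and emphatically not Varnish’s actual code: when the origin cannot be reached, the accelerator hands back the expired copy it still holds instead of passing an error on to the end user.

    # Rough sketch of "dirty" caching: serve a stale copy when the origin
    # is down. Illustrative only; names and TTLs are invented.
    import time

    CACHE = {}           # url -> (body, expires_at)
    GRACE_SECONDS = 600  # how long past expiry we will serve stale content

    def fetch_from_origin(url: str) -> str:
        """Placeholder for a real origin request; always fails here to
        simulate an outage."""
        raise ConnectionError("origin is down")

    def get(url: str) -> str:
        cached = CACHE.get(url)
        now = time.time()
        if cached and cached[1] > now:
            return cached[0]                    # fresh hit
        try:
            body = fetch_from_origin(url)       # miss or expired: refresh
            CACHE[url] = (body, now + 60)
            return body
        except ConnectionError:
            # Origin unavailable: fall back to the stale ("dirty") copy
            # if we still have one inside the grace window.
            if cached and cached[1] + GRACE_SECONDS > now:
                return cached[0]
            raise

    # Seed an already-expired entry, then watch the stale copy mask the outage.
    CACHE["/front-page"] = ("<html>cached front page</html>", time.time() - 30)
    print(get("/front-page"))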

We’ve seen many of these features in sophisticated load balancers like the Cisco CSM, but it’s fantastic that they are being implemented in an open-source product.

In summary, I hope I’ve made a sufficient case for why origin HTTP acceleration and caching is still required, and I’m looking forward to the final release of Varnish 2.0, which should happen sometime this fall.
