
News

Posted 4 months ago
Still on varnish-3.0? Missing the ability to filter X-Forwarded-For through ACLs? Use vmod ipcast by Lasse Karstensen.

I cleaned up and rolled an rpm package of vmod-ipcast-1.2 for varnish-3.0.6 on el6. It’s available here: http://users.linpro.no/ingvar/varnish/vmod-ipcast/.

Note that the usage has changed a bit since the last version. You are no longer permitted to change client.ip (and that’s probably a good thing). Now it’s called like this, returning an IP address object:

ipcast.ip("string","fallback_ip");
If the string does not resemble an IP address, the fallback IP is returned. Note that if the fallback IP is an invalid address, varnishd will crash!

So, if you want to filter X-Forwarded-For through an ACL, you would do something like this:

import ipcast;

sub vcl_recv {
    # Add some code to sanitize X-Forwarded-For above here, so it resembles
    # one single IP address (a minimal sketch of this follows below).
    if ( ipcast.ip(req.http.X-Forwarded-For, "198.51.100.255") ~ someacl ) {
        # Do something different
    }
}
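One minimal way to do that sanitizing step, assuming the usual comma-separated X-Forwarded-For format, is to keep just the first address with regsub at the spot the comment marks. The regex is only an illustration, and which entry you should trust depends on your own proxy chain:

    # Illustration only: reduce "client, proxy1, proxy2" to "client"
    # before the header reaches ipcast.ip().
    set req.http.X-Forwarded-For = regsub(req.http.X-Forwarded-For, "[, ].*$", "");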
And that’s all for today.

Varnish Cache is a powerful and feature-rich front-side web cache. It is also very fast, as in on steroids, powered by The Dark Side of the Force.

Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps to massive IPv4/IPv6 multi-data-center media hosting, container solutions, in-house, cloud, or data center, contact us at redpill-linpro.com.
Posted 5 months ago
There are quite a few tunables in the Linux kernel. Reading the documentation, it is clear that quite a few of them could have an impact on how Varnish performs. One that caught my attention is the dirty_background_writeback tunable. It allows you to set a limit for how much of the page cache may be dirty, i.e. contain data not yet written to disk, before the kernel starts writing it out.
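For reference, the knobs in this family are normally exposed as the vm.dirty_background_* sysctls; a minimal sketch of adjusting them, where the 5% and byte figures are just example values and not recommendations from this post, looks like this:

    # Start background writeback once roughly 5% of memory is dirty (example value).
    sysctl -w vm.dirty_background_ratio=5
    # Or use an absolute byte limit instead of a percentage (example value).
    sysctl -w vm.dirty_background_bytes=268435456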
Posted 5 months ago
There is a reason why people install Varnish Cache on their servers. It’s all about the performance. Delivering content from Varnish is often a thousand times faster than delivering it from your web server. As your website grows, and it usually grows significantly if your cache hit rates are high, you’ll rely more and more on Varnish to deliver the brunt of the requests that are continuously flowing into your infrastructure.
Posted 5 months ago
Varnish was initially made for web site acceleration. We started out using a memory-mapped file to store objects in. It had some problems associated with it and was replaced with a storage engine that relied on malloc to store content. While the malloc engine usually performed better than the memory-mapped file, performance suffered as the content grew past the limitations imposed by physical memory.
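For context, both kinds of storage mentioned here are still selected with varnishd’s -s argument; a minimal sketch, where the sizes, paths and backend address are just placeholders, would be:

    # malloc storage: objects kept in memory, bounded by the given size
    varnishd -a :80 -b localhost:8080 -s malloc,4G
    # file storage: a memory-mapped file backed by disk
    varnishd -a :80 -b localhost:8080 -s file,/var/lib/varnish/storage.bin,20G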
Posted 6 months ago
Following up on the test of dmcache, we decided to scale it up a bit to get better proportions between RAM, SSD and HDD. So we took the dataset and made it 10 times bigger. It is now somewhere around 30GB, with an average object size of 800Kbyte. In addition we made the backing store for Varnish ten times bigger as well, increasing it from 2GB to 20GB. This way we retain the cache hit rate of around 84%, but we change the internal cache hit rates significantly.
The results were pretty amazing and show what a powerful addition dmcache can be for IO-intensive workloads.
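If you want to set up something similar yourself, a dm-cache device can be assembled with dmsetup roughly as follows; the device names and sector count are placeholders, and the table layout should be double-checked against the kernel’s device-mapper cache documentation before use:

    # Placeholder devices: SSD metadata + SSD cache in front of an HDD origin.
    # 41943040 512-byte sectors = 20GB origin, 512-sector (256KB) cache blocks.
    dmsetup create cached-store --table "0 41943040 cache /dev/ssd-meta /dev/ssd-cache /dev/hdd-origin 512 1 writeback default 0"
    # The resulting /dev/mapper/cached-store can then hold Varnish's file storage.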
Posted 6 months ago
As you might or might not know, we’ve been working on this storage backend for a year now, built for handling large data volumes like the ones we see in online video and CDNs. The new storage backend is written with performance in mind, leveraging some novel ideas we have to make things go a lot faster. If you want to know more, you can get in touch with me or come to one of our summits, where I’ll be presenting.
Since the new storage engine relies much more on IO capacity, we’re suddenly looking at things such as filesystems and IO performance. We expect most deployments of this software to take place on solid state drives. However, if you are going to cache up a petabyte-sized video library, having a secondary cache level with some SATA-based disk cabinets might make a lot of sense.
The problem with SATA-based storage is that its IO capacity is abysmal compared to solid state drives. So we want as much memory as possible to achieve a reasonably high hit rate in the Linux page cache. However, if you attach 20 terabytes of SATA storage, 384GB of RAM is still less than 2% of the data set, which is a bit less than I would like to see in order to cache this. So, how do we cache 20TB in a reasonably cost-effective way?