
News

Posted over 1 year ago
Still on varnish-3.0? Missing the ability to filter X-Forwarded-For through ACLs? Use vmod ipcast by Lasse Karstensen. I cleaned up and rolled an rpm package of vmod-ipcast-1.2 for varnish-3.0.6 on el6. It is available here: http://users.linpro.no/ingvar/varnish/vmod-ipcast/.

Note that the usage has changed a bit since the last version. You are no longer permitted to change client.ip (and that is probably a good thing). Now it is called like this, returning an IP address object:

    ipcast.ip("string", "fallback_ip");

If the string does not resemble an IP address, the fallback IP is returned. Note that if the fallback IP is an invalid address, varnishd will crash! So, if you want to filter X-Forwarded-For through an ACL, you would do something like this:

    import ipcast;

    sub vcl_recv {
        # Add some code above here to sanitize X-Forwarded-For,
        # so that it resembles one single IP address.
        if (ipcast.ip(req.http.X-Forwarded-For, "198.51.100.255") ~ someacl) {
            # Do something different
        }
    }

And that's all for today.

Varnish Cache is a powerful and feature-rich front-side web cache. It is also very fast, as in on steroids, powered by The Dark Side of the Force. Redpill Linpro is the market leader for professional Open Source and Free Software solutions in the Nordics, though we have customers from all over. For professional managed services, all the way from small web apps to massive IPv4/IPv6 multi-data-center media hosting, container solutions, in-house, cloud, or data center, contact us at redpill-linpro.com.
Posted over 1 year ago
There are quite a few tunables in the Linux kernel. Reading the documentation, it is clear that several of them could have an impact on how Varnish performs. One that caught my attention is the vm.dirty_background_ratio tunable. It lets you set a limit for how much of the page cache may be dirty, i.e. contain data not yet written to disk, before the kernel starts writing it out in the background.
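As a concrete illustration, the dirty-page writeback thresholds can be inspected and tuned through the standard Linux sysctl interface; the paths below are the stock kernel sysctls, and the value 5 is only an example, not a recommendation from the post:

```shell
# Inspect the current dirty-page thresholds (percent of reclaimable memory):
cat /proc/sys/vm/dirty_background_ratio  # background writeback starts here
cat /proc/sys/vm/dirty_ratio             # writers are blocked at this point

# To make the kernel start writing dirty pages out earlier, you could
# lower the background threshold (needs root):
#   sysctl -w vm.dirty_background_ratio=5
# or persist it across reboots in /etc/sysctl.conf:
#   vm.dirty_background_ratio = 5
```

Whether a lower or higher threshold helps a given Varnish workload is exactly the kind of thing that has to be benchmarked rather than assumed.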
Posted over 1 year ago
There is a reason why people install Varnish Cache on their servers. It's all about the performance. Delivering content from Varnish is often a thousand times faster than delivering it from your web server. As your website grows, and it usually grows significantly if your cache hit rates are high, you'll rely more and more on Varnish to deliver the brunt of the requests that are continuously flowing into your infrastructure.
Posted over 1 year ago
Varnish was initially made for web site acceleration. We started out using a memory-mapped file to store objects in. It had some problems associated with it and was replaced with a storage engine that relied on malloc to store content. While the latter usually performed better than the memory-mapped file, performance suffered as the content grew past the limitations imposed by physical memory.
Posted over 1 year ago
Following up on the test of dm-cache, we decided to scale it up a bit to get better proportions between RAM, SSD and HDD. So we took the dataset and made it 10 times bigger; it is now somewhere around 30GB, with an average object size of 800KB. In addition, we made the backing store for Varnish ten times bigger as well, increasing it from 2GB to 20GB. This way we retain the overall cache hit rate of around 84%, but we change the internal cache hit rates significantly. The results were pretty amazing and show what a powerful addition dm-cache can be for IO-intensive workloads.
Posted over 1 year ago
As you might or might not know, we've been working on this storage backend for a year now, built for handling large data volumes like the ones we see in online video and CDNs. The new storage backend is written with performance in mind, leveraging some novel ideas we have to make things go a lot faster. If you want to know more, you can get in touch with me or come to one of our summits, where I'll be presenting. Since the new storage engine relies much more on IO capacity, we're suddenly looking at things such as filesystems and IO performance. We expect most deployments of this software to take place on solid-state drives. However, if you are going to cache a petabyte-sized video library, having a secondary cache level with some SATA-based disk cabinets might make a lot of sense. The problem with SATA-based storage is that its IO capacity is abysmal compared to solid-state drives. So we want as much memory as possible to achieve a reasonably high hit rate in the Linux page cache. However, if you attach 20 terabytes of SATA storage, 384GB of RAM is still a bit less than I would like to see in order to cache this. So, how do we cache 20TB in a reasonably cost-effective way?
Posted almost 2 years ago
I recently went looking for something similar to pep8/pylint for writing Varnish VMODs, and ended up with OCLint. I can't really speak to how good it is, but it catches the basic stuff I was interested in. The documentation is mostly for cmake, so I'll give a small tutorial for automake:

    # download and install oclint to somewhere in $PATH
    apt-get install bear
    cd libvmod-xxx
    ./autogen.sh; ./configure --prefix=/usr
    bear make     # "build ear" == bear; writes compile_commands.json
    cd src
    oclint libvmod-xxx.c
    # profit

This will tell you about unused variables, useless parentheses, dead code and so on.
Posted almost 2 years ago
I’ve uploaded my new TCP VMOD for Varnish 4 to GitHub; you can find it here: http://github.com/lkarsten/libvmod-tcp.

This VMOD allows you to get the estimated client socket round-trip time, and then lets you change the TCP connection’s congestion control algorithm if you’re so inclined. Research[0] says that Hybla is better for long, high-latency links, so currently that is what it is used for. Here is a quick VCL example:

    if (tcp.get_estimated_rtt() > 300) {
        set req.http.x-tcp = tcp.congestion_algorithm("hybla");
    }

One thing to note is that VCL handling happens very early in the TCP connection’s lifetime; we’ve only just read and acked the HTTP request. The readings may be off, and I’m analyzing this currently. (As I understand it, the Linux kernel keeps per-IP statistics, so for subsequent requests this should get better and better.)

References:
[0]: Esterhuizen, A., and A. E. Krzesinski. “TCP Congestion Control Comparison.” (2012).
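For context, the kernel state this kind of VMOD works with can be poked at from the shell. This is a read-only sketch, not part of the VMOD itself: the sysctl paths are the stock Linux ones, ss comes from the iproute2 package, and hybla is only listed once the tcp_hybla module has been loaded:

```shell
# Congestion control algorithms the running kernel can switch a socket to;
# "hybla" appears here only after: modprobe tcp_hybla
cat /proc/sys/net/ipv4/tcp_available_congestion_control

# The system-wide default, for comparison:
cat /proc/sys/net/ipv4/tcp_congestion_control

# Smoothed per-connection RTT estimates (rtt:<srtt>/<rttvar>, in ms),
# the same kernel state an estimated-RTT reading is derived from:
ss -ti state established | grep -o 'rtt:[0-9.]*/[0-9.]*' || true
```

Per connection, the switch itself happens through the TCP_CONGESTION socket option, which takes the algorithm name as a string.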