
News

Posted about 13 years ago
Posted on 2011-03-16

I was made aware of a synthetic benchmark that concerned Varnish today, and it looked rather suspicious. The services tested were Varnish, nginx, Apache and G-Wan, and G-Wan came out an order of magnitude faster than Varnish. This made me question the result. The first thing I noticed was ab, a tool I have long since given up trying to make behave properly. As there was no detailed data, I decided to give it a spin myself. You will not find graphs. You will not find "this is best!" quotes. I'm not even backing up my statements with httperf output.

Disclaimer

This is not a comparison of G-Wan versus Varnish. It is not complete. It is not even a vague attempt at making either G-Wan or Varnish perform better or worse. It is not realistic, not complete, and in no way a reflection on the overall functionality, usability or performance of G-Wan. Why not? Because I would be stupid to publicize such things without directly consulting the developers of G-Wan so that the comparison would be fair. I am a Varnish developer. This is a text about stress testing, not the result of stress testing. Nothing more.

The basic idea

So G-Wan was supposedly much faster than Varnish. Its feature set is also very narrow, as it goes about things differently. The test showed that Varnish, Apache and nginx were almost comparable in performance, whereas G-Wan was ridiculously much faster. The test was also conducted on a local machine (so no networking) and using ab. As I know it is hard to get nginx, Apache and Varnish to perform within the same level, this suggested to me that G-Wan did something differently that affected the test. I installed G-Wan and Varnish on a virtual machine and started playing with httperf.

What to test

The easiest number to demonstrate in a test is the maximum request rate. It tells you what the server can do under maximum load. However, it is also the hardest test to do precisely and fairly across daemons of vastly different nature.
Another thing I have rarely written about is the response time of Varnish for average requests. This is often much more interesting to the end user, as your server isn't going to be running at full capacity anyway. Fairness and concurrency are also highly relevant: a user doing a large download shouldn't adversely affect other users. I wasn't going to bother with all that.

First test

The first test I did was "max req/s"-like. It quickly showed that G-Wan was very fast, and in fact faster than Varnish. At first glance. The actual request rate was higher and the CPU usage lower. However, Varnish is massively multi-threaded, which offsets the CPU measurements greatly, and I wasn't about to trust them.

Looking closer, I realized that the real bottleneck was in fact httperf. With Varnish, it was able to keep more connections open and busy at the same time, and thus hit the upper limit of concurrency. This in turn gave subtle and easily ignored errors on the client, which Varnish can do little about. It seemed G-Wan was dealing with fewer sessions at the same time, but faster, which gave httperf an easier time. This does not benefit G-Wan in the real world (nor does it necessarily detract from its performance), but it does create an unbalanced synthetic test. I experimented with this quite a bit, and quickly concluded that the level of concurrency was much higher with Varnish. But it was difficult to measure. Really difficult. Because I did not want to test httperf.

The hardware I used was my home computer, which is ridiculously overpowered. The VM (KVM) was running with two CPU cores and I executed the clients from the host OS instead of booting up physical test servers. (... That much-quoted 275k req/s? Spotify didn't skip a beat while it was running on the same machine. ;))

Conclusion

The more I tested this, the more I was able to produce any result I wanted by tweaking the level of concurrency, the degree of load, the amount of bandwidth required and so forth. The response time of G-Wan seemed to deteriorate with load, but that might as well be the test environment. As the load went up, it took a long time to get a response. This is just not the case with Varnish at all. I ended up doing a little hoodwinking at the end to see how far this went, and the results varied extremely with tiny variations of the test parameters. The concurrency is a major factor, and the speed of Varnish at each individual connection played a huge part. At large amounts of parallel requests, Varnish would be sufficiently fast with all the connections that httperf never ran into problems, while G-Wan would be more uneven and thus trigger failures (and look slower).

My only conclusion is that it will take me several days to properly map out the performance patterns of Varnish compared to G-Wan. They treat concurrent connections vastly differently and perform very differently depending on the load pattern you throw at them. Relating this to real traffic is very hard. But this confirms my suspicion about the bogusness of the blog post that led me to perform these tests. It's not that I mind Varnish losing performance tests if we are actually slower, but it's very hard to stomach when the nature of the test is so dubious. The art of measuring realistic performance with synthetic testing is not one that can be mastered in an afternoon.

Lessons learned

(I think conclusions are supposed to be last, but never mind.)

First: Be skeptical of unbalanced results. And of even results.

Second: Measure more than one factor. I've mainly focused on request rate in my posts because I do not compare Varnish to anything but itself. Without a comparison it doesn't make that much sense to provide reply latency (though I suppose I should start supplying a measure of concurrency, since that's one of the big strong points of Varnish).

Third: Conclude carefully. This is an extension of the first lesson.

A funny detail: while I read the license for the non-free G-Wan, which I always do for proprietary software, I was happy to see that it didn't have a benchmark clause (Oracle, anyone?). But it does forbid removing or modifying the Server: header. It also forces me to give the G-Wan guys permission to use my use of G-Wan in their marketing... Hmm, maybe I should... err, never mind.
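The interplay of concurrency, request rate and response time that made these runs so sensitive has a simple queueing-theory backbone: by Little's law, the average number of connections a load generator like httperf must keep in flight equals the request rate times the mean response time. A minimal sketch of that relationship (the numbers are purely illustrative, not measurements from these tests):

```python
def connections_in_flight(req_per_sec, mean_response_s):
    """Little's law: L = lambda * W."""
    return req_per_sec * mean_response_s

# At the same request rate, a server answering in 1 ms keeps far fewer
# client connections busy than one answering in 50 ms -- which is
# exactly what can push the load generator toward its own concurrency
# limit and silently skew a synthetic benchmark.
fast = connections_in_flight(10000, 0.001)   # roughly 10 concurrent connections
slow = connections_in_flight(10000, 0.050)   # roughly 500 concurrent connections
```

This is one way to sanity-check a test setup: if rate times latency approaches the client tool's connection limit, you are measuring the client, not the server.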
Posted about 13 years ago
Posted on 2011-03-16 I will be in Paris next week to participate in a seminar on Varnish at Capgemini's premises. If you are in the area and interested in Varnish, take a look at https://www.varnish-software.com/paris. The nature of the event is informational for technical minds. (This must be my shortest blog post by far.)
Posted about 13 years ago
We're busy debugging trunk these days. Varnish does a fairly good job of catching a stack trace when the child process crashes. However, as the stack trace is sent to syslog, syslog mangles it and loses a bit of information. Tollef grew tired of getting people to fish traces out of syslog and implemented the CLI command panic.show.
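The command is issued over the management interface. A minimal sketch of invoking it (the -T address and port here are assumptions for illustration, not taken from the post):

```shell
# Ask the management CLI for the last panic message of the child process.
# Adjust -T to wherever your management interface actually listens.
varnishadm -T localhost:6082 panic.show
```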
Posted about 13 years ago
It is a useful trick to lazily load comments or similar elements at the bottom of a page. Some elements may be loaded only when they are scrolled into view. This helps in several ways:

- Not all users are interested in the information, and many do not read the article long enough to see it. By lazily loading such elements you can speed up the initial page load time.
- You save bandwidth.
- If you use AJAX for the dynamic elements of the page, you can more easily cache your pages in a static page cache (Varnish), even if the pages contain personalized bits.

For example, Disqus does this (see the comments in the jQuery API documentation). You can achieve it with the in-view plug-in for jQuery. Below is an example for Plone, triggering loading of productappreciation_view when our placeholder div tag becomes visible.

    ...
    <head>
        <script type="text/javascript"
                tal:attributes="src string:${portal_url}/++resource++your.app/in-view.js"></script>
    </head>
    ...
    <div id="comment-placeholder">

        <!-- Display a spinning AJAX indicator gif until our AJAX call completes -->
        <p>
            <!-- Image is in Products.CMFPlone/skins/plone_images -->
            <img tal:attributes="src string:${context/@@plone_portal_state/portal_url}/spinner.gif" />
            Loading comments
        </p>

        <!-- Hidden link to a view URL which will render the view containing the snippet for comments -->
        <a rel="nofollow" style="display:none"
           tal:attributes="href string:${context/absolute_url}/productappreciation_view" />

        <script>
            jq(document).ready(function() {
                // http://remysharp.com/2009/01/26/element-in-view-event-plugin/
                jq("#comment-placeholder").bind("inview", function() {
                    // This function is executed when the placeholder becomes visible

                    // Extract the target URL from the HTML page
                    var commentURL = jq("#comment-placeholder a").attr("href");

                    if (commentURL) {
                        // Trigger the AJAX call
                        jq("#comment-placeholder").load(commentURL);
                    }
                });
            });
        </script>
    </div>
Posted about 13 years ago
Varnish is a very fast front-end cache server. You might want to use it in front of Apache to speed up loading of your static pages and static media, for example for your WordPress blog. You can also use Varnish backends to multiplex requests between Plone and Apache-based PHP software running on the same server, using different backend directives. However, if you wish to use Apache virtual hosts with Varnish, there is a trick to it. We use the following setup:

- Varnish listens on port 80 (HTTP)
- Apache listens on port 81
- Varnish uses Apache as a backend

The related varnish.vcl:

    backend backend_apache {
        .host = "127.0.0.1";
        .port = "81";
    }

    sub vcl_recv {
        ...
        elsif (req.http.host ~ "^blog.mfabrik.com(:[0-9]+)?$") {
            set req.backend = backend_apache;
        }
        ...
    }

Note that the backend IP is 127.0.0.1 (localhost). By default, with Debian or Ubuntu Linux, the Apache configuration does not do virtual hosting for this. So /etc/apache2/sites-enabled/blog.mfabrik.com might look like:

    <VirtualHost *:81>
        ServerName blog.mfabrik.com
        ...
        LogFormat       combined
        TransferLog     /var/log/apache2/blog.mfabrik.com.log
        ...
        ExpiresActive On
        ExpiresByType image/gif A3600
        ExpiresByType image/png A3600
        ExpiresByType image/vnd.microsoft.icon A3600
        ExpiresByType image/jpeg A3600
        ExpiresByType text/css A3600
        ExpiresByType text/javascript A3600
        ExpiresByType application/x-javascript A3600
    </VirtualHost>

And now the trick: you need to add the following to /etc/apache2/httpd.conf:

    NameVirtualHost *:81

Unless you do all this, Apache will just pick the first virtual host file in /etc/apache2/sites-enabled and use it for all requests. You also need to edit ports.conf and change Apache to listen on port 81:

    Listen 81
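One way to verify that requests are really routed by Host header in a setup like the one above is to hit the same address with different Host values using curl, without touching DNS. This assumes both daemons are running locally as configured:

```shell
# Request through Varnish (port 80) with the virtual host name;
# with the VCL above this should be routed to Apache on port 81:
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: blog.mfabrik.com" http://127.0.0.1/

# A different Host value follows whatever other backend logic
# vcl_recv contains, so comparing the two responses shows whether
# the NameVirtualHost fix took effect:
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: www.example.com" http://127.0.0.1/
```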
Posted about 13 years ago
Two features are very often asked for in Varnish. One is SSL, which we'll come back to later; the other is ICP, the Internet Cache Protocol, as defined in RFC 2186.
Posted about 13 years ago
With Varnish 2.1.4, try this: if you want to purge this regexp: ^(www\.)?(.*)xxx you must do:

    varnishadm -T localhost:6082 purge "req.http.host ~ \"^(www\\\.)?(.*)xxx\""

Be careful to put the right backslash sequences, in order to avoid: Syntax Error: Invalid backslash sequence
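The reason for the triple backslash is that the pattern passes through two unescaping layers before Varnish compiles the regex: the shell's double-quote rules and the CLI's own string syntax. A rough sketch of the idea in Python (the two unquote functions are simplified models for illustration, not the actual shell or CLI parsers):

```python
import re

# The regex Varnish should ultimately evaluate:
final_regex = r'^(www\.)?(.*)xxx'

# What the varnishadm CLI parser must receive: its string syntax
# adds one escaping layer, so \. has to be written \\. there:
cli_argument = r'"^(www\\.)?(.*)xxx"'

# What you type on the shell command line, inside double quotes;
# the shell adds another layer (" becomes \", \\ becomes \\\):
shell_text = r'\"^(www\\\.)?(.*)xxx\"'

def shell_unquote(s):
    """Simplified model of bash double-quote rules: backslash only
    escapes ", \, $ and backtick; otherwise it is kept literally."""
    out, i = [], 0
    while i < len(s):
        if s[i] == '\\' and i + 1 < len(s) and s[i + 1] in '"\\$`':
            out.append(s[i + 1]); i += 2
        else:
            out.append(s[i]); i += 1
    return ''.join(out)

def cli_unquote(s):
    """Simplified model of the CLI layer: strip the surrounding
    quotes and collapse \\ into a single backslash."""
    assert s[0] == '"' and s[-1] == '"'
    return s[1:-1].replace('\\\\', '\\')

# Peeling off the layers one at a time recovers the intended regex:
assert shell_unquote(shell_text) == cli_argument
assert cli_unquote(cli_argument) == final_regex
assert re.match(final_regex, 'www.examplexxx')
```

If a backslash sequence survives to the CLI layer that the CLI does not recognize, you get the "Invalid backslash sequence" error quoted above.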
Posted about 13 years ago
Varnish is a state-of-the-art HTTP accelerator, or front-side cache, if you like. varnish-2.1.5 was released the other day. I have updated my packages in Fedora and EPEL6. Builds for RHEL4 and RHEL5 may be found at the usual http://users.linpro.no/ingvar/varnish/. The RHEL5 packages require some dependencies pulled from EPEL5. Varnish Software produces their own packages, based on the specfile I maintain for Fedora. The only important change is that my spins link against a system-installed jemalloc instead of the one bundled with the source. This gives us the opportunity to update jemalloc to the latest version without recompiling Varnish. I also build packages for RHEL4. While probably unsupported by Varnish Software, Varnish compiles there and passes the test suite after some small fixes to the build. jemalloc packages are provided as well.
Posted about 13 years ago
This morning Kristian tried to throw Gource at the Varnish Git repository. The results were spectacular.<iframe allowfullscreen="allowfullscreen" frameborder="0" height="510" src="http://www.youtube.com/embed/5mZA-KbP5WQ" title="YouTube video player" width="640"></iframe>