News

Posted over 6 years ago
Nobody wants to go slow on the internet. (After all, it's supposed to be a highway.) This quick fix-it list will have you feeling the wind in your hair in no … The post "Why is my computer so slow? Your browser needs a tune-up." appeared first on The Firefox Frontier.
Posted over 6 years ago by Camelia Badau
Hello Mozillians, We are happy to let you know that on Friday, October 13th, we are organizing the Firefox 57 Beta 8 Testday. We'll be focusing our testing on the following new features: Activity Stream, Photon Structure, Photon Onboarding Tour Notifications & Tour Overlay 57. Check out the detailed instructions via this etherpad. No previous testing experience is required, so feel free to join us in the #qa IRC channel, where our moderators will offer you guidance and answer your questions. Join us and help us make Firefox better! See you on Friday!
Posted over 6 years ago by Alessio Placitelli
The Go Faster initiative is important as it enables us to ship code faster, using special add-ons, without being strictly tied to the Firefox train schedule. As Georg Fritzsche pointed out in his article, we have two options for instrumenting these add-ons: having probe definitions ride the trains (waiting a few weeks!) or implementing and …
Posted over 6 years ago by Will Kahn-Greene
Summary

This quarter I worked on creating a command line interface for signature generation and, in doing that, extracted it from the processor into a standalone-ish module. The end result of this work is that:

- anyone making changes to signature generation can test the changes out on their local machine using a Socorro local development environment
- I can trivially test incoming signature generation changes--this both saves me time and gives me a much higher confidence of correctness without having to merge the code and test it in our -stage environment [1]
- we can research and experiment with changes to the signature generation algorithm and how that affects existing crash signatures
- it's a step closer to being usable by other groups

This blog post talks briefly about that work and then about some of the things I've been able to do with it.

[1] I can't overstate how awesome this is.
Posted over 6 years ago
In order to figure out data: URL processing requirements I have been studying MIME types (also known as media types) lately. I thought I would share some examples that yield different results across user agents, mostly to demonstrate that even simple things are far from interoperable:

text/html;charset =gbk
text/html;charset='gbk'
text/html;charset="gbk"x
text/html(;charset=gbk
text/html;charset=gbk(
text/html;charset="gbk
text/html;charset=gbk"

These are the relatively simple issues to deal with, though it would have been nice if they had been sorted by now. The MIME type parsing issue also looks at parsing for the Content-Type header, which is even messier, with different requirements for its request and response variants.
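For a concrete feel of the ambiguity, here is a minimal sketch that feeds these same variants to Python's stdlib email parser. This is just one parser among many (and not any browser engine), so its answers are merely one data point, not a statement about what user agents do:

```python
# Feed the ambiguous Content-Type values to Python's stdlib parser and see
# which charset, if any, it extracts from each one.
from email.message import Message

variants = [
    'text/html;charset =gbk',
    "text/html;charset='gbk'",
    'text/html;charset="gbk"x',
    'text/html(;charset=gbk',
    'text/html;charset=gbk(',
    'text/html;charset="gbk',
    'text/html;charset=gbk"',
]

for value in variants:
    msg = Message()
    msg['Content-Type'] = value
    # get_content_charset() returns None when no charset can be extracted
    print(f"{value!r:32} -> {msg.get_content_charset()!r}")
```

Running it shows the stdlib parser making its own set of non-obvious choices for the quoted and parenthesized variants, which is exactly the kind of divergence the examples above are meant to illustrate.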
Posted over 6 years ago by [email protected] (Robert)
Microsoft is releasing "Edge for Android" and it uses Chromium. That is bad for the Web. It's bad because engine diversity is really essential for the open Web. Having some users, even a relatively small number, using the Edge engine on Android would have been a good step. Going with Chromium increases Web developer expectations that all browsers on Android are — or even should be — Chromium.

The less thoughtful sort of developer (i.e., pretty much everyone) will say "Microsoft takes this path, so why doesn't Mozilla too, so we can have the instant gratification of compatibility thanks to a single engine?" The slow accumulation of unfixable bugs due to de facto standardization will not register until the platform has thoroughly rotted; the only escape being alternative single-vendor platforms where developers are even more beholden to the vendor.

Sure, it would have been quite a lot of work to port Edge to Android, but Microsoft has the resources, and porting a browser engine isn't a research problem. If Microsoft would rather save resources than promote their own browser engine, perhaps they'll be switching to Chromium on Windows next. Of course that would be even worse for the Web, but it's not hard to believe Microsoft has stopped caring about that, to the extent they ever did.

(Of course Edge uses WebKit on iOS, and that's also bad, but it's Apple's ongoing decision to force browsers to use the least secure engine, so nothing new there.)
Posted over 6 years ago by [email protected] (ClassicHasClass)
This blog post is coming to you from a midway build of TenFourFox FPR4, now with more AltiVec string acceleration, less browser chrome fat, some layout performance updates and upgraded Brotli, OTS and WOFF2 support (current to what's in mozilla-central). Next up is getting some more kinks out of CSS Grid support, and hopefully a beta will be ready in a couple of weeks for you to play with.

Meanwhile, for those of you using the Gopher enabler add-on OverbiteFF on Firefox, its successor is on the way for the Firefox self-inflicted add-on apocalypse: OverbiteWX. OverbiteWX requires Firefox 56 or higher and implements an internal protocol handler that redirects gopher:// URLs typed in the Firefox omnibox, or clicked on, to the Floodgap Public Gopher Proxy. The reason I've decided to create a new add-on instead of uploading a "WebExtensions-compatible" version is that, frankly, right now it's impossible. Because there is still no TCP socket API in WebExtensions, there is presently no way to implement a Gopher handler except via a web proxy, and this would be unexpected behaviour for an OverbiteFF user expecting a direct connection (OverbiteFF implemented a true nsIChannel to make the protocol once again a first-class citizen in the browser). Since this means Gopher URLs you access are now being sent through an external service, albeit a benign one I run, I think you should at least opt in to that by affirmatively getting the new extension rather than being silently "upgraded" to a new version with (despite my best efforts) rather less functionality. The extension is designed to be forward compatible so that in the near future you can select from your choice of proxies and, eventually, once Someone(tm) writes the API, get true socket access directly to the Gopher server of your choice. It won't be as nice as OverbiteFF was, but given that WebExtensions' first and most important goal is to reduce what add-on authors can do to the browser, it may be as good as we get. A prototype is available from the Floodgap Gopher server, which, if you care about Gopher, you can already access (please note that this URL is temporary). Assuming no issues, a more fully-fledged version with a bit more window dressing should be available on AMO hopefully sometime next week. TenFourFox users, never fear; OverbiteFF remains compatible. I've also been approached about a Pale Moon version and I'm looking into it.

For those of you following my previous posts on the Raptor Talos II, the next-generation POWER9 workstation with a fully open-source stack from the firmware to the operating system and no x86 anywhere, you'll recall that orders are scheduled for fulfillment starting in Q4 2017. And we're in Q4. Even though I think it's a stellar package given what you get, it hasn't gotten any cheaper, so if you've got your money together, or at least a little headroom on the credit card, it's time to fish or cut bait. Raptor may still take orders after this batch starts shipping, but at best you'll have a long wait for their next production run (if there is one), and at worst you might not get to order at all. Let Raptor know there is a lasting and willing market for an alternative architecture you fully control. This machine really is the best successor to the Power Mac. When mine arrives, you'll see it first.

Last but not least, Microsoft is announcing their Edge browser for iOS and Android. "Cool," sez I, owner of a Pixel XL, "another choice of layout engines on Android" (I use Android Firefox, natch); I was rather looking forward to seeing the desktop Edge layout engine running on non-Microsoft phones. Well, no, it's just a shell over Blink and Chromium. Remember a few years ago when I said Blink would eat the Web? Through attrition and now, arguably, collusion, that's exactly what's happening.
Posted over 6 years ago by [email protected] (Robert)
This quote is telling:

"Billions of devices run dnsmasq, and it had been through multiple security audits before now. Simon had done the best job possible, I think. He got beat. No human and no amount of budget would have found these problems before now, and now we face the worldwide costs, yet again, of something ubiquitous now, vulnerable."

Some of this is quite accurate. Human beings can't write safe C code. Bug-finding tools and security audits catch some problems but miss a lot of others. But on the other hand, this message and its followup betray mistaken assumptions. There are languages running on commodity hardware that provide much better security properties than C. In particular, all three remote code execution vulnerabilities would have been prevented by Rust, Go or even Java. Those languages would have also made the other bugs much more unlikely. Contrary to the quote, given a finite "amount of budget", dnsmasq could have been Rewritten In Rust and these problems avoided. I understand that for legacy code like dnsmasq, even that amount of budget might not be available. My sincere hope is that people will at least stop choosing C for new projects. At this point, doing so is professional negligence.

What about C++? In my circle I seldom see enthusiasm for C, yet there is still great enthusiasm for C++, which inherits C's core security weaknesses. Are the C++ projects of today going to be the vulnerability-ridden legacy codebases of tomorrow? (In some cases, e.g. browsers, they already are...) C++ proponents seem to believe that C++ libraries and analysis tools, including efforts such as C++ Core Guidelines: Lifetimes, plus mitigations such as control-flow integrity, will be "good enough". Personally, I'm pessimistic. C++ is a fantastically complex language and that complexity is growing steadily. Much more effort is going into increasing its complexity than into addressing safety issues. It's now nearly two years since the Lifetimes document had any sort of update, and at CppCon 2017 just one of 99 talks focused on improving C++ safety.

Those of us building code to last owe it to the world to build on rock, not sand. C is sand. C++ is better, but it's far from a solid foundation.
Posted over 6 years ago by marcia
Kate Glazko and I were fortunate to be able to present a session on Firefox Nightly at this year's Grace Hopper event. My first impression was how massive an event it was! Just watching everyone stream into the venue for the keynote was magnificent. Legions of attendees from different companies were easily recognizable by their coordinated shirts. Whether it was Amazon's lime green or Facebook's blue, it was great to see (and almost like a parade!)

I thought our presentation went really well. While we had originally conceived it as a workshop, we decided to opt for a presentation followed by a few exercises instead. Part of the reasoning behind the decision was that we simply did not have enough moderators to cover the session. The room held 180 people; I estimate we had about 80 attendees present at the session. We got some really good questions during the Q&A, even one about Thunderbird. Attendees were interested in a wide range of subjects, including privacy practices, how we monitor failing tests, and details about Project Quantum. One attendee was interested in how she could get the Developer Tools in Nightly. I hope we succeeded in getting more people downloading and using Nightly and 57 Beta. At least one student approached me after the event and wants to contribute - that is what makes these types of events so great!
Posted over 6 years ago by Tarek Ziade
Molotov, the load testing tool I've developed, now comes with an autosizing feature. When the --sizing option is used, Molotov will slowly ramp up the number of workers per process and will stop once there are too many failures per minute. The default tolerance for failure is 5%, but this can be tweaked with the --sizing-tolerance option. By default, Molotov ramps up to 500 workers over 5 minutes, but you can set your own values with --workers and --ramp-up if you want to autosize at a different pace. See all the options at http://molotov.readthedocs.io/en/stable/cli

This load testing technique is useful for determining the limiting resource for a given application: RAM, CPU, I/O or network. Running Molotov against a single node that way can help decide what the best combination of RAM, CPU, disk and bandwidth per node is to deploy a project. In AWS, that means helping choose the size of the VM.

To perform this test you need to deploy the app on a dedicated node. Since most of our web services projects at Mozilla are now available as Docker images, it becomes easy to automate that deployment when we want to test the service. I have created a small script on top of Molotov that does exactly that, by using Amazon SSM (Systems Manager). See http://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html

Amazon SSM

SSM is a client-server tool that simplifies working with EC2 nodes. For instance, instead of writing a low-level script using Paramiko that drives EC2 instances through SSH, you can send batch commands through SSM to any number of EC2 instances and get back the results asynchronously. SSM integrates with S3, so you can get your command results back as artifacts once they are finished. Building a client around SSM is quite easy with Boto3. The only tricky part is waiting for the results to be ready. This is my SSM client: https://github.com/tarekziade/sizer/blob/master/sizer/ssm.py
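As a rough illustration of that batch-command pattern (a minimal sketch, not the actual ssm.py linked above), here is what sending a command and polling for its result looks like with Boto3; the region, instance id and command are placeholders:

```python
# A sketch of the SSM batch-command pattern, assuming Boto3 is configured
# with valid AWS credentials; the instance id and command are placeholders.
import time

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # hypothetical EC2 node running an SSM agent

resp = ssm.send_command(
    InstanceIds=[instance_id],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["docker ps"]},
)
command_id = resp["Command"]["CommandId"]

# The tricky part: results come back asynchronously, so poll until the
# invocation leaves the Pending/InProgress states.
time.sleep(2)  # give the invocation a moment to register
while True:
    inv = ssm.get_command_invocation(CommandId=command_id, InstanceId=instance_id)
    if inv["Status"] not in ("Pending", "InProgress"):
        break
    time.sleep(2)

print(inv["Status"])
print(inv["StandardOutputContent"])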
Deploying and running

Based on this SSM client, my script performs the following operations on AWS:

- deploy (or reuse) an EC2 instance that has an SSM agent and a Docker agent running
- run the Docker container of the service on that EC2 instance
- run a Docker container that runs Glances (more on this later)

Once the EC2 instance has the service up and running, it's ready to be used via Molotov. The script takes a GitHub repo and runs it, using moloslave: http://molotov.readthedocs.io/en/stable/slave Once the test is over, metrics are grabbed via SSM and the results are presented in a fancy HTML5 page where you can find out what the bottleneck of your service is.

Example with Kinto

Kinto is a Python service that provides a REST-ish API to read and write schemaless JSON documents. Running a load test on it using Molotov is pretty straightforward. The test script adds data, browses it and verifies that the Kinto service returns things correctly. And Kinto has a Docker image published on Docker Hub. I've run the sizing script using that image on a t2.micro instance. Here are the results: https://ziade.org/sizer_tested.html

You can see that the memory is growing throughout the test, because the Docker image uses a memory database and the test keeps on adding data -- that is also why the I/O sticks to 0. If you double-click on the CPU metrics, you can see that the CPU reaches almost 100% at the end of the test before things start to break. So, for a memory backend, the limiting factor for Kinto is the CPU, which makes sense. If we had had a bottleneck on I/O, that would have been an indication that something was wrong. Another interesting test would be to run it against a Postgres RDS deployment instead of a memory database.

Collecting Metrics with Glances

The metrics are collected on the EC2 box using Glances (http://glances.readthedocs.io/), which runs in its own Docker container and has the ability to measure other Docker containers running on the same agent. See http://glances.readthedocs.io/en/stable/aoa/docker.html?highlight=docker In other words, you can follow the resource usage per Docker container, and in our case that's useful to track the container that runs the actual service. My Glances Docker container uses this image: https://github.com/tarekziade/sizer/blob/master/Dockerfile It runs the tool and spits out the metrics in a CSV file I can collect via SSM once the test is over.

Visualizing results

I could have sent the metrics to an InfluxDB or Grafana system, but I wanted to create a simple static page that could work locally and be passed around as a test artifact. That's where Plotly (https://plot.ly/) comes in handy. This tool can turn a CSV file produced by Glances into a nice-looking HTML5 page where you can toggle between metrics and do other nice stuff (a minimal sketch of this step closes the post). I have used Pandas/NumPy to process the data, which is probably overkill given the number of processed lines, but their APIs are a natural fit for working with Plotly. See the small class I've built here: https://github.com/tarekziade/sizer/blob/master/sizer/chart.py

Conclusion

The new Molotov sizing feature is pretty handy as long as you can automate the deployment of isolated nodes for the service you want to test -- and that's quite easy with Docker and AWS. Autosizing can give you a hint on how an application behaves under stress and help you decide how you want to initially deploy it. In an ideal world, each one of our services already has a Molotov test, and running an autosizing test can be done with minimal work. In a super ideal world, everything I've described is part of the continuous deployment process :)
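As promised in the visualization section, here is a minimal sketch of the CSV-to-standalone-HTML idea using Pandas and today's Plotly Express API (not the chart.py class linked above); the file name and column names are assumptions, not Glances' actual CSV schema:

```python
# Turn a metrics CSV into a self-contained HTML page with Pandas and Plotly;
# the column names ("timestamp", "cpu_percent", "mem_percent") are
# hypothetical, not Glances' real output schema.
import pandas as pd
import plotly.express as px

df = pd.read_csv("glances-metrics.csv")
fig = px.line(
    df,
    x="timestamp",
    y=["cpu_percent", "mem_percent"],
    title="Resource usage during the load test",
)
fig.write_html("sizing-report.html")  # a standalone page, easy to pass around
```

The appeal of this approach is the same one the post describes: the output is a single static HTML file that works locally and can be attached to a test run as an artifact, with no InfluxDB or Grafana deployment required.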