Posted 2 days ago by Petruta Rasa
Hello everyone! Last Friday, December 8th, we held a Testday event for Firefox 58 Beta 10. Thank you all for helping us make Mozilla a better place!

From the India team: Mohammed Adam, Surentharan.R.A, B.Krishnaveni, Aishwarya Narasimhan, Nagarajan Rajamanickam, Baranitharan, Fahima Zulfath, Andal_Narasimhan, Amit Kumar Singh. From the Bangladesh team: Rezaul Huque Nayeem, Tanvir Mazharul, Tanvir Rahman, Maruf Rahman, Sontus Chandra Anik.

Results:
– several test cases executed for Media Recorder Refactor and Tabbed Browser;
– 4 bugs verified: 1393237, 1399397, 1403593, and 1405319.

Thanks for another successful Testday! We hope to see you all at our next events; all the details will be posted on QMO!
Posted 4 days ago
Test Pilot in 2018: Lessons learned from graduating Screenshots into Firefox

Wil and I were talking about the Bugzilla vs. Github question for Screenshots a couple of days ago, and I have to admit that I’ve come around to “let’s just use Bugzilla for everything, and just ride the trains, and work with the existing processes as much as possible.” I think it’s important to point out that this is a complete departure from my original point of view. Getting an inside view of how and why Firefox works the way it does has changed my mind. Everything just moves slower, with so many good reasons for doing so (a good topic for another blog post). Given that our goal is to hand Screenshots off, just going with the existing processes, minimizing friction, is the way to go.

If Test Pilot’s goals include landing stuff in Firefox, what does this mean for the way that we run earlier steps in the product development process?

Suggestions for experiments that the Test Pilot team will graduate into Firefox

Ship maximal, not minimal, features

I don’t think we should plan on meaningful iteration once a feature lands in Firefox. It’s just fundamentally too slow to iterate rapidly, and it’s way too hard for a very small team to ship features faster than that default speed (again, there are many good reasons for that friction). The next time we graduate something into Firefox, we should plan to ship much more than a minimum viable product, because we likely won’t get far past that initial landing point.

When in Firefox, embrace the Firefox ways

Everything we do, once an experiment gets approval to graduate, should be totally Firefox-centric. Move to Bugzilla for bugs (except, maybe, for server bugs). Mozilla-central for code, starting with one huge import from Github (again, except for server code). Git-cinnabar works really well, if you prefer git over mercurial. We have committers within the team now, and relationships with reviewers, so the code side of things is pretty well sorted.
Similarly for processes: we should just go with the existing processes to the best of our ability, which is to say, constantly ask the gatekeepers if we’re doing it right. Embrace the fact that everything in Firefox is high-touch, and use existing personal relationships to ask early and often if we’ve missed any important steps. We will always miss something, whether it’s new rules for some step or neglecting to get signoff from some newly-created team, but we can plan for that in the schedule. I think we’ve hit most of the big surprises in shipping Screenshots.

Aim for bigger audiences in Test Pilot

Because it’s difficult to iterate on features once code is inside Firefox, we should make our Test Pilot audience as close to a release audience as possible. We want to aim for the everyday users, not the early adopters. I think we can do this by just advertising Test Pilot experiments more heavily. By gathering data from a larger audience, our data will be more representative of the release audience, giving us a better chance of feature success.

Aim for web-flavored features / avoid dramatic changes to the Firefox UI

Speaking as the person who did “the weird Gecko stuff” on both Universal Search and Min-Vid, doing novel things with the current Firefox UI is hard, for all the reasons Firefox things are hard: the learning curve is significant and it’s high-touch. Knowledge of how the code works is confined to a small group of people, the docs aren’t great, and learning how things work requires reading the source code, plus asking for help on IRC. Given that our team’s strengths lie in web development, we will be more successful if our features focus on the webby things: cross-device, mobile, or cloud integration; bringing the ‘best of the web’ into the browser. This is stuff we’re already good at, stuff that could be a differentiator for Firefox just as much as new Firefox UI, and we can iterate much more quickly on server code.
That said, if Firefox Product wants to reshape the browser UI itself in powerful, unexpected, novel ways, we can do it, but we should have some Firefox or Platform devs committed to the team for a given experiment.
Posted 5 days ago by Blair MacIntyre
Today, we’re happy to announce that the WebXR Viewer app is available for download on iTunes. In our recent announcement of our Mixed Reality program, we talked about some explorations we were doing to extend WebVR to include AR and MR technology. In that post, we pointed at an iOS WebXR Viewer app we had developed to allow us to experiment with these ideas on top of Apple’s ARKit. While the app is open source, we wanted to let developers experiment with web-based AR without having to build the app themselves.

The WebXR Viewer app lets you view web pages created with the webxr-polyfill JavaScript library, an experimental library we created as part of our explorations. This app is not intended to be a full-fledged web browser, but rather a way to test, demonstrate, and share AR experiments created with web technology. Code written with the webxr-polyfill runs in this app, as well as in Google’s experimental WebARonARCore APK on Android. We are working on supporting other AR and VR browsers, including WebVR on desktop.

We’ve also been working on integrating the webxr-polyfill into the popular three.js graphics library and the A-Frame framework, to make it easy for three.js and A-Frame developers to try out these ideas. We are actively working on these libraries and using them in our own projects; while they are works in progress, each contains some simple examples to help you get started with them. We welcome feedback and contributions!

What’s Next?

We are not the only company interested in how WebVR could be extended to support AR and MR; for example, Google released a WebAR extension to WebVR with the WebARonARCore application mentioned above, and discussions on the WebVR standards documents have been lively around these issues.
As a result, the companies developing the WebVR API (including us) recently decided to rename the WebVR 2.0 proposal to the WebXR Device API, and to rename the WebVR Community Group to the Immersive Web Community Group, to reflect broad agreement that AR and VR devices should be exposed through a common API. The WebXR API we created was based on WebVR 2.0; we will be aligning it with the WebXR Device API as it develops, and will continue using it to explore ideas for exposing additional AR concepts in WebXR. We’ve been working on this app since earlier this fall, before the WebVR community decided to move from WebVR to WebXR, and we look forward to continuing to update the app and libraries as the WebXR Device API is developed. We will continue to use this app as a platform for our experiments with WebXR on iOS using ARKit, and welcome others (both inside and outside the Immersive Web Community Group) to work with us on the app, the JavaScript libraries, and demonstrations of how the web can support AR and VR moving forward.
Posted 5 days ago
If you’ve downloaded the new Firefox, you’ve probably noticed that we did some redecorating. Our new UI design (we call it Photon) is bright, bold, and inspired by the speed … Read more. The post What’s on the new Firefox menus? Speed appeared first on The Firefox Frontier.
Posted 5 days ago by Will Kahn-Greene
html5lib-python v1.0 released!

Yesterday, Geoffrey released html5lib 1.0 [1]! The changes aren't wildly interesting. The more interesting part for me is how the release happened. I'm going to spend the rest of this post talking about that.

[1] Technically there was a 1.0 release followed by a 1.0.1 release, because the 1.0 release had issues.

The story of Bleach and html5lib

I work on Bleach, which is a Python library for sanitizing and linkifying text from untrusted sources for safe usage in HTML. It relies heavily on another library called html5lib-python. Most of the work that I do on Bleach consists of figuring out how to make html5lib do what I need it to do.

Over the last few years, maintainers of the html5lib library have been working towards a 1.0. Those well-meaning efforts got them into a versioning model with some unenthusing properties. I would often talk to people about how I was having difficulties with Bleach and html5lib 0.99999999 (8 9s), and I'd have to mentally count how many 9s I had said. It was goofy [2]. In an attempt to deal with the effects of the versioning, there's a parallel set of versions that start with 1.0b. Because there are two sets of versions, it was a total pain in the ass to correctly specify which versions of html5lib Bleach worked with.

While working on Bleach 2.0, I bumped into a few bugs and upstreamed a patch for at least one of them. That patch sat in the PR queue for months. That's what got me wondering--is this project dead? I tracked down Geoffrey and talked with him a bit on IRC. He seems to be the only active maintainer. He was really busy with other things, html5lib doesn't pay at all, there's a ton of stuff to do, he's burned out, and recently there have been spats of negative comments in the issues and PRs. Generally, the project had a lot of stop energy.

Some time in August, I offered to step up as an interim maintainer and shepherd html5lib to 1.0.
The goals being:

– land or close as many old PRs as possible
– triage, fix, and close as many issues as possible
– clean up testing and CI
– clean up documentation
– ship 1.0, which ends the versioning issues

[2] Many things in life are goofy.

Thoughts on being an interim maintainer

I see a lot of open source projects that are in trouble in the sense that they don't have a critical mass of people and energy. When the sole part-time volunteer maintainer burns out, the project languishes. Then the entitled users show up, complain, demand changes, and talk about how horrible the situation is and how everyone should be ashamed. It's tough--people are frustrated, and then they do a bunch of things that make everything so much worse. How do projects escape the raging inferno death spiral?

For a while now, I've been thinking about a model for open source projects where someone else pops in as an interim maintainer for a short period of time, with specific goals, and then steps down. Maybe this alleviates users' frustrations? Maybe this gives the part-time volunteer burned-out maintainer a breather? Maybe this can get the project moving again? Maybe the temporary interim maintainer can make some of the hard decisions that a regular long-term maintainer just can't? I wondered if I should try that model out here.

In the process of convincing myself that stepping up as an interim maintainer was a good idea [3], I looked at projects that rely on html5lib [4]:

– pip vendors it
– Bleach relies upon it heavily, so anything that uses Bleach uses html5lib (jupyter, hypermark, readme_renderer, tensorflow, ...)
– most web browsers (Firefox, Chrome, servo, etc.) have it in their repositories because web-platform-tests uses it

I talked with Geoffrey and offered to step up with these goals in mind.

I started with cleaning up the milestones in GitHub. I bumped everything from the 0.9999999999 (10 9s) milestone, which I determined will never happen, into a 1.0 milestone.
I used this as a bucket for collecting all the issues and PRs that piqued my interest.

I went through the issue tracker and triaged all the issues. I tried to get steps to reproduce and any other data that would help resolve the issue. I closed some issues I didn't think would ever get resolved:

https://github.com/html5lib/html5lib-python/issues/295#issuecomment-333851735
https://github.com/html5lib/html5lib-python/issues/315#issuecomment-347709140

I triaged all the pull requests. Some of them had been open for a long time. I apologized to people who had spent their time to upstream a fix that sat around for years. In some cases, the changes had bitrotted severely and had to be redone [5]:

https://github.com/html5lib/html5lib-python/pull/287#issuecomment-326636920
https://github.com/html5lib/html5lib-python/pull/176#issuecomment-333861511

Then I plugged away at issues and pull requests for a couple of months, and pushed anything out of the milestone that wasn't well-defined or that we couldn't fix in a week. At the end of all that, Geoffrey released version 1.0 and here we are today!

[3] I have precious little free time, so this decision had sweeping consequences for my life, my work, and people around me.

[4] Recently, I discovered libraries.io--it's a pretty amazing project. They have a page for html5lib. I had written a (mediocre) tool that does vaguely similar things.

[5] This is what happens on projects that don't have a critical mass of energy/people. It sucks for everyone involved.

Conclusion and thoughts

I finished up as interim maintainer for html5lib. I don't think I'm going to continue actively as a maintainer. Yes, Bleach uses it, but I've got other things I should be doing.

I think this was an interesting experiment. I also think it was a successful experiment in regards to achieving my stated goals, but I don't know if it gave the project much momentum to continue forward.
I'd love to see other examples of interim maintainers stepping up, achieving specific goals, and then stepping down again. Does it bring in new people to the community? Does it affect the raging inferno death spiral at all? What kinds of projects would benefit from this the most? What kinds of projects wouldn't benefit at all?
Posted 5 days ago by Jofish Kaye
We are happy to announce the results of the Mozilla Research Grant program for the second half of 2017. This was a competitive process, with over 70 applicants. After three rounds of judging, we selected a total of fourteen proposals, ranging from building tools to support open web platform projects like Rust and WebAssembly, to designing digital assistants for low- and middle-income families, to exploring decentralized web projects in the Orkney Islands. All these projects support Mozilla’s mission to make the Internet safer, more empowering, and more accessible.

The Mozilla Research Grants program is part of Mozilla’s Emerging Technologies commitment to being a world-class example of inclusive innovation and impact culture, and reflects Mozilla’s commitment to open innovation, continuously exploring new possibilities with and for diverse communities.

– Zhendong Su, University of California, Davis: Practical, Rigorous Testing of the Mozilla Rust and bindgen Compilers
– Ross Tate, Cornell University: Inferable Typed WebAssembly
– Laura Watts, IT University of Copenhagen: Shaping community-based managed services (‘Orkney Cloud Saga’)
– Svetlana Yarosh, University of Minnesota: Children & Parent Using Speech Interfaces for Informational Queries
– Serge Egelman, UC Berkeley / International Computer Science Institute: Towards Usable IoT Access Controls in the Home
– Alexis Hiniker, University of Washington: Understanding Design Opportunities for In-Home Digital Assistants for Low- and Middle-Income Families
– Blase Ur, University of Chicago: Improving Communication About Privacy in Web Browsers
– Wendy Ju, Cornell Tech: Video Data Corpus of People Reacting to Chatbot Answers to Enable Error Recognition and Repair
– Katherine Isbister, University of California Santa Cruz: Designing for VR Publics: Creating the right interaction infrastructure for pro-social connection, privacy, inclusivity, and co-mingling in social VR
– Sanjeev Arora, Princeton University and the Institute for Advanced Study: Compact representations of meaning of natural language: Toward a rigorous and interpretable study
– Rachel Cummings, Georgia Tech: Differentially Private Analysis of Growing Datasets
– Tongping Liu, University of Texas at San Antonio: Guarder: Defending Heap Vulnerabilities with Flexible Guarantee and Better Performance

The Mozilla Foundation will also be providing grants in support of two additional proposals:

– J. Nathan Matias, CivilServant (incubated by Global Voices): Preventing online harassment with Community A/B Test Systems
– Donghee Yvette Wohn, New Jersey Institute of Technology: Dealing with Harassment: Moderation Practices of Female and LGBT Live Streamers

Congratulations to all successfully funded applicants! The 2018H1 round of grant proposals will open in the Spring; more information is available at https://research.mozilla.org/research-grants/.

Jofish Kaye, Principal Research Scientist, Emerging Technologies, Mozilla

The post Mozilla Awards Research Grants to Fund Top Research Projects appeared first on The Mozilla Blog.
Posted 5 days ago
The roles functionality in Taskcluster is a kind of “macro expansion”: given the roles

    group:admins -> admin-scope-1
                    admin-scope-2
                    assume:group:devs
    group:devs   -> dev-scope

the scopeset ["assume:group:admins", "my-scope"] expands to

    [
      "admin-scope-1",
      "admin-scope-2",
      "assume:group:admins",
      "assume:group:devs",
      "dev-scope",
      "my-scope",
    ]

because assume:group:admins expanded the group:admins role, and that recursively expanded the group:devs role. However, this macro expansion did not allow any parameters, similar to allowing function calls but without any arguments. The result is that we have a lot of roles that look the same. For example, the project-admin:.. roles all have similar scopes (with the project name included in them), and a big warning in the description saying “DO NOT EDIT”.

Role Parameters

Now we can do better! A role’s scopes can now include <..>. When expanding, this string is replaced by the portion of the scope that matched the * in the roleId. An example makes this clear:

    project-admin:* ->
        assume:hook-id:project-<..>/*
        assume:project:<..>:*
        auth:create-client:project/<..>/*
        auth:create-role:hook-id:project-<..>/*
        auth:create-role:project:<..>:*
        auth:delete-client:project/<..>/*
        auth:delete-role:hook-id:project-<..>/*
        auth:delete-role:project:<..>:*
        auth:disable-client:project/<..>/*
        auth:enable-client:project/<..>/*
        auth:reset-access-token:project/<..>/*
        auth:update-client:project/<..>/*
        auth:update-role:hook-id:project-<..>/*
        auth:update-role:project:<..>:*
        hooks:modify-hook:project-<..>/*
        hooks:trigger-hook:project-<..>/*
        index:insert-task:project.<..>.*
        project:<..>:*
        queue:get-artifact:project/<..>/*
        queue:route:index.project.<..>.*
        secrets:get:project/<..>/*
        secrets:set:project/<..>/*

With the above parameterized role in place, we can delete all of the existing project-admin:.. roles: this one will do the job. A client that has assume:project-admin:bugzilla in its scopes will have assume:hook-id:project-bugzilla/* and all the rest in its expandedScopes.
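The substitution rule can be sketched in a few lines of Python. This is a hypothetical illustration, not the Taskcluster-Auth implementation: the role table (trimmed to three scopes), the function name, and the matching logic are all invented for the example.

```python
# Hypothetical sketch of parameterized-role expansion: a roleId ending in
# "*" captures the rest of a matching "assume:" scope, and that captured
# text replaces the "<..>" placeholder in each of the role's scopes.
ROLES = {
    "project-admin:*": [          # trimmed subset of the real role's scopes
        "assume:hook-id:project-<..>/*",
        "auth:create-client:project/<..>/*",
        "secrets:get:project/<..>/*",
    ],
}

def expand_once(scope):
    """Expand `scope` one level against ROLES, substituting parameters."""
    granted = []
    for role_id, role_scopes in ROLES.items():
        prefix = "assume:" + role_id[:-1]   # drop the trailing "*"
        if role_id.endswith("*") and scope.startswith(prefix):
            param = scope[len(prefix):]     # the part matched by "*"
            granted.extend(s.replace("<..>", param) for s in role_scopes)
    return granted

for s in expand_once("assume:project-admin:bugzilla"):
    print(s)
# assume:hook-id:project-bugzilla/*
# auth:create-client:project/bugzilla/*
# secrets:get:project/bugzilla/*
```

A scope that matches no parameterized role simply expands to nothing at this level, which is why the full resolver (described below in the post) must still union these results with the input scopeset.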
There’s one caveat: a client with assume:project-admin:nss* will have assume:hook-id:project-nss* – note the loss of the trailing /. The * consumes any parts of the scope after the <..>. In practice, as in this case, this is not an issue, but it could certainly cause surprise for the unwary.

Implementation

Parameterized roles seem pretty simple, but they’re not!

Efficiency

Before parameterized roles, the Taskcluster-Auth service would pre-compute the full expansion of every role. That meant that any API call requiring expansion of a set of scopes only needed to combine the expansion of each scope in the set – a linear operation. This avoided a (potentially exponential-time!) recursive expansion, trading some up-front pre-computation for a faster response to API calls.

With parameterized roles, such pre-computation is not possible. Depending on the parameter value, the expansion of a role may or may not match other roles. Continuing the example above, the scope assume:project:focus:xyz would be expanded when the parameter is focus, but not when the parameter is bugzilla.

The fix was to implement the recursive approach, but in such a way that non-pathological cases have reasonable performance. We use a trie which, given a scope, returns the set of scopes from any matching roles, along with the position at which those scopes matched a * in the roleId. In principle, then, we resolve a scopeset by using this trie to expand (by one level) each of the scopes in the scopeset, substituting parameters as necessary, and recursively expanding the resulting scopes. To resolve a scope set, we use a queue to “flatten” the recursion, and keep track of the accumulated scopes as we proceed.

We already had some utility functions that allow us to make a few key optimizations. First, it’s only necessary to expand scopes that start with assume: (or, for completeness, things like * or assu*).
More importantly, if a scope is already included in the seen scopeset, then we need not enqueue it for recursion – it has already been accounted for.

In the end, the new implementation is tens of milliseconds slower for some of the more common queries. While not ideal, in practice that has not been problematic. If necessary, some simple caching might be added, as many expansions repeat exactly.

Loops

An advantage of the pre-computation was that it could seek a “fixed point” where further expansion does not change the set of expanded scopes. This allowed roles to refer to one another:

    some-role -> assume:another-role
    another*  -> assume:some-role

A naïve recursive resolver might loop forever on such an input, but it could easily track already-seen scopes and avoid recursing on them again. The situation is much worse with parameterized roles. Consider:

    some-role-*    -> assume:another-role-<..>x
    another-role-* -> assume:some-role-<..>y

A simple recursive expansion of assume:some-role-abc would result in an infinite set of scopes:

    assume:another-role-abcx
    assume:some-role-abcxy
    assume:another-role-abcxyx
    assume:some-role-abcxyxy
    ...

We forbid such constructions using a cycle check, configured to reject only cycles that involve parameters. That permits the former example while prohibiting the latter.

Atomic Modifications

But even that is not enough! The existing implementation stored each role as a row in Azure Table Storage. Azure provides concurrent access to and modification of rows in this storage, so it’s conceivable that two roles which together form a cycle could be added simultaneously. The cycle check for each row insertion would see only one of the rows, but the result after both insertions would contain a cycle. Cycles will crash the Taskcluster-Auth service, which will bring down the rest of Taskcluster. Then a lot of people will have a bad day.

To fix this, we moved roles to Azure Blob Storage, putting all roles in a single blob.
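The queue-based resolution with a seen-set can be sketched as follows. This is an illustration, not the actual service code: it handles only plain (unparameterized) roles so the "flattened recursion" and the already-seen optimization stay visible, and on the group:admins/group:devs roles it reproduces the expansion from the example at the top of the post.

```python
from collections import deque

def resolve(scopeset, roles):
    """Expand `scopeset` against `roles` (roleId -> scopes) to a fixed point."""
    seen = set(scopeset)
    queue = deque(scopeset)
    while queue:
        scope = queue.popleft()
        # Only "assume:..." scopes can expand into further roles.
        if not scope.startswith("assume:"):
            continue
        for granted in roles.get(scope[len("assume:"):], []):
            if granted not in seen:   # already accounted for -> skip
                seen.add(granted)
                queue.append(granted)
    return sorted(seen)

roles = {
    "group:admins": ["admin-scope-1", "admin-scope-2", "assume:group:devs"],
    "group:devs": ["dev-scope"],
}
print(resolve(["assume:group:admins", "my-scope"], roles))
# ['admin-scope-1', 'admin-scope-2', 'assume:group:admins',
#  'assume:group:devs', 'dev-scope', 'my-scope']
```

The seen-set also makes the unparameterized loop example above terminate; it is only once parameters can grow the scope on each pass that this bookkeeping no longer helps, which is why the cycle check is needed.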
This service uses ETags to implement atomic modifications, so we can perform a cycle check before committing and be sure that no cyclical configuration is stored.

What’s Next

The parameterized role support is running in production now, but we have not yet updated any roles, aside from a few test roles, to use it. The next steps are to use the support to address a few known weak points in role configuration, including the project administration roles used as an example above.
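The ETag-based commit can be illustrated with a toy in-memory store. BlobStore and its methods are invented for this sketch (real Azure Blob Storage expresses the same precondition via an If-Match header carrying the ETag); the point is that a write only succeeds if the blob is unchanged since it was read, so a cycle check performed between read and write cannot be invalidated by a concurrent writer.

```python
import hashlib
import json

class BlobStore:
    """Toy stand-in for an ETag-guarded blob holding all roles as JSON."""

    def __init__(self):
        self._data = b"[]"

    def _etag(self):
        # Derive a synthetic ETag from the content; real services generate one.
        return hashlib.md5(self._data).hexdigest()

    def get(self):
        return json.loads(self._data), self._etag()

    def put(self, roles, if_match):
        # Compare-and-swap: reject the write if the blob changed since get().
        if if_match != self._etag():
            raise RuntimeError("precondition failed: blob changed since read")
        # A real implementation would run the cycle check here, pre-commit.
        self._data = json.dumps(roles).encode()

store = BlobStore()
roles, etag = store.get()
roles.append({"roleId": "some-role", "scopes": ["assume:another-role"]})
store.put(roles, if_match=etag)       # succeeds: blob unchanged since get()
try:
    store.put(roles, if_match=etag)   # stale ETag: rejected
except RuntimeError as exc:
    print(exc)
```

A concurrent writer racing to insert the other half of a cycle would hit the same precondition failure, re-read the blob, and then fail the cycle check, which is exactly the guarantee the single-blob design buys.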
Posted 5 days ago by Gregory Szorc
For over a year, AUFS - a layering filesystem for Linux - has been giving me fits. As I initially measured last year, AUFS has... suboptimal performance characteristics. The crux of the problem is that AUFS obtains a global lock in the Linux kernel (at least in version 3.13) for various I/O operations, including stat(). If you have more than a couple of active CPU cores, the overhead from excessive kernel locking inside _raw_spin_lock() can add more overhead than extra CPU cores add capacity. That's right: under certain workloads, adding more CPU cores actually slows down execution due to cores being starved waiting for a global lock in the kernel!

If that weren't enough, AUFS can also violate POSIX filesystem guarantees under load. It appears that AUFS sometimes forgets about created files or has race conditions that prevent created files from being visible to readers until many seconds later! I think this issue only occurs when there are concurrent threads creating files.

These two characteristics of AUFS have inflicted a lot of hardship on Firefox's continuous integration. Large parts of Firefox's CI execute in Docker, and the host environment for Docker has historically used Ubuntu 14.04 with Linux 3.13 and Docker using AUFS. AUFS was/is the default storage driver for many versions of Docker. When this storage driver is used, all files inside Docker containers are backed by AUFS unless a Docker volume (a directory bind-mounted from the host filesystem - EXT4 in our case) is in play.

When we started using EC2 instances with more CPU cores, we weren't getting a linear speedup for CPU-bound operations. Instead, CPU cycles were being spent inside the kernel. Stack profiling showed AUFS as the culprit. We were thus unable to leverage more powerful EC2 instances, because adding more cores would only provide marginal to negative gains against significant cost expenditure.
We worked around this problem by making heavy use of Docker volumes for tasks incurring significant I/O. This included version control clones and checkouts.

Somewhere along the line, we discovered that AUFS was also the cause of several random "file not found" errors throughout automation. Initially, we thought many of these errors were due to bugs in the underlying tools (Mercurial and Firefox's build system were common victims because they do lots of concurrent I/O). When the bugs mysteriously went away after ensuring certain operations were performed on EXT4 volumes, we were able to blame AUFS for the myriad filesystem consistency problems.

Earlier today, we pushed out a change to upgrade Firefox's CI to Linux 4.4 and switched Docker from AUFS to overlayfs (using the overlay2 storage driver). The improvements exceeded my expectations. Linux build times have decreased by ~4 minutes, from ~750s to ~510s. Linux Rust test times have decreased by ~4 minutes, from ~615s to ~380s. Linux PGO build times have decreased by ~5 minutes, from ~2130s to ~1820s. And this is just the build side of the world. I don't have numbers off hand, but I suspect many tests also got a nice speedup from this change.

Multiplied by thousands of tasks per day, and factoring in the cost to operate these machines, the elimination of AUFS has substantially increased the efficiency (and reliability) of Firefox CI and easily saved Mozilla tens of thousands of dollars per year. And that's just factoring in the savings in the AWS bill. Time is money, and people are a lot more expensive than AWS instances (you can run over 3,000 c5.large EC2 instances at spot pricing for what it costs to employ me when I'm on the clock). So the real win here comes from Firefox developers being able to move faster because their builds and tests complete several minutes faster.

In conclusion, if you care about performance or filesystem correctness, avoid AUFS. Use overlayfs instead.
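For reference, selecting the storage driver is a one-line Docker daemon setting. This fragment follows Docker's documented daemon.json format; it is a generic sketch, not Mozilla's actual host configuration, which may set other options as well.

```json
{
  "storage-driver": "overlay2"
}
```

After placing this in /etc/docker/daemon.json and restarting the daemon, `docker info` should report `Storage Driver: overlay2`. Note that switching storage drivers makes previously pulled images and existing containers inaccessible until they are re-created under the new driver.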
Posted 5 days ago by Alessio Placitelli
By: Alessio Placitelli, Ben Miroglio, Jason Thomas, Shell Escalante, and Martin Lopatka. With special recognition of the development efforts of Roberto Vitillo, who kickstarted this project; Mauro Doglio, for massive contributions to the code base during his time at Mozilla; Florian Hartmann, who contributed efforts towards prototyping the ensemble linear combiner; and Stuart Colville, for coordinating …
Posted 5 days ago by Ryan T. Harter
I finally got a chance to scratch an itch today.

Problem

When working with bigger ETL jobs, I frequently run into jobs that take hours to run. I usually either step away from the computer or work on something less important while the job runs. I don't have a good …