
News

Posted about 6 years ago
In the last week, we merged 85 PRs in the Servo organization’s repositories. Congratulations to waywardmonkeys for their new mandate to review and maintain the low-level harfbuzz bindings, and for their work to create safe higher-level bindings!

Planning and Status

Our roadmap is available online, including the overall plans for 2018. This week’s status updates are here.

Notable Additions

emilio made some Linux environments not crash on startup.
jdm created a tool to chart memory usage over time.
emilio reordered some style system checks for better performance.
mrobinson improved the clipping behaviour of blurred text shadows.
mbrubeck added the resize API to SmallVec.
nox expanded the set of CSS types that can use derived serialization.
gw reduced the number of allocations necessary on most pages.
SimonSapin replaced the angle crate with a fork maintained by Mozilla.
mrobinson removed some redundant GPU matrix math calculations.
Beta-Alf improved the performance of parsing CSS keyframes.
gw simplified the rendering for box shadows.
mkollaro implemented the glGetTexParameter API.
fabricedesre added the pageshow event when navigating a page.
SimonSapin demonstrated how to integrate the DirectComposition API in WebRender.
waywardmonkeys added a higher-level crate for using the harfbuzz library.
paulrouget switched Servo to use the upstream glutin crate instead of an outdated fork.
oOIgnitionOo added a command line flag to download and run a nightly build of Servo.

New Contributors

Dmitry
Florian Wagner
Martina Kollarova
Vegard Sandengen

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!
Posted about 6 years ago
Each year the Rust community comes together to set out a roadmap. This year, in addition to the survey, we put out a call for blog posts in December, which resulted in 100 blog posts written over the span of a few weeks. The end result is the recently-merged 2018 roadmap RFC.

Rust: 2018 edition

This year, we will deliver Rust 2018, marking the first major new edition of Rust since 1.0 (aka Rust 2015). We will continue to publish releases every six weeks as usual. But we will designate a release in the latter third of the year (Rust 1.29 - 1.31) as Rust 2018. This new “edition” of Rust will be the culmination of feature stabilization throughout the year, and will ship with polished documentation, tooling, and libraries that tie in to those features.

The idea of editions is to signify major steps in Rust’s evolution, where a collection of new features or idioms, taken as a whole, changes the experience of using Rust. They’re a chance, every few years, to take stock of the work we’ve delivered in six-week increments. To tell a bigger story about where Rust is going. And to ship the whole stack as a polished product. We expect that each edition will have a core theme or focus. Thinking of 1.0 as “Rust 2015”, we have:

Rust 2015: stability
Rust 2018: productivity

What will be in Rust 2018?

The roadmap doesn’t say for certain what will ship in Rust 2018, but we have a pretty good idea, and we’ll cover the major suspects below.

Documentation improvements

Part of the goal with the Rust 2018 release is to provide high quality documentation for the full set of new and improved features and the idioms they give rise to. The Rust Programming Language book has been completely re-written over the last 18 months, and will be updated throughout the year as features reach the stable compiler. Rust By Example will likewise undergo a revamp this year. And there are numerous third party books, like Programming Rust, reaching print as well.

Language improvements

The most prominent language work in the pipeline stems from 2017’s ergonomics initiative. Almost all of the accepted RFCs from the initiative are available on nightly today, and will be polished and stabilized over the next several months. Among these productivity improvements are a few “headliners” that will form the backbone of the release:

Ownership system improvements, including making borrowing more flexible via “non-lexical lifetimes”, improved pattern matching integration, and more.
Trait system improvements, including the long-awaited impl Trait syntax for dealing with types abstractly.
Module system improvements, focused on increasing clarity and reducing complexity.
Generators/async/await: work is rapidly progressing on first-class async programming support.

In addition, we anticipate a few more major features to stabilize prior to the Rust 2018 release, including SIMD, custom allocators, and macros 2.0.

Compiler improvements

As of Rust 1.24, incremental recompilation is available and enabled by default on the stable compiler. This feature already makes rebuilds significantly faster than fresh builds, but over the course of the year we expect continued improvements for both fresh builds and rebuilds. Compiler performance should not be an obstacle to productivity in Rust 2018.

Tooling improvements

Rust 2018 will see high quality 1.0 releases of the Rust Language Server (“RLS”, which underlies much of our IDE integration story) and rustfmt (a standard formatting tool for Rust code).
We will continue to improve Cargo by stabilizing custom registries, public dependencies, and a revised profile system. We’re also expecting further work on Cargo build system integration, Xargo integration, and custom test frameworks, though it’s unclear as yet how many of these will be complete prior to Rust 2018.

Library improvements

Building on our work from last year, we will publish a 1.0 version of the Rust API guidelines book, continue pushing important libraries to 1.0 status, improve discoverability through a revamped cookbook effort, and make heavy investments in libraries in specific domains—as we’ll see below.

Web site improvements

As part of Rust 2018, we will completely overhaul the Rust web site, making it useful for CTOs and engineers alike. It should be far easier to find information to help evaluate Rust for your use case, and to stay up to date with the latest tooling and ecosystem improvements.

Four target domains

Part of our goal with Rust 2018 is to demonstrate Rust’s productivity in specific domains of use. We’ve selected four such domains to invest in and highlight this year:

Network services. Rust’s reliability and low footprint make it an excellent match for network services and infrastructure, especially at high scale.
Command-line apps (CLI). Rust’s portability, reliability, ergonomics, and ability to produce static binaries come together to great effect for writing CLI apps.
WebAssembly. The “wasm” web standard allows shipping native-like binaries to all major browsers, but GC support is still years away. Rust is extremely well positioned to target this domain, and provides a reasonable on-ramp for programmers coming from JS.
Embedded devices. Rust has the potential to make programming resource-constrained devices much more productive—and fun! We want embedded programming to reach first-class status this year.

Each of these domains has a dedicated working group for the year. These WGs will work in a cross-cutting fashion, interfacing with language, tooling, library, and documentation work.

Compatibility across editions

TL;DR: Rust will continue its stability guarantee of hassle-free updates to new versions. Editions will have a meaning for the compiler. You will be able to write edition = "2018" in your Cargo.toml to opt in to the new edition for your crate (see the example below). Doing so may introduce new keywords or otherwise require adjustments to code. However:

You can use old editions indefinitely on new compilers; editions are opt-in.
Editions are set on a per-crate basis and can be mixed and matched; you can be on a different edition from your dependencies.
Warning-free code in one edition must compile, and have the same behavior, on the next.
Edition-related warnings, e.g. that an identifier will become a keyword in the next edition, must be easily fixable via an automated migration tool (rustfix).
Only a small minority of crates should require any manual work to opt in to a new edition, and that manual work must be minimal.

Most new features are edition-independent, and will be usable on new compilers even when an older edition is selected. In other words, the progression of new compiler versions is independent from editions; you can migrate at your leisure, and don’t have to worry about ecosystem compatibility; and edition migration is normally trivial.
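As a concrete illustration of the opt-in described above, a minimal Cargo.toml for a crate on the new edition might look like the following; the package name and version here are placeholders, not taken from the post.

    [package]
    name = "my-crate"        # placeholder crate name
    version = "0.1.0"        # placeholder version
    edition = "2018"         # opt this crate in to the Rust 2018 edition

    [dependencies]
    # Dependencies can stay on the 2015 edition; editions are per-crate.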
Additional 2018 goals

While the Rust 2018 release is our major focus this year, there are some additional ongoing concerns that we want to give attention to.

Better serving intermediate Rustaceans

One of the strongest messages we’ve heard from production users, and the 2017 survey, is that people need more resources to take them from understanding Rust’s concepts to knowing how to use them effectively. The roadmap does not stipulate exactly what these resources should look like — probably there should be several kinds — but commits us as a community to putting significant work into this space, and ending the year with some solid new material.

Community

Connect and empower Rust’s global community. We will pursue internationalization as a first-class concern, and proactively work to build ties between Rust subcommunities currently separated by language, geography, or culture. We will spin up and support Rust events worldwide, including further growth of the RustBridge program.

Grow Rust’s teams and new leaders within them. We will refactor the Rust team structure to support more scale, agility, and leadership growth. We will systematically invest in mentoring, both by creating more on-ramp resources and through direct mentorship relationships.

A call to action

As always in the Rust world, the goals laid out here will ultimately be the result of a community-wide effort—maybe one including you! Here are some of the teams where we could use the most help. Note that all IRC channels refer to the irc.mozilla.org network.

WebAssembly WG. Compiling Rust to WebAssembly should be the best choice for fast code on the Web. Check out rust-lang-nursery/rust-wasm to learn more and get involved!
CLI WG. Writing CLI apps in Rust should be a frictionless experience, from finding the right libraries and writing concise integration tests up to cross-platform distribution. Join us at rust-lang-nursery/cli-wg and help us reach that goal!
Embedded Devices WG. Quality, productivity, accessibility: Rust can change the embedded industry for the better. Let’s get this process started in 2018! Join us at https://github.com/rust-lang-nursery/embedded-wg
Ecosystem WG. We’ll be providing guidance and support to important crates throughout the ecosystem. Drop into the WG-ecosystem room and we’ll guide you to places that need help!
Dev Tools Team. There are always interesting things to tackle with developer tools (IDEs, Cargo, rustdoc, Clippy, Rustfmt, custom test frameworks, and more). Drop in to #rust-dev-tools and have a chat with the team!
Rustdoc Team. With your help, we can make documentation better for everyone. Come join us in #rustdoc on IRC, and we can help you get started!
Release Team. Drop by #rust-release on IRC to get involved with regression triage and release production!
Community Team. We’ve kicked off several new teams within the Community Team and are eager to add new members: Events, Content, Switchboard, RustBridge, Survey, and Localization! Check out our team repo or stop by our IRC channel, #rust-community, to learn more and get involved!
Posted about 6 years ago by ClassicHasClass
TenFourFox Feature Parity Release 6 is now available for testing (downloads, hashes, release notes). Other than finishing the security patches and adding a couple more entries to the basic adblock, there are no other changes in this release. Assuming no issues, it will become live Monday evening Pacific time as usual.

The backend for the main download page at Floodgap has been altered such that the Downloader is now only offered to browsers that do not support TLS 1.2 (this is detected by checking for a particular JavaScript math function, Math.hypot, the presence of which I discovered roughly correlates with TLS 1.2 support in Google Chrome, Microsoft Edge, Safari and Firefox/TenFourFox). This is to save bandwidth on our main server, since those browsers are perfectly capable of downloading directly from SourceForge and don't need the Downloader to help them. This is also true of Leopard WebKit, assuming the Security framework update is also installed.

For FPR7, I have already exposed basic adblock in the TenFourFox preferences pane, and am looking at some efficiency updates as well as updates to the supported TLS ciphers and hopefully date pickers if there is still time. Also, the limited profiling tools I have at my disposal suggest that some of the browser's occasional choppiness is at least partially associated with improperly scheduled garbage collection slices. I'm experimenting with retuning the runtime environment to see if we can stave off some types of collection to preserve CPU cycles and not bloat peak memory usage too much. So far, 24 hours into testing with some guesswork numbers, it doesn't seem to be exploding. More on that later.
Posted about 6 years ago by Wladimir Palant
There is a weakness common to any software letting you protect a piece of data with a password: how does that password translate into an encryption key? If that conversion is a fast one, then you had better not expect the encryption to hold. Somebody who gets hold of that encrypted data will try to guess the password you used to protect it. And modern hardware is very good at validating guesses.

Case in question: the Firefox and Thunderbird password manager. It is common knowledge that storing passwords there without defining a master password is equivalent to storing them in plain text. While they will still be encrypted in the logins.json file, the encryption key is stored in the key3.db file without any protection whatsoever. On the other hand, it is commonly believed that with a master password your data is safe. Quite remarkably, I haven’t seen any articles stating the opposite.

However, when I looked into the source code, I eventually found the sftkdb_passwordToKey() function that converts a password into an encryption key by means of applying SHA-1 hashing to a string consisting of a random salt and your actual master password. Anybody who ever designed a login function on a website will likely see the red flag here. This article sums it up nicely:

Out of the roughly 320 million hashes, we were able to recover all but 116 of the SHA-1 hashes, a roughly 99.9999% success rate.

The problem here is: GPUs are extremely good at calculating SHA-1 hashes. Judging by the numbers from this article, a single Nvidia GTX 1080 graphics card can calculate 8.5 billion SHA-1 hashes per second. That means testing 8.5 billion password guesses per second. And humans are remarkably bad at choosing strong passwords. This article estimates that the average password is merely 40 bits strong, and that estimate is already higher than some of the others. In order to guess a 40 bit password you will need to test 2^39 guesses on average. If you do the math (2^39 is roughly 5.5 × 10^11 guesses, and at 8.5 × 10^9 guesses per second that is about 65 seconds), cracking a password will take merely a minute on average. Sure, you could choose a stronger password. But finding a considerably stronger password that you can still remember will be awfully hard.

Turns out that the corresponding NSS bug has been sitting around for the past 9 (nine!) years. That’s also at least how long software to crack password manager protection has been available to anybody interested. So, is this issue so hard to address? Not really. The NSS library implements the PBKDF2 algorithm, which would slow down brute-forcing attacks considerably if used with at least 100,000 iterations. Of course, it would be nice to see NSS implement a more resilient algorithm like Argon2, but that’s wishful thinking given a fundamental bug that didn’t find an owner in nine years.

But before anybody says that I am unfair to Mozilla and NSS here, other products often don’t do any better. For example, if you want to encrypt a file you might be inclined to use OpenSSL command line tools. However, the password-to-key conversion performed by the openssl enc command is even worse than what the Firefox password manager does: it’s essentially a single MD5 hash operation. OpenSSL developers are aware of this issue, but:

At the end of the day, OpenSSL is a library, not an end-user product, and enc(1) and friends are developer utilities and “demo” tools.

News flash: there are plenty of users out there not realizing that OpenSSL command line tools are insecure and not actually meant to be used.
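To make the contrast concrete, here is a minimal sketch of deriving a key with PBKDF2 rather than a single fast hash, using OpenSSL's PKCS5_PBKDF2_HMAC with SHA-256 and 100,000 iterations. The password, salt, and key length are illustrative placeholders, not what Firefox or NSS actually uses (link with -lcrypto to build):

    #include <openssl/evp.h>
    #include <cstdio>

    int main() {
      // Placeholder inputs for illustration only.
      const char password[] = "correct horse battery staple";
      const unsigned char salt[] = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08};
      unsigned char key[32];  // 256-bit derived key

      // 100,000 iterations of HMAC-SHA-256 make each password guess roughly
      // 100,000 times more expensive than the single SHA-1 hash described above.
      int ok = PKCS5_PBKDF2_HMAC(password, sizeof(password) - 1,
                                 salt, sizeof(salt),
                                 100000, EVP_sha256(),
                                 sizeof(key), key);
      if (!ok) return 1;

      for (unsigned char b : key) std::printf("%02x", b);
      std::printf("\n");
      return 0;
    }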
Posted about 6 years ago by chuttenc
In JavaScript, if you want to use a function that was introduced only in certain versions of browsers, you use Feature Detection. For example, you can ask “Hey, browser, do you have a function called `includes` on Array?” If the browser has it, you use it; and if it doesn’t, you either get along without it or load your own implementation.

It turns out that this same concept can be (and, in Firefox, is) done with Windows APIs. Firefox for Windows is built against the Windows 10 SDK. This means the compiler knows the API calls and type definitions for all sorts of wondrous modern features like toast notifications and enumerating graphics adapters in a specific order. However, as of writing, Firefox for Windows supports Windows 7 and up. What would happen if Firefox tried to use those fancy new Windows 10 features when running on Windows 7?

Well, at compile time (when Mozilla builds Firefox), it knows everything it needs to about the sizes and names of things used in the new features thanks to the SDK. At runtime (when a user runs Firefox), it needs to ask Windows at what address exactly all of those fancy new features live so that it can use them. If Firefox can’t find a feature it expects to be there, it won’t start.

We want Firefox to start, though, and we want to use the new features when available. So how do we both use the new feature (if it’s there) and not (if it’s not)? Windows provides an API called GetProcAddress that allows the running program to perform some Feature Detection. It is asking Windows “Hey, so I’m looking for the address of this fancy new feature named FancyNewAPI. Do you know where that is?”. Windows will either reply “No, sorry”, at which point you work around it, or “Yes, it’s over at address X”, at which point you convert address X into a function pointer that takes the same number and types of arguments that the documentation says it takes, and then instruct your program to jump into it and start executing.

We use this in Firefox to detect gamepad input modules, cancelable synchronous IO, display density measurements, and a whole bunch of graphics and media acceleration stuff. And today (well, yesterday at this point) I learned about it. And now so have you.

:chutten

–edited to remove incorrect note that GetProcAddress started in WinXP– :aklotz noted that GetProcAddress has been around since ancient times, MSDN just periodically updates its “Minimum Supported Release” fields to drop older versions.
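A minimal sketch of that pattern, assuming a build against the Windows 10 SDK. The probed export, SetProcessDpiAwarenessContext, is just an illustrative Windows 10 API; the post doesn't say which specific functions Firefox probes this way:

    #include <windows.h>

    // Signature copied from the Windows 10 SDK headers.
    typedef BOOL (WINAPI *SetDpiContextFn)(DPI_AWARENESS_CONTEXT);

    bool TryEnablePerMonitorDpiAwareness() {
      // user32.dll is loaded in any GUI process; grab its module handle.
      HMODULE user32 = GetModuleHandleW(L"user32.dll");
      if (!user32) {
        return false;
      }

      // Ask Windows: "do you know where SetProcessDpiAwarenessContext lives?"
      auto setContext = reinterpret_cast<SetDpiContextFn>(
          GetProcAddress(user32, "SetProcessDpiAwarenessContext"));
      if (!setContext) {
        // Older Windows (e.g. Windows 7): the export doesn't exist. Fall back.
        return false;
      }

      // The feature exists: jump into it through the typed function pointer.
      return setContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2) != FALSE;
    }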
Posted about 6 years ago by Nicholas Nethercote
Firefox’s preferences system uses data files to store information about default preferences within Firefox, and user preferences in a user’s profile (such as prefs.js, which records changes to preference values, and user.js, which allows users to override default preference values).

A new parser

These data files use a custom format, and therefore Firefox has a custom parser for them. I recently rewrote the parser. The new parser has the following benefits over the old parser.

It is faster (raw parsing speed is close to 2x faster).
It is safer (because it’s written in Rust rather than C++).
It is more correct and better tested (the old one got various obscure edge cases wrong).
It is more readable, and easier to modify.
It issues no warnings, only errors.
It is slightly stricter (e.g. doesn’t allow any malformed input, and it catches integer overflow).
It has error recovery and better error messages (including correct line numbers).

Modifiability

Modifiability was the prime motivation for the change. I wanted to make some adjustments to the preferences file grammar, but this would have been very difficult in the old parser, because it was written in an awkward style. It was essentially a single loop containing a giant switch statement on a state variable. This switch was executed for every single char in a file. The states held by the state variable had names like PREF_PARSE_QUOTED_STRING, PREF_PARSE_UNTIL_OPEN_PAREN, and PREF_PARSE_COMMENT_BLOCK_MAYBE_END. It also had a second state variable, because in some places a single one wasn’t enough; the parser had to return to the previous state after exiting the current state. Furthermore, lexing and parsing were not separate, so code to handle comments and whitespace was spread around in various places.

The new parser is a recursive descent parser — even though the grammar doesn’t actually have any recursion — in which the structure of the code reflects the structure of the grammar (see the sketch at the end of this post). Lexing is distinct from parsing. As a result, the new parser is much easier to read and modify. In particular, after landing it I added error recovery without too much effort; that would have been almost impossible in the old parser. Note that the idea of error recovery for preferences parsing was first proposed in bug 107264, filed in 2001! After landing it, I tweeted the following.

“I fixed an old bug: https://t.co/llDURdHUN8 Imagine going back in time and telling the reporter ‘this bug will get fixed 16 years from now, and the code will be written in a systems programming language that doesn’t exist yet’.” — Nicholas Nethercote (@nnethercote) February 20, 2018

Amazingly enough, the original reporter is on Twitter and responded!

“I kept getting emails on this bug over the years — dependencies and stuff — and I’d be like, ‘this bug is still open?!’ Great job, @nnethercote!” — Kevin Basil Fritts (@kevinbasil) March 1, 2018

Strictness

The new parser is slightly stricter and rejects some malformed input that the old parser accepted.

Junk chars

Disconcertingly, the old parser allowed arbitrary junk between preferences (including at the start and end of the prefs file) so long as that junk didn’t include any of the following chars: ‘/’, ‘#’, ‘u’, ‘s’, ‘p’.
This means that lines like these:

!foo@bar&pref("prefname", true);
ticky_pref("prefname", true);  // missing 's' at start
User_pref("prefname", true);   // should be 'u' at start

would all be treated the same as this:

pref("prefname", true);

The new parser disallows such junk because it isn’t necessary and seems like an unintentional botch by the old parser. In practice, this caught a couple of prefs that accidentally had an extra ‘;’ at the end.

SUB char

The old parser allowed the SUB (0x1a) character between tokens and treated it like ‘\n’. The new parser does not allow this character. SUB was used to indicate end-of-file (not end-of-line) in some old operating systems such as MS-DOS, but this doesn’t seem necessary today.

Invalid escapes

The old parser tolerated (with a warning) invalid escape sequences within string literals — such as “\q” (not a valid escape) and “\x1” and “\u12” (both of which have insufficient hex digits) — accepting them literally. The new parser does not tolerate invalid escape sequences because it doesn’t seem necessary and would complicate things.

NUL char

The old parser tolerated the NUL character (0x00) within string literals; this is dangerous because C++ code that manipulates string values with embedded NULs will almost certainly consider those chars as end-of-string markers. The new parser treats the NUL character as end-of-file, to avoid this danger. (The escape sequences “\x00” and “\u0000” are also disallowed.)

Integer overflow

The old parser allowed integer literals to overflow, silently wrapping them. The new parser treats integer overflow as a parse error. This seems better, and it caught overflows of several existing prefs.

Consequences

Error recovery minimizes the risk of data loss caused by the increased strictness, because malformed pref lines in prefs.js will be removed but well-formed pref lines afterwards are preserved. Nonetheless, please keep an eye out for any other problems that might arise from this change.

Attributes

I mentioned before that I wanted to make some adjustments to the preferences file grammar. Specifically, I changed the grammar used by default preference files (but not user preference files) to support annotating each preference with one or more boolean attributes. The attributes supported so far are ‘sticky’ and ‘locked’. For example:

pref("sticky.pref", true, sticky);
pref("locked.pref", 123, locked);
pref("sticky-and-locked-pref", "blah", sticky, locked);

Note that the addition of the ‘locked’ attribute fixed a 10 year old bug.

When will this ship?

All of these changes are on track to ship in Firefox 60, which is due to release on May 9th.
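For illustration only, here is a self-contained sketch of the recursive-descent shape described above, with a small lexer kept separate from the parsing functions and one function per grammar rule. It handles just a pref("name", value); subset with boolean and string values, and the pref name in main() is made up; the real parser is written in Rust and covers the full grammar, error recovery, and attributes:

    #include <cctype>
    #include <cstdio>
    #include <cstring>
    #include <string>

    // Minimal lexer: skips whitespace and recognizes punctuation, keywords,
    // and double-quoted strings (no escapes, comments, or integers).
    struct Lexer {
      const char* p;

      void skipWhitespace() {
        while (std::isspace(static_cast<unsigned char>(*p))) ++p;
      }
      bool eatChar(char c) {
        skipWhitespace();
        if (*p == c) { ++p; return true; }
        return false;
      }
      bool eatWord(const char* w) {
        skipWhitespace();
        size_t n = std::strlen(w);
        if (std::strncmp(p, w, n) == 0) { p += n; return true; }
        return false;
      }
      bool eatString(std::string* out) {
        skipWhitespace();
        if (*p != '"') return false;
        ++p;
        while (*p && *p != '"') out->push_back(*p++);
        if (*p != '"') return false;  // unterminated string
        ++p;
        return true;
      }
    };

    // pref-line ::= "pref" "(" string "," value ")" ";"
    // value     ::= "true" | "false" | string
    bool parsePrefLine(Lexer& lex, std::string* name, std::string* value) {
      if (!lex.eatWord("pref")) return false;
      if (!lex.eatChar('(')) return false;
      if (!lex.eatString(name)) return false;
      if (!lex.eatChar(',')) return false;
      if (lex.eatWord("true")) *value = "true";
      else if (lex.eatWord("false")) *value = "false";
      else if (!lex.eatString(value)) return false;
      return lex.eatChar(')') && lex.eatChar(';');
    }

    int main() {
      Lexer lex{R"(  pref("browser.example.enabled", true);  )"};  // hypothetical pref
      std::string name, value;
      if (parsePrefLine(lex, &name, &value))
        std::printf("%s = %s\n", name.c_str(), value.c_str());
      else
        std::printf("parse error\n");
      return 0;
    }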
Posted about 6 years ago by Les Orchard
TL;DR: Last year, I started work on a new Test Pilot experiment playing with themes in Firefox.

New theme APIs are fun

At the core of this experiment are new theme APIs for add-ons shipping with Firefox. These APIs take inspiration from static themes in Google Chrome, building from there to enable the creation of dynamic themes. For example, Quantum Lights changes based on the time of day. VivaldiFox reflects the sites you’re visiting. You could even build themes that use data from external HTTP services — e.g. to change based on the weather.

To explore these new APIs, Firefox Themer consists of a website and a companion add-on for Firefox. The website offers a theme editor with a paper doll preview — you can click on parts of a simulated browser interface and dress it up however you like. The add-on grants special powers to the website, applying changes from the theme in the editor onto the browser itself.

Editing themes on the web

The site is built using Webpack, React, and Redux. React offers a solid foundation for composing the editor. Personally, I really like working with stateless functional components — they’re kind of what tipped me over into becoming a React convert a few years ago. I’m also a terrible visual designer with weak CSS-fu — but using Webpack to bundle assets from per-component directories makes it easier for teammates to step in where I fall short.

Further under the hood, Redux offers a clean way to manage theme data and UI state. Adding undo & redo buttons is easy, thanks to redux-undo. And, by way of some simple Redux middleware, I was able to easily add a hook to push every theme change into the browser via the add-on.

The website is just a static page — there’s no real server-side application. When you save a theme, it ends up in your browser’s localStorage. Though we plan to move Themer to a proper production server when we launch in Test Pilot, I’ve been deploying builds to GitHub Pages during development.

Another interesting feature of the website is that we encode themes as a parameter in the URL. Rather than come up with a bespoke scheme, I use the json-url module to compress JSON and encode it as Base64, which makes for a long URL but not unreasonably so. This approach enables folks to simply copy & paste a URL to share a theme they’ve made. You can even link to themes from a blog post, if you wanted to!

When the page loads and sees the ?theme URL, it unpacks the data and loads it into the editor’s Redux store. I’ve also been able to work this into the location bar with the HTML5 History API and Redux middleware. The browser location represents the current theme, while back & forward buttons double as undo & redo.

Add-ons can be expansion cartridges

The companion add-on is also built using Webpack. It acts as an expansion cartridge for the theme editor on the website. (Can you tell I’ve had retro computers on the mind, lately?)

Add-ons in Firefox can install content scripts that access content and data on web pages. Content scripts can communicate with the parent add-on by way of a message port. They can also communicate with a web page by way of synthetic events. Put the two together, and you’ve got a messaging channel between a web page and an add-on in Firefox. The heart of that messaging bridge is shown in a code snippet embedded in the original post (not captured in this feed).

With this approach, the web page doesn’t actually gain access to any Firefox APIs. The add-on can decide what to do with messages it receives.
If the page sends invalid data or asks to do something not supported — nothing happens. (The snippet of that logic from the extension, and the Redux middleware mentioned earlier that updates the add-on from the web, are embedded as code in the original post and not captured in this feed.)

The add-on can also restrict the set of pages from which it will accept messages: we hardcode the URL for the theme editor into the add-on’s content script configuration at build time, which means no other web page should be able to ask the add-on to alter the theme in Firefox.

Add-on detection is hard

There is a wrinkle to the relationship between website and add-on, though: a normal web page cannot detect whether or not a particular add-on has been installed. All the page can do is send a message. If the add-on responds, then we know the add-on is available. Proving a negative, however, is impossible: the web page can’t know for sure that the add-on is not available. Responses to asynchronous messages take time — not necessarily a long time, but more than zero time.

If the page sends a message and doesn’t get a response, that doesn’t mean the add-on is missing. It could just mean that the add-on is taking a while to respond. So, we have to render the theme editor such that it starts off by assuming the add-on is not installed. If the add-on shows up, minutes or milliseconds later, the page can update itself to reflect the new state of things.

Left as-is, you’d see several flashes of color and elements on the page move as things settle. That seems unpleasant and possibly confusing, so we came up with a loading spinner. When the page loads, it displays the spinner and a timer starts. If that timer expires, we consider things ready and reveal the editor. But, if there’s any change to the Redux store while that timer is running, we restart the clock. (The gist of that code is also embedded in the original post.) Early changes to the store are driven by things like decoding a shared theme and responses from the add-on. Again, these are asynchronous and unpredictable. The timer duration is an arbitrary guess I made that seems to feel right. It’s a dirty hack, but it seems like a good enough effort for now.

Using npm scripts and multiple Webpack configs

One of the things that has worked nicely on this project is building everything in parallel with a single npm command. You can clone the repo and kick things off for development with a simple npm install && npm start dance. The add-on and the site both use Webpack. There’s a shared config as a base and then specific configurations with tweaks for the site and the add-on. So, we want to run two separate instances of Webpack to build everything, watch files, and host the dev server.

This is where npm-run-all comes in: it’s a CLI tool that lets you run multiple npm scripts. I used to use gulp to orchestrate this sort of thing, but npm-run-all lets me arrange it all in package.json. It would be fine if this just enabled running scripts in series. But npm-run-all also lets you run scripts in parallel. The cherry on top is that this parallelization works on Linux, OS X, and Windows.

In past years, Windows support might have been an abstract novelty for me. But, in recent months, I’ve switched from Apple hardware to a PC laptop. I’ve found the new Windows Subsystem for Linux to be essential to that switch.
But sometimes it’s nice to just fire up a Node.js dev environment directly in PowerShell — npm-run-all lets me (and you) do that! So, the start script in our package.json is able to fire up both Webpack processes for the site and add-on. It can also start a file watcher to run linting and tests (when we have them) alongside. That simplifies using everything in a single shell window across platforms.

I used to lean on Vagrant or Docker to offer something “simple” to folks interested in contributing to a project. But, though virtual machines and containers can hide apparent complexity in development, it’s hard to beat just running things in node on the native OS.

Help us make themes more fun!

We’re launching this experiment soon. And, though it only makes limited use of the new theme APIs for now, we’re hoping that the web-based editor and ease of sharing make it fun and worth playing with. We’ve got some ideas on what to add over the course of the experiment and hope to get more from the community. Whether you can offer code, give feedback, participate in discussions, or just let us watch how you use something — everyone has something valuable to offer. In fact, one of the overarching goals of Test Pilot is to expand channels of contribution for folks interested in helping us build Firefox.

As with all Test Pilot experiments, we’ll be watching how folks use this stuff as input for what happens next. We also encourage participation in our Discourse forums. And finally, the project itself is open source on GitHub and open to pull requests. In the meantime, start collecting color swatches for your own theme. Personally, I might try my hand at a Dracula theme or maybe raid my Vim config directory for some inspiration.

Originally published at blog.lmorchard.com on March 1, 2018.
Posted about 6 years ago
Servo had an amazing year in 2017. We saw the style system ship and deliver performance improvements as a flagship element of the highly regarded Firefox Quantum release. And we’ve continued to build out the engine platform, experiment with new embedding APIs and innovations in graphics and font rendering, and graduate subsystems to production readiness for inclusion in Firefox. Consistently throughout those efforts, we saw work in Servo demonstrate breakthrough advances in parallelism, graphics rendering, and robustness.

Coming in to 2018, we see virtual and augmented reality devices transitioning from something just for hardcore gamers and enterprises into broad consumer adoption. These platforms will transform the way that users create and consume content on the internet. As part of the Emerging Technologies and Mozilla Research missions to enable the web platform on these new systems, we will be adopting the Mozilla Servo team as part of the Mixed Reality team and doubling down on our investigations in virtual and augmented reality. Servo is already the platform where we first implemented support for mobile VR, extensions such as WebGL MultiView, and even our sneak peek running on the Qualcomm Snapdragon 835 developer kit and compatible AR glasses from last September. Servo’s lean, modern code base and leading-edge strengths in parallelism and graphics are ideal for prototyping new technology for the web and growing the results into production code usable both inside and outside of Servo.

What does this look like concretely? The first thing we will do is get Servo implementing the GeckoView API, working inside one of our existing mobile browser shell apps, and working with a ton of VR and AR devices so that it can run hand-in-hand with our existing use of Gecko in our Mixed Reality Browser. Like our WebXR iOS Viewer, this will give us a platform where we can experiment, drive standards forward, and build compelling pilot experiences.

Some of the experiments we’re looking to invest more in during 2018:

Declarative VR. We have libraries like Three.js, Babylon.js, A-Frame, and ReactVR and tools like PlayCanvas and Unity to produce 3D content, but there are no standards yet for how traditional web pages should behave when loaded into a headset. We will continue to experiment with things like DOM to texture. It is still difficult to allow web content to be part of a 3D scene.
Higher quality text rendering with WebRender and Pathfinder, originally designed for desktop but now tuned for VR and AR hardware.
Experiment with new AR APIs and computer vision.
Experiment with new WebGL extensions (multiview, lens-matched shading, etc.)
Experiments with device & voice APIs (WebBluetooth, Physical Web/Beacon successors, etc.)

Keep tuned here and to the Mozilla Mixed Reality blog for more updates! It’s going to be a thrilling year.