Posted over 6 years ago by Ralph Giles
Bitmovin and Mozilla partner to enable HTML5 AV1 Playback
Bitmovin and Mozilla, both members of the Alliance for Open Media (AOM), are partnering to bring AV1 playback with HTML5 to Firefox as the first browser to play AV1 MPEG-DASH/HLS streams.
While the AV1 bitstream is still being finalized, the industry is gearing up for fast adoption of the new codec, which promises to be 25–35% more efficient than VP9 and H.265/HEVC.
The AV1 bitstream is set to be finalized by the end of 2017. You may ask: “How does playback work on a bitstream that is not yet finalized?” Indeed, this is a good question, as many things in the bitstream may still change at the current stage of development. However, to make playback possible, we just need to ensure that the encoder and decoder use the same version of the bitstream. Bitmovin and Mozilla agreed on a simple, but for the time being useful, codec string to ensure compatibility between the version of the bitstream in the Bitmovin AV1 encoder and the AV1 decoder in Mozilla Firefox:
"av1.experimental."
A test page has been prepared to demonstrate playback of MPEG-DASH test assets encoded in AV1 by the Bitmovin Encoder and played with the Bitmovin HTML5 Player (7.3.0-b7) in the Firefox Nightly browser.
AV1 DASH playback demo by Bitmovin and Firefox Nightly. Short film “Tears of Steel” cc-by Blender Foundation.
Visit the demo page at https://demo.bitmovin.com/public/firefox/av1/. You can download Firefox Nightly here to view it.
Bitmovin AV1 End-to-End
The Bitmovin AV1 encoder is based on the AOM specification and scaled on Bitmovin’s cloud native architecture for faster throughput. Earlier this year, the team wrote about the world’s first AV1 livestream at broadcast quality, which was demoed during NAB 2017 and brought the company the Best of NAB 2017 Award from Streaming Media.
The current state of the AV1 encoder is still far from delivering reasonable encoding times without extensive tuning of the code base: for example, it takes about 150 seconds on an off-the-shelf desktop computer to encode one second of video. For this reason, Bitmovin’s ability to provide complete ABR test assets (multiple qualities and resolutions) of high quality in reasonable time was extremely useful for testing MPEG-DASH/HLS playback of AV1 in Firefox. (HLS playback of AV1 is not officially supported by Apple, but it is technically possible.) The fast encoding throughput is achieved thanks to Bitmovin’s flexible cloud-native architecture, which allows massive horizontal scaling of a single VoD asset across multiple nodes, as depicted in the following figure. An additional benefit of the scalable architecture is that quality doesn’t need to be compromised for speed, as is often the case with a typical encoding setup.
Bitmovin’s scalable video encoder.
The test assets provided by Bitmovin are segmented WebM outputs that can be used with HLS and MPEG-DASH. For the demo page, we decided to go with MPEG-DASH and encode the assets to the following quality levels:
100 kbps, 480×200
200 kbps, 640×266
500 kbps, 1280×532
800 kbps, 1280×532
1 Mbps, 1920×800
2 Mbps, 1920×800
3 Mbps, 1920×800
We used the royalty-free Opus audio codec and encoded at 32 kbps, which provides a reasonable-quality audio stream.
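To illustrate how a player might choose among a quality ladder like the one above, here is a minimal adaptive-bitrate selection sketch in JavaScript. This is a hypothetical helper for illustration only — it is not the Bitmovin player’s actual ABR algorithm, and the safety factor is an assumption: pick the highest-bitrate rendition that fits within a measured throughput estimate, with some headroom.

```javascript
// Hypothetical sketch: pick the best rendition for an estimated throughput.
// Not the Bitmovin player's actual ABR logic.
var renditions = [
  { kbps: 100,  width: 480,  height: 200 },
  { kbps: 200,  width: 640,  height: 266 },
  { kbps: 500,  width: 1280, height: 532 },
  { kbps: 800,  width: 1280, height: 532 },
  { kbps: 1000, width: 1920, height: 800 },
  { kbps: 2000, width: 1920, height: 800 },
  { kbps: 3000, width: 1920, height: 800 },
];

function pickRendition(renditions, measuredKbps, safetyFactor) {
  // Leave headroom so throughput fluctuations don't cause rebuffering.
  var budget = measuredKbps * (safetyFactor || 0.8);
  var best = renditions[0]; // always have something to play
  for (var i = 0; i < renditions.length; i++) {
    if (renditions[i].kbps <= budget && renditions[i].kbps > best.kbps) {
      best = renditions[i];
    }
  }
  return best;
}
```

For example, a connection measured at 2.5 Mbps with 20% headroom yields a 2 Mbps budget, so the 2 Mbps rendition is selected.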
Mozilla Firefox
Firefox has a long history of pioneering open compression technology for audio and video. We added support for the royalty-free Theora video codec a decade ago in our initial implementation of HTML5 video. WebM support followed a few years later. More recently, we were the first browser to support VP9, Opus, and FLAC in the popular MP4 container.
After the success of the Opus audio codec, our research arm has been investing heavily in a next-generation royalty-free video codec. Mozilla’s Daala project has been a test bed for new ideas, approaching video compression in a totally new way. And we’ve been contributing those ideas to the AV1 codec at the IETF and the Alliance for Open Media.
AV1 is a new video compression standard, developed by many contributors through the IETF standards process. This kind of collaboration was part of what made Opus so successful, with contributions from several organizations and open engineering discussions producing a design that was better than the sum of its parts.
While Opus was adopted as a mandatory format for the WebRTC wire protocol, we don’t have a similar mandate for a video codec. Both the royalty-free VP8 and the non-free H.264 codecs are considered part of the baseline. Consensus was blocked on one side by the desire for a freely-implementable spec, and on the other by the desire for hardware-supported video compression, which VP8 didn’t have at the time.
Major hardware vendors have been involved with AV1 from the start, which we expect will result in accelerated support being available much sooner.
In April, Bitmovin demonstrated the first live stream using the new AV1 compression technology.
In June, Bitmovin and Mozilla worked together to demonstrate the first playback of AV1 video in a web page, using Bitmovin’s adaptive bitrate video technology. The demo is available now and works with Firefox Nightly.
The codec work is open source. If you’re interested in testing this, you can compile an encoder yourself. The format is still under development, so it’s important to match the version you’re testing with the decoder version in Firefox Nightly. We’ve extended the MediaSource.isTypeSupported API to take a git commit as a qualifier. You can test for this, e.g.:
var container = 'video/webm';
var codec = 'av1.experimental.e87fb2378f01103d5d6e477a4ef6892dc714e614';
var mimeType = container + '; codecs="' + codec + '"';
var supported = MediaSource.isTypeSupported(mimeType);
Then select an alternate resource or display an error if your encoded resource isn’t supported in that particular browser.
Past commit ids we’ve supported are aadbb0251996 and f5bdeac22930. The currently-supported commit id, built with default configure options, is available here. Once the bitstream is stable we will drop this convention and you can just test for codecs=av1 like any other format.
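Putting the pieces together, a page could probe several known commit-qualified codec strings and fall back gracefully. This is a hypothetical sketch — the support probe is injected as a parameter so the selection logic can run outside a browser; in a real page it would simply be MediaSource.isTypeSupported:

```javascript
// Hypothetical sketch: probe a list of commit-qualified AV1 codec strings,
// newest first, and return the first supported MIME type (or null).
// In a browser, pass MediaSource.isTypeSupported.bind(MediaSource) as isSupported.
function pickAv1MimeType(commits, isSupported) {
  for (var i = 0; i < commits.length; i++) {
    var mimeType = 'video/webm; codecs="av1.experimental.' + commits[i] + '"';
    if (isSupported(mimeType)) return mimeType;
  }
  return null; // nothing supported: select another resource or show an error
}

var commits = [
  'e87fb2378f01103d5d6e477a4ef6892dc714e614', // currently supported
  'f5bdeac22930',                             // past
  'aadbb0251996',                             // past
];
```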
Since the initial demo, we’ve continued to develop AV1, providing feedback from real-world application testing and periodically updating the version we support to take advantage of ongoing improvements. The compression efficiency continues to improve. We hope to stabilize the new format next year and begin deployment across the internet of this exciting new format for video.
Posted over 6 years ago by Christoph Kerschbaumer
End users rely on the address bar of a web browser to identify what web page they are on. However, most end users are not aware of the concept of a data URL which can contain a legitimate address string making the end user believe they are browsing a
particular web page. In reality, attacker provided data URLs can show disguised content tricking end users into providing their credentials. The fact that the majority of end users are not aware that data URLs can encode untrusted content makes them popular amongst scammers for spoofing and particularly for phishing attacks.
To mitigate the risk that Firefox users are tricked into phishing attacks by malicious actors encoding legitimate address strings in a data URL, Firefox 58 will prevent web pages from navigating the top-level window to a data URL, and hence will prevent stealing an end user’s credentials. At the same time, Firefox will allow navigations to data URLs that genuinely result from an end user action.
In more detail, the following cases will be blocked:
Web page navigating to a new top-level data URL document using:
window.open("data:…");
window.location = "data:…";
clicking (including ctrl+click, ‘open-link-in-*’, etc).
Web page redirecting to a new top-level data URL document using:
302 redirects to “data:…”
meta refresh to “data:…”
External applications (e.g., ThunderBird) opening a data URL in the browser
Whereas the following cases will be allowed:
User explicitly entering/pasting “data:…” into the address bar
Opening all plain text data files
Opening “data:image/*” in top-level window, unless it’s “data:image/svg+xml”
Opening “data:application/pdf” and “data:application/json”
Downloading a data: URL, e.g. ‘save-link-as’ of “data:…”
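The blocked and allowed cases above can be summarized as a predicate over the data URL’s MIME type and whether the navigation was user-initiated. This is a simplified, hypothetical model for illustration — not Firefox’s actual implementation, which lives in Gecko’s C++ and involves principal and load-info checks:

```javascript
// Hypothetical sketch modeling the Firefox 58 rules listed above.
// mimeType is the MIME type portion of the data URL (e.g. "text/html").
function allowTopLevelDataNavigation(mimeType, userInitiated) {
  // Typing/pasting into the address bar, save-link-as, etc. are allowed.
  if (userInitiated) return true;
  // Plain text data files are allowed.
  if (mimeType === 'text/plain') return true;
  // Images are allowed, except SVG (which can contain script).
  if (mimeType.indexOf('image/') === 0) {
    return mimeType !== 'image/svg+xml';
  }
  // PDF and JSON are allowed.
  if (mimeType === 'application/pdf' || mimeType === 'application/json') {
    return true;
  }
  // Everything else (notably text/html) is blocked when a page initiates it.
  return false;
}
```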
Starting with Firefox 58, web pages attempting to navigate the top-level window to a data URL will be blocked, and a message explaining the block will be logged to the console.
For the Mozilla Security Team:
Christoph Kerschbaumer
The post Blocking Top-Level Navigations to data URLs for Firefox 58 appeared first on Mozilla Security Blog.
Posted over 6 years ago by Corey Richardson
Hello and welcome to another issue of This Week in Rust!
Rust is a systems language pursuing the trifecta: safety, concurrency, and speed.
This is a weekly summary of its progress and community.
Want something mentioned? Tweet us at @ThisWeekInRust
or send us a pull request.
Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub.
If you find any errors in this week's issue, please submit a PR.
Updates from Rust Community
News & Blog Posts
Announcing Rust 1.22 (and 1.22.1).
GTK Rust tutorials - a series.
Writing fast and safe native Node.js modules with Rust.
Improving Ruby performance with Rust.
This week in Rust docs 83.
[podcast] New Rustacean News: Rust 1.21 and 1.22. Quality of life improvements, Failure, wasm, and rustdoc fun – or, a bunch of highlights from the new releases and the community since 1.20.
[podcast] Rusty Spike Podcast - episode 9. We chat about impl trait, Rust/GNOME hackfest, memory layouts, Visual Studio, failure, suricata, wasm, and some feel-good news.
Crate of the Week
This week's crate is faster, a crate for zero-overhead, cross-platform, beautiful explicit SIMD code. Thanks to Vikrant for the suggestion.
Submit your suggestions and votes for next week!
Call for Participation
Always wanted to contribute to open-source projects but didn't know where to start?
Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
Contribute to Rust's 2017 impl period.
tera: Allow other type of quotes for strings in the parser. Tera is a template engine for Rust based on Jinja2/Django.
tera: Sort filter (and possibly some others).
smallvec: Dedup functionality. "Small vector" optimization for Rust: Smallvec lets you store up to a small number of items on the stack.
If you are a Rust project owner and are looking for contributors, please submit tasks here.
Updates from Rust Core
118 pull requests were merged in the last week
rustbuild: Enable WebAssembly backend by default
rustc: Add support for some more x86 SIMD ops
rustc: don't mark lifetimes as early-bound in the presence of impl Trait
implement in-band lifetime bindings
impl Trait Lifetime Handling
Display negative traits implementation
Properly handle reexport of foreign items
Make accesses to fields of packed structs unsafe
support ::crate in paths
allocators: don’t assume MIN_ALIGN for small sizes
Kill the storage for all locals on returning terminators
incr.comp.: Make sure we don't lose unused green results from the query cache
InstCombine Len([_; N]) => const N in MIR
do match-check for consts
rustc_trans: don't apply noalias on returned references
allow filtering analysis by reachability
typeck aggregate rvalues in MIR type checker
add a MIR pass to lower 128-bit operators to lang item calls
add a MIR-borrowck-only output mode
MIR Borrowck: Parity with Ast for E0384 (Cannot assign twice to immutable)
add structured suggestions for various "use" suggestions
be more obvious when suggesting dereference
add hints for the case of confusing enum with its variants
dead code lint to say "never constructed" for variants
add process::parent_id
impl From for Mutex and RwLock
optimize read_to_end
make float::from_bits transmute
implement Rc/Arc conversions for string-like types
add Box::leak<'a>(Box) -> &'a mut T where T: 'a
move closure kind, signature into ClosureSubsts
add RefCell::replace_with
rustdoc: Fix path search
show in docs whether the return type of a function impls Iterator/Read/Write
rustdoc: include external files in documentation (RFC #1990)
New Contributors
colinmarsh19
David Alber
Julien Cretin
Maxim Zholobak
Mazdak
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments)
process. These
are the RFCs that were approved for implementation this week:
No RFCs were approved this week.
Final Comment Period
Every week the team announces the
'final comment period' for RFCs and key PRs which are reaching a
decision. Express your opinions now. This week's FCPs are:
[disposition: merge] Fallible collection allocation 1.0.
[disposition: merge] Implicit caller location (third try to the unwrap/expect line info problem).
[disposition: merge] Unsized rvalues.
[disposition: merge] eRFC: Cargo build system integration.
[disposition: merge] Type privacy and private-in-public lints.
New RFCs
Cargo publish with internal path dependencies.
Hexadecimal integers with fmt::Debug, including within larger types.
Upcoming Events
Nov 30. Rust Munich: Rust Machine Learning with Juice.
Nov 30. Rust Detroit - Introducing Tock OS 1.0.
Nov 30. Rust release triage.
Dec 6. Rust Cologne: impl Glühwein.
Dec 6. Rust Atlanta: Grab a beer with fellow Rustaceans.
Dec 6. Rust Roma: Rust learning and hacking evening #4.
Dec 6. Rust Community Team Meeting at #rust-community on irc.mozilla.org.
Dec 6. Rust Documentation Team Meeting at #rust-docs on irc.mozilla.org.
Dec 11. Seattle Rust Meetup.
Dec 13. Rust Amsterdam: Theme night on Procedural Macros & Custom Derive
Dec 13. Rust Community Team Meeting at #rust-community on irc.mozilla.org.
Dec 13. Rust Documentation Team Meeting at #rust-docs on irc.mozilla.org.
Dec 13. OpenTechSchool Berlin - Rust Hack and Learn.
Dec 14. Rust release triage.
Dec 14. Rust DC - Mid-month Rustful: Falcon.
Dec 14. Columbus Rust Society - Monthly Meeting.
If you are running a Rust event please add it to the calendar to get
it mentioned here. Email the Rust Community Team for access.
Rust Jobs
Port 2,200 lines of C++ to Rust (face_detection)
Tweet us at @ThisWeekInRust to get your job offers listed here!
Quote of the Week
Indeed. I notice even when after some Rust I return to the “main day job” C, I start to think differently, and it is excellent. Rust is like a complement to good diet and exercise.
— AndrewY on TRPLF.
Thanks to juleskers for the suggestion!
Submit your quotes for next week!
This Week in Rust is edited by: nasa42 and llogiq.
Posted over 6 years ago by Bobby Holley
Two weeks ago, we released Firefox Quantum to the world. It’s been a big moment for Mozilla, shaping up to be a blockbuster release that’s changing how people think and talk about Firefox. It’s also a cathartic moment for me personally: I’ve spent
the last two years pouring my heart and soul into Quantum’s headline act, known as Stylo, and it means a lot to see it so well-received.
But while all the positive buzz is gratifying, it’s easy to miss the deeper significance of what we just shipped. Stylo was the culmination of a near-decade of R&D, a multiple-moonshot effort to build a better browser by building a better language. This is the story of how it happened.
Safety at Scale
Systems programmers have been struggling with memory safety for a long time. It is virtually impossible to develop and maintain a large-scale C/C++ application without introducing bugs that, under the right conditions and input, cause control flow to go off the rails and compromise security. There are those who claim otherwise, but I’m quite skeptical.
Browsers are the canonical example here. They’re enormous - millions of lines of C++ code, thousands of contributors, decades of cruft - and there’s enough at stake to create large incentives to find and avoid security-sensitive bugs. Mozilla, Google, Apple, and Microsoft have been at this for decades with access to some of the best talent in the world, and vulnerabilities haven’t stopped. So it’s pretty clear by now that “don’t make mistakes” is not a viable strategy.
Adding concurrency into the mix makes things exponentially worse, which is a shame because concurrency is the only way a program can utilize more than a fraction of the resources in a modern CPU. But with engineers struggling to keep the core pipeline correct under single-threaded execution, multi-threaded algorithms haven’t been a luxury any browser vendor could afford. There are too many details to get right, and getting any of them even slightly wrong can be catastrophic.
Getting details right at scale generally requires the right tools. For example, register allocation is a tedious process that bedeviled assembly programmers, whereas higher-level languages like C++ handle it automatically and get it right every single time. But while C++ effortlessly handles many low-level details, it just wasn’t built to guarantee memory and thread safety.
Could the right tool be built? In the late 2000s, some people at Mozilla decided to try, and announced Rust and Servo. The plan was simple: build a replacement for C++, and use the result to build a replacement for Gecko. In other words, Boil the Ocean - twice.
Rust
I am a firm proponent of incrementalism. I think the desire to throw everything away and start from scratch tends to be an emotional one, and generally indicates a lack of focus and clear thinking about what will actually move the needle.
This may sound antithetical to big, bold changes, but it’s not. Almost everything successful is incremental in one way or another. The teams behind revolutionary products succeed because they make strategic bets about which things to reinvent, and don’t waste energy rehashing stuff that doesn’t matter.
The creators of Rust understood this, and the language owes its remarkable success to careful and pragmatic decisions about scope and focus:
They borrowed Apple’s C++ compiler backend (LLVM), which lets Rust match C++ in speed without reimplementing decades of platform-specific code-generation optimizations.
They leaned on the existing corpus of research languages, which contained droves of well-vetted ideas that nonetheless hadn’t been or couldn’t be integrated into C++.
They included the unsafe keyword - an escape hatch which, for an explicit section of code, allows programmers to override the safety checks and do anything they might do in C++. This allowed people to start building real things in Rust without waiting for the language to grow idiomatic support for each and every use case.
They built a convenient package ecosystem, allowing the out-of-the-box capabilities of Rust to grow while the core language and standard library remained small.
These tactics were by no means the only ingredients to Rust’s success. But they made success possible by neutralizing the structural advantages of C++ and allowing Rust’s good ideas - particularly its control over mutable aliasing - to reach production code.
Servo
Rust is a big leap forward for the industry, and should make its creators proud. But the grand plan for Firefox required a second moonshot, Servo, with an even steeper path to success.
At first glance, the two phases seem analogous: build Rust to replace C++, and then build Servo to replace Gecko. However, there’s a crucial difference - nobody expects the Rust compiler to handle C++ code, but browsers must maintain backwards-compatibility with every single webpage ever written. What’s more, the breadth of the web platform is staggering. It grew organically over almost three decades, has no clear limits in scope, and has lots of tricky observables that thwart attempts to simplify. Reimplementing every last feature and quirk from scratch would probably require thousands of engineer-years. And Mozilla, already heavily outgunned by its for-profit rivals, could only afford to commit a handful of heads to the Servo project.
That kind of headcount math led some people within Mozilla to dismiss Servo as a boondoggle, and the team needed to move fast to demonstrate that Rust could truly build the engine of the future. Rather than grinding through features indiscriminately, they stood up a skeleton, stubbed out the long tail, and focused on reimagining the core pipeline to eliminate performance bottlenecks. Meanwhile, they also invested heavily in community outreach and building a smooth workflow for volunteers. If they could build a compelling next-generation core, they wagered that a safe language and more-accurate specifications from WHATWG could allow an army of volunteers to fill in the rest.
By 2015, the Servo team had built some seriously impressive stuff. They had CSS and layout engines with full type-safe concurrency, which allowed them to run circles around production browsers on multi-core machines. They also had an early prototype of a full-GPU graphics layer called WebRender which dramatically lowered the cost of rendering. With Firefox falling behind in the market, Servo seemed like just the sort of secret sauce that could get us back in the game. But while Servo continued to build volunteer momentum, the completion curve still stretched too far into the future to make it an actionable replacement for Gecko.
Stylo
Whenever a problem seems impossibly hard, tackling it incrementally is a reliable way to gain traction. So near the end of 2015, some of us started brainstorming ways to use parts of Servo in Firefox. Several proposals floated around, but the two that seemed most workable were the CSS engine and WebRender. This post is about the former, but WebRender integration is also making exciting progress, and you can expect to hear more about it soon.
Servo’s CSS engine was an attractive integration target because it was extremely fast and relatively mature. It also serves as the bridge between the DOM and layout, providing a beachhead for further expansion of Rust into rendering code. Unfortunately, CSS engines are also tightly coupled with DOM and layout code, so there is no clean API surface at which to cut. Swapping it out is a daunting task, to say nothing of the complexities of mixing in a new programming language. So there was a lot of skepticism and some chuckling when we started telling people what we were up to.
But we dove in anyway. It was a small team - just me and Cameron McCormack for the first few months, after which point Emilio Cobos joined us as a volunteer. We picked our battles carefully, seeking to maintain momentum and prove viability without drowning in too many tricky details. In April 2016, we got our first pixels on the screen. In May, we rendered Wikipedia. In June, we rendered Wikipedia fast. The numbers were encouraging enough to convince management to launch it as part of Project Quantum, and scale up resourcing to get it done.
Over the next fifteen months, we transformed that prototype into the most advanced CSS engine ever built, one which harnesses the guarantees of Rust to achieve a degree of parallelism that would be intractable to replicate in C++. The technical details are too involved to get into here, but you can learn more about them in Lin Clark’s excellent writeup, Manish Goregaokar’s release-day post, or my high-level overview from last December.
The Team
Stylo shipped, first and foremost, thanks to the dedication and passion of the people who worked on it. They tackled challenge after challenge, pushing themselves to the limit and learning whatever new skills or roles were required to move things forward. The core team of staff and volunteers spanned more than ten countries, and worked (quite literally) around the clock for over a year to get it done on time.
But the real team was also much larger than the set of people working on it full-time. Stylo needed the expertise of a lot of different groups with different goals. We had to ask for a lot of help, and we rarely needed to ask twice. The entire Mozilla community (including the Rust community) deeply wanted us to succeed, so much so that almost everyone was willing to drop what they were doing to get us unblocked. I originally kept a list of people to thank, but I gave up when it got too big, and when I realized the countless ways in which so many Mozillians helped us in some way, big or small.
So thank you, Mozilla community. Stylo is a testament to your hard work, your ingenuity, and your good-natured, scrappy grit. Be proud of this release - it’s a game-changer for the open web, and you made it happen.
Posted over 6 years ago by Anthony Hughes
Update for the week ending Friday, November 24, 2017.
New Requests
Hotfix to disable NV12 format for AMD graphics card users on Windows
Metrics

Requests               This Week   This Month   This Year
New                    1           35           278
Responded              1 (100%)    33 (94%)     268 (96%)
Responded within 48h   1 (100%)    27 (77%)     237 (85%)
Avg. Response Time     48 hours    33 hours     23 hours
Looking for last week’s report? Click Here
Posted over 6 years ago by Air Mozilla
The Monday Project Meeting
Posted over 6 years ago by Vlad Filippov
Today we are beginning the rollout of the TestPilot Notes 1.9.0 update. This update brings a new editor, basic Markdown formatting, and stability improvements.

Markdown formatting in Notes

New Editor

We never stop experimenting at TestPilot, which is why we decided to upgrade our editor in Notes. This new version of Notes uses the brand new CKEditor 5. This upgrade speeds up Notes, helps us keep the code modular, and allows us to collaborate with the CKEditor team to build the best notes editor for the Web and Firefox. This version of CKEditor is very new, but we are very excited about where the development is going!

If you have used Notes already, your existing note content should migrate to the new editor with a few minor adjustments.

Markdown formatting

Many users requested Markdown support, and we are enabling the first iteration of this feature with this release. Once you get the update, you will have access to the following Markdown formatting options:
Heading (## Title)
Bold (__bold__ or **bold**)
Italic (_italic_ or *italic*)
Unordered Lists (* item or - item), Ordered Lists (1. item or 1) item)
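To make the subset above concrete, here is a tiny, hypothetical line-by-line converter for just these formatting options. This is an illustration only — Notes relies on CKEditor 5 for its actual Markdown handling, not on anything like this sketch:

```javascript
// Hypothetical sketch: convert one line of the Markdown subset above to HTML.
// Not the actual Notes/CKEditor implementation.
function miniMarkdownToHtml(line) {
  // Heading: "## Title" -> <h2>Title</h2>
  var heading = line.match(/^##\s+(.*)$/);
  if (heading) return '<h2>' + heading[1] + '</h2>';
  var html = line
    // Bold: **text** or __text__ (must run before italic)
    .replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
    .replace(/__(.+?)__/g, '<strong>$1</strong>')
    // Italic: *text* or _text_
    .replace(/\*(.+?)\*/g, '<em>$1</em>')
    .replace(/_(.+?)_/g, '<em>$1</em>');
  // Unordered list item: "* item" or "- item"
  var item = html.match(/^[*-]\s+(.*)$/);
  if (item) return '<li>' + item[1] + '</li>';
  return '<p>' + html + '</p>';
}
```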
We are planning to expand this list of formatting options in the future!

Notes 1.9.0

We would like to extend a huge thank you to:
all the development contributors for bug fixes and features
CKEditor developers for helping us out with the new editor
Localization contributors for translating Notes
the QA team for being very detailed and super fast with testing this project
If you are interested in contributing to the open source projects mentioned above, please check out the Notes and CKEditor repositories.

TestPilot Notes v1.9.0 was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.
Posted over 6 years ago by Nical
Another long overdue newsletter is here (I still consistently underestimate the time it takes to gather information and write these up, oops!).
I have been using Firefox with WebRender enabled as my main browser for a little while now on various
computers. It’s great to see that it actually works and on many pages outperforms Firefox without WebRender. Of course we are still hitting pages where WebRender doesn’t perform very well but these tend to highlight specific areas of WebRender with a naive implementation that hasn’t been optimized yet. It’s quite satisfying because these tend to be quickly fixed. We aren’t running into fundamental design flaws which is always a risk when rewriting such a large piece of tech from scratch.
There are some configurations with graphics drivers that play very well with WebRender (intel on linux and nvidia on windows have worked quite well for me so far) and others that don’t (I’ve had a lot of issues with the proprietary nvidia drivers on linux and whatever intel chip is on the low end surface tablets for example). Fine tuning WebRender to play nice with the most common hardware and driver configurations is going to be a big challenge that we’ll have to go through before shipping, and we aren’t quite there yet.
Before I go through some of the code changes, I want to give a shout-out to Darkspirit, who is helping a lot with testing and triaging on Bugzilla and GitHub. Thanks a lot!
Notable WebRender changes
Glenn is incrementally refactoring WebRender’s frame building and batching architecture to better support segmenting primitives into parts and move as many pixels as possible to the opaque pass. This ongoing work is spread over many pull requests and has already yielded great performance improvements.
Morris greatly improved the performance of blurs with large radii.
Nical fixed a bug in the rendering of dotted borders.
Gankro further improved serialization of glyphs.
Glenn removed the need to re-build scenes every frame during scrolling (this is a big CPU win).
Glenn implemented filtering out render tasks for filters that have no effect such as opacity(1.0).
Kvark made it possible for GPU queries to be toggled at runtime.
Nical improved the UI of the integrated GPU profiler a bit and exposed the settings to gecko.
Kvark investigated some of the remaining performance issues with motion mark.
Ethan fixed some issues related to pre-multiplied alpha and filters.
Kats fixed a hit testing bug.
Nical implemented rendering common bullet points with WebRender display items instead of blob images.
Kvark made the depth buffer optional in render passes to save memory bandwidth where it isn’t needed.
Ethan fixed another rendering error with pre-multiplied alpha.
Kvark implemented support for the new document API on the renderer side.
Kvark worked around a GLSL compiler bug.
Kvark improved the logic that recycles render targets.
Notable Gecko changes
Kats completed the implementation of position:sticky.
A collection of motionmark work:
jrmuizel landed a change to avoid invalidating blob images on tiny scale changes
Kats fixed a bug which was taking about 19% of client side motionmark time
Lee optimized the way we send fonts to WR and eliminated font copying as a source of main thread jank.
Vincent fixed a crash.
Ethan fixed a bug that caused blob images to be missing.
Andrew landed scaled image container support, which should reduce the frequency of fallback rendering (see bugs 1183378, 1368776, 1366097).
Andrew made shared surfaces reuse the same image key across display items (requires setting the pref “image.mem.shared” to true).
Andrew improved the performance of using SVGs as mask-image.
Jeff ensured we don’t fall back with -moz-border-*-colors on border sides that don’t have a border (the fallback was hitting us on gmail).
Sotaro improved the performance of recompiling shaders by caching the shader binary.
Ethan fixed a memory leak.
Lee implemented rendering pre-transformed glyphs in WebRender.
Sotaro removed a synchronization happening when submitting frames with ANGLE on Windows.
Morris and Sotaro fixed pipeline leaks.
Nical avoided using blob image serialization for content that must be painted on the content thread (such as native themed widgets).
Morris enabled WebRender support for filters (hue-rotate, opacity, saturate).
Kats made APZ use WebRender’s hit testing code and later extended it to work with scroll bars and scroll thumbs.
Jerry integrated WebRender’s threads with gecko’s built-in profiler.
Markus improved the performance of rendering the title bar on mac.
Sotaro fixed some issues with google maps.
… and whole lot of other things as shown in the list of bugs closed since the previous newsletter.
Enabling WebRender in Firefox Nightly
In about:config:
– set “gfx.webrender.enabled” to true,
– set “gfx.webrender.blob-images” to true,
– set “image.mem.shared” to true,
– if you are on Linux, set “layers.acceleration.force-enabled” to true.
Note that WebRender can only be enabled in Firefox Nightly. [Less]
|
Posted
over 6 years
ago
by
Nical
Another long overdue newsletter is here (I still consistently underestimate the time it takes to gather information and write these up, oops!).
I have been using Firefox with WebRender enabled as my main browser for a little while now on various computers. It’s great to see that it actually works, and on many pages it outperforms Firefox without WebRender. Of course we are still hitting pages where WebRender doesn’t perform very well, but these tend to highlight specific areas of WebRender with a naive implementation that hasn’t been optimized yet. That’s quite satisfying, because such issues tend to be quickly fixed. We aren’t running into fundamental design flaws, which is always a risk when rewriting such a large piece of tech from scratch.
Some graphics driver configurations play very well with WebRender (Intel on Linux and Nvidia on Windows have worked quite well for me so far) and others don’t (I’ve had a lot of issues with the proprietary Nvidia drivers on Linux, and with whatever Intel chip is in the low-end Surface tablets, for example). Fine-tuning WebRender to play nice with the most common hardware and driver configurations is going to be a big challenge that we’ll have to go through before shipping, and we aren’t quite there yet.
Before I go through some of the code changes, I want to give a shout-out to Darkspirit, who is helping a lot with testing and triaging on Bugzilla and GitHub. Thanks a lot!
Update – What does image.mem.shared do?
This is a popular question in the comments section of this post, let’s see:
Gecko has an internal representation of the page that we call the display list. When WebRender is disabled, we walk through this display list and each display item on the list knows how to paint itself into the destination surface on the content process. WebRender has its own, slightly different display list format, so we have to turn the Gecko display list into a WebRender display list and send it to the GPU/compositor process, where WebRender renders it.
In the early days of WebRender’s integration we started with a rather naïve transformation of the display list: each Gecko image display item would create a WebRender image object, copy the decoded image into it, and create a WebRender image display item referring to it. This meant that if two display items referred to the same image, each created its own WebRender copy of the image. Ouch! If a site has a “sprite sheet” (a big image containing, for example, all of the icons on the page, with many HTML elements referring to portions of that image), which is fairly common, this would go very wrong very quickly, because the sprite sheet would end up duplicated many times (lots of CPU time and memory bandwidth spent copying data around, and a lot more memory used as well).
Andrew’s work, in a nutshell, made it so that we can decode the image in shared memory directly (removing an expensive copy) and have all image display items that use the same image refer to that shared image object instead of creating their own copy (no more duplication). This will soon be enabled by default but is currently behind the “image.mem.shared” pref. Note that we are (well, Andrew is, single-handedly) still in the process of getting SVG images the same treatment; it isn’t implemented yet (if you are wondering why your memory usage explodes when adding an emoji in Mastodon, for example, that’s what is happening, and it will soon be fixed).
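The deduplication idea can be sketched in a few lines: keep a cache mapping each decoded image’s identity to a single WebRender image key, so every display item that references the same image gets the same key and the pixel data is only uploaded once. This is a minimal illustration with hypothetical types (`ImageId`, `WrImageKey`, `ImageKeyCache`); the real Gecko/WebRender code is considerably more involved.

```rust
use std::collections::HashMap;

// Hypothetical handle types, for illustration only.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ImageId(u64); // identity of a decoded image in Gecko
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct WrImageKey(u64); // handle to the image data uploaded to WebRender

struct ImageKeyCache {
    next: u64,
    keys: HashMap<ImageId, WrImageKey>,
    uploads: u32, // how many times pixel data was actually copied/uploaded
}

impl ImageKeyCache {
    fn new() -> Self {
        ImageKeyCache { next: 0, keys: HashMap::new(), uploads: 0 }
    }

    // Every display item referencing `id` gets the same WebRender key,
    // so the pixel data is uploaded once instead of once per item.
    fn key_for(&mut self, id: ImageId) -> WrImageKey {
        if let Some(&k) = self.keys.get(&id) {
            return k; // cache hit: no copy, reuse the existing key
        }
        self.uploads += 1; // simulate the one-time upload
        let k = WrImageKey(self.next);
        self.next += 1;
        self.keys.insert(id, k);
        k
    }
}
```

With the naïve scheme described above, ten display items slicing one sprite sheet meant ten uploads; with a cache like this they share one key and one upload.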
Notable WebRender changes
Glenn is incrementally refactoring WebRender’s frame building and batching architecture to better support segmenting primitives into parts and move as many pixels as possible to the opaque pass. This ongoing work is spread over many pull requests and has already yielded great performance improvements.
Morris greatly improved the performance of blurs with large radii.
Nical fixed a bug in the rendering of dotted borders.
Gankro further improved serialization of glyphs.
Glenn removed the need to re-build scenes every frame during scrolling (this is a big CPU win).
Glenn implemented filtering out render tasks for filters that have no effect, such as opacity(1.0).
Kvark made it possible for GPU queries to be toggled at runtime.
Nical improved the UI of the integrated GPU profiler a bit and exposed the settings to gecko.
Kvark investigated some of the remaining performance issues with MotionMark.
Ethan fixed some issues related to pre-multiplied alpha and filters.
Kats fixed a hit testing bug.
Nical implemented rendering common bullet points with WebRender display items instead of blob images.
Kvark made the depth buffer optional in render passes to save memory bandwidth where it isn’t needed.
Ethan fixed another rendering error with pre-multiplied alpha.
Kvark implemented support for the new document API on the renderer side.
Kvark worked around a GLSL compiler bug.
Kvark improved the logic that recycles render targets.
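Glenn’s batching work at the top of this list revolves around a classic renderer trick: opaque primitives can be drawn front-to-back so the depth test rejects hidden pixels early, while translucent primitives must be blended back-to-front to composite correctly. A minimal sketch of that pass split, with a hypothetical `Primitive` type (the real WebRender structures are far more involved):

```rust
#[derive(Debug, Clone)]
struct Primitive {
    z: u32, // larger z = closer to the viewer
    opaque: bool,
    name: &'static str,
}

// Split primitives into an opaque pass and an alpha pass.
fn build_passes(prims: &[Primitive]) -> (Vec<&Primitive>, Vec<&Primitive>) {
    let mut opaque: Vec<&Primitive> = prims.iter().filter(|p| p.opaque).collect();
    let mut alpha: Vec<&Primitive> = prims.iter().filter(|p| !p.opaque).collect();
    // Opaque: front-to-back (descending z), so the depth test can
    // discard occluded pixels before shading them.
    opaque.sort_by(|a, b| b.z.cmp(&a.z));
    // Alpha: back-to-front (ascending z), required for correct blending.
    alpha.sort_by(|a, b| a.z.cmp(&b.z));
    (opaque, alpha)
}
```

The more pixels that can be moved into the opaque pass (by segmenting primitives into fully-opaque parts), the more overdraw the depth test eliminates for free.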
Notable Gecko changes
Kats completed the implementation of position:sticky.
A collection of MotionMark work:
jrmuizel landed a change to avoid invalidating blob images on tiny scale changes.
Kats fixed a bug which was taking about 19% of client-side MotionMark time.
Lee optimized the way we send fonts to WR and eliminated font copying as a source of main thread jank.
Vincent fixed a crash.
Ethan fixed a bug that caused blob images to be missing.
Andrew landed scaled image container support, which should reduce the frequency of fallback rendering (see bugs 1183378, 1368776, 1366097).
Andrew made shared surfaces reuse the same image key across display items (requires setting the pref “image.mem.shared” to true).
Andrew improved the performance of using SVGs as mask-image.
Jeff ensured we don’t fall back with -moz-border-*-colors on border sides that don’t have a border (the fallback was hitting us on gmail).
Sotaro improved the performance of recompiling shaders by caching the shader binary.
Ethan fixed a memory leak.
Lee implemented rendering pre-transformed glyphs in WebRender.
Sotaro removed a synchronization happening when submitting frames with ANGLE on Windows.
Morris and Sotaro fixed pipeline leaks.
Nical avoided using blob image serialization for content that must be painted on the content thread (such as native themed widgets).
Morris enabled WebRender support for filters (hue-rotate, opacity, saturate).
Kats made APZ use WebRender’s hit-testing code, and later extended it to work with scroll bars and scroll thumbs.
Jerry integrated WebRender’s threads with Gecko’s built-in profiler.
Markus improved the performance of rendering the title bar on macOS.
Sotaro fixed some issues with Google Maps.
… and a whole lot of other things, as shown in the list of bugs closed since the previous newsletter.
Enabling WebRender in Firefox Nightly
In about:config:
– set “gfx.webrender.enabled” to true,
– set “gfx.webrender.blob-images” to true,
– set “image.mem.shared” to true,
– if you are on Linux, set “layers.acceleration.force-enabled” to true.
Note that WebRender can only be enabled in Firefox Nightly.
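If you prefer a file-based setup, the same prefs can be placed in a `user.js` file in your Firefox Nightly profile directory (a standard Firefox mechanism; the file is read at startup and applied over the defaults):

```javascript
// user.js in the Firefox Nightly profile directory
user_pref("gfx.webrender.enabled", true);
user_pref("gfx.webrender.blob-images", true);
user_pref("image.mem.shared", true);
// Linux only:
user_pref("layers.acceleration.force-enabled", true);
```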
|
Posted
over 6 years
ago
by
Daniel Stenberg
We’ve had volunteers donating bandwidth to the curl project basically since its inception. They mirror our download archives so that you can download them directly from their server farms instead of hitting the main curl site.
On the main site we check the mirrors daily and offer convenient download links from the download page. Historically this has been especially useful on the rare occasions when our site has been down for administrative purposes or otherwise.
Since May 2017 the curl site has been fronted by Fastly, which has reduced the bandwidth issue as well as the downtime problem. The mirrors are still there, though.
Starting now, we will only link to download mirrors that offer the curl downloads over HTTPS in our continued efforts to help our users to stay secure and avoid malicious manipulation of data. I’ve contacted the mirror admins and asked if they can offer HTTPS instead.
The curl download page still contains links to HTTP-only packages and pages, and we would really like to fix them as well. But at the same time we’ve reasoned that it is better to still help users to find packages than not, so for the packages where there are no HTTPS linkable alternatives we still link to HTTP-only pages. For now.
If you host curl packages anywhere, for anyone, please consider hosting them over HTTPS for all the users’ sake.
|