
News

Posted over 6 years ago by Daniel Stenberg
The never-ending series of curl releases continued today when we released version 7.57.0. The 171st release since the beginning, and the release that follows 37 days after 7.56.1. Remember that 7.56.1 was an extra release that fixed a few of the most annoying regressions. We bump the minor number to 57 and clear the patch number in this release due to the changes introduced. None of them very ground-breaking, but fun and useful and detailed below.

41 contributors helped fix 69 bugs in these 37 days since the previous release, using 115 separate commits. 23 of those contributors were new, making the total list of contributors now contain 1649 individuals! 25 individuals authored commits since the previous release, making the total number of authors 540 persons. The curl web site currently sends out 8GB of data per hour to over 2 million HTTP requests per day.

Support RFC 7616 – HTTP Digest

This allows HTTP Digest authentication to use the much better SHA-256 algorithm instead of the old, and deemed unsuitable, MD5. This should be a transparent improvement, so curl should be able to use this without any particular new option having to be set, but server-side support for this version still seems to be a bit lacking. (Side-note: I'm credited in RFC 7616 for having contributed my thoughts!)

Sharing the connection cache

In this modern age with multi-core processors and applications using multi-threaded designs, we of course want libcurl to enable applications to get the best performance out of libcurl. libcurl is already thread-safe, so you can run parallel transfers multi-threaded perfectly fine if you want to, but it doesn't allow the application to share handles between threads. Before this change, that limitation forced multi-threaded applications to be satisfied with letting libcurl have a separate "connection cache" in each thread.
The connection cache, sometimes also referred to as the connection pool, is where libcurl keeps live connections that were previously used for a transfer and still haven't been closed, so that a subsequent request might be able to re-use one of them. Getting a re-used connection for a request is much faster than having to create a new one. Having one connection cache per thread is ineffective.

Starting now, libcurl's "share concept" allows an application to specify a single connection cache to be used cross-thread and cross-handle, so that connection re-use will be much improved when libcurl is used multi-threaded. This will significantly benefit the most demanding libcurl applications, but it will also allow more flexible designs, as the connection pool can now be designed to survive individual handles in a way that wasn't previously possible.

Brotli compression

The popular browsers have supported the brotli compression method for a while and it has already become widely supported by servers. Now curl supports it too: the command line tool's --compressed option will ask for brotli as well as gzip, if your build supports it. Similarly, libcurl supports it with its CURLOPT_ACCEPT_ENCODING option. The server can then opt to respond using either compression format, depending on what it knows. According to CertSimple, who ran tests on the top 1000 sites of the Internet, brotli gets contents 14-21% smaller than gzip.

As with other compression algorithms, libcurl uses a third-party library for brotli decompression, and you may find that Linux distributions and others are a bit behind in shipping packages for a brotli decompression library. Please join in and help this happen. At the moment of this writing, the Debian package is only available in experimental. (Readers may remember my libbrotli project, but that effort isn't really needed anymore since the brotli project itself builds a library these days.)
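The connection-cache sharing described above is a feature of libcurl's C share API, but the underlying idea is easy to sketch. Here is a toy illustration in Python (not libcurl's actual API; the class and method names are invented for this sketch) of a single pool checked in and out of by all threads:

```python
import threading

class Connection:
    """Stand-in for a live network connection."""
    def __init__(self, host, port):
        self.host, self.port = host, port

class ConnectionPool:
    """Toy process-wide connection cache, keyed by (host, port)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._idle = {}  # (host, port) -> list of idle connections

    def checkout(self, host, port):
        # Re-using an idle connection avoids the expensive setup
        # (TCP + TLS handshakes) of establishing a new one.
        with self._lock:
            idle = self._idle.get((host, port))
            if idle:
                return idle.pop()
        return Connection(host, port)  # no idle match: "connect" fresh

    def checkin(self, host, port, conn):
        with self._lock:
            self._idle.setdefault((host, port), []).append(conn)

# One pool for the whole process: because every thread checks
# connections in and out of the same cache, a connection opened
# by one thread can later be re-used by another.
pool = ConnectionPool()
conn = pool.checkout("example.com", 443)
pool.checkin("example.com", 443, conn)
print(pool.checkout("example.com", 443) is conn)  # True: re-used
```

With one pool per thread, a `checkin` in one thread would be invisible to every other thread, which is exactly the inefficiency the new share support removes.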
Three security issues

In spite of our hard work and best efforts, security issues keep getting reported and we fix them accordingly. This release has three new ones and I'll describe them below. None of them are alarmingly serious and they will probably not hurt anyone badly. Two things can be said about the security issues this time:

1. You'll note that we've changed the naming convention for the advisory URLs, so that they now have a random component. This is to reduce potential information leaks based on the name when we pass these around before releases.

2. Two of the flaws happen only on 32-bit systems, which reveals a weakness in our testing. Most of our CI tests, torture tests and fuzzing are done on 64-bit architectures. We have no immediate and good fix for this, but it is something we must work harder on.

1. NTLM buffer overflow via integer overflow (CVE-2017-8816)

Limited to 32-bit systems, this is a flaw where curl takes the combined length of the user name and password, doubles it, and allocates a memory area that big. If that doubling ends up larger than 4GB, an integer overflow makes a very small buffer be allocated instead, and curl will then overwrite that. Yes, having a user name plus password longer than two gigabytes is rather excessive, and I hope very few applications would allow this.

2. FTP wildcard out of bounds read (CVE-2017-8817)

curl's wildcard functionality for FTP transfers is not a very widely used feature, but it was discovered that the default pattern matching function could erroneously read beyond the URL buffer if the match pattern ends with an open bracket ('[')! This problem was detected by the OSS-Fuzz project! The flaw has existed in the code since this feature was added, over seven years ago.

3. SSL out of buffer access (CVE-2017-8818)

In July this year we introduced multissl support in libcurl. This allows an application to select which TLS backend libcurl should use, if it was built to support more than one.
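The arithmetic behind the NTLM flaw is simple to reproduce. As a hedged illustration (this simulates 32-bit size_t wraparound in Python; it is not curl's actual code, and the function name is invented), doubling a combined length just over 2GB wraps around to a tiny allocation size:

```python
MOD_32 = 2**32  # size_t arithmetic wraps modulo 2^32 on a 32-bit system

def simulated_alloc_size(userlen, passwdlen):
    """The vulnerable pattern: combined length, doubled, truncated
    to 32 bits the way unsigned C arithmetic would truncate it."""
    return ((userlen + passwdlen) * 2) % MOD_32

print(simulated_alloc_size(8, 16))         # 48: sane input, sane buffer
print(simulated_alloc_size(2**31 + 8, 0))  # 16: >2GB input wraps to 16 bytes
```

The tiny buffer then gets written with gigabytes of data, which is the overflow. On a 64-bit system the same doubling stays well within size_t's range, which is why the flaw is limited to 32-bit builds.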
It was a fairly large overhaul of the TLS code in curl and unfortunately it also brought this bug. Happening only on 32-bit systems, libcurl would allocate a buffer that was 4 bytes too small for the TLS backend's data, which could lead to the TLS library accessing and using data outside of the heap-allocated buffer.

Next?

The next release will ship no later than January 24th 2018. I expect that one will also introduce changes and warrant bumping the minor number. We have fun stuff pending, such as a new SSH backend, a modifiable happy eyeballs timeout and more. Get involved and help us do even more good!
Posted over 6 years ago
User is not authorized to perform iam:ChangePassword.

Summary: A user who is otherwise authorized to change their password may get this error when attempting to change their password to a string which violates the Password Policy in your IAM Account Settings.

So, I was setting up the 3rd or 4th user in a small team's AWS account, and I did the usual: go to the console, make a user, auto-generate a password for them, tick "force them to change their password on next login", chat them the password and an admonishment to change it ASAP. It's a compromise between convenience and security that works for us at the moment, since there's all of about 10 minutes during which the throwaway credential could get intercepted by an attacker, and I'd have the instant feedback of "that didn't work" if anyone but the intended recipient performed the password change.

So, the 8th or 10th user I'm setting up, same way as all the others, gets that error on the change password screen: "User is not authorized to perform iam:ChangePassword". Oh no, did I do their permissions wrong? I try explicitly attaching Amazon's IAMUserChangePassword policy to them, because that should fix their not being authorized, right? Wrong; they try again and they're still "not authorized".

OK, I have their temp password because I just gave it to them, so I'll pop open private browsing and try logging in as them. When I try putting in the same autogenerated password at the reset screen, I get "Password does not conform to the account password policy." This makes sense; there's a "prevent password reuse" policy enabled under Account Settings within IAM. OK, we won't reuse the password. I'll just set it to that most seekrit string, "hunter2". Nope, the "User is not authorized to perform iam:ChangePassword" is back. That's funny, but consistent with the rules just being checked in a slightly funny order. Then, on a hunch, I try the autogenerated password with a 1 at the end as the new password.
It changes just fine and allows me to log in! So, the user did have authorization to change their password all along... they were just getting an actively misleading error message about what was going wrong.

So, if you get this "User is not authorized to perform iam:ChangePassword" error but you should be authorized, take a closer look at the temporary password that was generated for you. Make sure that your new password matches or exceeds the old one in lowercase letters, uppercase letters, numbers, special characters, and total length.

Poking at it some more, I discovered that one also gets the "User is not authorized to perform iam:ChangePassword" message when one puts an invalid value into the "current password" box on the change password screen. So, check for typos there as well.

This yak shave took about an hour, mostly to pin down the fact that it was the contents of the password string generating the permissions error, and I haven't been able to find the error string in any of Amazon's actual documentation, so hopefully I've said "User is not authorized to perform iam:ChangePassword" enough times in this post that it pops up in search results for anyone else frustrated by the same challenge.
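The advice above can be turned into a quick self-check before submitting a new password. This is only an illustrative sketch of the comparison the post recommends (the function names and the exact rule set are assumptions, not AWS's documented behavior):

```python
import string

def char_classes(pw):
    """Character classes present in a password, plus its length."""
    return {
        "lower":   any(c in string.ascii_lowercase for c in pw),
        "upper":   any(c in string.ascii_uppercase for c in pw),
        "digit":   any(c in string.digits for c in pw),
        "special": any(not c.isalnum() for c in pw),
        "length":  len(pw),
    }

def possible_policy_conflicts(temp_pw, new_pw):
    """Requirements the temporary password satisfies that the new
    candidate does not: plausible triggers for the misleading error."""
    old, new = char_classes(temp_pw), char_classes(new_pw)
    flags = [k for k in ("lower", "upper", "digit", "special")
             if old[k] and not new[k]]
    if new["length"] < old["length"]:
        flags.append("length")
    return flags

print(possible_policy_conflicts("Aut0-Gen!pass", "hunter2"))
# ['upper', 'special', 'length']
```

If the returned list is non-empty, strengthening the new password in those dimensions is a cheaper first step than auditing IAM policies.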
Posted over 6 years ago
Firefox Quantum has been out in the world for a couple of weeks and we have been getting amazing feedback from users and press. We thought we'd share some of … The post The reports are in and you're wild about Firefox Quantum appeared first on The Firefox Frontier.
Posted over 6 years ago by Harly Hsu
(Photo: Ruby Hsu)

On July 14, the Taipei UX team embarked on an incredible journey to Jakarta to explore the mobile life of Indonesians. It was the first time that we had conducted formal user research on our own. And it is thanks to Ruby Hsu, a new user researcher addition to the team, that we finally had the talent, time and resources to do it.

Why Indonesia? First of all, it has the third largest number of internet users in Asia. Secondly, it's an emerging market with high mobile usage. Finally, the Mozilla community in Indonesia has historically been (and still is) extremely strong and active. The local contexts and research logistics they have provided were indispensable.

Some facts about Indonesia (Source: Internet World Stats)

Preliminary research

Prior to this research, the team had looked through a few relevant studies, which included:

- The 2013 Firefox and Southeast Asia research report
- Online marketing reports and surveys
- Speeches around the next billion users
- "Things you don't necessarily know about Indonesia", a book written by a Taiwanese woman who married an Indonesian husband

We also conducted several online surveys to get a first peek at Indonesian user needs. Some of our top preliminary findings regarding the factors for choosing their current browser included ease of use, fast page loading, being lightweight, and being a default install alongside the phone.

The other question we asked was about the features they find most useful. The results were: saving downloaded items to SD card, easy clearing of cookies and history, sharing downloaded content without using data, night mode, a reminder of the data you have left, and so on.

Off to Indonesia

Indonesia is an emerging country that is constantly growing and changing. The city is mixed with modern skyscrapers and traditional neighborhoods hidden in small alleys that locals call kampung. You can feel the energy of the city as the streets are always filled with busy cars, buses and scooters.
New buildings are under construction, and hundreds of malls are always full of people.

Skyscrapers & traditional neighborhoods (Photo: Ruby Hsu)

On the streets of Jakarta, I was really surprised to see a lot of motorcycle riders wearing the same coat and helmet with a logo on it. They are GO-Jek and Grab riders; similar to Uber, these are taxi services, but instead of taking a car, you sit behind a motorcycle rider to travel around the city. Most people pay the driver with cash or top-up points instead of a credit card. This is quite different from what we are accustomed to, but it has become something deeply rooted in their daily life.

GO-Jek on the street of Jakarta (Photo: Harly Hsu)

During the 2 weeks in Indonesia, the team, with Ruby & Bram leading the effort (plus 3 UX designers and 1 Product Manager), conducted 10 home interviews, 8 user testing sessions in the Mozilla community space in Jakarta and 6 sessions of street intercepts. We also did some field research like buying phones and SIM cards and going to Bandung to visit a hackerspace, and we even got a chance to visit and conduct 3 sessions in a school with students from 9th~12th grades, with the help of a teacher there who happens to be a community member.

There are a lot of research findings and Ruby is going to share a detailed report later, but here is a summary of the findings related to mobile browsers:

- Most people learn about an app from friends, relatives or tech experts.
- They acquire apps not only through the Google Play Store but also by sideloading through retailers when purchasing a phone, or from friends transferring APKs via apps like ShareIt.

Sideloading apps via USB OTG Flash Drive when purchasing a new phone (Photo: Ruby Hsu)

- People just use whatever browser is pre-installed on their phone, and a second browser is usually kept as a backup for when some websites can't be opened in the primary browser or when a specific feature like faster downloading is needed.
- On top of that, web page loading speed is the most important thing when they compare different browsers.
- They care about application size, since their phones have limited storage space, and they will uninstall apps that take up too much storage.
- Because of that limited storage, almost everyone buys an SD card and inserts it into their phone to expand the storage.
- When asked, Indonesians select data saving as important to them, but they don't seem to use the data saving features in apps. Instead, they care more about saving storage.
- The system screenshot is their most used method to save a web page, because it is straightforward and universal. We observed lots of people taking multiple screenshots to save an entire web page, like a shopping site or an Instagram post.
- Last but not least, privacy doesn't mean protection against tech giants like Google, Microsoft or even the government; it means personal data or files protected against friends, relatives or hackers.
- Notifications, surprisingly, are not viewed as an annoyance; people actually like receiving them.

Interviewing in the Mozilla community space in Jakarta (Photo: Harly Hsu)

What's next?

Some of the research results were fed into the design of Firefox Rocket, a browser tailor-made for Indonesia which just launched on November 7th. We are eager to see if Indonesians like it and how they interact with it. So stay tuned for more~

Meet Firefox Rocket: A fast & lightweight browser tailor-made for Indonesia.

Journey to explore the Indonesian mobile life was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.
Posted over 6 years ago by Jochai Ben-Avie
We congratulate the Telecom Regulatory Authority of India (TRAI) on the release of their Recommendations on Net Neutrality. The recommendations are unequivocal: net neutrality should be the law of the land in India, and the licenses of all service providers should be amended to include strong net neutrality protections. While it is now up to the Department of Telecommunications (DOT) to enact these license changes, and we urge them to do so swiftly, this is a major step toward protecting Indian users and the open internet. Moreover, TRAI's thoughtful analysis and guidance should serve as a model for regulators around the world.

TRAI's recommendations include many good provisions:

- TRAI recommends hard-coding strong net neutrality and non-discrimination clauses into the operating licenses of all service providers. This is a very strong approach, as it means any net neutrality violation could lead to a service provider's operating license being revoked. TRAI has also provided good draft language that could be used by DOT, which should speed up the process of actually making these license changes.
- TRAI requires that any deviation from net neutrality, including for traffic management practices, must be "proportionate, transient and transparent in nature."
- It requires that specialized services not be "usable or offered as a replacement for Internet Access Services," and that "the provision of the Specialised Services is not detrimental to the availability and overall quality of Internet Access Service."
- There are good definitions throughout of other key terms like "Internet Access Service," "Content," and "Differential Treatment."
- It calls for the creation of a multistakeholder body to collaborate with and assist TRAI in the monitoring and enforcement of net neutrality. While we must be vigilant that this body not become subject to industry capture, there are good international examples of various kinds of multistakeholder bodies working collaboratively with regulators, including the Brazilian Internet Steering Committee (CGI.br) and the Broadband Internet Technical Advisory Group (BITAG).

While TRAI's discussion of the importance of net neutrality can be traced back to at least as early as 2007, this regulatory conversation began in earnest in March 2015 when TRAI released a controversial consultation paper on Over-The-Top (OTT) Services. In response, more than a million Indians filed comments with TRAI calling for strong net neutrality protections via SaveTheInternet.in. Mozilla's Executive Chairwoman Mitchell Baker wrote an open letter to Prime Minister Modi at the time stating: "We stand firm in the belief that all users should be able to experience the full diversity of the Web. For this to be possible, Internet Service Providers must treat all content transmitted over the Internet equally, regardless of the sender or the receiver. At a time when users are increasingly being pushed into private, walled gardens and Internet malls providing access to only a limited number of sites, action is needed to protect the free and open Web."

Fast forward through several consultations to February 2016, when TRAI released an order banning differential pricing (AKA zero rating). While this move was certainly progressive, it was also unusual: no other country had banned differential pricing without already having a net neutrality rule. Today's recommendations are a welcome and important step to finish this process and ensure that not just differential pricing but differential treatment as a whole is banned.

Mozilla has engaged at each step of the two and a half years of consultations and discussions on this topic (see our filings here), and we applaud TRAI for taking these actions to protect the open internet. However, the work isn't done yet. We urge the DOT to move quickly to make the license changes recommended by TRAI. We also note that TRAI has identified several provisions that require further regulatory guidance, most notably the definition of traffic management practices and requirements around transparency disclosures. We look forward to working with TRAI, DOT, and other stakeholders to finalize the enactment of strong net neutrality protections in India.

The post Mozilla applauds TRAI net neutrality recommendations appeared first on Open Policy & Advocacy.
Posted over 6 years ago
After upgrading to Android Studio 3.0, I discovered a few mysterious new files in my app's data directory (/data/data/org.mozilla.focus.debug/, in this case):

- libperfa_x86.so
- perfa.jar
- perfd (binary file)

After some investigation, I discovered these files are a product of Android Studio 3.0's newly rewritten profiler and are added under the following conditions:

- Built with Android Studio 3.0+ (i.e. not Gradle)
- Running on an API 26+ device [1]
- Have opened the "Android Profiler" tab at least once since the AS process started

The new Android profiler documentation supports these conditions: "To show you advanced profiling data, Android Studio must inject monitoring logic into your compiled app." It provides some instructions to enable it, after which I see the "Enable advanced profiling" checkbox is disabled and states "required for API level < 26 only".

I originally started investigating this in Firefox Focus for Android issue #1842, so you can find more investigation details in that issue. You can try it out for yourself by downloading my WhatsInMyDataDirectory Android project (essentially an empty Android app) and notice that the files are only added after meeting the conditions above.

Why might you care? In our case, as part of Firefox Focus, we verify there are no unknown files left over on disk after a browsing session ends, just in case they leak user data. These new files were unknown to our test, triggering our assertion. We confirmed these files would not leak user data because they do not appear for users on release builds.

Notes

[1]: You can enable advanced debugging on older devices, which may also inject these files; I didn't test this. To do so, see "Enable advanced profiling" on the new Android profiler overview page.
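A test like the one described above can be sketched as a simple whitelist check. This is a hypothetical illustration, not the actual Focus test code; the expected directory names are invented examples:

```python
# Files/directories we expect the app itself to create (invented examples).
EXPECTED = {"cache", "shared_prefs", "databases", "files"}

# Artifacts injected by the Android Studio 3.0 profiler.
PROFILER_FILES = {"perfa.jar", "perfd"}

def unexpected_files(found):
    """Entries in the data directory that neither the app nor the
    profiler accounts for; any hit could be leaked user data."""
    return {
        name for name in found
        if name not in EXPECTED
        and name not in PROFILER_FILES
        and not name.startswith("libperfa_")  # e.g. libperfa_x86.so
    }

leftovers = unexpected_files(
    ["cache", "perfd", "libperfa_x86.so", "mystery.bin"])
print(leftovers)  # {'mystery.bin'}
```

Allowing the profiler artifacts only in debug builds, where they actually appear, keeps the assertion strict for release builds.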
Posted over 6 years ago by Alex Klepel
In this fourth article in our series on being 'Open by Design' we take a closer look at Kubernetes, an open source container solution which, in its short two years of existence, has grown a contributor velocity that is unprecedented among open source projects. The Cloud Native Computing Foundation (CNCF)'s strategic approach to structuring an initial coalition of member organisations, and its focus on growing membership, has been key to this success.

The strategic intent behind Kubernetes is well documented in the industry by now. The project's origins lie in an internal container management system, "Borg", that began life in 2005, developed in-house at Google and essential in scaling Google's many massive services. By 2015 the commercial cloud hosting market had become a lucrative business, notably for Amazon, and Google was seeking ways to address the market. Google took the learnings from Borg, used them to develop Kubernetes, and established it as an anchor project under a new Linux Foundation body, the CNCF, to serve as steward and growth engine for an ecosystem of cloud native computing applications, effectively lowering switching costs from Amazon's AWS to other cloud providers.

"Google developed Kubernetes based on their 15 years of experience running all of their infrastructure on top of containers. They were willing to give away all of this knowledge because a ubiquitous, open source container orchestration platform would help level the playing field between Google Cloud and its top competitors, Amazon and Microsoft."
Dan Kohn, Executive Director, Cloud Native Computing Foundation

As is the case with the majority of open source projects, Kubernetes was established through Gifting valuable software, and then benefitting from the collective efforts of a larger community of software engineers who enhance and maintain it by Creating Together.
Where the project really stands out, however, is in the decision to strategically Network Common Interests by establishing a founding set of influential members of the CNCF, whose workforces joined forces to enhance and grow the project. By inviting a carefully selected set of influential founding member firms with a shared mission, the CNCF ensured wide, early adoption of an open-source container solution that could weaken Amazon Web Services' grip on the market.

The CNCF has continued to solidify the network and ensure the longevity of the project by diversifying with scaled memberships, as well as options for SMEs, individuals and academic institutions. The work of the foundation staff is primarily focused on this effort: matching new engagement models to the needs of member communities and ensuring continued PR and promotion of the platform through conferences and articles.

Benefits CNCF Realises Via Participation Modes

By serving as the neutral collaborative forum for the set of partners with Common Interests, the CNCF relies on collaboration to ensure wide Market Share & Adoption of Kubernetes and the growing collection of Cloud Native Computing Foundation projects. The CNCF members achieve Better Products & Services and Lowered Product Development Costs through sharing a workforce of developers.

Gitte Jonsdatter & Alex Klepel

Applying Open Practices — Kubernetes was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.
Posted over 6 years ago by Air Mozilla
Copyright reform is a necessary process to keep pace with the development of the digital age. Join us for it to go in the...
Posted over 6 years ago by Giorgio
You may have noticed I'm rapid-firing NoScript updates to steer the new UI toward the most reasonable directions emerging from your feedback. Unfortunately (or not, in time) it couldn't ever be exactly the same as before, simply because the underlying "legacy" Firefox technology (XUL/XPCOM) is not available to extension developers anymore. But it can become even better than before, with some patience.

Now to the pains. This morning version 10.1.3rc2 was available for a couple of hours, with some important fixes but an even more annoying regression: it erased all permissions from the TRUSTED preset except for "script" (so no objects, no media, no fonts, no background loads and so on). Worse, the checkboxes to restore them were disabled. Since then I've released 10.1.3rc3, which fixes the disabled checkboxes issue, but you still need to restore the TRUSTED permissions (I suggest checking everything, as in the screenshot, in order to make TRUSTED sites behave as if NoScript wasn't there).

Sorry for the inconvenience, and please keep the suggestions coming. Thank you.
Posted over 6 years ago by Ralph Giles
Bitmovin and Mozilla partner to enable HTML5 AV1 Playback

Bitmovin and Mozilla, both members of the Alliance for Open Media (AOM), are partnering to bring AV1 playback with HTML5 to Firefox, making it the first browser to play AV1 MPEG-DASH/HLS streams. While the AV1 bitstream is still being finalized, the industry is gearing up for fast adoption of the new codec, which promises to be 25-35% more efficient than VP9 and H.265/HEVC. The AV1 bitstream is set to be finalized in early 2018.

You may ask: "How does playback work on a bitstream that is not yet finalized?" Indeed, this is a good question, as there are still many things in the bitstream that may change during the current state of development. However, to make playback possible, we just need to ensure that the encoder and decoder use the same version of the bitstream. Bitmovin and Mozilla agreed on a simple, but for the time being useful, codec string to ensure compatibility between the version of the bitstream in the Bitmovin AV1 encoder and the AV1 decoder in Mozilla Firefox: "av1.experimental."

A test page has been prepared to demonstrate playback of MPEG-DASH test assets encoded in AV1 by the Bitmovin Encoder and played with the Bitmovin HTML5 Player (7.3.0-b7) in the Firefox Nightly browser.

AV1 DASH playback demo by Bitmovin and Firefox Nightly. Short film "Tears of Steel" cc-by Blender Foundation. Visit the demo page at https://demo.bitmovin.com/public/firefox/av1/. You can download Firefox Nightly here to view it.

Bitmovin AV1 End-to-End

The Bitmovin AV1 encoder is based on the AOM specification and scaled on Bitmovin's cloud native architecture for faster throughput. Earlier this year, the team wrote about the world's first AV1 livestream at broadcast quality, which was demoed during NAB 2017 and brought the company the Best of NAB 2017 Award from Streaming Media.
The current state of the AV1 encoder is still far from delivering reasonable encoding times without extensive tuning of the code base: it takes about 150 seconds on an off-the-shelf desktop computer to encode one second of video. For this reason, Bitmovin's ability to provide complete ABR test assets (multiple qualities and resolutions) of high quality in reasonable times was extremely useful for testing the MPEG-DASH/HLS playback of AV1 in Firefox. (HLS playback of AV1 is not officially supported by Apple, but technically possible of course.) The fast encoding throughput is achieved thanks to Bitmovin's flexible cloud native architecture, which allows massive horizontal scaling of a single VoD asset to multiple nodes, as depicted in the following figure. An additional benefit of the scalable architecture is that quality doesn't need to be compromised for speed, as is often the case with a typical encoding setup.

Bitmovin's scalable video encoder.

The test assets provided by Bitmovin are segmented WebM outputs that can be used with HLS and MPEG-DASH. For the demo page, we decided to go with MPEG-DASH and encode the assets to the following quality levels:

- 100 kbps, 480×200
- 200 kbps, 640×266
- 500 kbps, 1280×532
- 800 kbps, 1280×532
- 1 Mbps, 1920×800
- 2 Mbps, 1920×800
- 3 Mbps, 1920×800

We used the royalty-free Opus audio codec, encoded at 32 kbps, which provides a reasonable-quality audio stream.

Mozilla Firefox

Firefox has a long history of pioneering open compression technology for audio and video. We added support for the royalty-free Theora video codec a decade ago in our initial implementation of HTML5 video. WebM support followed a few years later. More recently, we were the first browser to support VP9, Opus, and FLAC in the popular MP4 container. After the success of the Opus audio codec, our research arm has been investing heavily in a next-generation royalty-free video codec.
Mozilla's Daala project has been a test bed for new ideas, approaching video compression in a totally new way. And we've been contributing those ideas to the AV1 codec at the IETF and the Alliance for Open Media. AV1 is a new video compression standard, developed by many contributors through the IETF standards process. This kind of collaboration was part of what made Opus so successful, with contributions from several organizations and open engineering discussions producing a design that was better than the sum of its parts.

While Opus was adopted as a mandatory format for the WebRTC wire protocol, we don't have a similar mandate for a video codec. Both the royalty-free VP8 and the non-free H.264 codecs are considered part of the baseline. Consensus was blocked on the one side by the desire for a freely-implementable spec and on the other by the desire for hardware-supported video compression, which VP8 didn't have at the time. Major hardware vendors have been involved with AV1 from the start, which we expect will result in accelerated support being available much sooner.

In April, Bitmovin demonstrated the first live stream using the new AV1 compression technology. In June, Bitmovin and Mozilla worked together to demonstrate the first playback of AV1 video in a web page, using Bitmovin's adaptive bitrate video technology. The demo is available now and works with Firefox Nightly.

The codec work is open source. If you're interested in testing this, you can compile an encoder yourself. The format is still under development, so it's important to match the version you're testing with the decoder version in Firefox Nightly. We've extended the MediaSource.isTypeSupported API to take a git commit as a qualifier.
You can test for this, e.g.:

    var container = 'video/webm';
    var codec = 'av1.experimental.e87fb2378f01103d5d6e477a4ef6892dc714e614';
    var mimeType = container + '; codecs="' + codec + '"';
    var supported = MediaSource.isTypeSupported(mimeType);

Then select an alternate resource or display an error if your encoded resource isn't supported in that particular browser. Past commit ids we've supported are aadbb0251996 and f5bdeac22930. The currently-supported commit id, built with default configure options, is available here. Once the bitstream is stable we will drop this convention and you can just test for codecs=av1 like any other format.

Since the initial demo, we've continued to develop AV1, providing feedback from real-world application testing and periodically updating the version we support to take advantage of ongoing improvements. The compression efficiency continues to improve. We hope to stabilize the new format next year and begin deploying this exciting new video format across the internet.