
News

Posted over 4 years ago by Daniel Stenberg
I’m happy to announce that curl now supports a third SSH library option: wolfSSH. Using this, you can build curl and libcurl to do SFTP transfers in a really small footprint that’s perfectly suitable for embedded systems and others. This goes excellently together with the tiny-curl effort.

SFTP only

The initial merge of this functionality only provides SFTP ability, not SCP. There’s no deeper thought behind this other than that the work has been staged, the code is smaller for SFTP-only, and it may be that users on these smaller devices are happy with SFTP-only. Work on adding SCP support for the wolfSSH backend can be done at a later time if we feel the need. Let me know if you’re one such user!

Build time selection

You select which SSH backend to use at build time. When you invoke the configure script, you decide whether wolfSSH, libssh2 or libssh is the correct choice for you (and you need to have the dev version of the desired library installed). Example invocations appear at the end of this post.

The initial SFTP and SCP support was added to curl in November 2006, powered by libssh2 (the first release to ship it was 7.16.1). Support for getting those protocols handled by libssh instead (which is a separate library; they’re just named very similarly) was merged in October 2017.

Figure: Number of supported SSH backends over time in the curl project.

WolfSSH uses WolfSSL functions

If you decide to use the wolfSSH backend for SFTP, it is probably also a good idea to go with WolfSSL for the TLS backend to power HTTPS and the rest.

A plethora of third party libs

WolfSSH becomes the 32nd third-party component that curl can currently be built to use. See the slide below and click on it to get the full-resolution version.

Slide: 32 possible third party dependencies curl can be built to use

Credits

I, Daniel, wrote the initial new wolfSSH backend code. Merged in this commit. Wolf image by David Mark from Pixabay.
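For illustration, here is roughly what the build-time selection looks like with curl’s autoconf-based build. The flag names follow curl’s configure conventions, but check ./configure --help in your curl source tree to confirm what your version supports:

  # Build curl/libcurl with the wolfSSH backend (SFTP support);
  # assumes the wolfSSH and wolfSSL development files are installed.
  ./configure --with-wolfssh
  make && make install

  # Or pick one of the other SSH backends instead:
  ./configure --with-libssh2
  ./configure --with-libssh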
Posted over 4 years ago by [email protected] (ClassicHasClass)
Firefox 72 builds out of the box and uneventfully on OpenPOWER. The marquee feature this time around is picture-in-picture, which is now supported in Linux and works just fine for playing Trooper Clerks ("salsa shark! we're gonna need a bigger boat!"). The blocking of fingerprinting scripts should also be very helpful, since it will reduce the amount of useless snitchy JavaScript that gets executed. The irony of that statement on a Blogger site is not lost on me, by the way.

The bug that mashed Firefox 71 (ultimately fallout from bug 1601707 and its many dupes) did not get fixed in time for Firefox 72 and turned out to be a compiler issue. The lifetime change that the code in question relies upon is in Clang 7 and up, but not yet in gcc, and unless you are using a pre-release build this fix is not (yet) in any official release of gcc 9 or 10. As Clang is currently unable to completely build the browser on ppc64le, if your extensions are affected (mine aren't) you may want to add this patch, which was also landed on the beta release channel for Firefox 73.

The debug and opt configurations are, again, otherwise unchanged from Firefox 67.
Posted over 4 years ago
Happy new year from the SpiderMonkey team! Heads up: the next newsletter will likely cover both Firefox 74 and Firefox 75 due to the shorter release cycles this year.

JavaScript

New features

The relatedYear field type for Intl.DateTimeFormat.prototype.formatToParts is now part of the spec, so André Bargull made it ride the trains.

Project Visage

Project Visage is a project to write a new frontend (parser and bytecode emitter) for JavaScript in Rust that’s more maintainable, modular, efficient, and secure than the current frontend. The team (Jason Orendorff, Nicolas Pierron, Tooru Fujisawa, Yulia Startsev) is currently experimenting with a parser generator that generates a custom LR parser. There’s a fork of mozilla-central where passing --rust-frontend to the shell makes it try the new frontend, falling back on the C++ frontend for scripts it can’t handle (currently almost everything). LibFuzzer is used to identify cases where the new parser accepts inputs that the current parser rejects. Jason also improved most of our bytecode documentation. (Jason pronounces it “VIZZ-udge”, like the English word, but you can say whatever you want.)

JSScript/LazyScript unification

Ted Campbell fixed the parser to avoid saving trivial data between syntax parsing and the eventual delazification. This saves memory and brings us closer to being able to reconstruct a lazy script directly from its non-lazy version. The js::BaseScript type now contains the fields needed to represent both lazy and non-lazy scripts. This is an important milestone on the path to unifying JSScript and LazyScript.

Project Stencil

As the GC-free parser work continues, Project Stencil aims to define a meaningful data format for the parser to generate. This paves the way to integrating a new frontend (Visage) and allows us to modernize the bytecode caches and improve page-load performance. We call it ‘stencil’ because this data structure is the template from which the VM’s JSScript will be instantiated. Matthew Gaudet started fuzzing the deferred allocation path in the frontend. The hope is that we will enable this code path by default early in the Firefox 74 cycle; the end goal of turning it on by default is to avoid having to maintain two allocation paths indefinitely. The LazyScript unification work continues to simplify (and clarify) the semantics of the internal script flags that the parser must generate. These flags will become part of the stencil data structures.

Regular expression engine update

Iain Ireland is writing a shim layer to enable Irregexp, V8’s regexp engine, to be embedded in SpiderMonkey with minimal changes relative to upstream. He is currently rewriting the existing regular expression code to call the new engine.

Bytecode and IonBuilder simplifications

The previous newsletter mentioned some large IonBuilder code simplifications landing in Firefox 72. This cycle, Jan de Mooij changed all loops to have the same bytecode structure, so IonBuilder can use the same code for all of them. This also allowed us to remove more source notes, and JIT compilation no longer has to look up any source notes. These changes help the new frontend because it’s now easier to generate correct bytecode, and they laid the groundwork for more Ion cleanup work this year. Finally, this work ended up fixing some performance cliffs: yield* expressions, for example, can be at least 5x faster.

toSource/uneval removal

Tom Schuster is investigating removing the non-standard toSource and uneval functions.
This requires fixing a lot of code and tests in Firefox, so as a first step we may do this only for content code. André Bargull helped out by fixing SpiderMonkey tests to stop using these functions. (A small migration sketch appears at the end of this post.)

Debugger

Logan Smyth rewrote debugger hooks to use the exception-based implementation for forced returns. This removed and simplified a lot of code in the debugger, interpreter and JITs, because the exception handler is now the only place where forced returns have to be handled.

Miscellaneous

André Bargull fixed an Array.prototype.reverse performance issue by avoiding GC post-barriers if the array is in the nursery. André also contributed more BigInt optimizations: he reduced allocations and added fast paths for uint64 BigInts. For example, certain multiplications are now 3-11x faster and exponentiations can be up to 30x faster!

WebAssembly

JS BigInt <-> wasm I64 conversion

Igalia (Asumu Takikawa) has landed the JS-BigInt-integration proposal, so i64 values in WebAssembly can be converted to/from JavaScript BigInt. This is behind a flag and Nightly-only for the time being.

Reference types and bulk memory

SpiderMonkey continues to track the reference types and bulk memory proposals, with several tweaks and bug fixes having landed recently.

Wasm Cranelift

Cranelift is a code generator (written in Rust) that we want to use in Firefox as the next optimizing compiler for WebAssembly. Andrew Brown (from Intel) has kept on implementing more opcodes for the wasm SIMD proposal. Julian Seward and Benjamin Bouvier are still working on implementing different register allocation algorithms: a simple linear scan as well as a backtracking allocator à la IonMonkey. Ryan Hunt has added WebAssembly bulk memory operations support to Cranelift; this is enabled when using Cranelift as the wasm backend in Firefox. Yury Delendik has implemented basic support for the reference types proposal; it is not complete yet, so it is not generally available in Cranelift. Sean Stangl and @bjorn3 have landed initial support for automatically determining what the REX prefix should be for x86 instructions, which will ease supporting more x86 instructions.

Ongoing work

The multi-value proposal allows WebAssembly functions and blocks to return multiple values. Andy Wingo from Igalia is making steady progress implementing this feature in SpiderMonkey: it works for blocks, and function calls are in progress. Tom Tung, Anne van Kesteren and others are working on re-enabling SharedArrayBuffer by default. Lars Hansen has done initial work (based on older work by David Major) on implementing the exception handling proposal in our production compilers and in the runtime.
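Returning to the toSource/uneval removal mentioned above: code that still depends on these non-standard SpiderMonkey functions can usually move to standard alternatives. A minimal migration sketch for the common plain-data case (JSON.stringify does not cover functions or cyclic structures, so those need a custom serializer):

  // Non-standard, SpiderMonkey-only, and on the way out:
  //   uneval({ answer: 42 });       // "({answer:42})"
  //   ({ answer: 42 }).toSource();  // "({answer:42})"

  // Standard replacement for plain data:
  const serialized = JSON.stringify({ answer: 42 }); // '{"answer":42}'
  const roundTripped = JSON.parse(serialized);
  console.log(serialized, roundTripped.answer);      // {"answer":42} 42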
Posted over 4 years ago
If you are using feature detection with SharedArrayBuffer objects today, you are likely impacted by upcoming changes to shared memory. In particular, you can no longer assume that if you have access to a SharedArrayBuffer object you can also use it with postMessage().

Detecting if SharedArrayBuffer objects are exposed can be done through the following code:

  if (self.SharedArrayBuffer) {
    // SharedArrayBuffer objects are available.
  }

Detecting if shared memory is possible, by using SharedArrayBuffer objects in combination with postMessage() and workers, can be done through the following code:

  if (self.crossOriginIsolated) {
    // Passing SharedArrayBuffer objects to postMessage() will succeed.
  }

Please update your code accordingly!

(As indicated in the aforelinked changes document, obtaining a cross-origin isolated environment (i.e., one wherein self.crossOriginIsolated returns true) requires setting two headers and a secure context. Simply put, use the Cross-Origin-Opener-Policy header to isolate yourself from attackers and the Cross-Origin-Embedder-Policy header to isolate yourself from victims.)
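For server operators wondering what “setting two headers” looks like in practice, here is a minimal sketch using Node.js (any server will do; the header values shown, same-origin and require-corp, are the ones that enable cross-origin isolation):

  // Minimal Node.js server that opts its documents into cross-origin
  // isolation by sending both headers.
  const http = require("http");

  http.createServer((request, response) => {
    response.setHeader("Cross-Origin-Opener-Policy", "same-origin");
    response.setHeader("Cross-Origin-Embedder-Policy", "require-corp");
    response.setHeader("Content-Type", "text/html");
    // In a secure context with both headers set, supporting browsers
    // should report self.crossOriginIsolated === true.
    response.end("<script>console.log(self.crossOriginIsolated);</script>");
  }).listen(8000);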
Posted over 4 years ago by J.C. Jones
CRLite is a technology to efficiently compress revocation information for the whole Web PKI into a format easily delivered to Web users. It addresses the performance and privacy pitfalls of the Online Certificate Status Protocol (OCSP) while avoiding a need for some administrative decisions on the relative value of one revocation versus another. For details on the background of CRLite, see our first post, Introducing CRLite: All of the Web PKI’s revocations, compressed.

To discuss CRLite’s design, let’s first discuss the input data, and from that we can discuss how the system is made reliable.

Designing CRLite

When Firefox securely connects to a website, the browser validates that the website’s certificate has a chain of trust back to a Certificate Authority (CA) in the Mozilla Root CA Program, including whether any of the CAs in the chain of trust are themselves revoked. At this time Firefox knows the issuing certificate’s identity and public key, as well as the website’s certificate’s identity and public key.

To determine whether the website’s certificate is trusted, Firefox verifies that the chain of trust is unbroken, and then determines whether the website’s certificate is revoked. Normally that’s done via OCSP, but with CRLite Firefox simply has to answer the following questions:

1. Is this website’s certificate older than my local CRLite Filter, i.e., is my filter fresh enough?
2. Is the CA that issued this website’s certificate included in my local CRLite Filter, i.e., is that CA participating?
3. If “yes” to the above, when Firefox queries the local CRLite Filter, does it indicate the website’s certificate is revoked?

That’s a lot of moving parts, but let’s inspect them one by one.

Freshness of CRLite Filter Data

Mozilla’s infrastructure continually monitors all of the known Certificate Transparency logs for new certificates using our CRLite tooling; the details of how that works will be in a later blog post about the infrastructure. Since multiple browsers now require that all website certificates are disclosed to Certificate Transparency logs to be trusted, in effect the tooling has total knowledge of the certificates in the public Web PKI.

Figure 1: CRLite Information Flow. More details on the infrastructure will be in Part 4 of this blog post series.

Four times per day, all website certificates that haven’t reached their expiration date are processed, drawing out lists of their Certificate Authorities, their serial numbers, and the web URLs where they might be mentioned in a Certificate Revocation List (CRL). All of the referenced CRLs are downloaded, verified, processed, and correlated against the lists of unexpired website certificates.

Figure 2: CRLite Filter Generation Process

At the end, we have a set of all known issuers that publish CRLs we could use, the identification numbers of every certificate they issued that is still unexpired, and the identification numbers of every certificate they issued that hasn’t expired but was revoked. With this knowledge, we can build a CRLite Filter.

Structure of A CRLite Filter

CRLite data comes in the form of a series of cascading Bloom filters, with each filter layer adding data to the one before it. Individual Bloom filters have a certain chance of false positives, but using Certificate Transparency as an oracle, the whole Web PKI’s certificate corpus is verified through the filter. When a false positive is discovered, the algorithm adds it to another filter layer to resolve the false positive.
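To make the cascade concrete, here is a minimal sketch of the lookup logic. This is not Firefox’s implementation: the filters are simplified to exact sets behind a mightContain interface, whereas real CRLite layers are space-efficient Bloom filters.

  // layers[0] encodes the revoked set; each deeper layer encodes the
  // false positives of the layer above it. The first layer that misses
  // decides the answer: a miss at an even index means "not revoked",
  // a miss at an odd index means "revoked".
  function crliteLookup(layers, certId) {
    for (let i = 0; i < layers.length; i++) {
      if (!layers[i].mightContain(certId)) {
        return i % 2 === 1; // true means revoked
      }
    }
    // The deepest layer was verified against the whole known corpus,
    // so a hit there is decisive.
    return layers.length % 2 === 1;
  }

  // Exact-set stand-in for a Bloom filter, for demonstration only.
  const makeFilter = (items) => ({
    set: new Set(items),
    mightContain(x) { return this.set.has(x); },
  });

  const revoked = makeFilter(["issuerA:0001"]); // layer 1
  const falsePositives = makeFilter([]);        // layer 2
  console.log(crliteLookup([revoked, falsePositives], "issuerA:0001")); // true
  console.log(crliteLookup([revoked, falsePositives], "issuerA:0002")); // false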
Figure 3: CRLite Filter Structure

The certificate’s identifier is defined as shown in Figure 4:

Figure 4: CRLite Certificate Identifier

For complete details of this construction, see Section III.B of the CRLite paper. After construction, the included Web PKI’s certificate corpus is again verified through the filter, ensuring accuracy at that point in time.

Ensuring Filter Accuracy

A CRLite filter is accurate at a given point in time, and should only be used for certificates that were both known to the filter generator and for which there is revocation information. We can know whether a certificate could be included in the filter if it was delivered with a Signed Certificate Timestamp from a participating Certificate Transparency log that is at least one Maximum Merge Delay older than our CRLite filter date. (A short sketch of this check appears below.) If that is true, we also determine whether the certificate’s issuer is included in the CRLite filter, by referencing our preloaded Intermediate data for a boolean flag reporting whether CRLite includes its data. Specifically, the CA must be publishing accessible, fresh, verified CRL files at a URL included within their certificates’ Authority Information Access data. This flag is updated with the same cadence as CRLite itself, and generally remains constant.

Firefox’s Revocation Checking Algorithm Today

Today, Firefox Nightly is using CRLite in telemetry-only mode, meaning that Firefox continues to rely on OCSP to determine whether a website’s certificate is valid. If an OCSP response is provided by the webserver itself (via OCSP Stapling), that is used. However, at the same time, CRLite is evaluated, and that result is reported via Firefox Telemetry but not used for revocation.

At a future date, we will prefer to use CRLite for revocation checks, and only if the website cannot be validated via CRLite would we use OCSP, either live or stapled. Firefox Nightly has a preference, security.pki.crlite_mode, which controls CRLite. Set to 1, it gathers telemetry as stated above. Set to 2, CRLite will enforce revocations in the CRLite filter, but still use OCSP if the CRLite filter does not indicate a revocation. A future mode will permit CRLite-eligible certificates to bypass OCSP entirely, which is our ultimate goal.

Participating Certificate Authorities

Only public CAs within the Mozilla Root Program are eligible to be included, and CAs are automatically enrolled when they publish CRLs. If a CA stops publishing CRLs, or problems arise with their CRLs, they will be automatically excluded from CRLite filters until the situation is resolved. As mentioned earlier, if a CA chooses not to log a certificate to a known Certificate Transparency log, then CRLite will not be used to perform revocation checking for that certificate. Ultimately, we expect CAs to be very interested in participating in CRLite, as it could significantly reduce the cost of operating their OCSP infrastructure.

Listing Enrolled Certificate Authorities

The list of CAs currently enrolled is in our Intermediate Preloading data served via Firefox Remote Settings. The FAQ for CRLite on GitHub has information on how to download and process that data yourself, to see which CAs’ revocations are included in the CRLite state. Notably, Let’s Encrypt currently does not publish CRLs, and as such their revocations are not included in CRLite. The CRLite filters will increase in size as more CAs become enrolled, but the size increase is modeled to be modest.
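The eligibility rule from “Ensuring Filter Accuracy” above reduces to a timestamp comparison plus the enrollment flag. A rough sketch (the field names are illustrative, not Firefox internals):

  // A certificate is CRLite-eligible if it carries an SCT that is at
  // least one Maximum Merge Delay (MMD) older than the filter, and its
  // issuer is enrolled in CRLite.
  function crliteEligible(cert, issuer, filterTimestampMs, mmdMs) {
    const sctOldEnough = cert.sctTimestampMs + mmdMs <= filterTimestampMs;
    return sctOldEnough && issuer.crliteEnrolled;
  }

  // Example: a 24-hour MMD and an SCT from three days before the filter.
  const dayMs = 24 * 60 * 60 * 1000;
  console.log(crliteEligible(
    { sctTimestampMs: Date.now() - 3 * dayMs },
    { crliteEnrolled: true },
    Date.now(),
    dayMs
  )); // true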
Portion of the Web PKI Enrolled

Currently CRLite covers only a portion of the Web PKI as a whole, though a sizable one: as generated through roughly the period covering December 2019, CRLite covered approximately 100M certificates in the Web PKI, of which about 750k were revoked.

Figure 5: Number of Enrolled Revoked vs Enrolled But Not Revoked Certificates

The whole size of the Web PKI trusted by Mozilla with any CRL distribution point listed is 152M certificates, so CRLite today includes 66% of the potentially-compatible Web PKI [Censys.io]. The missing portion is mostly due to CRL downloading or processing errors which are being addressed. That said, approximately 300M additional trusted certificates do not include CRL revocation information, and are not currently eligible to be included in CRLite.

Data Sizes, Update Frequency, and the Future

CRLite promises substantial compression of the dataset: the binary form of all unexpired certificate serial numbers comprises about 16 GB of memory in Redis; the hexadecimal form of all enrolled and unexpired certificate serial numbers comprises about 6.7 GB on disk; and the resulting binary Bloom filter compresses to approximately 1.3 MB.

Figure 6: CRLite Filter Sizes over the month of December 2019 (in kilobytes)

To ensure freshness, our initial target was to produce new filters four times per day, with Firefox users generally downloading small delta difference files to catch up to the current filter. At present, we are not shipping delta files, as we’re still working toward an efficient delta-expression format.

Filter generation is a reasonably fast process even on modest hardware. The majority of the time is spent aggregating together all unexpired certificate serial numbers and all revoked serial numbers, and producing a final set of known-revoked and known-not-revoked certificate issuer-serial numbers (mean of 35 minutes). These aggregated lists are then fed into the CRLite Bloom filter generator, which follows the process in Figure 2 (mean of 20 minutes).

Figure 7: Filter Generation Time [source]

For the most part, faster disks and more efficient (but not human-readable) file formats would speed this process up, but the current speeds are more than sufficient to meet our initial goals, particularly while we continue improving other aspects of the system.

Our next blog post in this series, Part 3, will discuss the telemetry results that our current users of Firefox Nightly are seeing, while Part 4 will discuss the design of the infrastructure.
Posted over 4 years ago by J.C. Jones
CRLite is a technology proposed by a group of researchers at the IEEE Symposium on Security and Privacy 2017 that compresses revocation information so effectively that 300 megabytes of revocation data can become 1 megabyte. It accomplishes this by combining Certificate Transparency data and Internet scan results with cascading Bloom filters, building a data structure that is reliable, easy to verify, and easy to update. Since December, Firefox Nightly has been shipping with CRLite, collecting telemetry on its effectiveness and speed. As can be imagined, replacing a network round-trip with local lookups makes for a substantial performance improvement. Mozilla currently updates the CRLite dataset four times per day, although not all updates are currently delivered to clients.

Revocations on the Web PKI: Past and Present

The design of the Web’s Public Key Infrastructure (PKI) included the idea that website certificates would be revocable to indicate that they are no longer safe to trust: perhaps because the server they were used on was being decommissioned, or there had been a security incident. In practice, this has been more of an aspiration, as the imagined mechanisms showed their shortcomings:

- Certificate Revocation Lists (CRLs) quickly became large and contained mostly irrelevant data, so web browsers didn’t download them;
- The Online Certificate Status Protocol (OCSP) was unreliable, so web browsers had to assume the website was still valid if it didn’t work.

Since revocation is still crucial for protecting users, browsers built administratively-managed, centralized revocation lists: Firefox’s OneCRL, combined with Safe Browsing’s URL-specific warnings, provides the tools needed to handle major security incidents, but opinions differ on what to do about finer-grained revocation needs and the role of OCSP.

The Unreliability of Online Status Checks

Much has been written on the subject of OCSP reliability, and while reliability has definitely improved in recent years (per Firefox telemetry; failure rate), it still suffers under less-than-perfect network conditions: even among our Beta population, which historically has above-average connectivity, over 7% of OCSP checks time out today. Because of this, it’s impractical to require OCSP to succeed for a connection to be secure, and in turn, an adversarial monster-in-the-middle (MITM) can simply block OCSP to achieve their ends. For more on this, a couple of classic articles are:

- Cloudflare: High-reliability OCSP stapling and why it matters
- Adam Langley: Revocation doesn’t work

Mozilla has been making improvements in this realm for some time, implementing OCSP Must-Staple, which was designed as a solution to this problem, while continuing to use online status checks whenever there’s no stapled response. We’ve also made Firefox skip revocation information for short-lived certificates; however, despite improvements in automation, such short-lived certificates still make up a very small portion of the Web PKI, because the majority of certificates are long-lived.

Does Decentralized Revocation Bring Dangers?

The question at hand is whether a Certificate Authority’s (CA) revocation should be directly relied upon by end-users. There are legitimate concerns that respecting CA revocations could be a path to enabling CAs to censor websites. This would be particularly troubling in the event of increased consolidation in the CA market.
However, at present, if one CA were to engage in censorship, the website operator could go to a different CA. If censorship concerns do bear out, then Mozilla has the option to use its root store policy to influence the situation in accordance with our manifesto.

Does Decentralized Revocation Bring Value?

Legitimate revocations are either done by the issuing CA because of a security incident or policy violation, or they are done on behalf of the certificate’s owner, for their own purposes. The intention becomes codified to render the certificate unusable, perhaps due to key compromise or a service provider change, or as was done in the wake of Heartbleed. Choosing specific revocations to honor and refusing others dismisses the intentions of all left-behind revocation attempts. For Mozilla, it violates principle 6 of our manifesto, limiting participation in the Web PKI’s security model.

There is a cost to supporting all revocations. Checking OCSP:

- Slows down our first connection by ~130 milliseconds (CERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME, https://mzl.la/2ogT8TJ);
- Fails unsafe, if an adversary is in control of the web connection; and
- Periodically reveals to the CA the HTTPS web host that a user is visiting.

Luckily, CRLite gives us the ability to deliver all the revocation knowledge needed to replace OCSP, and to do so quickly, compactly, and accurately.

Can CRLite Replace OCSP?

Firefox Nightly users are currently only using CRLite for telemetry, but by changing the preference security.pki.crlite_mode to 2, CRLite can enter “enforcing” mode and respect CRLite revocations for eligible websites. There’s not yet a mode to disable OCSP; there’ll be more on that in subsequent posts.

This blog post is the first in a series discussing the technology for CRLite, the observed effects, and the nature of a collaboration of this magnitude between industry and academia. The next post discusses the end-to-end design of the CRLite mechanism, and why it works. Additionally, some FAQs about CRLite are available on GitHub.
Posted over 4 years ago by marco
Gutenberg 7.2 has just been released as a plugin. The development cycle was longer than usual, so this version contains a lot of changes, several of which improve Gutenberg’s accessibility.

The tab order in the editor

When editing a block, the tab order has been adjusted. Rather than tabbing to the next block, for example from one paragraph to the next, pressing Tab will now put focus into the sidebar for the active block. Further tabbing will move through the controls of said sidebar. Shift+Tab will go in the opposite direction.

Likewise, when in the main contents area of a block, Shift+Tab will now move focus to the toolbar consistently and through its controls. It will also skip the drag handle for a block, because this is not keyboard operable. Tab will stop on the items to move the block up or down within the current set of blocks.

This makes the keyboard focus much more consistent and alleviates the need to use the custom keyboard shortcuts for the sidebar and toolbar. Those shortcuts do still work, so if you have memorized them, you can continue using them. But you no longer need to: Tab and Shift+Tab will now also take you to the expected places consistently.

Improvements to the Welcome guide

The modal for the Welcome guide has been enhanced. The modal now always gets a proper title for screen readers, so it no longer speaks an empty dialog when focus moves into it. The current page is now indicated for screen readers, so it is easy to know which of the steps in the current guide is showing. The main content is now a document, so screen readers which apply a special reading mode for content sections can provide this functionality inside the modal. This was one of the first two code contributions to Gutenberg by yours truly.

More enhancements and fixes

The justification radio menu items in the formatting toolbar are now properly exposed as such. This was the other of the two code contributions I made to this Gutenberg version. The Social block now has proper labels. The block wrapper, which contains the current set of blocks, now properly identifies as a group rather than a section. This will make it easier when dealing with nested blocks or parallel groups of blocks when building pages.

In conclusion

Gutenberg continues to improve. And now that I am a team member as well, I’ll try to help as time and capacity permit. The changes, especially to the keyboard focus and semi-modality of blocks, are a big step in improving usability. One other thing that will hopefully land soon, once potential plugin compatibility issues are resolved, is that toolbars will conform to the WAI-ARIA design pattern. That will mean that every toolbar container will be one tab stop, and elements within will be navigable via arrow keys. That will reduce the number of tab stops and thus improve efficiency and compliance.
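For readers unfamiliar with the WAI-ARIA toolbar pattern mentioned above, the usual technique is a “roving tabindex”: the toolbar is a single tab stop, and arrow keys move focus between its controls. A minimal sketch (this is not Gutenberg’s code):

  // Roving tabindex: only one button in the toolbar is tabbable at a
  // time; Left/Right arrows move focus within it.
  function initToolbar(toolbar) {
    const buttons = Array.from(toolbar.querySelectorAll("button"));
    let current = 0;
    buttons.forEach((button, i) => { button.tabIndex = i === 0 ? 0 : -1; });

    toolbar.addEventListener("keydown", (event) => {
      if (event.key !== "ArrowRight" && event.key !== "ArrowLeft") return;
      buttons[current].tabIndex = -1;
      const step = event.key === "ArrowRight" ? 1 : -1;
      current = (current + step + buttons.length) % buttons.length;
      buttons[current].tabIndex = 0;
      buttons[current].focus();
    });
  }

  // Usage, given markup like <div role="toolbar" aria-label="Formatting">…</div>:
  initToolbar(document.querySelector('[role="toolbar"]'));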
Posted over 4 years ago by Daniel Stenberg
I’m pleased to invite you to our live webinar, “Why everyone is using curl and you should too!”, hosted by wolfSSL. Daniel Stenberg (me!), founder and Chief Architect of curl, will be live, talking about why everyone is using curl and why you should too! This is planned to last roughly 20-30 minutes, followed by a 10-minute Q&A.

Space is limited, so please register early!

When: Jan 14, 2020, 08:00 AM Pacific Time (US and Canada) (16:00 UTC)

Register in advance for this webinar! After registering, you will receive a confirmation email containing information about joining the webinar. Not able to attend? Register now, and after the event you will receive an email with a link to the recorded presentation.