
News

Posted over 4 years ago by Caitlin Neiman
We are making some changes to the submission flow for all add-ons (both AMO- and self-hosted) to improve our ability to detect malicious activity. These changes, which will go into effect later this month, will introduce a small delay in automatic approval for all submissions. The delay can be as short as a few minutes, but may take longer depending on the add-on file.

If you use a version of web-ext older than 3.2.1, or a custom script that connects to AMO's upload API, this new delay in automatic approval will likely cause a timeout error. This does not mean your upload failed; the submission will still go through and be approved shortly after the timeout notification. Your experience using these tools should otherwise remain the same.

You can prevent the timeout error from being triggered by updating web-ext or your custom scripts before this change goes live. We recommend making these updates this week.

For web-ext: update to web-ext version 3.2.1, which has a longer default timeout for `web-ext sign`. To update your global install, use the command `npm install -g web-ext`.

For custom scripts that use the AMO upload API: make sure your upload scripts account for potentially longer delays before the signed file is available. We recommend allowing up to 15 minutes.

The post Security improvements in AMO upload tools appeared first on Mozilla Add-ons Blog.
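For custom scripts, the advice above boils down to "poll for longer instead of failing on the first short timeout". A minimal sketch of that idea, in Rust, assuming a hypothetical is_signed_file_ready() closure standing in for whatever status request your script already makes against the upload API:

    use std::thread::sleep;
    use std::time::{Duration, Instant};

    // Poll for the signed file with a 15-minute ceiling (the delay suggested in
    // the announcement) instead of treating the first short timeout as a failure.
    // `is_signed_file_ready` is a stand-in for your existing status check.
    fn wait_for_signed_file(is_signed_file_ready: impl Fn() -> bool) -> bool {
        let deadline = Instant::now() + Duration::from_secs(15 * 60);
        while Instant::now() < deadline {
            if is_signed_file_ready() {
                return true; // approval came through; fetch the signed file
            }
            sleep(Duration::from_secs(30)); // back off between status checks
        }
        false // still not ready after 15 minutes; only now treat it as an error
    }

    fn main() {
        // Stand-in check that pretends the file is ready immediately.
        let ok = wait_for_signed_file(|| true);
        println!("signed file ready: {}", ok);
    }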
Posted over 4 years ago by David Humphrey
I've been marking student submissions in my open source course this weekend, and with only a half-dozen more to do, the procrastinator in me decided a blog post was in order. Once again I've asked my students to participate in Hacktoberfest. I wrote about the experience last year, and wanted to give an update on how it went this time.

I layer a few extra requirements on the students, some of them to deal with things I've learned in the past. For one, I ask them to set some personal goals for the month, and look at each pull request as a chance to progress toward achieving these goals. The students are quite different from one another, which I want to celebrate, and this lets them go in different directions, and move at different paces. Here are some examples of the goals I heard this time around:

- Finish all the required PRs
- Increase confidence in myself as a developer
- Master git/GitHub
- Learn new languages and technologies (Rust, Python, React, etc.)
- Contribute to projects we use and enjoy on a daily basis (e.g., VSCode)
- Contribute to some bigger projects (e.g., Mozilla)
- Add more experience to our resume
- Read other people's code, and get better at understanding new code
- Work on projects used around the world
- Work on projects used locally
- Learn more about how big projects do testing

So how did it go? First, the numbers:

- 62 students completed all 4 PRs during the month (95% completion rate)
- 246 pull requests were made, consisting of 647 commits to 881 files
- 32K lines of code were added or modified

I'm always interested in the languages they choose. I let them work on any open source projects, so given this freedom, how will they use it? The most popular languages by pull request were:

- JavaScript/TypeScript - 50%
- HTML/CSS - 11%
- C/C++/C# - 11%
- Python - 10%
- Java - 5%

Web technology projects dominate GitHub, and it's interesting to see that this is not entirely out of sync with GitHub's own stats on language positions. As always, the long tail provides interesting info as well. A lot of people worked on bugs in languages they didn't know previously, including: Swift, PHP, Go, Rust, OCaml, PowerShell, Ruby, Elixir, Kotlin.

Because I ask the students to "progress" with the complexity and involvement of their pull requests, I had fewer people working in "Hacktoberfest"-style repos (projects that pop up for October, and quickly vanish). Instead, many students found their way into larger and well-known repositories and organizations, including: Polymer, Bitcoin, Angular, Ethereum, VSCode, Microsoft Calculator, React Native for Windows, Microsoft STL, Jest, WordPress, node.js, Nasa, Mozilla, Home Assistant, Google, Instacart.

The top GitHub organization by pull request volume was Microsoft. Students worked on many Microsoft projects, which is interesting, since they didn't coordinate their efforts. It turns out that Microsoft has a lot of open source these days.

When we were done, I asked the students to reflect on the process a bit, and answer a few questions. Here's what I heard.

1. What are you proud of? What did you accomplish during October?
- Contributing to big projects (e.g., Microsoft STL, Nasa, Rust)
- Contributing to small projects that really needed my help
- Learning a new language (e.g., Python)
- Having PRs merged into projects we respect
- Translation work -- using my personal skills to help a project
- Seeing our work get shipped in a product we use
- Learning new tech (e.g., complex dev environments, creating browser extensions)
- Successfully contributing to a huge code base
- Getting involved in open source communities
- Overcoming the intimidation of getting involved

2. What surprised you about open source? How was it different than you expected?

- People in the community were much nicer than I expected
- I expected more documentation; it was lacking
- The range of projects: big companies, but also individuals and small communities
- People spent time commenting on, reviewing, and helping with our PRs
- People responded faster than we anticipated
- At the same time, we also found that some projects never bothered to respond
- Surprised to learn that everything I use has some amount of open source in it
- Surprised at how many cool projects there are, so many that I don't know about
- Even on small issues, lead contributors will get involved in helping (e.g., 7 reviews in a node.js fix)
- Surprised at how unhelpful the "Hacktoberfest" label is in general
- "Good First Issue" doesn't mean it will be easy; people have different standards for what this means
- Lots of things on GitHub are inactive; be careful you don't waste your time
- Projects have very different standards from one to the next, in terms of process, how professional they are, etc.
- Surprised to see some of the hacks even really big projects use
- Surprised how willing people were to let us get involved in their projects
- Lots of camaraderie between devs in the community

3. What advice would you give yourself for next time?

- Start small, progress from there
- Manage your time well; it takes way longer than you think
- Learn how to use GitHub's Advanced Search well
- Make use of your peers, ask for help
- Less time looking for a perfect issue, more time fixing a good-enough issue
- Don't rely on the Hacktoberfest label alone
- Don't be afraid to fail. Even if a PR doesn't work, you'll learn a lot in the process
- Pick issues in projects you are interested in, since it takes so much time
- Don't be afraid to work on things you don't (yet) know. You can learn a lot more than you think
- Read the contributing docs, and save yourself time and mistakes
- Run and test code locally before you push
- Don't be too picky with what you work on, just get involved
- Look at previously closed PRs in a project for ideas on how to solve your own

One thing that was new for me this time around was seeing students get involved in repos and projects that didn't use English as their primary language. I've had lots of students do localization in projects before. But this time, I saw quite a few students working in languages other than English in issues and pull requests. This is something I've been expecting to see for a while, especially with GitHub's Trending page so often featuring projects not in English. But it was the first time it happened organically with my own students.

Once again, I'm grateful to the Hacktoberfest organizers, and to the hundreds of maintainers we encountered as we made our way across GitHub during October. When you've been doing open source a long time, and work in git/GitHub every day, it can be hard to remember what it's like to begin. Because I continually return to the place where people start, I know first-hand how valuable it is to be given the chance to get involved, for people to acknowledge and accept your work, and for people to see that it's possible to contribute.
Posted over 4 years ago by Ryan T. Harter
I found this article a few weeks ago and I really enjoyed the read. The author outlines what a role can look like for very senior ICs. It's the first in a (yet to be written) series about technical leadership and long-term IC career paths. I'm excited to read more!

In particular, I am delighted to see her call out strategic work as a way for a senior IC to deliver value. I think there's a lot of opportunity for senior ICs to deliver strategic work, but in my experience organizations tend to under-value this type of work (often unintentionally). My favorite projects to work on are high impact and difficult to execute, even if they're not deeply technical. In fact, I've found that my most impactful projects tend to have only a small technical component. Instead, the real value tends to come from spanning a few different technical areas, tackling some cultural change, or taking time to deeply understand the problem before throwing a solution at it. Framing these projects as "strategic" helps me put my thumb on the type of work I like doing.

Keavy also calls out strike teams as a valuable way for ICs to work on high-impact projects without moving into management. In my last three years at Mozilla, I've been fortunate to be a part of several strike teams, and upon reflection I find that these are the projects I'm most proud of.

I'm fortunate that Mozilla has a well-documented growth path for senior ICs. All the same, I am learning a lot from her framing. I'm excited to read more!
Posted over 4 years ago
Poll time! No judgment if you’re in the high end of the range. Keeping a pile of open tabs is the sign of an optimistic, enthusiastic, curious digital citizen, and … The post Nine tips for better tab management appeared first on The Firefox Frontier.
Posted over 4 years ago by The Rust Release Team
The Rust team is happy to announce a new version of Rust, 1.39.0. Rust is a programming language that is empowering everyone to build reliable and efficient software. If you have a previous version of Rust installed via rustup, getting Rust 1.39.0 is as easy as:

    rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.39.0 on GitHub.

What's in 1.39.0 stable

The highlights of Rust 1.39.0 include async/.await, shared references to by-move bindings in match guards, and attributes on function parameters. Also, see the detailed release notes for additional information.

The .await is over, async fns are here

Previously in Rust 1.36.0, we announced that the Future trait is here. Back then, we noted that:

    With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we'll tell you more about in the future.

A promise made is a promise kept. So in Rust 1.39.0, we are pleased to announce that async / .await is stabilized! Concretely, this means that you can define async functions and blocks and .await them. An async function, which you can introduce by writing async fn instead of fn, does nothing other than to return a Future when called. This Future is a suspended computation which you can drive to completion by .awaiting it. Besides async fn, async { ... } and async move { ... } blocks, which act like closures, can be used to define "async literals". For more on the release of async / .await, read Niko Matsakis's blog post.

References to by-move bindings in match guards

When pattern matching in Rust, a variable, also known as a "binding", can be bound in the following ways:

- by-reference, either immutably or mutably. This can be achieved explicitly e.g. through ref my_var or ref mut my_var respectively. Most of the time though, the binding mode will be inferred automatically.
- by-value -- either by-copy, when the bound variable's type implements Copy, or otherwise by-move.

Previously, Rust would forbid taking shared references to by-move bindings in the if guards of match expressions. This meant that the following code would be rejected:

    fn main() {
        let array: Box<[u8; 4]> = Box::new([1, 2, 3, 4]);
        match array {
            nums
            // ---- `nums` is bound by move.
                if nums.iter().sum::<u8>() == 10
            //         ^------ `.iter()` implicitly takes a reference to `nums`.
            => {
                drop(nums);
                // ----------- `nums` was bound by move and so we have ownership.
            }
            _ => unreachable!(),
        }
    }

With Rust 1.39.0, the snippet above is now accepted by the compiler. We hope that this will give a smoother and more consistent experience with match expressions overall.

Attributes on function parameters

With Rust 1.39.0, attributes are now allowed on parameters of functions, closures, and function pointers. Whereas before, you might have written:

    #[cfg(windows)]
    fn len(slice: &[u16]) -> usize {
        slice.len()
    }

    #[cfg(not(windows))]
    fn len(slice: &[u8]) -> usize {
        slice.len()
    }

...you can now, more succinctly, write:

    fn len(
        #[cfg(windows)] slice: &[u16],     // This parameter is used on Windows.
        #[cfg(not(windows))] slice: &[u8], // Elsewhere, this one is used.
    ) -> usize {
        slice.len()
    }

The attributes you can use in this position include:

- Conditional compilation: cfg and cfg_attr
- Controlling lints: allow, warn, deny, and forbid
- Helper attributes used by procedural macro attributes applied to items.
Our hope is that this will be used to provide more readable and ergonomic macro-based DSLs throughout the ecosystem.

Borrow check migration warnings are hard errors in Rust 2018

In the 1.35.0 release, we announced that NLL had come to Rust 2015 after first being released for Rust 2018 in 1.31. As noted in the 1.35.0 release, the old borrow checker had some bugs which would allow memory unsafety. These bugs were fixed by the NLL borrow checker. As these fixes broke some stable code, we decided to gradually phase in the errors by checking if the old borrow checker would accept the program and the NLL checker would reject it. If so, the errors would instead become warnings. With Rust 1.39.0, these warnings are now errors in Rust 2018. In the next release, Rust 1.40.0, this will also apply to Rust 2015, which will finally allow us to remove the old borrow checker, and keep the compiler clean. If you are affected, or want to hear more, read Niko Matsakis's blog post.

More const fns in the standard library

With Rust 1.39.0, the following functions became const fn:

- Vec::new, String::new, and LinkedList::new
- str::len, [T]::len, and str::as_bytes
- abs, wrapping_abs, and overflowing_abs

Additions to the standard library

In Rust 1.39.0 the following functions were stabilized:

- Pin::into_inner
- Instant::checked_duration_since and Instant::saturating_duration_since

Other changes

There are other changes in the Rust 1.39.0 release: check out what changed in Rust, Cargo, and Clippy. Please also see the compatibility notes to check if you're affected by those changes.

Contributors to 1.39.0

Many people came together to create Rust 1.39.0. We couldn't have done it without all of you. Thanks!
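As a small illustration of the newly-const standard library functions listed above, here is a minimal sketch (not from the announcement; it relies only on Vec::new and str::len being const fn as of 1.39):

    // Vec::new and str::len are const fn as of Rust 1.39, so both of these
    // initializers can be evaluated at compile time.
    const EMPTY: Vec<u8> = Vec::new();
    const GREETING_LEN: usize = "hello".len();

    fn main() {
        println!("{} bytes buffered, greeting is {} bytes", EMPTY.len(), GREETING_LEN);
    }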
Posted over 4 years ago by Niko Matsakis
On this coming Thursday, November 7, async-await syntax hits stable Rust, as part of the 1.39.0 release. This work has been a long time in development -- the key ideas for zero-cost futures, for example, were first proposed by Aaron Turon and Alex Crichton in 2016! -- and we are very proud of the end result. We believe that Async I/O is going to be an increasingly important part of Rust's story.

While this first release of "async-await" is a momentous event, it's also only the beginning. The current support for async-await marks a kind of "Minimum Viable Product" (MVP). We expect to be polishing, improving, and extending it for some time. Already, in the time since async-await hit beta, we've made a lot of great progress, including making some key diagnostic improvements that help to make async-await errors far more approachable. To get involved in that work, check out the Async Foundations Working Group; if nothing else, you can help us by filing bugs about polish issues or by nominating those bugs that are bothering you the most, to help direct our efforts.

Many thanks are due to the people who made async-await a reality. The implementation and design would never have happened without the leadership of cramertj and withoutboats, the implementation and polish work from the compiler side (davidtwco, tmandry, gilescope, csmoe), the core generator support that futures builds on (Zoxc), the foundational work on Future and the Pin APIs (aturon, alexcrichton, RalfJ, pythonesque), and of course the input provided by so many community members on RFC threads and discussions.

Major developments in the async ecosystem

Now that async-await is approaching stabilization, all the major Async I/O runtimes are at work adding and extending their support for the new syntax:

- the tokio runtime recently announced a number of scheduler improvements, and they are planning a stable release in November that supports async-await syntax;
- the async-std runtime has been putting out weekly releases for the past few months, and plans to make their 1.0 release shortly after async-await hits stable;
- using wasm-bindgen-futures, you can even bridge Rust Futures with JavaScript promises;
- the hyper library has migrated to adopt standard Rust futures;
- the newly released 0.3.0 version of the futures-rs library includes support for async-await;
- finally, async-await support is starting to become available in higher-level web frameworks, as well as in other interesting applications such as the futures_intrusive crate.

Async-await: a quick primer

(This section and the next are reproduced from the "Async-await hits beta!" post.)

So, what is async await? Async-await is a way to write functions that can "pause", return control to the runtime, and then pick up from where they left off. Typically those pauses are to wait for I/O, but there can be any number of uses. You may be familiar with async-await from JavaScript or C#. Rust's version of the feature is similar, but with a few key differences.

To use async-await, you start by writing async fn instead of fn:

    async fn first_function() -> u32 { .. }

Unlike a regular function, calling an async fn doesn't have any immediate effect. Instead, it returns a Future. This is a suspended computation that is waiting to be executed.
To actually execute the future, use the .await operator:

    async fn another_function() {
        // Create the future:
        let future = first_function();

        // Await the future, which will execute it (and suspend
        // this function if we encounter a need to wait for I/O):
        let result: u32 = future.await;
        ...
    }

This example shows the first difference between Rust and other languages: we write future.await instead of await future. This syntax integrates better with Rust's ? operator for propagating errors (which, after all, are very common in I/O). You can simply write future.await? to await the result of a future and propagate errors. It also has the advantage of making method chaining painless.

Zero-cost futures

The other difference between Rust futures and futures in JS and C# is that they are based on a "poll" model, which makes them zero cost. In other languages, invoking an async function immediately creates a future and schedules it for execution: awaiting the future isn't necessary for it to execute. But this implies some overhead for each future that is created. In contrast, in Rust, calling an async function does not do any scheduling in and of itself, which means that we can compose a complex nest of futures without incurring a per-future cost. As an end-user, though, the main thing you'll notice is that futures feel "lazy": they don't do anything until you await them.

If you'd like a closer look at how futures work under the hood, take a look at the executor section of the async book, or watch the excellent talk that withoutboats gave at Rust LATAM 2019 on the topic.

Summary

We believe that having async-await on stable Rust is going to be a key enabler for a lot of new and exciting developments in Rust. If you've tried Async I/O in Rust in the past and had problems -- particularly if you tried the combinator-based futures of the past -- you'll find async-await integrates much better with Rust's borrowing system. Moreover, there are now a number of great runtimes and other libraries available in the ecosystem to work with. So get out there and build stuff!
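To make the "lazy" behaviour described above concrete, here is a minimal, self-contained sketch (not from the post; it assumes the futures 0.3 crate for its block_on executor) showing that an async fn produces no output until the resulting future is driven to completion:

    use futures::executor::block_on;

    async fn say_hello() -> u32 {
        println!("hello from the future"); // runs only once the future is polled
        42
    }

    fn main() {
        let fut = say_hello();       // nothing is printed yet: futures are lazy
        let answer = block_on(fut);  // the executor polls the future to completion
        println!("the answer is {}", answer);
    }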
Posted over 4 years ago by Daniel Stenberg
There have been 56 days since curl 7.66.0 was released. Here comes 7.67.0! This might not be a release with any significant bells or whistles that will make us recall this date in the future when looking back, but it is still another steady step along the way, and thanks to the new things introduced, we still bump the minor version number. Enjoy!

As always, download curl from curl.haxx.se. If you need excellent commercial support for whatever you do with curl, contact us at wolfSSL.

Numbers

- the 186th release
- 3 changes
- 56 days (total: 7,901)
- 125 bug fixes (total: 5,472)
- 212 commits (total: 24,931)
- 1 new public libcurl function (total: 81)
- 0 new curl_easy_setopt() options (total: 269)
- 1 new curl command line option (total: 226)
- 68 contributors, 42 new (total: 2,056)
- 42 authors, 26 new (total: 744)
- 0 security fixes (total: 92)
- 0 USD paid in Bug Bounties

The 3 changes

Disable progress meter

Since virtually forever you've been able to tell curl to "shut up" with -s. The long version of that is --silent. Silent makes the curl tool disable the progress meter and all other verbose output. Starting now, you can use --no-progress-meter, which in a more granular way only disables the progress meter and lets the other verbose outputs remain.

CURLMOPT_MAX_CONCURRENT_STREAMS

When doing HTTP/2 using curl and multiple streams over a single connection, you can now also set the number of parallel streams you'd like to use, which will be communicated to the server. The idea is that this option should be possible to use for HTTP/3 as well going forward, but due to the early days there it doesn't yet.

CURLU_NO_AUTHORITY

This is a new flag that the URL parser API supports. It informs the parser that even if it doesn't recognize the URL scheme, it should still allow it to not have an authority part (like host name).

Bug-fixes

Here are some interesting bug-fixes done for this release. Check out the changelog for the full list.

Winbuild build error

The winbuild setup to build with MSVC with nmake shipped in 7.66.0 with a flaw that made it fail. We had added the vssh directory but not adjusted these build scripts for that. The fix was of course very simple. We have since added several winbuild builds to the CI to make sure we catch these kinds of mistakes earlier and better in the future.

FTP: optimized CWD handling

At least two landed bug-fixes make curl avoid issuing superfluous CWD commands (FTP lingo for "cd" or change directory), thereby reducing latency.

HTTP/3

Several fixes improved HTTP/3 handling. It builds on Windows better, the ngtcp2 backend now also behaves correctly on macOS, and the build instructions are clearer.

Mimics socketpair on Windows

Thanks to the new socketpair look-alike function, libcurl now provides a socket for the application to wait for even when doing name resolves in the dedicated resolver thread. This makes the Windows code catch up with the similar change that landed in 7.66.0. This makes it easier for applications to behave correctly during the short time gaps when libcurl resolves a host name and nothing else is happening.

curl with lots of URLs

With the introduction of parallel transfers in 7.66.0, we changed how curl allocated handles and set up transfers ahead of time. This made command lines that for example would use [1-1000000] ranges create a million CURL handles and thus use a lot of memory. It did in fact break a few existing use cases where people did very large ranges with curl.
Starting now, curl will just create enough curl handles ahead of time to allow the maximum amount of parallelism requested, and users should yet again be able to specify ranges with many million iterations.

curl -d@ was slow

It was discovered that if you ask curl to post data with -d @filename, that operation was unnecessarily slow for large files; it has been sped up significantly.

DoH fixes

Several corrections were made after some initial fuzzing of the DoH code. A benign buffer overflow, a memory leak and more.

HTTP/2 fixes

We relaxed the :authority push promise checks, fixed two cases where libcurl could "forget" a stream after it had delivered all data, and fixed a case where dup'ed HTTP/2 handles could issue dummy PRIORITY frames!

connect with ETIMEDOUT now makes CURLE_OPERATION_TIMEDOUT

When libcurl's connect attempt fails and errno says ETIMEDOUT, it means that the underlying TCP connect attempt timed out. This will now be reflected back in the libcurl API as the timed out error code instead of the previously used CURLE_COULDNT_CONNECT. One of the use cases for this is curl's --retry option, which now considers this situation to be a timeout and thus will consider it fine to retry…

Parsing URL with fragment and question mark

There was a regression in the URL parser that made it mistreat URLs without a query part but with a question mark in the fragment.
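As a tiny illustration of the new command line option, here is a hedged sketch (not from the post; it assumes curl 7.67.0 or later is on PATH and uses an arbitrary output filename) that drives curl from Rust with --no-progress-meter, which, unlike -s/--silent, hides only the progress meter while keeping other output:

    use std::process::Command;

    fn main() -> std::io::Result<()> {
        // --no-progress-meter (new in 7.67.0) suppresses just the progress meter;
        // -s/--silent would additionally silence errors and other verbose output.
        let status = Command::new("curl")
            .args(["--no-progress-meter", "-o", "index.html", "https://curl.haxx.se/"])
            .status()?;
        println!("curl exited with {}", status);
        Ok(())
    }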
Posted over 4 years ago by Marco
Over the past year and a half, I have ventured time and again into the federated Mastodon social network. In those ventures, I have contributed bug reports both to the Mastodon client and to some alternative clients on the web, iOS, and Android. One of those clients, a single-page, progressive web app, is Pinafore by Nolan Lawson. He had set out to create a fast, light-weight, and accessible client from the ground up.

When I started to use Pinafore, I immediately noticed that a lot of thought and effort had already gone into the client, and I could immediately start using it. I then started contributing some bug reports, and over time, Nolan has improved what was already very good tremendously: adding more keyboard support, so that even as a screen reader user one can use Pinafore without using virtual buffers, various light and dark themes, support for reducing animations, and much, much more.

And now, Nolan has shared what he has learned about accessibility in the process. His post is an excellent recollection of some of the challenges when dealing with an SPA, cross-platform, taking into account screen readers, keyboard users, styling, etc., and how to overcome those obstacles. It is an excellent read which contains suggestions and food for thought for many web developers. Enjoy the read!