News
Posted over 4 years ago by Karl Dubost
Week Notes? Week Notes. I'm not sure I will be able to commit to this, but week notes are having a bit of a revival around my blog-reading echo chamber. By revival, I mean I see them again. The Open Data Institute just started one, with a round-up about them. I subscribed again to the feed of Brian Suda and his own week notes. Alice Bartlett also has a very cool, personal, down-to-earth and simple summary of her week. I love that she calls them weaknotes. She's on week 63 by now. So these will not be personal, but will cover a bit of the things I (we?) do, learn, and fail at around webcompat. The only way to do that is to write things down properly. The possible issues: redundancy with things written elsewhere, and the fatigue that comes with the regularity. I did a stretch of worklogs in the past.

Bugs

Apple is using HLS for their videos. HLS is natively implemented only in Safari, but it still works in Chrome because of HLS.js. That library fails in Firefox because it uses RegExp named groups, which are not yet implemented.

A second bug this week shows a serious performance issue when the blur filter is applied to a large area of the page.

There was an issue with the image viewer on Wikipedia, so I ran mozregression and found this changelog: three months ago, Gecko made history navigation asynchronous. Kohei Yoshino had already written about it in his excellent Firefox Site Compatibility notes. Wikimedia is aware of it.

Another regression is related to "Make CSP frame-ancestors work with fission enabled".

An additional input event is fired after compositionend. It probably should not be, and that might create a webcompat issue.

There are still issues and differences in quirks mode. In this issue, top relative positioning is not handled properly in quirks mode; it was closed as a duplicate of this previously reported bug.

Missing images are not rendered the same in Chrome, Safari and Firefox. I created tests (data:text/html,) and I need to file bugs at the appropriate places. Update: bugs already exist; see the comment by Emilio.

Some code uses Event.path (Blink-only) instead of Event.composedPath. I wonder if it's a recurrent issue. There was an issue about dropping it from Blink in July 2017, and it was not dropped because it had 2.19% usage on Chromium. It's even worse now… above 15% usage. WebKit had Event.deepPath in the past, but it was renamed to composedPath.

Firefox Usage Counters

I need to better understand how counters work inside Firefox so the numbers become more meaningful, and it would probably be good to understand how they operate in Chrome too. How does the counter work when a property is used inside a condition? For example, in JavaScript, with a construct like:

var mypath = event.path || event.composedPath()

These are probably questions for Boris Zbarsky. Maybe a presentation at All Hands Berlin would be cool on the topic.

- What happens if the browser implements both? How are they counted?
- What happens if the browser implements only one of them? How are they counted?
- Does the order matter for the counter?
- What are the induced differences if the counter tracks only one of the properties and not both?
- Can a counter track something which is in the source code but not implemented in the engine? For instance, tracking event.path where it is undefined.

Python tests

We are currently doing A/B testing on webcompat.com with a new form, with the goal of improving the quality of the bugs reported.
The development has not been an entirely smooth road, and there are still a lot of things to fix, particularly the missing tests. Our objective is that, if the A/B testing experiment is successful, we will rewrite the code properly, and more specifically the tests. So instead of fixing the code, I was thinking that we could just add the tests, so we have a solid base when it's time for rewriting. We'll see. Then Mike was worried that we would break continuous integration. We use nose for running our unittest tests, and nose has a plugin for creating groups of tests by setting an attr:

from nose.plugins.attrib import attr

@attr(form='wizard')
class WizardFormTest:
    def test_exclusive_wizard(self):
        pass

So we could probably deactivate these specific tests. This is something to explore.

Webcompat dev

Discussions with Kate about DB migrations. Trying to understand what GitHub really does with linked images, because it might have consequences for our own image hosting. Made a local prototype of image upload with the Bottle framework, so I can think about it differently. Bottle is super nice for quick prototyping and thinking. That looks doable; in the end it will probably be done with Flask. It helped identify some issues and some cool things we do.

Writings

Wrote a blog post about my talk at the Mozilla Dev Roadshow in Asia. Some thoughts on separating the image upload on webcompat.com from the rest of the app.

Reading

I have the feeling I could write a counterpart to this blog post about work commuting. There's probably something about work and the circumstances of your country. This blog post was followed by a series of internal discussions on the nature of commuting, the reasons to commute or not, etc. As usual, a lot of things need to be unpacked when we talk about commuting.
Impressive and interesting to look at the differences (from HackerRank).

System abuse? Or goofing?

A user reported two invalid bugs and deleted his account. It always surprises me when people try to abuse a system which has no power.

Some notes about the week notes

Should adding pieces follow a linear timeline of events as they happen, or categories like I did above? Was it too long? Oversharing? All of these are notes taken over the last 5 days, and I'm surprised by the amount. My work is not linear on one task, which means updates to many tasks happen within a couple of hours or days.

Otsukare!
Posted over 4 years ago by Nicholas D. Matsakis
Hello all! I'm going to be trying something new, which I call the "Async Interviews". These interviews are going to be a series of recorded video calls with various "luminaries" from Rust's Async I/O effort. In each one, I'm going to be asking roughly the same question: now that the async-await MVP is stable, what should we be doing next? After each call, I'll post the recording from the interview, along with a blog post that gives a brief summary.

My intention in these interviews is to really get into details. That is, I want to talk about what our big-picture goals should be, but also what the specific concerns are around stabilizing particular traits or macros. What sorts of libraries do they enable? And so forth. (You can view my rough interview script, but I plan to tailor the meetings as I go.)

I view these interviews as serving a few purposes:

- Help to survey what different folks are thinking and transmit that thinking out to the community.
- Help me to understand better what some of the tradeoffs are, especially around discussions that occurred before I was following closely.
- Experiment with a new form of Rust discussion, where we substitute 1-on-1 exploration and discussion for bigger discussion threads.

First video: Rust and WebAssembly

The first video in this series, which I expect to post next week, will be me chatting with Alex Crichton and Nick Fitzgerald about Async I/O and WebAssembly. This video is a bit different from the others, since it's still early days in that area; as a result, we talked more about what role Async I/O (and Rust!) might eventually play, and less about immediate priorities for Rust. Along with the video, I'll post a blog post summarizing the main points that came up in the conversation, so you don't necessarily have to watch the video itself.

What videos will come after that?

My plan is to post a fresh async interview roughly once a week. I'm not sure how long I'll keep doing this; I guess as long as it seems like I'm still learning things. I'll announce the people I plan to speak to as I go, but I'm also very open to suggestions! I'd like to talk to folks who are working on projects at all levels of the "async stack", such as runtimes, web frameworks, protocols, and consumers thereof. If you can think of a project or a person that you think would provide a useful perspective, I'd love to hear about it. Drop me a line via e-mail or on Zulip or Discord.

Creating design notes

One thing that I have found in trying to get up to speed on the design of Async I/O is that the discussions are often quite distributed, spread amongst issues, RFCs, and the like. I'd like to do a better job of organizing this information. Therefore, as part of this effort to talk to folks, one of the things I plan to be doing is to collect and catalog the concerns, issues, and unknowns that are being brought up. I'd love to find people to help in this effort! If that is something that interests you, come join the #wg-async-foundations stream on the rust-lang Zulip and say hi!

So what are the things we might do now that async-await is stable? If you take a look at my rough interview script, you'll see a long list of possibilities. But I think they break down into two big categories:

- improving interoperability
- extending expressive power, convenience, and ergonomics

Let's look a bit more at those choices.

Improving interoperability

A long time back, Rust actually had a built-in green-threading library.
It was removed in RFC #230, and a big part of the motivation was that we knew we were unlikely to find a single runtime design that was useful for all tasks. And even if we could, we certainly knew we hadn't found it yet. Therefore, we opted to pare back the stdlib to just expose the primitives that the O/S had to offer. Learning from this, our current design is intentionally much more "open-ended" and permits runtimes to be added as simple crates on crates.io.

Right now, to my knowledge, we have at least five distinct async runtimes for Rust, and I wouldn't be surprised if I've forgotten a few:[1]

- Fuchsia's runtime, used for the Fuchsia work at Google;
- tokio, a venerable, efficient runtime with a rich feature set;
- async-std, a newer contender which aims to couple libstd-like APIs with highly efficient primitives;
- bastion, exploring a resilient, Erlang-like model;[2]
- embrio-rs, exploring the embedded space.

I think this is great: I love to see people experimenting with different tradeoffs and priorities. Not only do I think we'll wind up with better APIs and more efficient implementations, this also means we can target 'exotic' environments like the Fuchsia operating system or smaller embedded platforms. Very cool stuff.

However, that flexibility does come with some real risks. Most notably, I want us to be sure that it is possible to "mix and match" libraries from the ecosystem. No matter what base runtime you are using, it should be possible to take a protocol implementation like quinn, combine it with "middleware" crates like async-compression, and start sending payloads. In my mind, the best way to ensure interoperability is to ensure that we offer standard traits that define the interfaces between libraries. Adding the std::Future trait was a huge step in this direction – it means that you can create all kinds of combinators and things that are fully portable between runtimes. But what are the next steps we can take to help improve things further?

One obvious set of things we can do to improve interop is to try and stabilize additional traits. Currently, the futures crate contains a number of interfaces that have been evolving over time, such as Stream, AsyncRead, and AsyncWrite. Maybe some of these traits are good candidates to be moved to the standard library next? Here are some of the main things I'd like to discuss around interop:

- As a meta-point, should we be moving these traits to the standard library, or should we instead try to promote the futures crate (or, more likely, some of its subcrates, such as futures-io) as the standard for interop? I've found from talking to folks that there is a fair amount of confusion on "how standard" the futures crates are and what the plan is there.
- Regardless of how we signal stability, I also want to talk about the specific traits or other things we might stabilize. For each such item, there are two things I'd like to drill into. First, what kinds of interop would be enabled by stabilizing this item? What are some examples of the sorts of libraries that could now exist independently of a runtime because of the existence of this item? Second, what are the specific concerns that remain about the design of this item? The AsyncRead and AsyncWrite traits, for example, presently align quite closely with their synchronous counterparts Read and Write. However, this interface does require that the buffer used to store data must be zeroed.
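For reference, here is roughly the shape of the AsyncRead trait from the futures crate at the time (simplified; treat the details as an approximation rather than the exact definition). The buffer argument is an ordinary &mut [u8], which is why the zeroing requirement arises: the caller must hand over initialized memory even though the callee only ever writes into it.

use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};

// Simplified from futures 0.3's futures::io::AsyncRead.
pub trait AsyncRead {
    // Attempt to read into `buf`, returning how many bytes were read.
    // Because `buf` is a plain `&mut [u8]`, it must already be
    // initialized (in practice, zeroed) before it is passed in.
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<usize>>;
}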
The tokio crate is considering altering its own local definition of AsyncRead for this reason; is that something we should consider as well? If so, how?

On a broader note, what are the sorts of things crates need to truly operate that are not covered by the existing traits? For example, the global executors that boats recently proposed would give people the ability to "spawn tasks" into some ambient context… is that a capability that would enable more interop? Perhaps access to task-local data? Inquiring minds want to know.

Improving expressive power, convenience, and ergonomics

Interoperability isn't the only thing that we might try to improve. We might also focus on language extensions that either grow our expressive power or add convenience and ergonomics. Something like supporting async fn in traits or async closures, for example, could be a huge enabler, even if there are some real difficulties in making them work. Here are some of the specific features we might discuss:

- Async destructors. As boats described in this blog post, there is sometimes a need to "await" things when running destructors, and our current system can't support that.
- Async fn in traits. We support async fn in free functions and inherent methods, but not in traits. As I explained in this blog post, there are a lot of challenges to supporting async fn in traits properly (but consider using the async-trait crate).
- Async closures. Currently, we support async blocks (async move { .. }), which evaluate to a future, and async functions (async fn foo()), which are functions that return a future. But, at least on stable, we have no way to make a closure that returns a future. Presumably this would be something like async || { ... }. (In fact, on nightly, we do have support for async closures, but there are some issues in the design that we need to work out.)
- Combinator methods like map, or macros like join! and select!. The futures crate offers a number of useful combinators and macros. Maybe we should move some of those to the standard library?

Conclusion

I think these interviews are going to be a lot of fun, and I expect to learn a lot. Stay tuned for the first blog post, coming next week, about Async I/O and WebAssembly.

Comments? There is a thread on the Rust users forum for questions and discussion.

Footnotes

[1] Indeed, shortly after I published this post, I was directed to the drone-os project.
[2] Woohoo! I just want to say that I've been hoping to see something like OTP for Rust for… quite some time.
Posted over 4 years ago by [email protected] (ClassicHasClass)
TenFourFox Feature Parity Release 17 beta 1 is now available (downloads, hashes, release notes). SourceForge seems to have fixed whatever was making TenFourFox barf on its end, which now might actually be an issue over key exchange. For a variety of reasons, but most importantly backwards compatibility, my preference has been to patch up the NSS security library in TenFourFox to support new crypto and ciphers rather than just drop in a later version. We will see if the issue recurs.

This release fixes the "infinite loop" issue on GitHub with a trivial "hack" mitigation. The mitigation makes JavaScript slightly faster as a side effect, but only because it relaxes some syntax constraints in the runtime, so I don't really consider it a win. It also gets rid of some debug-specific functions that are web-observable and clashed on a few pages, an error Firefox corrected some time ago but which escaped my notice. Additionally, since 68ESR newly adds the ability to generate and click on links without embedding them in the DOM, I backported that patch so that we can do that now too (a 4-year-old bug only recently addressed in Firefox 70). Apparently this functionality is required for certain sites' download features, and evidently it was important enough to merit putting in an extended support release, so we will follow suit. I also did an update to cookie security, with more to come, and cleared my backlog of some old performance patches I had been meaning to backport. The most important of these substantially reduces the amount of junk strings JavaScript has hanging around, which in turn reduces memory pressure (important on our 32-bit systems) and garbage collection frequency. Another enables a fast path for layout frames with no properties, so we don't have to check the hash tables as frequently.

By user request, this version of TenFourFox also restores the old general.useragent.override.* site-specific override pref feature. This was removed in bug 896114 for performance reasons, and we certainly don't need anything that makes the browser any slower, so instead of just turning it back on, I took the incomplete patch in that bug and fixed and finished it. This means that in the default state, with no site-specific overrides, there is no penalty. This is the only officially supported state. I do not have any plans to expose this feature in the UI because I think it will be troublesome to manage, and the impact on loading can be up to 9-10%, so if you choose to use this, you do so at your own risk. I've intentionally declined to mention it in the release notes or to explain any further how it works, since only the people who already know what it does, how it operates, and most importantly why they need it should be using it. For everyone else, the only official support for changing the user agent remains the global selector in the TenFourFox preference pane (which, I might add, now allows you to select Firefox 68 if needed). Note that if you change the global setting and have site-specific overrides at the same time, the browser's behaviour becomes "officially undefined." Don't file any bug reports on that, please.

Finally, this release also updates the ATSUI font blacklist and basic adblock database, and has the usual security, certificate, pin, HSTS and TLD updates. Assuming no issues, it will go live on December 2nd or thereabouts.

For FPR18, one thing I would like to improve further is the built-in Reader mode, to at least get it more consistent with current Firefox releases.
Since layout is rapidly approaching its maximum evolution (as determined by the codebase, the level of work required, and my rapidly dissipating free time), the Reader mode is probably the best means of dealing with the (fortunately relatively small) number of sites that currently lay out problematically. There are some other backlogged minor changes I would like to consider for that release as well. However, FPR18 will be parallel with the first of the 4-week-cadence Firefox releases, and as I have mentioned before, I need to consider how sustainable that is with my other workloads, especially as most of the low-hanging fruit has long since been picked.
Posted over 4 years ago
When the Disney+ streaming service rolled out, millions of people flocked to set up accounts. And within a week, thousands of poor unfortunate souls reported that their Disney passwords were … Read more: "Princesses make terrible passwords for Disney+ and every other account" on The Firefox Frontier.
Posted over 4 years ago
We're entering another holiday shopping season, and while you're browsing around on the internet looking for thoughtful presents for friends and loved ones, it's also a good time to give … Read more: "Two ways Firefox protects your holiday shopping" on The Firefox Frontier.
Posted over 4 years ago by Nick Fitzgerald
This article is cross-posted on the Bytecode Alliance web site.

Multi-value is a proposed extension to core WebAssembly that enables functions to return many values, among other things. It is also a prerequisite for Wasm interface types. I've been adding multi-value support all over the place recently:

- I added multi-value support to all the various crates in the Rust and WebAssembly toolchain, so that Rust projects can compile down to Wasm code that uses multi-value features.
- I added multi-value support to Wasmtime, the WebAssembly runtime built on top of the Cranelift code generator, so that it can run Wasm code that uses multi-value features.

Now, as my multi-value efforts are wrapping up, it seems like a good time to reflect on the experience and write up everything that's been required to get all this support in all these places.

Wait — What is Multi-Value Wasm?

In core WebAssembly, there are a couple of arity restrictions on the language:

- functions can only return either zero or one value, and
- instruction sequences like blocks, ifs, and loops cannot consume any stack values, and may only produce zero or one resulting stack value.

The multi-value proposal is an extension to the WebAssembly standard that lifts these arity restrictions. Under the new multi-value Wasm rules:

- functions can return an arbitrary number of values, and
- instruction sequences can consume and produce an arbitrary number of stack values.

The following snippets are only valid under the new rules introduced in the multi-value Wasm proposal:

;; A function that takes an `i64` and returns
;; three `i32`s.
(func (param i64) (result i32 i32 i32) ...)

;; A loop that consumes an `i32` stack value
;; at the start of each iteration.
loop (param i32) ... end

;; A block that produces two `i32` stack values.
block (result i32 i32) ... end

The multi-value proposal is currently at phase 3 of the WebAssembly standardization process.

But Why Should I Care?

Code Size

There are a few scenarios where compilers are forced to jump through hoops when producing multiple stack values for core Wasm. Workarounds include introducing temporary local variables and using local.get and local.set instructions, because the arity restrictions on blocks mean that the values cannot be left on the stack.

Consider a scenario where we are computing two stack values: the pointer to a string in linear memory, and its length. Furthermore, imagine we are choosing between two different strings (which therefore have different pointer-and-length pairs) based on some condition. But whichever string we choose, we're going to process the string in the same fashion, so we just want to push the pointer-and-length pair for our chosen string onto the stack, and let control flow join afterwards. With multi-value, we can do this in a straightforward fashion:

call $compute_condition
if (result i32 i32)
  call $get_first_string_pointer
  call $get_first_string_length
else
  call $get_second_string_pointer
  call $get_second_string_length
end

This encoding is also compact: only sixteen bytes!

When we're targeting core Wasm, and multi-value isn't available, we're forced to pursue alternative, more convoluted forms. We can smuggle the stack values out of each if and else arm via temporary local values:

;; Introduce a pair of locals to hold the values
;; across the instruction sequence boundary.
(local $string i32)
(local $length i32)

call $compute_condition
if
  call $get_first_string_pointer
  local.set $string
  call $get_first_string_length
  local.set $length
else
  call $get_second_string_pointer
  local.set $string
  call $get_second_string_length
  local.set $length
end

;; Restore the values onto the stack, from their
;; temporaries.
local.get $string
local.get $length

This encoding requires 30 bytes, an overhead of fourteen bytes over the ideal multi-value version. And if we were computing three values instead of two, there would be even more overhead; the same is true for four values, etc. The additional overhead is proportional to how many values we're producing in the if and else arms.

We can actually go a little smaller than that, still with core Wasm, by jumping through a different hoop. We can split this into two if ... else ... end blocks and duplicate the condition check, to avoid introducing temporaries for each of the computed values themselves:

;; Introduce a local for the condition, so that
;; we only re-check it, and don't recompute it.
(local $condition i32)

;; Compute and save the condition.
call $compute_condition
local.set $condition

;; Compute the first stack value.
local.get $condition
if (result i32)
  call $get_first_string_pointer
else
  call $get_second_string_pointer
end

;; Compute the second stack value.
local.get $condition
if (result i32)
  call $get_first_string_length
else
  call $get_second_string_length
end

This gets us down to 28 bytes. Two fewer than the last version, but still an overhead of twelve bytes compared to the multi-value encoding. And the overhead is still proportional to how many values we're computing. There's no way around it: we need multi-value to get the most compact code here.

New Instructions

The multi-value proposal opens up the possibility for new instructions that produce multiple values:

- An i32.divmod instruction of type [i32 i32] -> [i32 i32] that takes a numerator and divisor and produces both their quotient and remainder.
- Arithmetic operations with an additional carry result. These could be used to better implement big ints, overflow checks, and saturating arithmetic.

Returning Small Structs More Efficiently

Returning multiple values from functions will allow us to more efficiently return small structures like Rust's Result. Without multi-value returns, these relatively small structs that still don't fit in a single Wasm value type get placed in linear memory temporarily. With multi-value returns, the values don't escape to linear memory, and instead stay on the stack. This can be more efficient, since Wasm stack values are generally more amenable to optimization than loads and stores from linear memory.

Interface Types

Shrinking code size is great, and new instructions would be fancy, but here's what I'm really excited about: WebAssembly interface types. Interface types used to be called "host bindings," and they are the key to unlocking:

- direct, optimized access to the browser's DOM methods on the Web,
- "shared-nothing linking" of WebAssembly modules, and
- defining language-neutral interfaces, like WASI.

For all three use cases, we might want to return a string from a callee Wasm module. The caller that is consuming this string might be a Web browser, or it might be another Wasm module, or it might be a WASI-compatible Wasm runtime. In any case, a natural way to return the string is as two i32s: a pointer to the start of the string in linear memory, and the byte length of the string.
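On the Rust side of the boundary, such a callee can be as simple as the following sketch (illustrative only; as described later in this post, today's toolchain reaches the two-i32 multi-value signature through a wasm-bindgen post-processing step rather than directly through LLVM):

use wasm_bindgen::prelude::*;

// A callee that returns a string. Under the hood, the toolchain
// represents the returned string as a pointer into linear memory
// plus a byte length.
#[wasm_bindgen]
pub fn greet() -> String {
    "Hello, interface types!".to_string()
}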
The interface adapter can then lift that pair of i32s into an abstract string type, and then lower it into the caller's concrete string representation on the other side. Interface types are designed such that, in most cases, this lifting and lowering can be optimized into a quick memory copy from the callee's linear memory to the caller's. But before the interface adapters can do that lifting and lowering, they need access to the pointer and length pair, which means the callee Wasm function needs to return two values, which means we need multi-value Wasm for interface types.

All The Implementing!

Now that we know what multi-value Wasm is, and why it's exciting, I'll recount the tale of implementing support for it all over the place. I started with implementing multi-value support in the Rust and WebAssembly toolchain, and then I added support to the Wasmtime runtime and the Cranelift code generator it's built on top of.

Rust and WebAssembly Toolchain

What falls under the Rust and Wasm toolchain umbrella? It is a superset of the general Rust toolchain:

- cargo: manages builds and dependencies.
- rustc: compiles Rust sources into code.
- LLVM: used by rustc under the covers to optimize and generate code.

And then additionally, when targeting Wasm, we also use a few more moving parts:

- wasm-bindgen: part library and part Wasm post-processor, wasm-bindgen generates bindings for consuming and producing interfaces defined with interface types (and much more!).
- walrus: a library for transforming and rewriting WebAssembly modules, used by wasm-bindgen's post-processor.
- wasmparser: an event-style parser for WebAssembly binaries, used by walrus.

Here's a summary of the toolchain's pipeline, showing the inputs and outputs between tools: [pipeline diagram]

My goal is to unlock interface types with multi-value functions. For now, I haven't been focusing on code size wins from generating multi-value blocks. For my purposes, I only need to introduce multi-value functions at the edges of the Wasm module that talk to interface adapters; I don't need to make all function bodies use the optimal multi-value instruction sequence constructs. Therefore, I decided to have wasm-bindgen's post-processor rewrite certain functions to use multi-value returns, rather than add support in LLVM.[0] With this approach, cargo, rustc, and LLVM could be left unchanged, and I only needed to add support to the following tools:

- wasm-bindgen
- walrus
- wasmparser

wasmparser

wasmparser is an event-style parser for WebAssembly binaries. It may seem strange that adding toolchain support for generating multi-value Wasm began with parsing multi-value Wasm, but it is necessary to make testing easy and painless, and we needed it eventually for Wasmtime anyways, which also uses wasmparser.

In core Wasm, the optional value type result of a block, loop, or if is encoded directly in the instruction:

- a 0x40 byte means there is no result
- a 0x7f byte means there is a single i32 result
- a 0x7e byte means there is a single i64 result
- etc…

With multi-value Wasm, there are not only zero or one resulting value types, there are also parameter types. Blocks can have the same set of types that functions can have. Functions already de-duplicate their types in the "Type" section of a Wasm binary and reference them via index. With multi-value, blocks do that now as well. But how does this co-exist with non-multi-value block types? The index is encoded as a signed variable-length integer, using the LEB128 encoding.
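A decoder can therefore tell the two cases apart by the sign of that integer. Here is a rough sketch (hypothetical helper, not wasmparser's actual API; the next paragraph explains where the specific values come from):

// Hypothetical sketch of block-type decoding, not wasmparser's
// actual API. The block type is read as one signed LEB128 integer.
enum ValType { I32, I64, F32, F64 }

enum BlockType {
    Empty,            // no parameters, no results
    Single(ValType),  // core Wasm: a single result type
    Multi(u32),       // multi-value: index into the "Type" section
}

fn decode_block_type(n: i64) -> Result<BlockType, &'static str> {
    match n {
        -64 => Ok(BlockType::Empty),                // 0x40 as signed LEB128
        -1 => Ok(BlockType::Single(ValType::I32)),  // 0x7f
        -2 => Ok(BlockType::Single(ValType::I64)),  // 0x7e
        -3 => Ok(BlockType::Single(ValType::F32)),  // 0x7d
        -4 => Ok(BlockType::Single(ValType::F64)),  // 0x7c
        n if n >= 0 => Ok(BlockType::Multi(n as u32)),
        _ => Err("invalid block type"),
    }
}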
If we interpret non-multi-value blocks' optional result value type as a signed LEB128, we get:

- -64 (the smallest number that can be encoded as a single byte with signed LEB128) means there is no result
- -1 means there is a single i32 result
- -2 means there is a single i64 result
- etc…

They're all negative, leaving the positive numbers to be interpreted as indices into the "Type" section for multi-value blocks! A nice little encoding trick and bit of foresight from the WebAssembly standards folks.

Adding support for parsing these was straightforward, but wasmparser also supports validating the Wasm as it parses it, and adding validation support was a little more involved. wasmparser's validation implementation is similar to the validation algorithm presented in the appendix of the WebAssembly spec: it abstractly interprets the Wasm instructions, maintaining a stack of types rather than a stack of values. If any operation uses operands of the wrong type (for example, the stack has an f32 at its top when we are executing an i32.add instruction, and therefore expect two i32s on top of the stack), then validation fails. If there are no type errors, then it succeeds. There are some complications when dealing with stack-polymorphic instructions, like drop, but they don't really interact with multi-value.

Whenever wasmparser encounters a block, loop, or if instruction, it pushes an associated control frame that keeps track of how deep in the stack instructions within this block can access. Before multi-value, the limit was always the length of the stack upon entering the block, because blocks didn't take any values from the stack. With multi-value, this limit becomes stack.len() - block.num_params(). When exiting a block, wasmparser pops the associated control frame. It checks that the top n types on the stack match the block's result types, and that the stack's length is frame.depth + n. Before multi-value, n was always either 0 or 1, but now it can be any non-negative integer.

The final bit of validation impacted by multi-value is deciding when an if needs an else. In core Wasm, if the if does not produce a resulting value on the stack, it doesn't need an else arm, since the whole if's typing is [] -> [], which is also the typing for a no-op. With multi-value, this is generalized to any if where the inputs and outputs are the same types: [t*] -> [t*]. Easy to implement, but also very easy to overlook (like I originally did!)

Multi-value support was added to wasmparser in these pull requests:

- #103: Initial multi-value support
- #135: Allow [t*] -> [t*]-typed if blocks with no else

walrus

walrus is a WebAssembly-to-WebAssembly transformation library. We use it to generate glue code in wasm-bindgen and to polyfill WebAssembly features. walrus constructs its own intermediate representation (IR) for WebAssembly. Similar to how wasmparser validates Wasm instructions, walrus abstractly interprets the instructions while building up its IR. This meant that adding support for constructing multi-value IR to walrus was very similar to adding multi-value validation support to wasmparser. In fact, walrus also validates the Wasm while it is constructing its IR.

But multi-value has big implications for the IR itself. Before multi-value, you could view Wasm's stack-based instructions as a post-order encoding of an expression tree. Consider the expression (a + b) * (c - d).
As an expression tree, it looks like this:

     *
    / \
   /   \
  +     -
 / \   / \
a   b c   d

A post-order traversal of a tree is one where a node is visited after its children. A post-order traversal of our example expression tree would be:

a b + c d - *

Assume that a, b, c, and d are Wasm locals of type i32, with the values 9, 7, 5, and 3 respectively. We can convert this post-order directly into a sequence of Wasm instructions that build up their results on the Wasm stack:

;; Instructions    ;; Stack
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
local.get $a       ;; [9]
local.get $b       ;; [9, 7]
i32.add            ;; [16]
local.get $c       ;; [16, 5]
local.get $d       ;; [16, 5, 3]
i32.sub            ;; [16, 2]
i32.mul            ;; [32]

This correspondence between trees and Wasm stack instructions made using a tree-like IR in walrus, where nodes are instructions and a node's children are the instructions that produce the parent's input values, very natural.[1] Our IR used to look something like this:

pub enum Expr {
    // A `local.get` instruction.
    LocalGet(LocalId),

    // An `i32.add` instruction.
    I32Add(ExprId, ExprId),

    // Etc...
}

But multi-value threw a wrench in this tree-like representation: now that an instruction can produce multiple values, when we have a parent⟶child edge in the tree, how do we know which of the child's resulting values the parent wants to use? And if two different parents are each using one of the two values an instruction generates, we fundamentally don't have a tree anymore; we have a directed acyclic graph (DAG).

We considered generalizing our tree representation into a DAG, labeling edges with n to represent using the nth resulting value of an instruction. We weighed the complexity of implementing this representation against what our current use cases in wasm-bindgen demand, along with any future use cases we could think of. Ultimately, we decided it wasn't worth the effort, since we don't need that level of detail for any of the transformations or manipulations that wasm-bindgen performs, or that we foresee it doing in the future. Instead, we decided that within a block, representing instructions as a simple list is good enough for our use cases, so now our IR looks something like this:

pub struct Block(Vec<Instr>);

pub enum Instr {
    // A `local.get` instruction.
    LocalGet(LocalId),

    // An `i32.add` instruction. Note that its
    // children are left implicit now.
    I32Add,

    // Etc...
}

Additionally, it turns out it is faster to construct and traverse this list-based representation, so switching representations in walrus also gave wasm-bindgen a nice little speed-up.

The walrus support for multi-value was implemented in these pull requests:

- #114: Switch walrus from a tree to a list IR
- #124: Multi-value! And even more fuzzing!

wasm-bindgen

wasm-bindgen facilitates high-level interactions between Wasm modules and their host. Often that host is a Web browser and its DOM methods, or some user-written JavaScript. Other times it is an outside-the-Web Wasm runtime, like Wasmtime, using WASI and interface types. wasm-bindgen acts as a polyfill for the interface types proposal, plus some extra batteries included for a powerful user experience.

One of wasm-bindgen's responsibilities is translating the return value of a Wasm function into something that the host caller can understand. When using interface types directly with Wasmtime, this means generating interface adapters that lift the concrete Wasm return values into abstract interface types.
When the caller is some JavaScript code on the Web, it means generating some JavaScript code to convert the Wasm values into JavaScript values. Let’s take a look at some Rust functions and the Wasm they get compiled down into. First, consider when we are returning a single integer from a Rust function: // random.rs #[no_mangle] pub extern "C" fn get_random_int() -> i32 { // Chosen by fair dice roll. 4 } And here is the disassembly of that Rust code compiled to Wasm: ;; random.wasm (func $get_random_int (result i32) i32.const 4 ) The resulting Wasm function’s signature is effectively identical to the Rust function’s signature. No surprises here. It is easy for wasm-bindgen to translate the resulting Wasm value to whatever is needed because wasm-bindgen has direct access to it; it’s right there. Now let’s look at returning compound structures from Rust that don’t fit in a single Wasm value: // pair.rs #[no_mangle] pub extern "C" fn make_pair(a: i32, b: i32) -> [i32; 2] { [a, b] } And here is the disassembly of this new Rust code compiled to Wasm: ;; pair.wasm (func $make_pair (param i32 i32 i32) local.get 0 local.get 2 i32.store offset=4 local.get 0 local.get 1 i32.store ) The signature for the make_pair function in pair.wasm doesn’t look like its corresponding signature in pair.rs! It has three parameters instead of two, and it isn’t returning any values, let alone a pair. What’s happening is that LLVM doesn’t support multi-value yet so it can’t return two i32s directly from the function. Instead, callers pass in a “struct return” pointer to some space that they’ve reserved for the return value, and make_pair will write its return value through that struct return pointer into the reserved space. By convention, LLVM uses the first parameter as the struct return pointer, so the second Wasm parameter is our original a parameter in Rust and the third Wasm parameter is our original b parameter in Rust. We can see that the Wasm function is writing the b field first, and then the a field second. How is space reserved for the struct return? Distinct from the Wasm standard’s stack that instructions push values to and pop values from, LLVM emits code to maintain a “shadow stack” in linear memory. There is a global dedicated as the stack pointer, and always points to the top of the stack. Non-leaf functions that need some scratch space of their own will decrement the stack pointer to allocate some space on entry (since the stack grows down, and its “top” of the stack is its lowest address) and will increment it to deallocate that space on exit. Leaf functions that don’t call any other function can skip incrementing and decrementing this stack pointer, which is exactly why we didn’t see make_pair messing with the stack pointer. To verify that callers are allocating space for the return struct in the shadow stack, let’s create a function that calls make_pair and then inspect its disassembly: // pair.rs #[no_mangle] pub extern "C" fn make_default_pair() -> [i32; 2] { make_pair(42, 1337) } I’ve annotated default_make_pair‘s disassembly below to make it clear how the shadow stack pointer is manipulated to create space for return values and how the pointer to that space is passed to make_pair: ;; pair.wasm (func $make_default_pair (param i32) (local i32) ;; Reserve 16 bytes of space in the shadow ;; stack. We only need 8 bytes, but LLVM keeps ;; the stack pointer 16-byte aligned. Global 0 ;; is the stack pointer. 
global.get 0 i32.const 16 i32.sub local.tee 1 global.set 0 ;; Get a pointer to the last 8 bytes of our ;; shadow stack space. This is our struct ;; return pointer argument, where the result ;; of `make_pair` will be written to. local.get 1 i32.const 8 i32.add ;; Call `make_pair` with the struct return ;; pointer and our default values. i32.const 42 i32.const 1337 call $make_pair ;; Copy the resulting pair into our own struct ;; return pointer's space. LLVM optimized this ;; into a single `i64` load and store, instead ;; of two `i32` load and stores. local.get 0 local.get 1 i64.load offset=8 i64.store align=4 ;; Restore the stack pointer to the original ;; value it had upon entering this function, ;; deallocating our shadow stack frame. local.get 1 i32.const 16 i32.add global.set 0 end ) When the caller is JavaScript, wasm-bindgen can use its knowledge of these calling conventions to generate JavaScript glue code that allocates shadow stack space, calls the function with the struct return pointer argument, reads the values out of linear memory, and finally deallocates the shadow stack space before converting the Wasm values into some JavaScript value. But when using interface types directly, rather than polyfilling them, we can’t rely on generating glue code that has access to the Wasm module’s linear memory. First, the memory might not be exported. Second, the only glue code we have is interface adapters, not arbitrary JavaScript code. We want those values as proper return values, rather than through a side channel. So I wrote a walrus transform in wasm-bindgen that converts functions that use a struct return pointer parameter without any actual Wasm return values, into multi-value functions that don’t take a struct return pointer parameter but return multiple resulting Wasm values instead. This transform is essentially a “reverse polyfill” for multi-value functions. ;; Before. ;; ;; First parameter is a struct return pointer. No ;; return values, as they are stored through the ;; struct return pointer. (func $make_pair (param i32 i32 i32) ;; ... ) ;; After. ;; ;; No more struct return pointer parameter. Return ;; values are actual Wasm results. (func $make_pair (param i32 i32) (result i32 i32) ;; ... ) The transform is only applied to exported functions that take a struct return pointer parameter, and rather than rewriting the source function in place, the transform leaves it unmodified but removes it from the Wasm module’s exports list. It generates a new function that replaces the old one in the Wasm module’s exports list. This new function allocates shadow stack space for the return value, calls the original function, reads the values out of the shadow stack onto the Wasm value stack, and finally deallocates the shadow stack space before returning. For our running make_pair example, the transform produces an exported wrapper function like this: ;; pair.wasm (func $new_make_pair (param i32 i32) (result i32 i32) ;; Our struct return pointer that points to the ;; scratch space we are allocating on the shadow ;; stack for calling `$make_pair`. (local i32) ;; Allocate space on the shadow stack for the ;; result. global.get $shadow_stack_pointer i32.const 8 i32.sub local.tee 2 global.set $shadow_stack_pointer ;; Call the original `$make_pair` with our ;; allocated shadow stack space for its ;; results. local.get 2 local.get 0 local.get 1 call $make_pair ;; Read the return values out of the shadow ;; stack and place them onto the Wasm stack. 
local.get 2 i32.load local.get 2 i32.load offset=4 ;; Finally, restore the shadow stack pointer. local.get 2 i32.const 8 i32.add global.set $shadow_stack_pointer ) With this transform in place, wasm-bindgen can now generate multi-value function exports along with associated interface adapters that lift the concrete Wasm return values into abstract interface types. The multi-value support and transform were implemented in wasm-bindgen in these pull requests: #1764: Introduce a multi-value transform #1805: Always use multi-value when targeting interface types #1839: Update binding metadata after multi-value transform Wasmtime and Cranelift Ok, so at this point, we can generate multi-value Wasm binaries with the Rust and Wasm toolchain — woo! But now we need to be able to run these binaries. Enter Wasmtime, the WebAssembly runtime built on top of the Cranelift code generator. Wasmtime translates WebAssembly into Cranelift’s IR with the cranelift-wasm crate, and then Cranelift compiles the IR down to native machine code. Implementing multi-value Wasm support in Wasmtime and Cranelift roughly involved two steps: Translating multi-value Wasm into Cranelift IR Supporting arbitrary numbers of return values in Cranelift Translating Multi-Value Wasm into Cranelift IR Cranelift has its own intermediate representation that it manipulates, optimizes, and legalizes before generating machine code for the target architecture. In order for Cranelift to compile some code, you need to translate whatever you’re working with into Cranelift’s IR. In our case, that means translating Wasm into Cranelift’s IR. This process is analogous to rustc converting its mid-level intermediate representation (MIR) to LLVM’s IR.2 Cranelift’s IR is made up of (extended) basic blocks3 containing code in single, static-assignment form (SSA). SSA, as the name implies, means that variables can only be assigned to when defined, and can’t ever be re-assigned: ;; `42 + 1337` in Cranelift IR v0 = iconst.32 42 v1 = iconst.32 1337 v2 = iadd v0, v1 When translating to SSA form, most re-assignments to a variable x can be handled by defining a fresh x1 and replacing subsequent uses of x with x1, and then turning the next re-assignment into x2, etc. But that doesn’t work for points where control flow joins, such as the block following the consequent and alternative arms of an if/else. Consider this Rust code, and how we might translate it into SSA: let x; if some_condition() { // This version of `x` becomes `x0` when // translated to SSA. x = foo(); } else { // This version of `x` becomes `x1` when // translated to SSA. x = bar(); } // Does this use `x0` or `x1`?? do_stuff(x); Should the do_stuff call at the bottom use x0 or x1 when translated into SSA? Neither! SSA uses Φ (phi) functions to handle these cases. A phi function takes a number of mutually exclusive, control flow-dependent parameters and returns the one that was defined where control flow came from. In our example we would have x2 = Φ(x0, x1), and if some_condition() was true then x2 would get its value from x0. Otherwise, x2 would get its value from x1. If SSA and phi functions are new to you and you’re feeling confused, don’t worry! It was confusing for me too when I first learned about this stuff. But Cranelift IR doesn’t use phi functions per se, it has something that I think is more intuitive: blocks can have formal parameters. Translating our example to Cranelift IR, we get this: ;; Head of the `if`/`else`. 
ebb0:
    v0 = call some_condition()
    brnz v0, ebb1
    jump ebb2

;; Consequent.
ebb1:
    v1 = call foo()
    jump ebb3(v1)

;; Alternative.
ebb2:
    v2 = call bar()
    jump ebb3(v2)

;; Code following the `if`/`else`.
ebb3(v3: i32):
    call do_stuff(v3)

Note that ebb3 takes a parameter for the control flow-dependent value that we should pass to do_stuff! And the jumps in ebb1 and ebb2 pass their locally-defined values "into" ebb3! This is equivalent to phi functions, but I find it much more intuitive.

Anyways, translating WebAssembly code into Cranelift IR happens in the cranelift-wasm crate. It uses wasmparser to decode the given blob of Wasm and validate it, and then constructs Cranelift IR via (you guessed it!) abstract interpretation. As cranelift-wasm interprets Wasm instructions, rather than pushing and popping Wasm values, it maintains a stack of Cranelift IR SSA values. As cranelift-wasm enters and exits Wasm control frames, it creates Cranelift IR basic blocks. This process is fairly similar to walrus's IR construction, which was pretty similar to wasmparser's validation, and the whole thing felt pretty familiar by now. There were just a couple tricky bits.

The first tricky bit was remembering to add parameters (phi functions) to the first basic block of a Wasm loop's body, representing its Wasm stack parameters. This is necessary because control flow joins from two places at the top of the loop body: from where we were when we first entered the loop, and from the bottom of the loop when we finish one iteration and start another. In terms of the abstract interpretation, this means you need to pop off the SSA values you have on the stack at the start of the loop, construct SSA values for the loop's parameters, and then push those onto the stack instead. I originally overlooked this, resulting in a fair bit of head scratching and debugging mis-translated IR. Whoops!

Second, cranelift-wasm tracks reachability during translation, and if some Wasm code is unreachable, we don't even bother constructing Cranelift IR for it. But the boundary between unreachable and reachable code, and when one transitions to the other, can be a bit subtle. You can be in an unreachable state, fall through the current block into the following block, and become reachable once again. Throw in ifs with elses, ifs without elses, unconditional branches, and early returns, and it is easy for bugs to sneak in.

And in the process of adding multi-value Wasm support, bugs did, in fact, sneak in. This time they involved an if that was initially reachable, whose consequent arm also ends reachable, but whose alternative arm ends unreachable. Given that, should the block following the consequent and alternative be reachable? Yes, but we were incorrectly computing that it shouldn't be. To fix this bug, I refactored how cranelift-wasm computes reachability of code following an if. It now correctly determines that the following block is reachable if the head of the if is reachable and any of the following are true (condensed into a small code sketch below):

The consequent or alternative end reachable, in which case they will continue to the following block.
The consequent or alternative do an early branch (potentially a conditional branch) to the following block, and that branch is reachable.
There is no alternative, so if the if's condition is false, we go directly to the following block.

To be sure that we are handling all these edge cases correctly, I added tests enumerating every combination of reachability of an if's arms as well as early branches. Phew!
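Condensed into code, the rule above might look like this minimal, hypothetical Rust sketch. The function and parameter names are mine, not cranelift-wasm's actual internals:

// Hypothetical condensation of the reachability rule described above.
fn following_block_reachable(
    head_reachable: bool,
    consequent_ends_reachable: bool,
    alternative_ends_reachable: bool,
    reachable_early_branch_to_following: bool,
    has_alternative: bool,
) -> bool {
    head_reachable
        && (consequent_ends_reachable
            || alternative_ends_reachable
            || reachable_early_branch_to_following
            // No `else` arm: a false condition falls through
            // directly to the following block.
            || !has_alternative)
}

fn main() {
    // The buggy case from above: reachable head, reachable consequent,
    // unreachable alternative, an `else` present, no early branches.
    // The following block is reachable.
    assert!(following_block_reachable(true, true, false, false, true));
}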
Finally, this bug first manifested itself in a 39 KiB Wasm file, and figuring out what was going on was made so much easier thanks to tools like wasm-reduce (a tool that is part of binaryen) and creduce (working on the WAT disassembly, rather than the binary Wasm). I forget which one I used this time, but I've successfully used both to turn big, complicated Wasm test cases into small, isolated test cases that highlight the bug at hand. These tools are real life savers, so it is worth broadcasting their existence just in case anyone doesn't know about them!

Translating multi-value Wasm into Cranelift IR happened in these pull requests:

#1049: Translate multi-value Wasm into Cranelift IR
#1110: Correctly jump to the destination block at the end of the consequent
#1143: Fix reachability tracking for ifs

Supporting Many Return Values in Cranelift

Cranelift IR the language supports returning arbitrarily many values from a function, but Cranelift the implementation only supported returning as many values as there are available registers in the calling convention the function is using. For example, with the System V calling convention, you could return up to three pointer-sized values, and with the Windows fastcall calling convention, you could only return a single pointer-sized value.

So the question was: how do we return more values than can fit in registers? This should trigger some déjà vu: when compiling to Wasm, how was LLVM returning structures larger than could fit in a single Wasm value? Struct return pointer parameters! This is nothing new; in fact, its use is dictated by certain calling conventions. We just hadn't implemented support for it in Cranelift yet. So that's what I set out to do.

When Cranelift is given some initial IR, the IR is generally portable and machine independent. As the IR moves through Cranelift, it eventually reaches a legalization phase, where instructions that don't have a direct mapping to a machine code instruction in the target architecture are replaced with ones that do. For example, on 32-bit x86, Cranelift legalizes 64-bit arithmetic by expanding it into a series of 32-bit operations. During this process, we also legalize function signatures: passing a value that is larger than can fit in a register may need to be split into multiple parameters, each of which can fit in registers, for example. Signature legalization also assigns locations to formal parameters based on the function's calling convention: this parameter should be in this register, that parameter should be at this stack offset, etc.

My plan for implementing arbitrary numbers of return values via struct return pointer parameters was to hook into Cranelift's legalization phase in three places: legalizing signatures, legalizing return instructions, and legalizing call instructions.

When legalizing signatures, we need to determine whether a struct return pointer is required, and if so, update the signature to reflect that:

;; Before legalization.
fn() -> i64, i64, i64, i64 fast

;; After legalization.
fn (v0: i64 sret [%rdi]) -> i64 sret [%rax] fast

Here, fast means the signature is using our internal, unstable "fast" calling convention. The sret is an annotation for a parameter or return value, in this case documenting that it is being used as a struct return pointer. The %rdi and %rax are the registers assigned to the parameter and return value by the calling convention.4
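The decision being made here can be pictured with a small, hypothetical sketch: try to give every return value one of the calling convention's return registers, and demote them all to a struct return pointer when they don't fit. This mirrors the behavior described in this section, not Cranelift's actual code, and the register names in the usage example are placeholders:

// Hypothetical sketch of the signature-legalization decision.
#[derive(Debug)]
enum LegalizedReturns<'a> {
    InRegisters(Vec<(&'a str, &'a str)>), // (value, register) pairs
    StructReturnPointer,                  // sret parameter + loads/stores
}

fn legalize_returns<'a>(
    values: &[&'a str],
    return_registers: &[&'a str],
) -> LegalizedReturns<'a> {
    if values.len() <= return_registers.len() {
        // Every return value gets its own register.
        LegalizedReturns::InRegisters(
            values
                .iter()
                .copied()
                .zip(return_registers.iter().copied())
                .collect(),
        )
    } else {
        // Too many return values: fall back to a struct return pointer.
        LegalizedReturns::StructReturnPointer
    }
}

fn main() {
    // Four return values but only three return registers, as in the
    // four-i64 example above: fall back to the struct return pointer.
    let legal = legalize_returns(&["v0", "v1", "v2", "v3"], &["r0", "r1", "r2"]);
    println!("{:?}", legal); // StructReturnPointer
}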
After legalization, we've added the struct return pointer parameter, removed the old return values, and made the function return the struct return pointer itself. Returning the struct return pointer is mandated by the System V ABI's calling conventions, but we currently do the same thing for our internal, unstable calling convention as well.

After signatures are legalized, we need to legalize call and return instructions as well, so that they match the new, legalized signatures. Let's turn our attention to the latter first.

Legalizing a return instruction removes the return values from the return instruction itself and creates a series of preceding store instructions that write the return values through the struct return pointer. Here's an example that is returning four i32 values:

;; Before legalization.
ebb0:
    ;; ...
    return v0, v1, v2, v3

;; After legalization.
ebb0(v4: i64):
    ;; ...
    store notrap aligned v0, v4
    store notrap aligned v1, v4+4
    store notrap aligned v2, v4+8
    store notrap aligned v3, v4+12
    return v4

The new v4 value is the struct return pointer parameter. The notrap annotation on the store instructions says that these stores can't trigger a trap: it is the caller's responsibility to give us a valid struct return pointer that points to enough space to fit all of our return values. The aligned annotation is similar, saying that the pointer we are storing through is properly four-byte aligned for an i32. Again, the responsibility is on the caller to ensure the struct return pointer has at least the maximum alignment required by the return values' types. The +4, +8, and +12 are static immediate offsets that are added to the v4 operand to compute each store's destination address.

Legalizing a call instruction has comparatively more responsibilities than legalizing a return instruction. Yes, it involves adding the struct return pointer argument to the call instruction itself, and then loading the values out of the struct return space after the callee returns to us. But it additionally must allocate the space for the struct return in the caller function's stack frame, and it must ensure that the size and alignment invariants that the callee and its return instructions rely on are upheld. Let's take a look at an example of some caller function calling a callee function that returns four i32s:

;; Before legalization.
function %caller() {
    fn0 = colocated %callee() -> i32, i32, i32, i32

ebb0:
    v0, v1, v2, v3 = call fn0()
    return
}

;; After legalization.
function %caller() {
    ss0 = sret_slot 16
    sig0 = (i64 sret [%rdi]) -> i64 sret [%rax] fast
    fn0 = colocated %callee sig0

ebb0:
    v4 = stack_addr.i64 ss0
    v5 = call fn0(v4)
    v6 = load.i32 notrap aligned v5
    v0 -> v6
    v7 = load.i32 notrap aligned v5+4
    v1 -> v7
    v8 = load.i32 notrap aligned v5+8
    v2 -> v8
    v9 = load.i32 notrap aligned v5+12
    v3 -> v9
    return
}

The ss0 = sret_slot 16 is a sixteen-byte stack slot that we created for the struct return space. It is also aligned to sixteen bytes, which is greater than necessary in this case, since we only need four-byte alignment for the i32s. Similar to the stores in the legalized return, the loads in the legalized call are also annotated with notrap and aligned. v0 -> v6 establishes that v0 is another name for v6, so we don't have to eagerly rewrite all the following uses of v0 into uses of v6 (even though there don't happen to be any in this particular example).
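Stepping back from the IR, the protocol that this legalized code implements can be sketched in plain Rust. This is a hypothetical illustration of the convention for four i32 results, not code from Cranelift:

// Hypothetical Rust rendering of the struct-return protocol above.
unsafe fn callee(sret: *mut i32) -> *mut i32 {
    // The callee writes each result through the struct return pointer
    // at offsets +0, +4, +8, and +12 (placeholder values here)...
    sret.add(0).write(1);
    sret.add(1).write(2);
    sret.add(2).write(3);
    sret.add(3).write(4);
    // ...and returns the struct return pointer itself, as the System V
    // ABI mandates.
    sret
}

unsafe fn caller() -> (i32, i32, i32, i32) {
    // The caller reserves properly sized and aligned space in its own
    // stack frame (the `ss0 = sret_slot 16` above)...
    let mut slot = [0i32; 4];
    // ...passes its address as the struct return pointer argument...
    let p = callee(slot.as_mut_ptr());
    // ...and loads the results back out after the call returns.
    (p.read(), p.add(1).read(), p.add(2).read(), p.add(3).read())
}

fn main() {
    println!("{:?}", unsafe { caller() }); // (1, 2, 3, 4)
}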
With signature, call, and return legalization that all understand when and how to use struct return pointers, we now have full support for arbitrarily many multi-value returns in Cranelift and Wasmtime. This support was implemented in these pull requests:

#1147: Support many multi-value returns with struct return pointers
#1213: legalize_signatures: Optimistically try and assign register locations to return values; backtrack to use struct-return pointer parameter

Putting It All Together

Finally, let's put everything together: we'll create a multi-value Wasm binary with the Rust and Wasm toolchain and then run it in Wasmtime!

First, let's create a new library crate with cargo:

$ cargo new --lib hello-multi-value
     Created library `hello-multi-value` package
$ cd hello-multi-value/

We're going to use wasm-bindgen to return a string from our Wasm function, so let's add it as a dependency. Additionally, we're going to create a Wasm library rather than an executable, so we specify that this is a "cdylib":

# Cargo.toml

[lib]
crate-type = ["cdylib"]

[dependencies]
wasm-bindgen = "0.2.54"

Let's fill out src/lib.rs with our string-returning function:

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn hello(name: &str) -> String {
    format!("Hello, {}!", name)
}

We can build our Wasm library with cargo wasi:

$ cargo wasi build --release

This will automatically build a Wasm file for the wasm32-wasi target and then run wasm-bindgen's post-processor to add interface types and introduce multi-value. We can verify this with the wasm-objdump tool from WABT:

$ wasm-objdump -x \
    target/wasm32-wasi/release/hello_multi_value.wasm

hello_multi_value.wasm: file format wasm 0x1

Section Details:

Type[14]:
 ...
 - type[6] (i32, i32) -> (i32, i32)
 ...
Function[151]:
 ...
 - func[93] sig=6
 ...
Export[5]:
 - func[93] -> "hello"
 ...

We can see that the function is exported as `"hello"` and that it has the multi-value type `(i32, i32) -> (i32, i32)`. This shim is the function introduced by the multi-value transform we added to `wasm-bindgen`, wrapping the original function and turning its struct return pointer into multiple return values.

Finally, we can load this Wasm library into Wasmtime, which will use Cranelift to just-in-time (JIT) compile it to machine code, and then invoke the hello export with the string "multi-value Wasm":

$ wasmtime \
    target/wasm32-wasi/release/hello_multi_value.wasm \
    --invoke hello "multi-value Wasm"
Hello, multi-value Wasm!

It works!!
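As an aside, you can also drive a multi-value export from Rust through the wasmtime crate's embedding API instead of the CLI. Here is a minimal, hypothetical sketch using a tiny inline module; it assumes the wasmtime (with its default wat feature) and anyhow crates as dependencies, and a recent wasmtime release, whose embedding API differs from the one that existed when this post was written:

// Calling a multi-value export through the wasmtime embedding API.
use wasmtime::{Engine, Instance, Module, Store, Val};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    // A hypothetical two-result function, defined inline for brevity.
    let module = Module::new(
        &engine,
        r#"(module
             (func (export "swap") (param i32 i32) (result i32 i32)
               local.get 1
               local.get 0))"#,
    )?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let swap = instance.get_func(&mut store, "swap").unwrap();

    // Multi-value results come back as a slice of `Val`s.
    let mut results = [Val::I32(0), Val::I32(0)];
    swap.call(&mut store, &[Val::I32(1), Val::I32(2)], &mut results)?;
    println!("{:?}", results); // [I32(2), I32(1)]
    Ok(())
}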
Conclusion

The Rust and WebAssembly toolchain now supports generating Wasm binaries that make use of the multi-value proposal, and Cranelift and Wasmtime can compile and run multi-value Wasm binaries. This has been — I hope! — an interesting tale of implementing a Wasm feature through the whole vertical ecosystem, start to finish.

Lastly, and definitely not leastly, I'd like to thank Dan Gohman, Benjamin Bouvier, Alex Crichton, Yury Delendik, @bjorn3, and @iximeow for providing reviews and implementation suggestions for different pieces of this journey at various stages. Additionally, thanks again to Alex and Dan, and to Lin Clark and Till Schneidereit, for providing feedback on early drafts of this piece.

0 Additionally, Thomas Lively and some other folks are already working on adding multi-value Wasm support directly to LLVM, so that is definitely coming in the future, and it made sense for me to focus my attention elsewhere.

1 There are some "stack-y" forms that don't quite directly map to a tree. For example, you can insert stack-neutral, side-effectual instruction sequences in the middle of any part of the post-order encoding of an expression tree. Here is a call that produces some value, followed by a drop of that value, inserted into the middle of the post-order encoding of 1 + 2:

;; Instructions    ;; Stack
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
i32.const 1        ;; [1]
call $foo          ;; [1, $result_of_foo]
drop               ;; [1]
i32.const 2        ;; [1, 2]
i32.add            ;; [3]

These stack-y forms can be represented by introducing blocks that don't also introduce labels for control flow branches. You can think of them as sort of similar to Common Lisp's progn and prog1 forms or Scheme's (begin ...).

2 Fun fact: there is also ongoing work to make Cranelift a viable alternative backend for rustc! See the goals write-up and the bjorn3/rustc_codegen_cranelift repo for details.

3 Originally, Cranelift was designed to use extended basic blocks, rather than regular basic blocks. Both can only be entered at their head, but basic blocks additionally can only exit at their tail, while extended basic blocks can have conditional exits from the middle of the block. The idea is that extended basic blocks more directly match machine code, which falls through untaken conditional branches to continue executing the next instruction. However, Cranelift is in the process of switching over to regular basic blocks and removing support for extended basic blocks. The reasoning is that all its optimization passes end up essentially constructing and keeping track of basic blocks anyways, which added complexity, and the extended basic blocks weren't ultimately carrying their weight.

4 Semi-confusingly, the square brackets are just the syntax Cranelift uses to surround parameter locations; they do not represent dereferencing the way they would in Intel-flavored assembly syntax.