Posted about 7 years ago by mconley
Highlights
Just in case you didn’t get the memo, the add-ons team blogged about their plans up to Firefox 57.
New WebExtension APIs have landed in Nightly, including:
Sidebar API
Check out these prototype sidebar tabs!
Custom protocol handlers API
Privacy APIs
DevTools APIs, devtools panel API landed
We’ll soon be able to support Redux DevTools! Check out this video:
URL overrides for about:home and about:newtab
Reminder: when Firefox 52 is released (in 1 week!), NPAPI plugins other than Flash will no longer be supported by default.
That means that Acrobat Reader, Java, and Silverlight will not function.
This also affects Google Hangouts.
The Mobile team has started building Firefox Focus for Android!
Mozilla has been accepted into Google Summer of Code! Here are the Firefox projects.
Test Pilot has launched some new experiments!
Launched last week: Snooze Tabs and Pulse
Launching tomorrow (Wednesday): Containers
Containers are currently in Nightly. Test Pilot is being used to measure engagement and iterate on the UI. Read more about the goals and plans here.
Friends of the Firefox team
Resolved bugs (excluding employees): https://mzl.la/2mpsGpV
More than one bug fixed:
Deepa
Mayank
Svetlana Orlik
Tomislav Jovanovic :zombie
Vedant Sareen [:fionn_mac]
New contributors (first patch!)
Chandler got rid of some leftover SVGs we were packaging
Timothy Pan fixed a graphics glitch in the fullscreen mode warning dialog
Deepa cleaned up some of our macOS theming code
kevin.kwong.chip fixed a graphics glitch in about:addons
Project Updates
Add-ons
Currently unsure if WebExtensions installation permissions will land in 54 or 55, most likely 55.
Out of process WebExtensions coming in 55.
Activity Stream
Removing dependence on the Add-on SDK in order to land in mozilla-central, as a result of Talos testing and API deprecation. Starting with API replacement/inlining, bootstrapping/loader alternatives, and testing infrastructure.
The team has identified chunks of their project that they can land in mozilla-central independently of one another, and work is underway here
The team will still use the system add-on architecture where we feel we need to iterate more quickly, such as our UI code
Content Handling Enhancement
Download progress indication redesign reviewed and will land in a matter of days.
Electrolysis (e10s)
Planning is currently underway to do an e10s-multi experiment on a future release. Currently defining cohort sizes.
Native Stacks are now available for BHR on Windows, and stacks are starting to trickle in for tab switch spinners.
mconley found a case where we’ll show tab switch spinners when blocked by JS, even with force-painting. Working on a patch.
Firefox Core Engineering
Looked into “one” problematic Aurora 51 client that was messing up our graphs.
Looking into lack of application of hotfix for websense in 47 and 48 (despite users actually having the hotfix).
pingSender should be fully functional and out of QA this week. Will be used for sending crash pings on Nightly, Aurora, and Beta next week.
Starting to work on a background download service for updates.
Form Autofill
Team met for a workweek in Taipei
Discussions with layout/DOM on platform dependencies
Got form fill working with Enter from autocomplete
Finalizing preferences design with UX
Collecting data on the form structures of targeted US e-retailers
Fixed
Fill the selected autofill profile when an autocomplete entry is chosen
[Form Autofill] Prevent duplicate autocomplete search registration
[Form Autofill] Add built-in debug logging to ease debugging
add a new profile item and make rich-result-popup append item accordingly
Fallback to form history if there is no form autofill profile saved
Replace async getEnabledStatus with initialProcessData for content process init
Hide the result without primary label in ProfileAutoCompleteResult
Implement two column layout for profile item binding
Make adjustHeight method adapt profile item list
Form autofill popup won’t apply to an auto-focused input until it’s refocused
In Progress
Dialog to add/edit/view an autofill profile
Use FormAutofillHandler.autofillFormFields to fill in fields
Fallback to form history if whole profiles doesn’t have any data for the specific fields
Fallback to form history if the target field doesn’t have data in selected profile
Mobile
daleharvey has the beginnings of Progressive Web App support working in Fennec, and will be posting more patches for review soon!
Firefox Focus 3.1 for iOS is scheduled to ship at the end of the month. This release only contains locale updates. The product went from 27 to 51 supported languages!
Firefox for iOS 7.0 has entered the stabilization phase and is expected to ship about 4 weeks from now. This release includes a migration of the codebase to Swift 3.0, stability fixes and Top Tabs for iPad. We will be doing TestFlight beta builds in the coming weeks. You can sign up for those here!
Firefox for iOS 8.0 development has started. Primary focus is landing Activity Stream
The Mobile team has started an engineering blog!
Landed various improvements to Android Sync. Better uploader, smarter sync flow, with a focus on data correctness
Project Prox aims to have our second user test in early March. Our new build is an iteration on the previous user test, addressing user feedback such as the need for more consistent data, filters, and a map view.
Platform UI and other Platform Audibles
jessica and scottwu have been working on proper localization support for the Date/Time pickers
Styling work for the dropdown has finished; riding the 54 train. Let jaws or mconley know if you see any issues.
Privacy/Security
More polish work around the permissions project (53) and in-context password warning (52) project.
Test Pilot experiments progress: Tracking Protection concluded, Containers is about to launch.
Quality of Experience
Lightweight themes will soon be implemented purely through CSS variables once this bug lands
We are close to getting the new (Web Extension-based) themes to show up in the Add-ons Manager. This work is being tracked here
Blog post announcing Theming API by dolske
Improvements to importing are ongoing
Turning on automatic migration/import on nightly, aurora and early beta starting with 54
Running another experiment on beta 53 to see why/when people don’t like us importing their history/bookmarks from another browser (with a survey)
Dão added some limits so we don’t import ALL THE HISTORY all the time when importing from Chrome (currently looking at 6 months and 2000 urls as a limit)
Gijs tried to make history import not hang quite so much by reducing main thread communication
Preferences reorg/search
The work to reorganize the Preferences continues to push forward. Going through review cycles now. We are hoping to get the reorganization work to land at the beginning of the Nightly 55 release cycle.
Integrating search within the Preferences is also on-going, and will also likely land in the 55 release cycle though the two projects are not tied to each other.
Search
Landed some Awesomebar and Places fixes.
High-rez favicons are coming!
Sync / Firefox Accounts
Bookmark repair work landing this week. This is important for unblocking iOS sync.
New UI to show all synced tabs in the panel.
Test Pilot
Page Shot in FF 54
Page Shot will now be a bootstrapped addon + embedded WebExtension, and will ship as a system addon.
Please submit your ideas for new Firefox features!
We’ve simplified our Test Pilot experiment proposal form. Learn more.
Here are the raw meeting notes that were used to derive this list.
Want to help us build Firefox? Get started here!
Here’s a tool to find some mentored, good first bugs to hack on.
|
Posted about 7 years ago by mihai.boldan
Hello Mozillians!
Last week, the Release QA Team (Firefox for Desktop) reached out to a few people from the QA Community and asked for help on a very specific list of bug fixes that, if successfully verified, would make the team more confident about the quality of Firefox 52.0.
The following contributors were hand picked based on their consistent and reliable performance during Bug Verification Days: Maruf Rahman, Md.Majedul isalm, Kazi Nuzhat Tasnem, Azmina, Saheda Reza, Nazir Ahmed Sabbir, Sajedul Islam, Tanvir Rahman and Hossain Al Ikram.
It gives me great pleasure to extend my warmest congratulations to each and every one of them, on behalf of the entire Release QA Team. Thank you and we all hope that you’ll be willing to repeat this exercise again, soon.
Keep up the good work guys!
Mihai Boldan, QA Community Mentor
Firefox for Desktop, Release QA Team
|
Posted about 7 years ago by Camelia Badau
Hello Mozillians,
We are happy to let you know that on Friday, March 3rd, we are organizing the Firefox 53.0 Aurora Testday. We’ll be focusing our testing on the following features: support for WebM Alpha, Reader Mode’s estimated reading time, and the Quantum compositor process for Windows. Check out the detailed instructions via this etherpad.
No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.
Join us and help us make Firefox better!
See you on Friday!
|
Posted about 7 years ago
Advice on storing encryption keys
I saw an excellent question get some excellent infosec advice on IRC recently.
I’m quoting the discussion here because I expect that I’ll want to reference
it when answering others’ questions in the future.
A user going by Dagnabit asked:
May I borrow some advice, specifically on how best to store an encryption key? I
have a Python script that encrypts files using libsodium. My question is how
can I securely store the encryption key within the file system? Is it best
kept in an SQLite db that can only be accessed by the user owning the Python
script?
This user has run right into one of the fundamental challenges of security:
How can my secrets (in this case, keys) be safe from attackers, while still
being usable?
HedgeMage replied with a wall of useful
advice. Quotations are her words; the links and annotations between them are mine,
adding some context and opinions.
So, it depends on your security model: in most cases I’m prone to
keeping my encryption key on a hardware token, so that even if the server is
compromised, the secret key is not.
You’re probably familiar with time-based one-time-pad hardware tokens, but in the case of key
management, the “hardware token” could be as simple as a USB stick locked in a
safe. On the spectrum of compromise between security and convenience, a
hardware token is toward the DNSSEC keyholder end.
However, for some projects you are on virtualized infrastructure and can’t
plug in a hardware token. It’s unfortunate, because that’s really the safest
thing, but a reality for many of us.
This also applies to physical infrastructure in which an application might
need to use a key without human supervision.
Without getting into anything crazy where a proxy server does signing, etc,
you usually are better off trusting filesystem permissions than stuffing it in
the database, for the following reasons:
While delegating the task of signing to a proxy server can make life more
annoying to an attacker, you’re still going to have to choose between having a
human hold the key and get interrupted whenever it’s needed, or trusting a
server with it, at some point. You can compromise between those two extremes
by using a setup like subkeys, but it’s
still inconvenient if a subkey gets compromised.
It’s easier to monitor the filesystem activity comprehensively, and detect
intrusions/compromises.
Filesystem permissions are pretty dependable at this point, and if the
application doing the signing has permission for the key, whether in a DB or
the filesystem, it can compromise that key... so the database is giving you new
attack surfaces (compromise of the DB access model) without any new
protections.
To put it even more bluntly, any unauthorized access to a machine has the
potential to leak all of the secrets on it. The actions that you’ll need to
take if you suspect the filesystem of a host was compromised are pretty much
identical to those you’d take if the DB was.
Stuffing the key in the DB is nonstandard enough that you may be writing more of
the implementation yourself, instead of depending as much as possible on
widely-used, frequently-examined code.
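As a concrete illustration of the "trust filesystem permissions" option discussed above, here is a minimal sketch (my own, not from the IRC discussion; the function names are made up) of keeping a key in a file that only its owner can read, on a Unix-like system:

```rust
use std::fs::{self, OpenOptions};
use std::io::{self, Write};
use std::os::unix::fs::{OpenOptionsExt, PermissionsExt};

/// Write the key to a new file created with owner-only permissions
/// (0o600), so there is never a window where it is world-readable.
fn store_key(path: &str, key: &[u8]) -> io::Result<()> {
    let mut f = OpenOptions::new()
        .write(true)
        .create_new(true) // refuse to clobber an existing key file
        .mode(0o600)
        .open(path)?;
    f.write_all(key)
}

/// Load the key, refusing to use a file that group/others can access.
fn load_key(path: &str) -> io::Result<Vec<u8>> {
    let mode = fs::metadata(path)?.permissions().mode();
    if mode & 0o077 != 0 {
        return Err(io::Error::new(
            io::ErrorKind::PermissionDenied,
            "key file is accessible by group/others",
        ));
    }
    fs::read(path)
}
```

Nothing here protects the key from an attacker who gains the owner's privileges, which is exactly the point made above, but it avoids adding a database's access model as a new attack surface.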
Dagnabit’s reply saved me the work of summarizing the key takeaways:
I will work on securing the distribution and removing any unnecessary packages.
I’ll look at the possibility of using a hardware token to keep it
secure/private.
Reducing the attack surface is logical and something I had not considered.
|
Posted about 7 years ago by [email protected] (Alexander Surkov)
There's an old Soviet animated film, "38 Parrots". In it, the animals measure a boa constrictor and conclude that the boa is longer in parrots than in monkeys. I always thought this was a cool joke.

My 2nd-grade son showed me his classwork. The class measured a rectangle by covering it with different shapes: squares, triangles, etc., and then computed the area of the rectangle. For example, if 20 squares were used to cover the rectangle, then the rectangle's area equals 20 squares. If 16 triangles match the shape of the rectangle, then the area equals 16 triangles. Then they compared the areas. They deduced that the area of the rectangle covered by squares is bigger than the area of the same rectangle covered by triangles, because 20 is bigger than 16. That's right: same rectangle, different units, and we're asked which area is bigger.

The idea of measuring is being able to compare different objects: this house is 600 sq.ft, and it's probably small for my family; this one is 1500 sq.ft, and it's probably ok. You could certainly measure your TV in inches and then in centimeters, and conclude that your TV is "bigger" in centimeters than in inches, but that doesn't make any practical sense.

There's a mathematical concept of measure, which is a generalization of the concepts of length, area, and volume. Generally speaking, a measure is a function defined on (subsets of) a set, mapping them to real numbers, i.e. 𝝻: S ⟼ ℝ. You are free to build any kind of measure, and then measure your set in parrots, carrots, or elephants. You can also compare measures in some ways, because they are real-valued functions with some good properties. I don't recall, though, any juggling of multiple measures in a university math program; I would say it is something more pertinent to theoretical research work.

I had a lengthy chat with my son's teacher, arguing that they can certainly compare the number of units used to cover the rectangle, or compare the areas of different objects in the same units, but they shouldn't compare areas expressed in different units.
I didn't succeed. So is this something that 2nd-grade kids are really expected to deal with, or did the teachers misinterpret the Ontario math curriculum, which says: estimate, measure, and record area, through investigation using a variety of non-standard units (e.g., determine the number of yellow pattern blocks it takes to cover an outlined shape)
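For reference, here is the standard definition the paragraph above gestures at; note that a measure is defined on a σ-algebra Σ of subsets of S, not on individual elements:

```latex
% A measure on a set S, with \Sigma a sigma-algebra of subsets of S:
% it sends the empty set to zero and is countably additive over
% pairwise-disjoint unions.
\mu : \Sigma \to [0, \infty], \qquad
\mu(\varnothing) = 0, \qquad
\mu\Big(\bigsqcup_{i=1}^{\infty} E_i\Big) = \sum_{i=1}^{\infty} \mu(E_i).
```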
|
Posted about 7 years ago by Nicholas D. Matsakis
In my previous post, I
outlined a plan for non-lexical lifetimes. I wanted to write a
follow-up post today that discusses different ways that we can extend
the system to support nested mutable calls. The ideas here are based
on some of the ideas that emerged in a
recent discussion on internals, although what I describe
here is a somewhat simplified variant. If you want more background,
it’s worth reading at least the top post in the thread, where I laid
out a lot of the history here. I’ll try to summarize the key bits as I
go.
The problem we’d like to solve
This section is partially copied from the internals post; if you’ve read
that, feel free to skip or skim.
The overriding goal here is that we want to accept nested method calls
where the outer call is an &mut self method, like
vec.push(vec.len()). This is a common limitation that beginners
stumble over and find confusing and which experienced users have as a
persistent annoyance. This makes it a natural target to eliminate as
part of the 2017 Roadmap.
You may wonder why this code isn’t accepted in the first place. To see
why, consider what the resulting MIR looks like (I’m going to number
the statements for later reference in the post):
/* 0 */ tmp0 = &mut vec;       // <-- mutable borrow starts here ---+
/* 1 */ tmp1 = &vec;           // <-- shared borrow overlaps here   |
/* 2 */ tmp2 = Vec::len(tmp1); //                                   |
/* 3 */ Vec::push(tmp0, tmp2); // <-- ..and ends here --------------+
As you can see, we first take a mutable reference to vec for
tmp0. This “locks” vec from being accessed in any other way until
after the call to Vec::push(), but then we try to access it again
when calling vec.len(). Hence the error.
When you see the code desugared in that way, it should not surprise
you that there is in fact a real danger here for code to crash if we
just “turned off” this check (if we even could do such a thing). For
example, consider this rather artificial Rust program:
let mut v: Vec<String> = vec![format!("Hello, ")];
v[0].push_str({ v.push(format!("foo")); "World!" });
// ^^^^^^^^^^^^^^^^^^^^^^ sneaky attempt to mutate `v`
The problem is that, when we desugar this, we get:
let mut v: Vec<String> = vec![format!("Hello, ")];
// creates a reference into `v`'s current data array:
let arg0: &mut String = &mut v[0];
let arg1: &str = {
    // potentially frees `v`'s data array:
    v.push(format!("foo"));
    "World!"
};
// uses pointer into data array that may have been freed:
String::push_str(arg0, arg1)
So, to put it another way, as we evaluate the arguments, we are
creating references and pointers that we will give to the final
function. But evaluating arguments can also have arbitrary
side-effects, which might invalidate the references that we prepared
for earlier arguments. So we have to be sure to rule that out.
In fact, even when the receiver is just a local variable (e.g.,
vec.push(vec.len())) we have to be wary. We wouldn’t want it to be
possible to give ownership of the receiver away in one of the
arguments: vec.push({ send_to_another_thread(vec); ... }). That
should still be an error of course.
(Naturally, these complex arguments that are blocks look really
artificial, but keep in mind that most of the time when this occurs in
practice, the argument is a method or fn call, and that could in
principle have arbitrary side-effects.)
How can we fix this?
Now, we could address this by changing how we desugar method calls
(and indeed the original post on the internals thread
contained two such alternatives). But I am more interested in seeing
if we can keep the current desugaring, but enrich the lifetime and
borrowing system so that it type-checks for cases that we can see
won’t lead to a crash (such as this one).
The key insight is that, today, when we execute the mutable borrow of
vec, we start a borrow immediately, even though the reference
(arg0, here) is not going to be used until later:
/* 0 */ tmp0 = &mut vec;       // <-- mutable borrow created here --+
/* 1 */ tmp1 = &vec;           // <-- shared borrow overlaps here   |
/* 2 */ tmp2 = Vec::len(tmp1); //                                   |
/* 3 */ Vec::push(tmp0, tmp2); // <-- ..but not used until here! ---+
The proposal – which I will call two-phased mutable borrows – is
to modify the borrow-checker so that mutable borrows operate in two
phases:
When an &mut reference is first created, but before it is used,
the borrowed path (e.g., vec) is considered reserved. A
reserved path is subject to the same restrictions as a shared borrow
– reads are ok, but moves and writes are not (except under a
Cell).
Once you start using the reference in some way, the path is
considered mutably borrowed and is subject to the usual
restrictions.
So, in terms of our example, when we execute the MIR statement tmp0 =
&mut vec, that creates a reservation on vec, but doesn’t start
the actual borrow yet. tmp0 is not used until line 3, so that means
that for lines 1 and 2, vec is only reserved. Therefore, it’s ok to
share vec (as line 1 does) so long as the resulting reference
(tmp1) is dead as we enter line 3. Since tmp1 is only used to call
Vec::len(), we’re all set!
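(Two-phase borrows were in fact later adopted by the compiler, so on current stable Rust the motivating call compiles and the example can be checked directly:)

```rust
/// Under two-phase borrows, the autoref `&mut vec` for `push` is only
/// *reserved* while the argument `vec.len()` takes its shared borrow;
/// it becomes an active mutable borrow only at the call itself.
fn demo() -> Vec<usize> {
    let mut vec = vec![10, 20, 30];
    vec.push(vec.len()); // `len()` is evaluated first, so this pushes 3
    vec
}
```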
Code we would not accept
To help understand the rule, let’s look at a few other examples, but
this time we’ll consider examples that would be rejected as illegal
(both today and under the new rules). We’ll start with the example we
saw before that could have triggered a use-after-free:
let mut v: Vec<String> = vec![format!("Hello, ")];
v[0].push_str({ v.push(format!("foo")); "World!" });
We can partially desugar the call to push_str() into MIR
that would look something like this:
/* 0 */ tmp0 = &mut v;
/* 1 */ tmp1 = IndexMut::index_mut(tmp0, 0);
/* 2 */ tmp2 = &mut v;
/* 3 */ Vec::push(tmp2, format!("foo"));
/* 4 */ tmp3 = "World!";
/* 5 */ String::push_str(tmp1, tmp3);
In one sense, this example turns out to be not that interesting in
terms of the new rules. This is because v[0] is actually an
overloaded operator; when we desugar it, we see that v would be
reserved on line 0 and then (mutably) borrowed starting on line 1.
This borrow extends as long as tmp1 is in use, which is to say, for
the remainder of the example. Therefore, line 2 is an error, because
we cannot have two mutable borrows at once.
However, in another sense, this example is very interesting: this is
because it shows how, while the new system is more expressive, it
preserves the existing behavior of safe abstractions. That is,
the index_mut() method has a signature like:
fn index_mut(&mut self, index: Idx) -> &mut Self::Output
Since calling this method is going to “use” the receiver, and hence
activate the borrow, the method is guaranteed that as long as its
return value is in use, the caller will not be able to access the
receiver. This is precisely how it works today as well.
The next example is artificial but inspired by one that is covered
in my original post to the internals thread:
/*0*/ let mut i = 0;
/*1*/ let p = &mut i; // (reservation of `i` starts here)
/*2*/ let j = i; // OK: `i` is only reserved here
/*3*/ *p += 1; // (mutable borrow of `i` starts here, since `p` is used)
/*4*/ let k = i; // ERROR: `i` is mutably borrowed here
/*5*/ *p += 1;
// (mutable borrow ends here, since `p` is not used after this point)
This code fails to compile as well. What happens, as you can see in
the comments, is that i is considered reserved during the first
read, but once we start using p on line 3, i is considered
borrowed. Hence the second read (on line 4) results in an
error. Interestingly, if line 5 were to be removed, then the program
would be accepted (at least once we move to NLL), since the borrow
only extends until the last use of p.
The final example shows that this analysis doesn’t permit every kind
of nesting you might want. In particular, for better or worse, it does
not permit calls to &mut self methods to be nested inside of a call
to an &self method. This means that something like
vec.get({vec.push(2); 0}) would be illegal. To see why, let’s check
out the (partial) MIR desugaring:
/* 0 */ tmp0 = &vec;
/* 1 */ tmp1 = &mut vec;
/* 2 */ Vec::push(tmp1, 2);
/* 3 */ Vec::get(tmp0, 0);
Now, you might expect that this would be accepted, because the borrow
on line 0 would not be active until line 3. But this isn’t quite
right, for two reasons. First, as I described it, only mutable
borrows have a reserve/active cycle, shared borrows start right
away. And the reason for this is that when a path is reserved, it
acts the same as if it had been shared. So, in other words, even if
we used two-phase borrowing for shared borrows, it would make no
difference (which is why I described reservations as only applying to
mutable borrows). At the end of the post, I’ll describe how we could
– if we wanted – support examples like this, at the cost of making
the system slightly more complex.
How to implement it
The way I envision implementing this rule is part of borrow check.
Borrow check is the final pass that executes as part of the compiler’s
safety checking procedure. In case you’re not familiar with how the
compiler works, Rust’s safety check is done using three passes:
Normal type check (like any other language);
Lifetime check (infers the lifetimes for each reference, as described in my previous post);
Borrow check (using the lifetimes for each borrow, checks that all uses are acceptable,
and that variables are not moved).
How borrow check would work before this proposal
Before two-phase borrows, then, the way the borrow-check would begin
is to iterate over every borrow in the program. Since the lifetime
check has completed, we know the lifetimes of every reference and
every borrow. In MIR, borrows always look like this:
var = &'lt mut? lvalue;
//     ^^^ ^^^^
//      |    |
//      |    distinguishes an `&mut` vs. `&` borrow
//      lifetime of borrow
This says “borrow lvalue for the lifetime 'lt” (recall that, under
NLL,
each lifetime is a set of points in the MIR control-flow graph). So
we would go and, for each point in 'lt, add lvalue to the list of
borrowed things at that point. If we find that lvalue is already
borrowed at that point, we would check that the two borrows are
compatible (both must be shared borrows).
At this point, we now have a list of what is borrowed at each point in
the program, and whether that is a shared or mutable borrow. We can then
iterate over all statements and check that they are using the values in
a compatible way. So, for example, if we see a MIR statement like:
k = i // where k, i are integers
then this would be illegal if k is borrowed in any way (shared or
mutable). It would also be illegal if i is mutably borrowed.
Similarly, it is an error if we see a move from a path p when p is
borrowed (directly or indirectly). And so forth.
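The per-statement compatibility check described above can be sketched as follows (illustrative toy types of my own, with lvalues named by strings; the real borrow check works over MIR paths):

```rust
use std::collections::HashMap;

#[derive(PartialEq)]
enum BorrowKind {
    Shared,
    Mut,
}

/// For each CFG point: which lvalues are borrowed there, and how.
type Borrows = HashMap<usize, Vec<(String, BorrowKind)>>;

/// A write like `k = i` at `point` is legal only if `k` is not
/// borrowed at all there and `i` is not mutably borrowed there.
fn write_is_legal(borrows: &Borrows, point: usize, dest: &str, src: &str) -> bool {
    let at = borrows.get(&point).map(Vec::as_slice).unwrap_or(&[]);
    let dest_borrowed = at.iter().any(|(lv, _)| lv.as_str() == dest);
    let src_mut_borrowed = at
        .iter()
        .any(|(lv, kind)| lv.as_str() == src && *kind == BorrowKind::Mut);
    !dest_borrowed && !src_mut_borrowed
}
```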
Supporting two-phases
To support two-phases, we can extend borrow-check in a simple way.
When we encounter a mutable borrow:
var = &'lt mut lvalue;
we do not go and immediately mark lvalue as borrowed for all the
points in 'lt. Instead, we find the points A in 'lt where the
borrow is active. This corresponds to any point where var is
used and any point that is reachable from a use (this is a very simple
inductive definition one can easily find with a data-flow
analysis). For each point in A, we mark that lvalue is mutably
borrowed. For the points 'lt - A, we would mark lvalue as merely
reserved. We can then do the next part of the check just as before,
except that anywhere that an lvalue is treated as reserved, it is
subject to the same restrictions as if it were shared.
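The active set A can be computed with a tiny forward dataflow pass, sketched here over a toy CFG (points are indices with successor lists; the names are mine, not the compiler's):

```rust
use std::collections::HashSet;

/// A point is *active* if the borrow's reference is used there, or if
/// the point is reachable in the CFG from such a use; everywhere else
/// within the borrow's lifetime the path is merely *reserved*.
fn active_points(successors: &[Vec<usize>], uses: &[usize]) -> HashSet<usize> {
    let mut active = HashSet::new();
    let mut worklist: Vec<usize> = uses.to_vec();
    while let Some(p) = worklist.pop() {
        if active.insert(p) {
            // Newly marked active: propagate to all successors.
            worklist.extend(successors[p].iter().copied());
        }
    }
    active
}
```

For the running four-statement example, where tmp0 is used only at point 3, the active set is {3}, leaving points 0 through 2 reserved.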
Comparing to other approaches
There have been a number of proposals aimed at solving this same
problem. This particular proposal is, I believe, a new variant, but
it accepts a similar set of programs to the other proposals. I wanted
to compare and contrast it a bit with prior ideas and try to explain
why I framed it in just this way.
Borrowing for the future.
My own first stab at this problem was using the idea of “borrowing for
the future”, described in the internals thread. The basic
idea was that the lifetime of a borrow would be inferred to start on
the first use, and the borrow checker, when it sees a borrow that
doesn’t start immediately, would consider the path “reserved” until
the start. This is obviously very close to what I have presented
here. The key difference is that here the borrow checker itself
computes the active vs reserved portions of the borrow, rather than
this computation being done in lifetime inference.
This seems to me to be more appropriate: lifetime inference figures
out how long a given reference is live (may later be used), based on
the type system and its rules. The borrow checker then uses that
information to figure out if the program may cause the reference to be
invalidated.
The formulation I presented here also fits much better with the
NLL rules that I presented previously. This is because it
allows us to keep the rule that when a reference is live at some
point P (may be dereferenced later), its lifetime include that point
P. To see what I mean, let’s reconsider our original example, but in
the “borrowing for the future” scheme. I’ll annotate lifetimes using
braces to describe sets:
/* 0 */ tmp0 = &{3} mut vec;
/* 1 */ tmp1 = &vec;
/* 2 */ tmp2 = Vec::len(tmp1);
/* 3 */ Vec::push(tmp0, tmp2);
Here tmp0 would have the type &{3} mut Vec, but tmp0 is clearly
live at point 1 (i.e., it will be used later, on line 3). So we would
have to make the NLL rules that I outlined later incorporate a
more complex invariant, one that considers two-phase borrows as a
first-class thing (cue next piece of ‘related work’ in 1…2…3….).
Two-phase lifetimes
In the internals thread, arielb1 had an interesting proposal
that they called “two-phase lifetimes”. The goal was precisely to take
the “two-phase” concept but incorporate it into lifetime inference,
rather than handling it in borrow checking as I present here. The idea
was to define a type RefMut<'r, 'w, T> which stands in for a
kind of “richer” &mut type. In particular, it has two
lifetimes:
'r is the “read” lifetime. It includes every point where the reference
may later be used.
'w is a subset of 'r (that is, 'r: 'w) which indicates the “write” lifetime.
This includes those points where the reference is actively being written.
We can then conservatively translate a &'a mut T type into
RefMut<'a, 'a, T> – that is, we can use 'a for both of the two
lifetimes. This is what we would do for any &mut type that appears
in a struct declaration or fn interface. But for &mut T types within
a fn body, we can infer the two lifetimes somewhat separately: the
'r lifetime is computed just as I described in my
NLL post. But the 'w lifetime only needs to include those
points where a write occurs. The borrow check would then guarantee
that the 'w regions of every &mut borrow is disjoint from the 'r
regions of every other borrow (and from shared borrows).
This proposal accepts more programs than the one I outlined. In
particular, it accepts the example with interleaved reads and writes
that we saw earlier. Let me give that example again, but annotating
the regions more explicitly:
/* 0 */ let mut i = 0;
/* 1 */ let p: RefMut<{2-5}, {3,5}, i32> = &mut i;
// ^^^^^ ^^^^^
// 'r 'w
/* 2 */ let j = i; // just in 'r
/* 3 */ *p += 1; // must be in 'w
/* 4 */ let k = i; // just in 'r
/* 5 */ *p += 1; // must be in 'w
As you can see here, we would infer the write region to be just the
two points 3 and 5. This is precisely those portions of the CFG where
writes are happening – and not the gaps in between, where reads are
permitted.
Why I do not want to support discontinuous borrows
As you might have surmised, these sorts of “discontinuous” borrows
represent a kind of “step up” in the complexity of the system. If it
were vital to accept examples with interleaved writes like the
previous one, then this wouldn’t bother me (NLL also represents such a
step, for example, but it seems clearly worth it). But given that the
example is artificial and not a pattern I have ever seen arise in
“real life”, it seems like we should try to avoid growing the
underlying complexity of the system if we can.
To see what I mean about a “step up” in complexity, consider how we
would integrate this proposal into lifetime inference. The current
rules treat all regions equally, but this proposal seems to imply that
regions have “roles”. For example, the 'r region captures the
“liveness” constraints that I described in the original NLL
proposal. Meanwhile the 'w region captures “activity”.
(Since we would always convert a &'a mut T type into RefMut<'a, 'a,
T>, all regions in struct parameters would adopt the more
conservative “liveness” role to start. This is good because we
wouldn’t want to start allowing “holes” in the lifetimes that unsafe
code is relying on to prevent access from the outside. It would
however be possible for type inference to use a RefMut<'r, 'w, T>
type as the value for a type parameter; I don’t yet see a way for that
to cause any surprises, but perhaps it can if you consider
specialization and other non-parametric features.)
Another example of where this “complexity step” surfaces came from
Ralf Jung. As you may know, Ralf is working on a
formalization of Rust as part of the RustBelt project (if you’re
interested, there is video available of a
great introduction to this work which Ralf gave at the Rust
Paris meetup). In any case, their model is a kind of generalization of
Rust, in that it can accept a lot of programs that standard Rust
cannot (it is intended to be used for assigning types to unsafe code
as well as safe code). The two-phase borrow proposal that I describe
here should be able to fit into that system in a fairly
straightforward way. But if we adopted discontinuous regions, that
would require making Ralf’s system more expressive. This is not
necessarily an argument against doing it, but it does show that it
makes the Rust system qualitatively more complex to reason about.
If all this talk of “steps in complexity” seems abstract, I think that
the most immediate way it will surface is when we try to
teach. Supporting discontinuous borrows just makes it that much
harder to craft small examples that show how borrowing works. It will
make the system feel more mysterious, since the underlying rules are
indeed more complex and thus harder to “intuit” on your own.
Two-phase lifetimes without discontinuous borrows
For a while I was planning to describe a variant on arielb1’s proposal
where the write lifetimes were required to be continuous – in effect,
they would be required to be a suffix of the overall read lifetime;
this would make the proposal roughly equivalent to the current one.
Given that the set of programs that are accepted are the same, this
becomes more a question of presentation than anything.
I ultimately settled on the current presentation because it seems
simpler to me. In particular, lifetime inference today is based solely
on liveness, which is a “forward-looking property”. In other
words, something is live if it may be used later. In contrast, the
borrow check today is interested in tracking, at a particular point,
the “backwards-looking property” of whether something has been
borrowed. So adding another “backwards-looking property” – whether
that borrow has been activated – fits borrowck quite naturally.3
Possible future extensions
There are two primary ways I see that we might extend this proposal in
the future. The first would be to allow “discontinuous borrows”, as I
described in the previous section under the heading “Two-phase
lifetimes”.
The other would be to apply the concept of reservations to all
borrows, and to loosen the restrictions we impose on a “reserved”
path. In this proposal, I chose to treat reserved and shared paths in
the same way. This implies that some forms of nesting do not work; for
example, as we saw in the examples, one cannot write
vec.get({vec.push(2); 0}). These conditions are stronger than is
strictly needed to prevent memory safety violations. We could consider
reserved borrows to be something akin to the old const borrows we
used to support: these would permit reads and writes of the
original path, but not moves. There are some tricky cases to be
careful of (for example, if you reserve *b where b: Box, you
cannot permit people to mutate b, because that would cause the
existing value to be dropped and hence invalidate your existing
reference to *b), but it seems like there is nothing fundamentally
stopping us. I did not propose this because (a) I would prefer not to
introduce a third class of borrow restrictions and (b) most examples
which would benefit from this change seem quite artificial and not
entirely desirable (though there are exceptions). Basically, it seems
ok for vec.get({vec.push(2); 0}) to be an error. =)
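To make the distinction concrete, here is a minimal Rust sketch of the nesting that two-phase borrows are meant to accept, alongside the form that remains an error (the vector contents are purely illustrative):

```rust
fn main() {
    let mut vec: Vec<usize> = vec![10, 20];

    // Accepted under this proposal: the implicit `&mut vec` for `push` is
    // merely "reserved" while the argument `vec.len()` is evaluated, and a
    // reservation imposes only shared-borrow restrictions, which a read obeys.
    vec.push(vec.len());
    assert_eq!(vec, [10, 20, 2]);

    // Still an error: writing through `vec` inside the argument expression
    // would violate the shared-borrow restrictions of the reserved `&mut vec`.
    // let elem = vec.get({ vec.push(2); 0 });
}
```

On a compiler that implements two-phase borrows, `vec.push(vec.len())` compiles as shown, while uncommenting the `vec.get` line reproduces the error discussed above.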
Conclusion
I have presented here a simple proposal that tries to address the
“nested method call” problem as part of the NLL work, without
modifying the desugaring into MIR at all (or changing MIR’s dynamic
semantics). It works by augmenting the borrow checker so that mutable
borrows begin as “reserved” and then, on first use, convert to active
status. While the borrows are reserved, they impose the same
restrictions as a shared borrow.
In terms of the “overall plans” for NLL, I consider this to be the
second out of a series of three posts that lay out a complete proposal4:
the core NLL system, covered in the previous post;
nested method calls, this post;
incorporating dropck, still to come.
Comments? Let’s use this internals thread for comments.
Footnotes
arielb1 called it Ref2Φ<'immut, 'mutbl, T>, but I’m going to take the liberty of renaming it. ↩
arielb1 also proposed to unify &T into this type, but that introduces complications because &T are Copy but &mut are not, so I’m leaving that out too. ↩
In more traditional compiler terminology,
“forwards-looking properties” are ones computed using
a reverse data-flow analysis, and “backwards-looking
properties” are those that would be computed by a
forwards data-flow analysis. ↩
Presuming I’m not overlooking something. =) ↩
|
Posted
about 7 years
ago
by
dlawrence
The following changes have been pushed to bugzilla.mozilla.org:
[1341457] Add Text::MultiMarkdown as a dependency and regenerate carton bundle
[1280363] [a11y] Make the Actions menu button accessible for keyboard and screen readers
[1330884] Centralize #bugzilla-body for bug modal page
[1342542] Add special group partner-confidential, [email protected] and [email protected] to generate_bmo_data.pl
[1343429] Dropdown menus are slightly off screen in some cases and should open justified to the right instead of left
[1343430] The “Format Bug” and “New/Clone Bug” buttons cause page to reload (need type="button")
discuss these changes on mozilla.tools.bmo.
|
Posted
about 7 years
ago
by
ahal
Imagine this scenario. You've pushed a large series of commits to your favourite review tool
(because you are a believer in the glory of microcommits). The reviewer however has found several
problems, and worse, they are spread across all of the commits in your series. How do you fix all
the issues with minimal fuss while preserving the commit order?
If you were using the builtin histedit extension, you might make temporary "fixup" commits for
each commit that had issues. Then after running hg histedit you'd roll them up into their
respective parent. Or if you were using the evolve extension (which I definitely recommend),
you might do something like this:
```bash
$ hg update 1
# fix issues in commit 1
$ hg amend
$ hg evolve
# fix issues in commit 2
$ hg amend
$ hg evolve
# etc.
```
Both methods are serviceable, but involve some jumping around through hoops to accomplish. Enter a
new extension from Facebook called absorb. The absorb extension will take each change in your
working directory, figure out which commits in your series modified that line, and automatically
amend the change to that commit. If there is any ambiguity (i.e. multiple commits modified the same
line), then absorb will simply ignore that change and leave it in your working directory to be
resolved manually. So instead of the rather convoluted processes above, you can do this:
```bash
# fix all issues across all commits
$ hg absorb
```
It's magic!
Installing Absorb
There's one big problem. The docs in the hg-experimental repo (where absorb lives) are
practically non-existent, and installation is a bit of a pain. So here are the steps I took to get
it working on Fedora. They won't hand hold you for other platforms, but they should at least point
you in the right direction.
First, clone the hg-experimental repo:
```bash
$ hg clone https://bitbucket.org/facebook/hg-experimental
```
Absorb depends on a compiled python module called linelog which also lives in hg-experimental.
In order to compile linelog, you'll need some dependencies:
```bash
$ sudo pip install cython
$ sudo dnf install python-devel
```
Edit: Previously I had lz4-devel and openssl-devel listed as dependencies, but as junw notes, that's only needed if you are compiling the whole hg-experimental repo (by omitting the --component flag below). Though it looks like lz4 might still be needed on OSX.
Make sure the cython dependency gets installed to the same python your mercurial install uses.
That may mean dropping the sudo from the pip command if you have mercurial running in user space.
Next, compile the hg-experimental repo by running:
```bash
$ cd path/to/hg-experimental
$ sudo python setup.py install --component absorb
```
Again, be sure to run the install with the same python mercurial is installed with. Finally, add the
following to your ~/.hgrc:
```ini
[extensions]
absorb = path/to/hg-experimental/hgext3rd/absorb.py
```
The extension should now be installed! In the future, you can update the extension and python
modules with:
```bash
$ cd path/to/hg-experimental
$ hg pull --rebase
$ make clean
$ sudo python setup.py install --component absorb
```
Let me know if there were other steps needed to get this working on your platform.
|
Posted
about 7 years
ago
by
Jorge Villalobos
If you haven’t yet, please read our roadmap to Firefox 57. Firefox 53 is an important milestone, when we will stop accepting new legacy add-ons on AMO, will turn Multiprocess Firefox on by default, and will be restricting binary access from add-ons outside of the WebExtensions API.
Firefox 53 will be released on April 18th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 53 for Developers, so you should also give it a look.
General
Remove support for multi-package xpis. This is still used by a few add-ons that mix complete themes and extensions in a single package, but it’s going away along with other legacy add-on technologies.
Use a separate content process for file:// URLs. This slightly changes the behavior of _openURIInNewTab.
Remove support for -moz-calc().
dom-storage2-changed notifications are not useful for private windows. This created a separate dom-private-storage2-changed notification for private windows.
Remove unused bindings from urlbarbindings.xml.
Move content vs. content-primary distinction out of the type browser attribute. This means type="content-primary" now works exactly as type="content", and will be dropped eventually.
MimeTypeArray should have unenumerable named properties.
PluginArray and Plugin should have unenumerable own properties.
Password Manager
The following 3 changes are related, and the main impact is that add-ons can no longer call findSlotByName("") to figure out if the master password is set. You can find an example on how to change this here.
Disallow empty token names as the argument to nsIPK11TokenDB.findTokenByName.
Add nsIPK11Token.hasPassword to replace unnecessary uses of nsIPKCS11Slot.status.
Remove token choosing functionality from changepassword.(js|xul).
XPCOM and Modules
Don’t expose the NSS certificate nickname API in PSM interfaces. This changes a number of methods in nsIX509Cert: addCert, addCertFromBase64, findCertByNickname, findEmailEncryptionCert, findEmailSigningCert.
AddonManager APIs should try to support both callbacks and promises. This makes the callback for getInstallForURL to be called asynchronously.
Remove getURIForKeyword API. PlacesUtils.keywords.fetch should be used instead.
Support ServoStyleSheets in nsIStyleSheetService::PreloadSheet. This changes the return type of preloadSheet, which shouldn’t affect you unless you do anything other than pass it to addSheet.
WebExtensions
Encrypt record deletes. The storage.sync API hasn’t shipped yet, but it’s probably already in use by some pre-release users. This change causes old synced data to be lost.
Let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 53, I’d like to know.
The automatic compatibility validation and upgrade for add-ons on AMO will happen in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 52.
The post Add-on Compatibility for Firefox 53 appeared first on Mozilla Add-ons Blog.
|