Posted almost 6 years ago by botondballo
Summary / TL;DR

Project | What's in it? | Status
C++17 | See list | Published!
C++20 | See below | On track
Library Fundamentals TS v2 | source code information capture and various utilities | Published! Parts of it merged into C++17
Concepts TS | Constrained templates | Merged into C++20 with some modifications
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Approved for publication!
Transactional Memory TS | Transaction support | Published! Not headed towards C++20
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it merged into C++20, more on the way
Executors | Abstraction for where/how code runs in a concurrent context | Final design being hashed out. Ship vehicle not decided yet.
Concurrency TS v2 | See below | Under development. Depends on Executors.
Networking TS | Sockets library based on Boost.ASIO | Published!
Ranges TS | Range-based algorithms and views | Published! Headed towards C++20
Coroutines TS | Resumable functions, based on Microsoft's await design | Published! C++20 merge uncertain
Modules v1 | A component system to supersede the textual header file inclusion model | Published as a TS
Modules v2 | Improvements to Modules v1, including a better transition path | Under active development
Numerics TS | Various numerical facilities | Under active development
Graphics TS | 2D drawing API | No consensus to move forward
Reflection TS | Static code reflection mechanisms | Sent out for PDTS ballot
Contracts | Preconditions, postconditions, and assertions | Merged into C++20

A few links in this blog post may not resolve until the committee's post-meeting mailing is published (expected within a few days of June 25, 2018). If you encounter such a link, please check back in a few days.

Introduction

A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Rapperswil, Switzerland. This was the second committee meeting in 2018; you can find my reports on preceding meetings here (March 2018, Jacksonville) and here (November 2017, Albuquerque), and earlier ones linked from those. These reports, particularly the Jacksonville one, provide useful context for this post.

At this meeting, the committee was focused full-steam on C++20, including advancing several significant features — such as Ranges, Modules, Coroutines, and Executors — for possible inclusion in C++20, with a secondary focus on in-flight Technical Specifications such as the Parallelism TS v2 and the Reflection TS.

C++20

C++20 continues to be under active development. A number of new changes have been voted into its Working Draft at this meeting, which I list here. For a list of changes voted in at previous meetings, see my Jacksonville report.

Language:

- Support for contract-based programming in C++20. This adds language-level preconditions, postconditions, and assertions to C++. It's a feature long in the making, and one of the most significant features added to C++20 to date.
- Class types in non-type template parameters. This is a long-desired change that significantly increases the expressiveness of the language for metaprogramming and reflection purposes.
- Allowing virtual function calls in constant expressions.
- Prohibit aggregates with user-declared constructors.
- Efficient sized deletion for variable-sized classes.
- Consistency improvements for <=> and other comparison operators.
- Conditionally explicit constructors, a.k.a. explicit(bool).
- Deprecate implicit capture of this via [=]. This deprecates the arguably misleading semantics of a default by-value capture implicitly capturing the this pointer (which amounts to capturing the enclosing object by reference). If such capture is desired, it can be expressed explicitly as [=, this]. (It's worth noting that C++20 also provides a syntax for capturing the enclosing object by value: [*this].) A short illustration of these capture forms follows after this list.
- Integrating feature-test macros into the C++ working draft.
- A tweak to the rules about when certain errors related to a class being abstract are reported. This is also a Defect Report against previous versions of the standard. (Recall, this means that implementers are encouraged to adopt the new semantics even in previous standards-conformance modes, like -std=c++11.)
- A tweak to the treatment of padding bits during atomic compare-and-exchange operations.
- Tweaks to the __VA_OPT__ preprocessor feature.
- Updating the reference to the Unicode standard.
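To make the capture forms mentioned above concrete, here is a minimal sketch; the Widget type and its make_callback function are invented purely for illustration:

```cpp
#include <functional>

struct Widget {
    int value = 0;

    std::function<int()> make_callback() {
        // Deprecated in C++20: [=] looks like a by-value capture, but it also
        // captures `this`, i.e. the enclosing Widget by reference.
        auto implicit_this = [=] { return value; };

        // The intent can instead be spelled out explicitly:
        auto by_reference = [=, this] { return value; }; // C++20: explicit `this` capture
        auto by_value     = [*this] { return value; };   // C++17: capture a copy of *this

        (void)implicit_this;
        (void)by_reference;
        return by_value; // remains valid even if this Widget is destroyed first
    }
};
```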
Library:

- The most notable addition at this meeting was standard library Concepts. This rounds out Concepts, the language feature, by adding foundational Concepts — originally developed as part of the Ranges TS — to the standard library, and paving the way for higher-level concept-constrained libraries that make use of these foundational Concepts.
- atomic_ref
- Bit-casting object representations
- Standard library specification in a Concepts and Contracts world
- Checking for the existence of an element in associative containers
- Add shift() to <algorithm>
- Implicit conversion traits and utility functions
- Integral power-of-2 operations
- The identity metafunction
- Improving the return value of erase()-like algorithms
- constexpr comparison operators for std::array
- constexpr for swap and related functions
- fpos requirements
- Eradicating unnecessarily explicit default constructors
- Removing some facilities that were deprecated in C++17 or earlier

Technical Specifications

In addition to the C++ International Standard (IS), the committee publishes Technical Specifications (TS), which can be thought of as experimental "feature branches" where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization. At this meeting, the committee voted to publish the second version of the Parallelism TS, and to send out the Reflection TS for its PDTS ("Proposed Draft TS") ballot. Several other TSes remain under development.

Parallelism TS v2

The Parallelism TS v2 was sent out for its PDTS ballot at the last meeting. As described in previous reports, this is a process where a draft specification is circulated to national standards bodies, who have an opportunity to provide feedback on it. The committee can then make revisions based on the feedback, prior to final publication. The results of the PDTS ballot had arrived just in time for the beginning of this meeting, and the relevant subgroups (primarily the Concurrency Study Group) worked diligently during the meeting to go through the comments and address them. This led to the adoption of several changes into the TS working draft:

- Finding the right set of traits for simd
- concat() and split() on simd<> objects
- Other tweaks to SIMD
- Various other comment resolutions

The working draft, as modified by these changes, was then approved for publication!

Reflection TS

The Reflection TS, based on the reflexpr static reflection proposal, picked up one new feature, static reflection of functions, and was subsequently sent out for its PDTS ballot!
I'm quite excited to see efficient progress on this (in my opinion) very important feature. Meanwhile, the committee has also been planning ahead for the next generation of reflection and metaprogramming facilities for C++, which will be based on value-based constexpr programming rather than template metaprogramming, allowing users to reap expressiveness and compile-time performance gains. In the list of proposals reviewed by the Evolution Working Group (EWG) below, you'll see quite a few of them are extensions related to constexpr; that's largely motivated by this direction.

Concurrency TS v2

The Concurrency TS v2 (no working draft yet), whose notable contents include revamped versions of async() and future::then(), among other things, continues to be blocked on Executors. Efforts at this meeting focused on moving Executors forward.

Library Fundamentals TS v3

The Library Fundamentals TS v3 is now "open for business" (it has an initial working draft based on the portions of v2 that have not been merged into the IS yet), but no new proposals have been merged into it yet. I expect that to start happening in the coming meetings, as proposals targeting it progress through the Library groups.

Future Technical Specifications

There are (or were, in the case of the Graphics TS) some planned future Technical Specifications that don't have an official project or working draft at this point:

Graphics

At the last meeting, the Graphics TS, set to contain 2D graphics primitives with an interface inspired by cairo, ran into some controversy. A number of people started to become convinced that, since this was something that professional graphics programmers / game developers were unlikely to use, the large amount of time that a detailed wording review would require was not a good use of committee time. As a result of these concerns, an evening session was held at this meeting to decide the future of the proposal. A paper arguing we should stay the course was presented, as was an alternative proposal for a much lighter-weight "diet" graphics library. After extensive discussion, however, neither the current proposal nor the alternative had consensus to move forward. As a result – while nothing is ever set in stone and the committee can always change its mind – the Graphics TS is abandoned for the time being. (That said, I've heard rumours that the folks working on the proposal and its reference implementation plan to continue working on it all the same, just not with standardization as the end goal. Rather, they might continue iterating on the library with the goal of distributing it as a third-party library/package of some sort (possibly tying into the committee's exploration of improving C++'s package management ecosystem).)

Executors

SG 1 (the Concurrency Study Group) achieved design consensus on a unified executors proposal (see the proposal and accompanying design paper) at the last meeting. At this meeting, another executors proposal was brought forward, and SG 1 has been trying to reconcile it with / absorb it into the unified proposal. As executors are blocking a number of dependent items, including the Concurrency TS v2 and merging the Networking TS, SG 1 hopes to progress them forward as soon as possible. Some members remain hopeful that it can be merged into C++20 directly, but going with the backup plan of publishing it as a TS is also a possibility (which is why I'm listing it here).
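For readers new to the topic, an executor is a lightweight handle that determines where and how a piece of work runs, so that the same calling code can be parameterized over that policy. The sketch below is purely illustrative; the type and function names are my own simplification, not the API of the unified proposal:

```cpp
#include <thread>
#include <utility>

// Illustrative only: two toy "executors". Each one is a cheap handle whose
// execute() decides where and how the submitted callable runs.
struct inline_executor {
    template <class F>
    void execute(F&& f) const { std::forward<F>(f)(); }                     // run right here, right now
};

struct new_thread_executor {
    template <class F>
    void execute(F&& f) const { std::thread(std::forward<F>(f)).detach(); } // run on a fresh thread
};

// Code that needs to run work can be written once, against "some executor",
// without caring which concrete policy it is handed.
template <class Executor, class F>
void run_on(Executor ex, F f) {
    ex.execute(std::move(f));
}

int main() {
    run_on(inline_executor{}, [] { /* work */ });
    run_on(new_thread_executor{}, [] { /* work */ });
}
```

The actual proposal covers much more ground (properties of execution, bulk execution, and so on); this only conveys the core idea of decoupling work from where it runs.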
Merging Technical Specifications into C++20

Turning now to Technical Specifications that have already been published, but not yet merged into the IS, the C++ community is eager to see some of these merge into C++20, thereby officially standardizing the features they contain.

Ranges TS

The Ranges TS, which modernizes and Conceptifies significant parts of the standard library (the parts related to algorithms and iterators), has been making really good progress towards merging into C++20. The first part of the TS, containing foundational Concepts that a large spectrum of future library proposals may want to make use of, has just been merged into the C++20 working draft at this meeting. The second part, the range-based algorithms and utilities themselves, is well on its way: the Library Evolution Working Group has finished ironing out how the range-based facilities will integrate with the existing facilities in the standard library, and forwarded the revised merge proposal for wording review.

Coroutines TS

The Coroutines TS was proposed for merger into C++20 at the last meeting, but ran into pushback from adopters who tried it out and had several concerns with it (which were subsequently responded to, with additional follow-up regarding optimization possibilities). Said adopters were invited to bring forward a proposal for an alternative / modified design that addressed their concerns, no later than at this meeting, and so they did; their proposal is called Core Coroutines.

Core Coroutines was reviewed by the Evolution Working Group (I summarize the technical discussion below), which encouraged further iteration on this design, but also felt that such iteration should not hold up the proposal to merge the Coroutines TS into C++20. (What's the point in iterating on one design if another is being merged into the IS draft, you ask? I believe the thinking was that further exploration of the Core Coroutines design could inspire some modifications to the Coroutines TS that could be merged at a later meeting, still before C++20's publication.)

As a result, the merge of the Coroutines TS came to a plenary vote at the end of the week. However, it did not garner consensus; a significant minority of the committee at large felt that the Core Coroutines design deserved more exploration before enshrining the TS design into the standard. (At least, I assume that was the rationale of those voting against. Regrettably, due to procedural changes, there is very little discussion before plenary votes these days to shed light on why people have the positions they do.)

The window for merging a TS into C++20 remains open for approximately one more meeting. I expect the proponents of the Coroutines TS will try the merge again at the next meeting, while the authors of Core Coroutines will refine their design further. Hopefully, the additional time and refinement will allow us to make a better-informed final decision.

Networking TS

The Networking TS is in a situation where the technical content of the TS itself is in fairly good shape and ripe for merging into the IS, but its dependency on Executors makes a merger in the C++20 timeframe uncertain. Ideas have been floated around of coming up with a subset of Executors that would be sufficient for the Networking TS to be based on, and that could get agreement in time for C++20. Multiple proposals on this front are expected at the next meeting.

Modules

Modules is one of the most-anticipated new features in C++.
While the Modules TS was published fairly recently, and thus merging it into C++20 is a rather ambitious timeline (especially since there are design changes relative to the TS that we know we want to make), there is a fairly widespread desire to get it into C++20 nonetheless. I described in my last report that there was a potential path forward to accomplishing this, which involved merging a subset of a revised Modules design into C++20, with the rest of the revised design to follow (likely in the form of a Modules TS v2, and a subsequent merge into C++23). The challenge with this plan is that we haven't fully worked out the revised design yet, never mind agreed on a subset of it that's safe for merging into C++20. (By safe I mean forwards-compatible with the complete design, since we don't want breaking changes to a feature we put into the IS.)

There was extensive discussion of Modules in the Evolution Working Group, which I summarize below. The procedural outcome was that there was no consensus to move forward with the "subset" plan, but we are moving forward with the revised design at full speed, and some remain hopeful that the entire revised design (or perhaps a larger subset) can still be merged into C++20.

What's happening with Concepts?

The Concepts TS was merged into the C++20 working draft previously, but excluding certain controversial parts (notably, abbreviated function templates (AFTs)). As AFTs remain quite popular, the committee has been trying to find an alternative design for them that could get consensus for C++20. Several proposals were heard by EWG at the last meeting, and some refined ones at this meeting. I summarize their discussion below, but in brief, while there is general support for two possible approaches, there still isn't final agreement on one direction.

The Role of Technical Specifications

We are now about 6 years into the committee's procedural experiment of using Technical Specifications as a vehicle for gathering feedback based on implementation and use experience prior to standardization of significant features. Opinions differ on how successful this experiment has been so far, with some lauding the TS process as leading to higher-quality, better-baked features, while others feel the process has in some cases just added unnecessary delays.

The committee has recently formed a Direction Group, a small group composed of five senior committee members with extensive experience, which advises the Working Group chairs and the Convenor on matters related to priority and direction. One of the topics the Direction Group has been tasked with giving feedback on is the TS process, and there was an evening session at this meeting to relay and discuss this advice. The Direction Group's main piece of advice was that while the TS process is still appropriate for sufficiently large features, it's not to be embarked on lightly; in each case, a specific set of topics / questions on which the committee would like feedback should be articulated, and success criteria for a TS "graduating" and being merged into the IS should be clearly specified at the outset.

Evolution Working Group

I'll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week. Unless otherwise indicated, proposals discussed here are targeting C++20.
I've categorized them into the usual "accepted", "further work encouraged", and "rejected" categories.

Accepted proposals:

Standard library compatibility promises. EWG looked at this at the last meeting, and asked that it be revised to only list the types of changes the standard library reserves the right to make; a second list, of code patterns that should be avoided if you want a guarantee of future library updates not breaking your code, was to be removed as it follows from the first list. The revised version was approved and will be published as a Standing Document (pending a plenary vote).

A couple of minor tweaks to the contracts proposal: In response to implementer feedback, the always checking level was removed, and the source location reported for precondition violations was made implementation-defined (previously, it had to be a source location in the function's caller). Virtual functions currently require that overrides repeat the base function's pre- and postconditions. We can run into trouble in cases where the base function's pre- or postcondition, interpreted in the context of the derived class, has a different meaning (e.g. because the derived class shadows a base member's name, or due to covariant return types). Such cases were made undefined behaviour, with the understanding that this is a placeholder for a more principled solution to come at a future meeting.

try/catch blocks in constexpr functions. Throwing an exception is still not allowed during constant evaluation, but the try/catch construct itself can be present as long as only the non-throwing codepaths are exercised at compile time.

More constexpr containers. EWG previously approved basic support for using dynamic allocation during constant evaluation, with the intention of allowing containers like std::vector to be used in a constexpr context (which is now happening). This is an extension to that, which allows storage that was dynamically allocated at compile time to survive to runtime, in the form of a static (or automatic) storage duration variable.

Allowing virtual destructors to be "trivial". This lifts an unnecessary restriction that prevented some commonly used types like std::error_code from being used at compile time.

Immediate functions. These are a stronger form of constexpr functions, spelt constexpr!, which not only can run at compile time, but have to. This is motivated by several use cases, one of them being value-based reflection, where you need to be able to write functions that manipulate information that only exists at compile-time (like handles to compiler data structures used to implement reflection primitives).

std::is_constant_evaluated(). This allows you to check whether a constexpr function is being invoked at compile time or at runtime. Again there are numerous use cases for this, but a notable one is related to allowing std::string to be used in a constexpr context. Most implementations of std::string use a "small string optimization" (SSO) where sufficiently small strings are stored inline in the string object rather than in a dynamically allocated block. Unfortunately, SSO cannot be used in a constexpr context because it requires using reinterpret_cast (and in any case, the motivation for SSO is runtime performance), so we need a way to make the SSO conditional on the string being created at runtime.

Signed integers are two's complement. This standardizes existing practice that has been the case for all modern C++ implementations for quite a while.

Nested inline namespaces. In C++17, you can shorten namespace foo { namespace bar { namespace baz { to namespace foo::bar::baz {, but there is no way to shorten namespace foo { inline namespace bar { namespace baz {. This proposal allows writing namespace foo::inline bar::baz. The single-name version, namespace inline foo {, is also valid, and equivalent to inline namespace foo {.
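A short before/after sketch of the accepted namespace syntax; the names lib, v2, and detail are placeholders chosen for illustration:

```cpp
// C++17: an inline level forces you back to the long form.
namespace lib { inline namespace v2 { namespace detail {
    int helper();
}}}

// With the accepted proposal, the same structure can be written in one declaration:
namespace lib::inline v2::detail {
    int helper();  // redeclares the same lib::v2::detail::helper
}

// Because v2 is an inline namespace, the function is reachable both as
// lib::v2::detail::helper and as lib::detail::helper.
```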
There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above:

- Prohibit aggregate types with user-declared constructors. This addresses a confusing and unexpected hole in the language where some types that have deleted constructors can still be constructed (even using the same number / types of arguments as in the deleted constructor) via aggregate initialization. (An alternative, more targeted fix was considered but rejected.)
- Allowing virtual function calls in constant expressions. This lifts another restriction that's unnecessary, since during constant evaluation, the compiler tracks the dynamic types of objects anyways.
- Integrating feature-test macros into the C++ working draft. This promotes feature-test macros (like __cpp_constexpr) from being listed in a Standing Document, to being listed in the standard itself. My understanding is that this is the "stamp of approval" Microsoft has been waiting for to implement these macros in MSVC.

Proposals for which further work is encouraged:

Generalizing alias declarations. The idea here is to generalize C++'s alias declarations (using a = b;) so that you can alias not only types, but also other entities like namespaces or functions. EWG was generally favourable to the idea, but felt that aliases for different kinds of entities should use different syntaxes. (Among other considerations, using the same syntax would mean having to reinstate the recently-removed requirement to use typename in front of a dependent type in an alias declaration.) The author will explore alternative syntaxes for non-type aliases and return with a revised proposal.

Allow initializing aggregates from a parenthesized list of values. This idea was discussed at the last meeting and EWG was in favour, but people got distracted by the quasi-related topic of aggregates with deleted constructors. There was a suggestion that perhaps the two problems could be addressed by the same proposal, but in fact the issue of deleted constructors inspired independent proposals, and this proposal returned more or less unchanged. EWG liked the idea and initially approved it, but during Core Working Group review it came to light that there are a number of subtle differences in behaviour between constructor initialization and aggregate initialization (e.g. evaluation order of arguments, lifetime extension, narrowing conversions) that need to be addressed. The suggested guidance was to have the behaviour with parentheses match the behaviour of constructor calls, by having the compiler (notionally) synthesize a constructor to call when this notation is used. The proposal will return with these details fleshed out.

Extensions to class template argument deduction. This paper proposed seven different extensions to this popular C++17 feature. EWG didn't make individual decisions on them yet. Rather, the general guidance was to motivate the extensions a bit better, choose a subset of the more important ones to pursue for C++20, perhaps gather some implementation experience, and come back with a revised proposal.
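For readers who haven't used it, here is a minimal reminder of the C++17 feature these extensions build on; the deductions shown are standard C++17 behaviour:

```cpp
#include <mutex>
#include <utility>
#include <vector>

int main() {
    std::pair p(42, 3.14);    // deduced as std::pair<int, double>
    std::vector v{1, 2, 3};   // deduced as std::vector<int>

    std::mutex m;
    std::lock_guard lock(m);  // deduced as std::lock_guard<std::mutex>

    (void)p; (void)v; (void)lock;
}
```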
Deducing this. The type of the implicit object parameter (the "this" parameter) of a member function can vary in the same ways as the types of other parameters: lvalue vs. rvalue, const vs. non-const. C++ provides ways to overload member functions to capture this variation (trailing const, ref-qualifiers), but sometimes it would be more convenient to just template over the type of the this parameter. This proposal aims to allow that, with a syntax like this: template <typename Self> R foo(this Self&& self, /* other parameters */);. EWG agreed with the motivation, but expressed a preference for keeping information related to the implicit object parameter at the end of the function declaration (where the trailing const and ref-qualifiers are now), leading to a syntax more like this: template <typename Self> R foo(/* other parameters */) Self&& self (the exact syntax remains to be nailed down, as the end of a function declaration is a syntactically busy area, and parsing issues have to be worked out). EWG also opined that in such a function, you should only be able to access the object via the declared object parameter (self in the above example), and not also using this (as that would lead to confusion in cases where e.g. this has the base type while self has a derived type).

constexpr function parameters. The most ambitious constexpr-related proposal brought forward at this meeting, this aimed to allow function parameters to be marked as constexpr, and accordingly act as constant expressions inside the function body (e.g. it would be valid to use the value of one as a non-type template parameter or array bound). It was quickly pointed out that, while the proposal is implementable, it doesn't fit into the language's current model of constant evaluation; rather, functions with constexpr parameters would have to be implemented as templates, with a different instantiation for every combination of parameter values. Since this amounts to being a syntactic shorthand for non-type template parameters, EWG suggested that the proposal be reformulated in those terms.

Binding returned/initialized objects to the lifetime of parameters. This proposal aims to improve C++'s lifetime safety (and perhaps take one step towards being more like Rust, though that's a long road) by allowing programmers to mark function parameters with an annotation that tells the compiler that the lifetime of the function's return value should be "bound" to the lifetime of the parameter (that is, the return value should not outlive the parameter). There are several options for the associated semantics if the compiler detects that the lifetime of a return value would, in fact, exceed the lifetime of a parameter:

- issue a warning
- issue an error
- extend the lifetime of the returned object

In the first case, the annotation could take the form of an attribute (e.g. [[lifetimebound]]). In the second or third case, it would have to be something else, like a context-sensitive keyword (since attributes aren't supposed to have semantic effects). The proposal authors suggested initially going with the first option in the C++20 timeframe, while leaving the door open for the second or third option later on. EWG agreed that mitigating lifetime hazards is an important area of focus, and something we'd like to deliver on in the C++20 timeframe. There was some concern about the proposed annotation being too noisy / viral.
People asked whether the annotations could be deduced (not if the function is compiled separately, unless we rely on link-time processing), or if we could just lifetime-extend by default (not without causing undue memory pressure and risking resource exhaustion and deadlocks by not releasing expensive resources or locks in time). The authors will investigate the problem space further, including exploring ways to avoid the attribute being viral, and comparing their approach to Rust's, and report back.

Nameless parameters and unutterable specializations. In some corner cases, the current language rules do not give you a way to express a partial or explicit specialization of a constrained template (because a specialization requires repeating the constraint with the specialized parameter values substituted in, which does not always result in valid syntax). This proposal invents some syntax to allow expressing such specializations. EWG felt the proposed syntax was scary, and suggested coming back with better motivating examples before pursuing the idea further.

How to catch an exception_ptr without even trying. This aims to allow getting at the exception inside an exception_ptr without having to throw it (which is expensive). As a side effect, it would also allow handling exception_ptrs in code compiled with -fno-exceptions. EWG felt the idea had merit, even though performance shouldn't be the guiding principle (since the slowness of throw is technically a quality-of-implementation issue, although implementations seem to have agreed to not optimize it).

Allowing class template specializations in associated namespaces. This allows specializing e.g. std::hash for your own type, in your type's namespace, instead of having to close that namespace, open namespace std, and then reopen your namespace. EWG liked the idea, but the issue of which names — names in your namespace, names in std, or both — would be visible without qualification inside the specialization, was contentious.

Rejected proposals:

Define basic_string_view(nullptr). This paper argued that since it's common to represent empty strings as a const char* with value nullptr, the constructor of string_view which takes a const char* argument should allow a nullptr value and interpret it as an empty string. Another paper convincingly argued that conflating "a zero-sized string" with "not-a-string" does more harm than good, and this proposal was accordingly rejected.

Explicit concept expressions. This paper pointed out that if constrained-type-specifiers (the language machinery underlying abbreviated function templates) are added to C++ without some extra per-parameter syntax, certain constructs can become ambiguous (see the paper for an example). The ambiguity involves "concept expressions", that is, the use of a concept (applied to some arguments) as a boolean expression, such as CopyConstructible, outside of a requires-clause. The authors proposed removing the ambiguity by requiring the keyword requires to introduce a concept expression, as in requires CopyConstructible. EWG felt this was too much syntactic clutter, given that concept expressions are expected to be used in places like static_assert and if constexpr, and given that the ambiguity is, at this point, hypothetical (pending what happens to AFTs) and there would be options to resolve it if necessary.

Concepts

EWG had another evening session on Concepts at this meeting, to try to resolve the matter of abbreviated function templates (AFTs).
Recall that the main issue here is that, given an AFT written using the Concepts TS syntax, like void sort(Sortable& s);, it's not clear that this is a template (you need to know that Sortable is a concept, not a type). The four different proposals in play at the last meeting have been whittled down to two:

- An updated version of Herb's in-place syntax proposal, with which the above AFT would be written void sort(Sortable{}& s); or void sort(Sortable{S}& s); (with S in the second form naming the concrete type deduced for this parameter). The proposal also aims to change the constrained-parameter syntax (with which the same function could be written template <Sortable S> void sort(S& s);) to require braces for type parameters, so that you'd instead write template <Sortable{S}> void sort(S& s);. (The motivation for this latter change is to make it so that ConceptName C consistently makes C a value, whether it be a function parameter or a non-type template parameter, while ConceptName{C} consistently makes C a type.)
- Bjarne's minimal solution to the concepts syntax problems, which adds a single leading template keyword to announce that an AFT is a template: template void sort(Sortable& s);. (This is visually ambiguous with one of the explicit specialization syntaxes, but the compiler can disambiguate based on name lookup, and programmers can use the other explicit specialization syntax to avoid visual confusion.) This proposal leaves the constrained-parameter syntax alone.

Both proposals allow a reader to tell at a glance that an AFT is a template and not a regular function. At the same time, each proposal has downsides as well. Bjarne's approach annotates the whole function rather than individual parameters, so in a function with multiple parameters you still don't know at a glance which parameters are concepts (and so e.g. in a case of a Foo&& parameter, you don't know if it's an rvalue reference or a forwarding reference). Herb's proposal messes with the well-loved constrained-parameter syntax.

After an extensive discussion, it turned out that both proposals had enough support to pass, with each retaining a vocal minority of opponents. Neither proposal was progressed at this time, in the hope that some further analysis or convergence can lead to a stronger consensus at the next meeting, but it's quite clear that folks want something to be done in this space for C++20, and so I'm fairly optimistic we'll end up getting one of these solutions (or a compromise / variation).

In addition to the evening session on AFTs, EWG looked at a proposal to alter the way name lookup works inside constrained templates. The original motivation for this was to resolve the AFT impasse by making name lookup inside AFTs work more like name lookup inside non-template functions. However, it became apparent that (1) that alone will not resolve the AFT issue, since name lookup is just one of several differences between template and non-template code; but (2) the suggested modification to name lookup rules may be desirable (not just in AFTs but in all constrained templates) anyways. The main idea behind the new rules is that when performing name lookup for a function call that has a constrained type as an argument, only functions that appear in the concept definition should be found; the motivation is to avoid surprising extra results that might creep in through ADL.
EWG was supportive of making a change along these lines for C++20, but some of the details still need to be worked out; among them, whether constraints should be propagated through auto variables and into nested templates for the purpose of applying this rule.

Coroutines

As mentioned above, EWG reviewed a modified Coroutines design called Core Coroutines, which was inspired by various concerns that some early adopters of the Coroutines TS had with its design. Core Coroutines makes a number of changes to the Coroutines TS design:

- The most significant change, in my opinion, is that it exposes the "coroutine frame" (the piece of memory that stores the compiler's transformed representation of the coroutine function, where e.g. stack variables that persist across a suspension point are stored) as a first-class object, thereby allowing the user to control where this memory is stored (and, importantly, whether or not it is dynamically allocated).
- Syntax changes:
  - To how you define a coroutine. Among other motivations, the changes emphasize that parameters to the coroutine act more like lambda captures than regular function parameters (e.g. for reference parameters, you need to be careful that the referred-to objects persist even after a suspension/resumption).
  - To how you call a coroutine. The new syntax is an operator (the initial proposal being [<-]), to reflect that coroutines can be used for a variety of purposes, not just asynchrony (which is what co_await suggests).
- A more compact API for defining your own coroutine types, with fewer library customization points (basically, instead of specializing numerous library traits that are invoked by compiler-generated code, you overload operator [<-] for your type, with more of the logic going into the definition of that function).

EWG recognized the benefits of these modifications, although there were a variety of opinions as to how compelling they are. At the same time, there were also a few concerns with Core Coroutines:

- While having the coroutine frame exposed as a first-class object means you are guaranteed no dynamic memory allocations unless you place it on the heap yourself, it still has a compiler-generated type (much like a lambda closure), so passing it across a translation unit boundary requires type erasure (and therefore a dynamic allocation). With the Coroutines TS, the type erasure was more under the compiler's control, and it was argued that this allows eliding the allocation in more cases.
- There were concerns about being able to take the sizeof of the coroutine object, as that requires the size being known by the compiler's front-end, while with the Coroutines TS it's sufficient for the size to be computed during the optimization phase.
- While making the customization API smaller, this formulation relies on more new core-language features. In addition to introducing a new overloadable operator, the feature requires tail calls (which could also be useful for the language in general), and lazy function parameters, which have been proposed separately. (The latter is not a hard requirement, but the syntax would be more verbose without them.)

As mentioned, the procedural outcome of the discussion was to encourage further work on Core Coroutines, while not blocking the merger of the Coroutines TS into C++20 on such work.
While in the end there was no consensus to merge the Coroutines TS into C++20 at this meeting, there remains fairly strong demand for having coroutines in some form in C++20, and I am therefore hopeful that some sort of joint proposal that combines elements of Core Coroutines into the Coroutines TS will surface at the next meeting.

Modules

As of the last meeting, there were two alternative Modules designs before the committee: the recently-published Modules TS, and the alternative proposal from the Clang Modules implementers called Another Take On Modules ("Atom"). Since the last meeting, the authors of the two proposals have been collaborating to produce a merged proposal that combines elements from both proposals.

The merged proposal accomplishes Atom's goal of providing a better mechanism for existing codebases to transition to Modules via modularized legacy headers (called legacy header imports in the merged proposal) – basically, existing headers that are not modules, but are treated as-if they were modules by the compiler. It retains the Modules TS mechanism of global module fragments, with some important restrictions, such as only allowing #includes and other preprocessor directives in the global module fragment. Other aspects of Atom that are part of the merged proposal include module partitions (a way of breaking up the interface of a module into multiple files), and some changes to export and template instantiation semantics.

EWG reviewed the merged proposal favourably, with a strong consensus for putting these changes into a second iteration of the Modules TS. Design guidance was provided on a few aspects, including tweaks to export behaviour for namespaces, and making export be "inherited", such that e.g. if the declaration of a structure is exported, then its definition is too by default. (A follow-up proposal is expected for a syntax to explicitly make a structure definition not exported without having to move it into another module partition.) A proposal to make the lexing rules for the names of legacy header units be different from the existing rules for #includes failed to gain consensus.

One notable remaining point of contention about the merged proposal is that module is a hard keyword in it, thereby breaking existing code that uses that word as an identifier. There remains widespread concern about this in multiple user communities, including the graphics community where the name "module" is used in existing published specifications (such as Vulkan). These concerns would be addressed if module were made a context-sensitive keyword instead. There was a proposal to do so at the last meeting, which failed to gain consensus (I suspect because the author focused on various disambiguation edge cases, which scared some EWG members). I expect a fresh proposal will prompt EWG to reconsider this choice at the next meeting.

As mentioned above, there was also a suggestion to take a subset of the merged proposal and put it directly into C++20. The subset included neither legacy header imports nor global module fragments (in any useful form), thereby not providing any meaningful transition mechanism for existing codebases, but it was hoped that it would still be well-received and useful for new codebases. However, there was no consensus to proceed with this subset, because it would have meant having a new set of semantics different from anything that's implemented today, and that was deemed to be risky.
It's important to underscore that not proceeding with the "subset" approach does not necessarily mean the committee has given up on having any form of Modules in C++20 (although the chances of that have probably decreased). There remains some hope that the development of the merged proposal might proceed sufficiently quickly that the entire proposal — or at least a larger subset that includes a transition mechanism like legacy header imports — can make it into C++20.

Finally, EWG briefly heard from the authors of a proposal for modular macros, who basically said they are withdrawing their proposal because they are satisfied with Atom's facility for selectively exporting macros via #export directives, which is being treated as a future extension to the merged proposal.

Papers not discussed

With the continued focus on large proposals that might target C++20 like Modules and Coroutines, EWG has a growing backlog of smaller proposals that haven't been discussed, in some cases stretching back to two meetings ago (see the committee mailings for a list). A notable item on the backlog is a proposal by Herb Sutter to bridge the two worlds of C++ users — those who use exceptions and those who do not — by extending the exception model in a way that (hopefully) makes it palatable to everyone.

Other Working Groups

Library Groups

Having sat in EWG all week, I can't report on technical discussions of library proposals, but I'll mention where some proposals are in the processing queue. I've already listed the library proposals that passed wording review and were voted into the C++20 working draft above.

The following are among the proposals that have passed design review and are undergoing (or awaiting) wording review:

- Notably, the merge of the Ranges TS into C++20. This includes various proposed enhancements:
  - Improved range access customization points
  - Contiguous ranges
  - Deep integration of the Ranges TS
  - Uninitialized memory algorithms
- Making std::vector constexpr. (This is the sort of thing that language enhancements to allow dynamic allocation in a constexpr context were meant to unblock.)
- constexpr in std::pointer_traits
- Tightening constraints on std::function
- Well-behaved interpolation for numbers and pointers
- Utility function to implement uses-allocator construction
- Allocator-aware basic_stringbuf
- decay_unwrap and unwrap_reference
- Vectorization policies (taken from the Parallelism TS v2)
- Utility enhancements for std::span
- function_ref: a non-owning reference to a Callable
- A coroutine task type

The following proposals are still undergoing design review, and are being treated with priority:

- Text formatting
- A stack trace library
- Generic none() factories for Nullable types
- Monadic operations for std::optional

The following proposals are also undergoing design review:

- Executors
- A friendlier tuple get()
- Fractional numeric type
- std::embed, which provides a mechanism to access program-external resources at compile time
- Fixing the partial_order comparison algorithm
- Fixed-point real numbers
- User-defined literals for std::filesystem::path
- split()/join() for string and string_view
- A call for a data persistence ("iostream v2") study group. It looks like there is sufficient interest to proceed with creating a Study Group here.
- Sizes should only span unsigned. This is likely to prompt an evening session at the next meeting to have a more general discussion about the use of signed vs. unsigned types in library interfaces.
- Should span be regular?
- Safe integral comparisons
- Zero-overhead deterministic exceptions: throwing values. Feedback on this in the Library Evolution Working Group has been positive.

As usual, there is a fairly long queue of library proposals that haven't started design review yet. See the committee's website for a full list of proposals. (These lists are incomplete; see the post-meeting mailing when it's published for complete lists.)

Study Groups

SG 1 (Concurrency)

I've already talked about some of the Concurrency Study Group's work above, related to the Parallelism TS v2 and Executors. The group has also reviewed some proposals targeting C++20. These are at various stages of the review pipeline:

- Proposals before the Library Evolution Working Group include latches and barriers, C atomics in C++, and a joining thread.
- Proposals before the Library Working Group include improvements to atomic_flag, efficient concurrent waiting, and fixing atomic initialization.
- Proposals before the Core Working Group include revising the C++ memory model.
- A proposal to weaken release sequences has been put on hold.

SG 7 (Compile-Time Programming)

It was a relatively quiet week for SG 7, with the Reflection TS having undergone and passed wording review, and extensions to constexpr that will unlock the next generation of reflection facilities being handled in EWG. The only major proposal currently on SG 7's plate is metaclasses, and that did not have an update at this meeting. That said, SG 7 did meet briefly to discuss two other papers:

- PFA: A Generic, Extendable and Efficient Solution for Polymorphic Programming. This aims to make value-based polymorphism easier, using an approach similar to type erasure; a parallel was drawn to the Dyno library. SG 7 observed that this could be accomplished with a pure library approach on top of existing reflection facilities and/or metaclasses (and if it can't, that would signal holes in the reflection facilities that we'd want to fill).
- Adding support for type-based metaprogramming to the standard library. This aims to standardize template metaprogramming facilities based on Boost.Mp11, a modernized version of Boost.MPL. SG 7 was reluctant to proceed with this, given that it has previously issued guidance for moving in the direction of constexpr value-based metaprogramming rather than template metaprogramming. At the same time, SG 7 recognized the desire for having metaprogramming facilities in the standard, and urged proponents of the constexpr approach to bring forward a library proposal built on that soon.

SG 12 (Undefined and Unspecified Behaviour)

SG 12 met to discuss several topics this week:

- Reviewed a proposal to allow implicit creation of objects for low-level object manipulation (basically the way malloc() is used), which aims to standardize existing practice that the current standard wording makes undefined behaviour. (A minimal illustration of the kind of code in question follows after this list.)
- Reviewed a proposed policy around preserving undefined behaviour, which argues that in some cases, defining behaviour that was previously undefined can be a breaking change in some sense. SG 12 felt that imposing a requirement to preserve undefined behaviour wouldn't be realistic, but that proposal authors should be encouraged to identify cases where proposals "break" undefined behaviour so that the tradeoffs can be considered.
- Held a joint meeting with WG 23 (Programming Language Vulnerabilities) to collaborate further on a document describing C++ vulnerabilities. This meeting's discussion focused on buffer boundary conditions and type conversions between pointers.
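To make the first item above concrete, here is the classic shape of code that the proposal is concerned with; under a strict reading of the current standard this is undefined behaviour, because no int object is ever explicitly created in the storage returned by malloc, even though every compiler handles it as intended:

```cpp
#include <cstdlib>

int main() {
    // malloc returns raw storage; formally, no object lives there yet.
    int* p = static_cast<int*>(std::malloc(sizeof(int)));
    if (!p) return 1;

    *p = 42;            // assigns through a pointer to no object: technically UB today
    int result = *p;    // likewise reads an object that was never created

    std::free(p);
    return result == 42 ? 0 : 1;
}
```

Roughly, the proposal would make the needed object spring into existence automatically in situations like this, matching longstanding practice.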
SG 15 (Tooling)

The Tooling Study Group (SG 15) held its second meeting during an evening session this week. The meeting was heavily focused on dependency / package management in C++, an area that has been getting an increased amount of attention of late in the C++ community. SG 15 heard a presentation on package consumption vs. development, whose author showcased the Build2 build / package management system and its abilities. Much of the rest of the evening was spent discussing what requirements various segments of the user community have for such a system.

The relationship between SG 15 and the committee is somewhat unusual; actually standardizing a package management system is beyond the committee's purview, so the SG serves more as a place for innovators in this area to come together and hash out what will hopefully become a de facto standard, rather than advancing any proposals to change the standards text itself.

It was observed that the heavy focus on package management has been crowding out other areas of focus for SG 15, such as tooling related to static analysis and refactoring; it was suggested that perhaps those topics should be split out into another Study Group. As someone whose primary interest in tooling lies in these latter areas, I would welcome such a move.

Next Meetings

The next full meeting of the Committee will be in San Diego, California, the week of November 8th, 2018. However, in an effort to work through some of the committee's accumulated backlog, as well as to try to make a push for getting some features into C++20, three smaller, more targeted meetings have been scheduled before then:

- A meeting of the Library Working Group in Batavia, Illinois, the week of August 20th, 2018, to work through its backlog of wording review for library proposals.
- A meeting of the Evolution Working Group in Seattle, Washington, from September 20-21, 2018, to iterate on the merged Modules proposal.
- A meeting of the Concurrency Study Group (with Library Evolution Working Group attendance also encouraged) in Seattle, Washington, from September 22-23, 2018, to iterate on Executors.

(The last two meetings are timed and located so that CppCon attendees don't have to make an extra trip for them.)

Conclusion

I think this was an exciting meeting, and am pretty happy with the progress made. Highlights included:

- The entire Ranges TS being on track to be merged into C++20.
- C++20 gaining standard facilities for contract programming.
- Important progress on Modules, with a merged proposal that was very well-received.
- A pivot towards package management, including as a way to make graphical programming in C++ more accessible.

Stay tuned for future reports from me!

Other Trip Reports

Some other trip reports about this meeting include Bryce Lelbach's, Timur Doumler's, and Guy Davidson's. I encourage you to check them out as well!
Posted almost 6 years ago by botondballo
Summary / TL;DR Project What’s in it? Status C++17 See list Published! C++20 See below On track Library Fundamentals TS v2 source code information capture and various utilities Published! Parts of it merged into C++17 Concepts TS ... [More] Constrained templates Merged into C++20 with some modifications Parallelism TS v2 Task blocks, library vector types and algorithms, and more Approved for publication! Transactional Memory TS Transaction support Published! Not headed towards C++20 Concurrency TS v1 future.then(), latches and barriers, atomic smart pointers Published! Parts of it merged into C++20, more on the way Executors Abstraction for where/how code runs in a concurrent context Final design being hashed out. Ship vehicle not decided yet. Concurrency TS v2 See below Under development. Depends on Executors. Networking TS Sockets library based on Boost.ASIO Published! Ranges TS Range-based algorithms and views Published! Headed towards C++20 Coroutines TS Resumable functions, based on Microsoft’s await design Published! C++20 merge uncertain Modules v1 A component system to supersede the textual header file inclusion model Published as a TS Modules v2 Improvements to Modules v1, including a better transition path Under active development Numerics TS Various numerical facilities Under active development Graphics TS 2D drawing API No consensus to move forward Reflection TS Static code reflection mechanisms Send out for PDTS ballot Contracts Preconditions, postconditions, and assertions Merged into C++20 A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of June 25, 2018). If you encounter such a link, please check back in a few days. Introduction A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Rapperswil, Switzerland. This was the second committee meeting in 2018; you can find my reports on preceding meetings here (March 2018, Jacksonville) and here (November 2017, Albuquerque), and earlier ones linked from those. These reports, particularly the Jacksonville one, provide useful context for this post. At this meeting, the committee was focused full-steam on C++20, including advancing several significant features — such as Ranges, Modules, Coroutines, and Executors — for possible inclusion in C++20, with a secondary focus on in-flight Technical Specifications such as the Parallelism TS v2, and the Reflection TS. C++20 C++20 continues to be under active development. A number of new changes have been voted into its Working Draft at this meeting, which I list here. For a list of changes voted in at previous meetings, see my Jacksonville report. Language: Support for contract-based programming in C++20. This adds language-level preconditions, postconditions, and assertions to C++. It’s a feature long in the making, and one of the most significant features added to C++20 to date. Class types in non-type template parameters. This is a long-desired change that significantly increases the expressiveness of the language for metaprogramming and reflection purposes. Allowing virtual function calls in constant expressions. Prohibit aggregates with user-declared constructors. Efficient sized deletion for variable-sized classes. Consistency improvements for <=> and other comparison operators. Conditionally explicit constructors, a.k.a. explicit(bool). Deprecate implicit capture of this via [=]. 
This deprecates the arguably misleading semantics of a default by-value capture implicitly capturing the this pointer (which amounts to capturing the enclosing object by reference). If such capture is desired, it can be expressed explicitly as [=, this]. (It’s worth noting that C++20 also provides a syntax for capturing the enclosing object by value: [*this].) Integrating feature-test macros into the C++ working draft. A tweak to the rules about when certain errors related to a class being abstract are reported. This is also a Defect Report against previous versions of the standard. (Recall, this means that implementers are encouraged to adopt the new semantics even in previous standards-conformance modes, like -std=c++11.) A tweak to the treatment of padding bits during atomic compare-and-exchange operations. Tweaks to the __VA_OPT__ preprocessor feature. Updating the reference to the Unicode standard. Library: The most notable addition at this meeting was standard library Concepts. This rounds out Concepts, the language feature, by adding foundational Concepts — originally developed as part of the Ranges TS — to the standard library, and paving the way for higher-level concept-constrained libraries that make use of these foundational Concepts. atomic_ref Bit-casting object representations Standard library specification in a Concepts and Contracts world Checking for the existence of an element in associative containers Add shift() to Implicit conversion traits and utility functions Integral power-of-2 operations The identity metafunction Improving the return value of erase()-like algorithms constexpr comparison operators for std::array constexpr for swap and related functions fpos requirements Eradicating unnecessarily explicit default constructors Removing some facilities that were deprecated in C++17 or earlier Technical Specifications In addition to the C++ International Standard (IS), the committee publishes Technical Specifications (TS) which can be thought of experimental “feature branches”, where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization. At this meeting, the committee voted to publish the second version of the Parallelism TS, and to send out the Reflection TS for its PDTS (“Proposed Draft TS”) ballot. Several other TSes remain under development. Parallelism TS v2 The Parallelism TS v2 was sent out for its PDTS ballot at the last meeting. As described in previous reports, this is a process where a draft specification is circulated to national standards bodies, who have an opportunity to provide feedback on it. The committee can then make revisions based on the feedback, prior to final publication. The results of the PDTS ballot had arrived just in time for the beginning of this meeting, and the relevant subgroups (primarily the Concurrency Study Group) worked diligently during the meeting to go through the comments and address them. This led to the adoption of several changes into the TS working draft: Finding the right set of traits for simd concat() and split() on simd<> objects Other tweaks to SIMD Various other comment resolutions The working draft, as modified by these changes, was then approved for publication! Reflection TS The Reflection TS, based on the reflexpr static reflection proposal, picked up one new feature, static reflection of functions, and was subsequently sent out for its PDTS ballot! 
I’m quite excited to see efficient progress on this (in my opinion) very important feature. Meanwhile, the committee has also been planning ahead for the next generation of reflection and metaprogramming facilities for C++, which will be based on value-based constexpr programming rather than template metaprogramming, allowing users to reap expressiveness and compile-time performance gains. In the list of proposals reviewed by the Evolution Working Group (EWG) below, you’ll see quite a few of them are extensions related to constexpr; that’s largely motivated by this direction. Concurrency TS v2 The Concurrency TS v2 (no working draft yet), whose notable contents include revamped versions of async() and future::then(), among other things, continues to be blocked on Executors. Efforts at this meeting focused on moving Executors forward. Library Fundamentals TS v3 The Library Fundamentals TS v3 is now “open for business” (has an initial working draft based on the portions of v2 that have not been merged into the IS yet), but no new proposals have been merged to it yet. I expect that to start happening in the coming meetings, as proposals targeting it progress through the Library groups. Future Technical Specifications There are (or were, in the case of the Graphics TS) some planned future Technical Specifications that don’t have an official project or working draft at this point: Graphics At the last meeting, the Graphics TS, set to contain 2D graphics primitives with an interface inspired by cairo, ran into some controversy. A number of people started to become convinced that, since this was something that professional graphics programmers / game developers were unlikely to use, the large amount of time that a detailed wording review would require was not a good use of committee time. As a result of these concerns, an evening session was held at this meeting to decide the future of the proposal. A paper arguing we should stay the course was presented, as was an alternative proposal for a much lighter-weight “diet” graphics library. After extensive discussion, however, neither the current proposal nor the alternative had consensus to move forward. As a result – while nothing is ever set in stone and the committee can always change its mind – the Graphics TS is abandoned for the time being. (That said, I’ve heard rumours that the folks working on the proposal and its reference implementation plan to continue working on it all the same, just not with standardization as the end goal. Rather, they might continue iterating on the library with the goal of distributing it as a third-party library/package of some sort (possibly tying into the committee’s exploration of improving C++’s package management ecosystem).) Executors SG 1 (the Concurrency Study Group) achieved design consensus on a unified executors proposal (see the proposal and accompanying design paper) at the last meeting. At this meeting, another executors proposal was brought forward, and SG 1 has been trying to reconcile it with / absorb it into the unified proposal. As executors are blocking a number of dependent items, including the Concurrency TS v2 and merging the Networking TS, SG 1 hopes to progress them forward as soon as possible. Some members remain hopeful that it can be merged into C++20 directly, but going with the backup plan of publishing it as a TS is also a possibility (which is why I’m listing it here).
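As an aside, here is a rough illustration of the value-based constexpr style mentioned in the Reflection TS discussion above, contrasted with the template-metaprogramming formulation of the same computation. This is a generic sketch of the programming style, not code from any specific proposal; it compiles as plain C++14/17.

// Value-based style: ordinary code, evaluated at compile time.
constexpr int count_bits(unsigned v) {
    int n = 0;
    while (v != 0) { n += static_cast<int>(v & 1u); v >>= 1; }
    return n;
}
static_assert(count_bits(0b1011u) == 3, "evaluated during compilation");

// Equivalent template-metaprogramming style: recursive instantiations.
template <unsigned V>
struct count_bits_tmp {
    static constexpr int value = static_cast<int>(V & 1u) + count_bits_tmp<(V >> 1)>::value;
};
template <>
struct count_bits_tmp<0u> {
    static constexpr int value = 0;
};
static_assert(count_bits_tmp<0b1011u>::value == 3, "same result, more machinery");

The value-based form is easier to read and considerably cheaper for the compiler to evaluate at scale, which is the main motivation for building the next-generation reflection and metaprogramming facilities on top of constexpr evaluation.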
Merging Technical Specifications into C++20 Turning now to Technical Specifications that have already been published, but not yet merged into the IS, the C++ community is eager to see some of these merge into C++20, thereby officially standardizing the features they contain. Ranges TS The Ranges TS, which modernizes and Conceptifies significant parts of the standard library (the parts related to algorithms and iterators), has been making really good progress towards merging into C++20. The first part of the TS, containing foundational Concepts that a large spectrum of future library proposals may want to make use of, has just been merged into the C++20 working draft at this meeting. The second part, the range-based algorithms and utilities themselves, is well on its way: the Library Evolution Working Group has finished ironing out how the range-based facilities will integrate with the existing facilities in the standard library, and forwarded the revised merge proposal for wording review. Coroutines TS The Coroutines TS was proposed for merger into C++20 at the last meeting, but ran into pushback from adopters who tried it out and had several concerns with it (which were subsequently responded to, with additional follow-up regarding optimization possibilities). Said adopters were invited to bring forward a proposal for an alternative / modified design that addressed their concerns, no later than at this meeting, and so they did; their proposal is called Core Coroutines. Core Coroutines was reviewed by the Evolution Working Group (I summarize the technical discussion below), which encouraged further iteration on this design, but also felt that such iteration should not hold up the proposal to merge the Coroutines TS into C++20. (What’s the point in iterating on one design if another is being merged into the IS draft, you ask? I believe the thinking was that further exploration of the Core Coroutines design could inspire some modifications to the Coroutines TS that could be merged at a later meeting, still before C++20’s publication.) As a result, the merge of the Coroutines TS came to a plenary vote at the end of the week. However, it did not garner consensus; a significant minority of the committee at large felt that the Core Coroutines design deserved more exploration before enshrining the TS design into the standard. (At least, I assume that was the rationale of those voting against. Regrettably, due to procedural changes, there is very little discussion before plenary votes these days to shed light on why people have the positions they do.) The window for merging a TS into C++20 remains open for approximately one more meeting. I expect the proponents of the Coroutines TS will try the merge again at the next meeting, while the authors of Core Coroutines will refine their design further. Hopefully, the additional time and refinement will allow us to make a better-informed final decision. Networking TS The Networking TS is in a situation where the technical content of the TS itself is in a fairly good shape and ripe for merging into the IS, but its dependency on Executors makes a merger in the C++20 timeframe uncertain. Ideas have been floated around of coming up with a subset of Executors that would be sufficient for the Networking TS to be based on, and that could get agreement in time for C++20. Multiple proposals on this front are expected at the next meeting. Modules Modules is one of the most-anticipated new features in C++. 
While the Modules TS was published fairly recently, and thus merging it into C++20 is a rather ambitious timeline (especially since there are design changes relative to the TS that we know we want to make), there is a fairly widespread desire to get it into C++20 nonetheless. I described in my last report that there was a potential path forward to accomplishing this, which involved merging a subset of a revised Modules design into C++20, with the rest of the revised design to follow (likely in the form of a Modules TS v2, and a subsequent merge into C++23). The challenge with this plan is that we haven’t fully worked out the revised design yet, never mind agreed on a subset of it that’s safe for merging into C++20. (By safe I mean forwards-compatible with the complete design, since we don’t want breaking changes to a feature we put into the IS.) There was extensive discussion of Modules in the Evolution Working Group, which I summarize below. The procedural outcome was that there was no consensus to move forward with the “subset” plan, but we are moving forward with the revised design at full speed, and some remain hopeful that the entire revised design (or perhaps a larger subset) can still be merged into C++20. What’s happening with Concepts? The Concepts TS was merged into the C++20 working draft previously, but excluding certain controversial parts (notably, abbreviated function templates (AFTs)). As AFTs remain quite popular, the committee has been trying to find an alternative design for them that could get consensus for C++20. Several proposals were heard by EWG at the last meeting, and some refined ones at this meeting. I summarize their discussion below, but in brief, while there is general support for two possible approaches, there still isn’t final agreement on one direction. The Role of Technical Specifications We are now about 6 years into the committee’s procedural experiment of using Technical Specifications as a vehicle for gathering feedback based on implementation and use experience prior to standardization of significant features. Opinions differ on how successful this experiment has been so far, with some lauding the TS process as leading to higher-quality, better-baked features, while others feel the process has in some cases just added unnecessary delays. The committee has recently formed a Direction Group, a small group composed of five senior committee members with extensive experience, which advises the Working Group chairs and the Convenor on matters related to priority and direction. One of the topics the Direction Group has been tasked with giving feedback on is the TS process, and there was an evening session at this meeting to relay and discuss this advice. The Direction Group’s main piece of advice was that while the TS process is still appropriate for sufficiently large features, it’s not to be embarked on lightly; in each case, a specific set of topics / questions on which the committee would like feedback should be articulated, and success criteria for a TS “graduating” and being merged into the IS should be clearly specified at the outset. Evolution Working Group I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week. Unless otherwise indicated, proposals discussed here are targeting C++20.
I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories: Accepted proposals: Standard library compatibility promises. EWG looked at this at the last meeting, and asked that it be revised to only list the types of changes the standard library reserves the right to make; a second list, of code patterns that should be avoided if you want a guarantee of future library updates not breaking your code, was to be removed as it follows from the first list. The revised version was approved and will be published as a Standing Document (pending a plenary vote). A couple of minor tweaks to the contracts proposal: In response to implementer feedback, the always checking level was removed, and the source location reported for precondition violations was made implementation-defined (previously, it had to be a source location in the function’s caller). Virtual functions currently require that overrides repeat the base function’s pre- and postconditions. We can run into trouble in cases where the base function’s pre- or postcondition, interpreted in the context of the derived class, has a different meaning (e.g. because the derived class shadows a base member’s name, or due to covariant return types). Such cases were made undefined behaviour, with the understanding that this is a placeholder for a more principled solution to be brought forward at a future meeting. try/catch blocks in constexpr functions. Throwing an exception is still not allowed during constant evaluation, but the try/catch construct itself can be present as long as only the non-throwing codepaths are exercised at compile time. More constexpr containers. EWG previously approved basic support for using dynamic allocation during constant evaluation, with the intention of allowing containers like std::vector to be used in a constexpr context (which is now happening). This is an extension to that, which allows storage that was dynamically allocated at compile time to survive to runtime, in the form of a static (or automatic) storage duration variable. Allowing virtual destructors to be “trivial”. This lifts an unnecessary restriction that prevented some commonly used types like std::error_code from being used at compile time. Immediate functions. These are a stronger form of constexpr functions, spelt constexpr!, which not only can run at compile time, but have to. This is motivated by several use cases, one of them being value-based reflection, where you need to be able to write functions that manipulate information that only exists at compile-time (like handles to compiler data structures used to implement reflection primitives). std::is_constant_evaluated(). This allows you to check whether a constexpr function is being invoked at compile time or at runtime. Again there are numerous use cases for this, but a notable one is related to allowing std::string to be used in a constexpr context. Most implementations of std::string use a “small string optimization” (SSO) where sufficiently small strings are stored inline in the string object rather than in a dynamically allocated block. Unfortunately, SSO cannot be used in a constexpr context because it requires using reinterpret_cast (and in any case, the motivation for SSO is runtime performance), so we need a way to make the SSO conditional on the string being created at runtime. (A short sketch of how this facility might be used appears a little further below.) Signed integers are two’s complement. This standardizes existing practice that has been the case for all modern C++ implementations for quite a while. Nested inline namespaces.
In C++17, you can shorten namespace foo { namespace bar { namespace baz { to namespace foo::bar::baz {, but there is no way to shorten namespace foo { inline namespace bar { namespace baz {. This proposal allows writing namespace foo::inline bar::baz. The single-name version, namespace inline foo { is also valid, and equivalent to inline namespace foo {. There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above: Prohibit aggregate types with user-declared constructors. This addresses a confusing and unexpected hole in the language where some types that have deleted constructors can still be constructed (even using the same number / types of arguments as in the deleted constructor) via aggregate initialization. (An alternative, more targeted fix was considered but rejected.) Allowing virtual function calls in constant expressions. This lifts another restriction that’s unnecessary, since during constant evaluation, the compiler tracks the dynamic types of objects anyways. Integrating feature-test macros into the C++ working draft. This promotes feature-test macros (like __cpp_constexpr) from being listed in a Standing Document, to being listed in the standard itself. My understanding is that this is the “stamp of approval” Microsoft has been waiting for to implement these macros in MSVC. Proposals for which further work is encouraged: Generalizing alias declarations. The idea here is to generalize C++’s alias declarations (using a = b;) so that you can alias not only types, but also other entities like namespaces or functions. EWG was generally favourable to the idea, but felt that aliases for different kinds of entities should use different syntaxes. (Among other considerations, using the same syntax would mean having to reinstate the recently-removed requirement to use typename in front of a dependent type in an alias declaration.) The author will explore alternative syntaxes for non-type aliases and return with a revised proposal. Allow initializing aggregates from a parenthesized list of values. This idea was discussed at the last meeting and EWG was in favour, but people got distracted by the quasi-related topic of aggregates with deleted constructors. There was a suggestion that perhaps the two problems could be addressed by the same proposal, but in fact the issue of deleted constructors inspired independent proposals, and this proposal returned more or less unchanged. EWG liked the idea and initially approved it, but during Core Working Group review it came to light that there are a number of subtle differences in behaviour between constructor initialization and aggregate initialization (e.g. evaluation order of arguments, lifetime extension, narrowing conversions) that need to be addressed. The suggested guidance was to have the behaviour with parentheses match the behaviour of constructor calls, by having the compiler (notionally) synthesize a constructor to call when this notation is used. The proposal will return with these details fleshed out. Extensions to class template argument deduction. This paper proposed seven different extensions to this popular C++17 feature. EWG didn’t make individual decisions on them yet. Rather, the general guidance was to motivate the extensions a bit better, choose a subset of the more important ones to pursue for C++20, perhaps gather some implementation experience, and come back with a revised proposal. 
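To make the std::is_constant_evaluated() item above more concrete, here is a minimal sketch of the kind of dual-path function that motivates it. It is written against the interface as proposed (the name and header could still change before C++20), and the function itself is an illustrative stand-in, not part of any proposal.

#include <cstddef>
#include <cstring>
#include <type_traits>  // proposed home of std::is_constant_evaluated()

constexpr std::size_t portable_strlen(const char* s) {
    if (std::is_constant_evaluated()) {
        // Constant evaluation: use a plain loop the compiler can execute.
        std::size_t n = 0;
        while (s[n] != '\0') ++n;
        return n;
    } else {
        // Runtime: defer to the library implementation, which may be vectorized.
        return std::strlen(s);
    }
}

static_assert(portable_strlen("wg21") == 4, "usable in constant expressions");

A constexpr std::string with a small-string optimization would use the same pattern: take the reinterpret_cast-based SSO path only in the runtime branch.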
Deducing this. The type of the implicit object parameter (the “this” parameter) of a member function can vary in the same ways as the types of other parameters: lvalue vs. rvalue, const vs. non-const. C++ provides ways to overload member functions to capture this variation (trailing const, ref-qualifiers), but sometimes it would be more convenient to just template over the type of the this parameter. This proposal aims to allow that, with a syntax like this: template <typename Self> R foo(this Self&& self, /* other parameters */); EWG agreed with the motivation, but expressed a preference for keeping information related to the implicit object parameter at the end of the function declaration (where the trailing const and ref-qualifiers are now), leading to a syntax more like this: template <typename Self> R foo(/* other parameters */) Self&& self (the exact syntax remains to be nailed down as the end of a function declaration is a syntactically busy area, and parsing issues have to be worked out). EWG also opined that in such a function, you should only be able to access the object via the declared object parameter (self in the above example), and not also using this (as that would lead to confusion in cases where e.g. this has the base type while self has a derived type). constexpr function parameters. The most ambitious constexpr-related proposal brought forward at this meeting, this aimed to allow function parameters to be marked as constexpr, and accordingly act as constant expressions inside the function body (e.g. it would be valid to use the value of one as a non-type template parameter or array bound). It was quickly pointed out that, while the proposal is implementable, it doesn’t fit into the language’s current model of constant evaluation; rather, functions with constexpr parameters would have to be implemented as templates, with a different instantiation for every combination of parameter values. Since this amounts to being a syntactic shorthand for non-type template parameters, EWG suggested that the proposal be reformulated in those terms. Binding returned/initialized objects to the lifetime of parameters. This proposal aims to improve C++’s lifetime safety (and perhaps take one step towards being more like Rust, though that’s a long road) by allowing programmers to mark function parameters with an annotation that tells the compiler that the lifetime of the function’s return value should be “bound” to the lifetime of the parameter (that is, the return value should not outlive the parameter). There are several options for the associated semantics if the compiler detects that the lifetime of a return value would, in fact, exceed the lifetime of a parameter: issue a warning, issue an error, or extend the lifetime of the returned object. In the first case, the annotation could take the form of an attribute (e.g. [[lifetimebound]]). In the second or third case, it would have to be something else, like a context-sensitive keyword (since attributes aren’t supposed to have semantic effects). The proposal authors suggested initially going with the first option in the C++20 timeframe, while leaving the door open for the second or third option later on. EWG agreed that mitigating lifetime hazards is an important area of focus, and something we’d like to deliver on in the C++20 timeframe. There was some concern about the proposed annotation being too noisy / viral.
People asked whether the annotations could be deduced (not if the function is compiled separately, unless we rely on link-time processing), or if we could just lifetime-extend by default (not without causing undue memory pressure and risking resource exhaustion and deadlocks by not releasing expensive resources or locks in time). The authors will investigate the problem space further, including exploring ways to avoid the attribute being viral, and comparing their approach to Rust’s, and report back. Nameless parameters and unutterable specializations. In some corner cases, the current language rules do not give you a way to express a partial or explicit specialization of a constrained template (because a specialization requires repeating the constraint with the specialized parameter values substituted in, which does not always result in valid syntax). This proposal invents some syntax to allow expressing such specializations. EWG felt the proposed syntax was scary, and suggested coming back with better motivating examples before pursuing the idea further. How to catch an exception_ptr without even trying. This aims to allow getting at the exception inside an exception_ptr without having to throw it (which is expensive). As a side effect, it would also allow handling exception_ptrs in code compiled with -fno-exceptions. EWG felt the idea had merit, even though performance shouldn’t be the guiding principle (since the slowness of throw is technically a quality-of-implementation issue, although implementations seem to have agreed to not optimize it). Allowing class template specializations in associated namespaces. This allows specializing e.g. std::hash for your own type, in your type’s namespace, instead of having to close that namespace, open namespace std, and then reopen your namespace. EWG liked the idea, but the issue of which names — names in your namespace, names in std, or both — would be visible without qualification inside the specialization, was contentious. Rejected proposals: Define basic_string_view(nullptr). This paper argued that since it’s common to represent empty strings as a const char* with value nullptr, the constructor of string_view which takes a const char* argument should allow a nullptr value and interpret it as an empty string. Another paper convincingly argued that conflating “a zero-sized string” with “not-a-string” does more harm than good, and this proposal was accordingly rejected. Explicit concept expressions. This paper pointed out that if constrained-type-specifiers (the language machinery underlying abbreviated function templates) are added to C++ without some extra per-parameter syntax, certain constructs can become ambiguous (see the paper for an example). The ambiguity involves “concept expressions”, that is, the use of a concept (applied to some arguments) as a boolean expression, such as CopyConstructible, outside of a requires-clause. The authors proposed removing the ambiguity by requiring the keyword requires to introduce a concept expression, as in requires CopyConstructible. EWG felt this was too much syntactic clutter, given that concept expressions are expected to be used in places like static_assert and if constexpr, and given that the ambiguity is, at this point, hypothetical (pending what happens to AFTs) and there would be options to resolve it if necessary. Concepts EWG had another evening session on Concepts at this meeting, to try to resolve the matter of abbreviated function templates (AFTs).
Recall that the main issue here is that, given an AFT written using the Concepts TS syntax, like void sort(Sortable& s);, it’s not clear that this is a template (you need to know that Sortable is a concept, not a type). The four different proposals in play at the last meeting have been whittled down to two: An updated version of Herb’s in-place syntax proposal, with which the above AFT would be written void sort(Sortable{}& s); or void sort(Sortable{S}& s); (with S in the second form naming the concrete type deduced for this parameter). The proposal also aims to change the constrained-parameter syntax (with which the same function could be written template <Sortable S> void sort(S& s);) to require braces for type parameters, so that you’d instead write template <Sortable{S}> void sort(S& s);. (The motivation for this latter change is to make it so that ConceptName C consistently makes C a value, whether it be a function parameter or a non-type template parameter, while ConceptName{C} consistently makes C a type.) Bjarne’s minimal solution to the concepts syntax problems, which adds a single leading template keyword to announce that an AFT is a template: template void sort(Sortable& s);. (This is visually ambiguous with one of the explicit specialization syntaxes, but the compiler can disambiguate based on name lookup, and programmers can use the other explicit specialization syntax to avoid visual confusion.) This proposal leaves the constrained-parameter syntax alone. Both proposals allow a reader to tell at a glance that an AFT is a template and not a regular function. At the same time, each proposal has downsides as well. Bjarne’s approach annotates the whole function rather than individual parameters, so in a function with multiple parameters you still don’t know at a glance which parameters are concepts (and so e.g. in the case of a Foo&& parameter, you don’t know if it’s an rvalue reference or a forwarding reference). Herb’s proposal messes with the well-loved constrained-parameter syntax. After an extensive discussion, it turned out that both proposals had enough support to pass, with each retaining a vocal minority of opponents. Neither proposal was progressed at this time, in the hope that some further analysis or convergence can lead to a stronger consensus at the next meeting, but it’s quite clear that folks want something to be done in this space for C++20, and so I’m fairly optimistic we’ll end up getting one of these solutions (or a compromise / variation). In addition to the evening session on AFTs, EWG looked at a proposal to alter the way name lookup works inside constrained templates. The original motivation for this was to resolve the AFT impasse by making name lookup inside AFTs work more like name lookup inside non-template functions. However, it became apparent that (1) that alone will not resolve the AFT issue, since name lookup is just one of several differences between template and non-template code; but (2) the suggested modification to name lookup rules may be desirable (not just in AFTs but in all constrained templates) anyways. The main idea behind the new rules is that when performing name lookup for a function call that has a constrained type as an argument, only functions that appear in the concept definition should be found; the motivation is to avoid surprising extra results that might creep in through ADL.
EWG was supportive of making a change along these lines for C++20, but some of the details still need to be worked out; among them, whether constraints should be propagated through auto variables and into nested templates for the purpose of applying this rule. Coroutines As mentioned above, EWG reviewed a modified Coroutines design called Core Coroutines, which was inspired by various concerns that some early adopters of the Coroutines TS had with its design. Core Coroutines makes a number of changes to the Coroutines TS design: The most significant change, in my opinion, is that it exposes the “coroutine frame” (the piece of memory that stores the compiler’s transformed representation of the coroutine function, where e.g. stack variables that persist across a suspension point are stored) as a first-class object, thereby allowing the user to control where this memory is stored (and, importantly, whether or not it is dynamically allocated). Syntax changes: To how you define a coroutine. Among other motivations, the changes emphasize that parameters to the coroutine act more like lambda captures than regular function parameters (e.g. for reference parameters, you need to be careful that the referred-to objects persist even after a suspension/resumption). To how you call a coroutine. The new syntax is an operator (the initial proposal being [<-]), to reflect that coroutines can be used for a variety of purposes, not just asynchrony (which is what co_await suggests). A more compact API for defining your own coroutine types, with fewer library customization points (basically, instead of specializing numerous library traits that are invoked by compiler-generated code, you overload operator [<-] for your type, with more of the logic going into the definition of that function). EWG recognized the benefits of these modifications, although there were a variety of opinions as to how compelling they are. At the same time, there were also a few concerns with Core Coroutines: While having the coroutine frame exposed as a first-class object means you are guaranteed no dynamic memory allocations unless you place it on the heap yourself, it still has a compiler-generated type (much like a lambda closure), so passing it across a translation unit boundary requires type erasure (and therefore a dynamic allocation). With the Coroutines TS, the type erasure was more under the compiler’s control, and it was argued that this allows eliding the allocation in more cases. There were concerns about being able to take the sizeof of the coroutine object, as that requires the size being known by the compiler’s front-end, while with the Coroutines TS it’s sufficient for the size to be computed during the optimization phase. While making the customization API smaller, this formulation relies on more new core-language features. In addition to introducing a new overloadable operator, the feature requires tail calls (which could also be useful for the language in general), and lazy function parameters, which have been proposed separately. (The latter is not a hard requirement, but the syntax would be more verbose without them.) As mentioned, the procedural outcome of the discussion was to encourage further work on the Core Coroutines proposal, while not blocking the merger of the Coroutines TS into C++20 on such work.
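For readers who have not used either design, here is roughly what a coroutine looks like when written against the published Coroutines TS. This is a minimal, simplified sketch using the std::experimental interface (it omits proper copy handling and error handling); it is not code from the TS merge proposal or from Core Coroutines.

#include <experimental/coroutine>
#include <exception>

// A bare-bones generator: the members of promise_type are the customization
// points that compiler-generated code calls into.
template <typename T>
struct generator {
    struct promise_type {
        T current_value{};
        generator get_return_object() {
            return generator{std::experimental::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::experimental::suspend_always initial_suspend() { return {}; }
        std::experimental::suspend_always final_suspend() { return {}; }
        std::experimental::suspend_always yield_value(T value) {
            current_value = value;
            return {};
        }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };

    explicit generator(std::experimental::coroutine_handle<promise_type> h) : handle(h) {}
    generator(generator&& other) noexcept : handle(other.handle) { other.handle = nullptr; }
    generator(const generator&) = delete;
    ~generator() { if (handle) handle.destroy(); }

    bool move_next() { handle.resume(); return !handle.done(); }
    T value() const { return handle.promise().current_value; }

    std::experimental::coroutine_handle<promise_type> handle;
};

// co_yield makes this function a coroutine. Its "frame" (the storage for locals
// such as i that live across suspension points) is allocated by the implementation;
// exposing that frame as a first-class, user-controlled object is the central
// change Core Coroutines argues for.
generator<int> iota(int n) {
    for (int i = 0; i < n; ++i)
        co_yield i;
}

A caller would then write something like for (auto g = iota(3); g.move_next(); ) { use(g.value()); } to consume the yielded values.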
While in the end there was no consensus to merge the Coroutines TS into C++20 at this meeting, there remains fairly strong demand for having coroutines in some form in C++20, and I am therefore hopeful that some sort of joint proposal that combines elements of Core Coroutines into the Coroutines TS will surface at the next meeting. Modules As of the last meeting, there were two alternative Modules designs before the committee: the recently-published Modules TS, and the alternative proposal from the Clang Modules implementers called Another Take On Modules (“Atom”). Since the last meeting, the authors of the two proposals have been collaborating to produce a merged proposal that combines elements from both. The merged proposal accomplishes Atom’s goal of providing a better mechanism for existing codebases to transition to Modules via modularized legacy headers (called legacy header imports in the merged proposal) – basically, existing headers that are not modules, but are treated as-if they were modules by the compiler. It retains the Modules TS mechanism of global module fragments, with some important restrictions, such as only allowing #includes and other preprocessor directives in the global module fragment. Other aspects of Atom that are part of the merged proposal include module partitions (a way of breaking up the interface of a module into multiple files), and some changes to export and template instantiation semantics. EWG reviewed the merged proposal favourably, with a strong consensus for putting these changes into a second iteration of the Modules TS. Design guidance was provided on a few aspects, including tweaks to export behaviour for namespaces, and making export be “inherited”, such that e.g. if the declaration of a structure is exported, then its definition is too by default. (A follow-up proposal is expected for a syntax to explicitly make a structure definition not exported without having to move it into another module partition.) A proposal to make the lexing rules for the names of legacy header units be different from the existing rules for #includes failed to gain consensus. One notable remaining point of contention about the merged proposal is that module is a hard keyword in it, thereby breaking existing code that uses that word as an identifier. There remains widespread concern about this in multiple user communities, including the graphics community where the name “module” is used in existing published specifications (such as Vulkan). These concerns would be addressed if module were made a context-sensitive keyword instead. There was a proposal to do so at the last meeting, which failed to gain consensus (I suspect because the author focused on various disambiguation edge cases, which scared some EWG members). I expect a fresh proposal will prompt EWG to reconsider this choice at the next meeting. As mentioned above, there was also a suggestion to take a subset of the merged proposal and put it directly into C++20. The subset included neither legacy header imports nor global module fragments (in any useful form), thereby not providing any meaningful transition mechanism for existing codebases, but it was hoped that it would still be well-received and useful for new codebases. However, there was no consensus to proceed with this subset, because it would have meant having a new set of semantics different from anything that’s implemented today, and that was deemed to be risky.
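To give a flavour of what the merged direction looks like in code, here is a rough sketch of a module interface unit that uses a legacy header import. The exact syntax, particularly the spelling of legacy header imports, was still in flux at the time, and the file and header names below are made up for the example.

// widgets.cppm: interface unit of the module 'widgets' (the file name is only a convention)
export module widgets;

import "widget_base.h";     // legacy header import: an existing, unmodified header
                            // consumed as if it were a module

export struct Widget {      // exported: visible to code that does 'import widgets;'
    int id;
};

struct WidgetImpl {         // not exported: an implementation detail of the module
    int cached_state;
};

A consumer would then write import widgets; instead of #including a header, and would see Widget but not WidgetImpl.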
It’s important to underscore that not proceeding with the “subset” approach does not necessarily mean the committee has given up on having any form of Modules in C++20 (although the chances of that have probably decreased). There remains some hope that the development of the merged proposal might proceed sufficiently quickly that the entire proposal — or at least a larger subset that includes a transition mechanism like legacy header imports — can make it into C++20. Finally, EWG briefly heard from the authors of a proposal for modular macros, who basically said they are withdrawing their proposal because they are satisfied with Atom’s facility for selectively exporting macros via #export directives, which is being treated as a future extension to the merged proposal. Papers not discussed With the continued focus on large proposals that might target C++20 like Modules and Coroutines, EWG has a growing backlog of smaller proposals that haven’t been discussed, in some cases stretching back to two meetings ago (see the committee mailings for a list). A notable item on the backlog is a proposal by Herb Sutter to bridge the two worlds of C++ users — those who use exceptions and those who do not — by extending the exception model in a way that (hopefully) makes it palatable to everyone. Other Working Groups Library Groups Having sat in EWG all week, I can’t report on technical discussions of library proposals, but I’ll mention where some proposals are in the processing queue. I’ve already listed the library proposals that passed wording review and were voted into the C++20 working draft above. The following are among the proposals that have passed design review and are undergoing (or awaiting) wording review: Notably, the merge of the Ranges TS into C++20. This includes various proposed enhancements: Improved range access customization points Contiguous ranges Deep integration of the Ranges TS Uninitialized memory algorithms Making std::vector constexpr. (This is the sort of thing that language enhancements to allow dynamic allocation in a constexpr context were meant to unblock.) constexpr in std::pointer_traits Tightening constraints on std::function Well-behaved interpolation for numbers and pointers Utility function to implement uses-allocator construction Allocator-aware basic_stringbuf decay_unwrap and unwrap_reference Vectorization policies (taken from the Parallelism TS v2) Utility enhancements for std::span function_ref: a non-owning reference to a Callable A coroutine task type The following proposals are still undergoing design review, and are being treated with priority: Text formatting A stack trace library Generic none() factories for Nullable types Monadic operations for std::optional The following proposals are also undergoing design review: Executors A friendlier tuple get() Fractional numeric type std::embed, which provides a mechanism to access program-external resources at compile time Fixing the partial_order comparison algorithm Fixed-point real numbers User-defined literals for std::filesystem::path split()/join() for string and string_view A call for a data persistence (“iostream v2”) study group. It looks like there is sufficient interest to proceed with creating a Study Group here. Sizes should only span unsigned. This is likely to prompt an evening session at the next meeting to have a more general discussion about the use of signed vs. unsigned types in library interfaces. Should span be regular?
Safe integral comparisons Zero-overhead deterministic exceptions: throwing values. Feedback on this in the Library Evolution Working Group has been positive. As usual, there is a fairly long queue of library proposals that haven’t started design review yet. See the committee’s website for a full list of proposals. (These lists are incomplete; see the post-meeting mailing when it’s published for complete lists.) Study Groups SG 1 (Concurrency) I’ve already talked about some of the Concurrency Study Group’s work above, related to the Parallelism TS v2 and Executors. The group has also reviewed some proposals targeting C++20. These are at various stages of the review pipeline: Proposals before the Library Evolution Working Group include latches and barriers, C atomics in C++, and a joining thread. Proposals before the Library Working Group include improvements to atomic_flag, efficient concurrent waiting, and fixing atomic initialization. Proposals before the Core Working Group include revising the C++ memory model. A proposal to weaken release sequences has been put on hold. SG 7 (Compile-Time Programming) It was a relatively quiet week for SG 7, with the Reflection TS having undergone and passed wording review, and extensions to constexpr that will unlock the next generation of reflection facilities being handled in EWG. The only major proposal currently on SG 7’s plate is metaclasses, and that did not have an update at this meeting. That said, SG 7 did meet briefly to discuss two other papers: PFA: A Generic, Extendable and Efficient Solution for Polymorphic Programming. This aims to make value-based polymorphism easier, using an approach similar to type erasure; a parallel was drawn to the Dyno library. SG 7 observed that this could be accomplished with a pure library approach on top of existing reflection facilities and/or metaclasses (and if it can’t, that would signal holes in the reflection facilities that we’d want to fill). Adding support for type-based metaprogramming to the standard library. This aims to standardize template metaprogramming facilities based on Boost.Mp11, a modernized version of Boost.MPL. SG 7 was reluctant to proceed with this, given that it has previously issued guidance for moving in the direction of constexpr value-based metaprogramming rather than template metaprogramming. At the same time, SG 7 recognized the desire for having metaprogramming facilities in the standard, and urged proponents of the constexpr approach to bring forward a library proposal built on that soon. SG 12 (Undefined and Unspecified Behaviour) SG 12 met to discuss several topics this week: Reviewed a proposal to allow implicit creation of objects for low-level object manipulation (basically the way malloc() is used), which aims to standardize existing practice that the current standard wording makes undefined behaviour. Reviewed a proposed policy around preserving undefined behaviour, which argues that in some cases, defining behaviour that was previously undefined can be a breaking change in some sense. SG 12 felt that imposing a requirement to preserve undefined behaviour wouldn’t be realistic, but that proposal authors should be encouraged to identify cases where proposals “break” undefined behaviour so that the tradeoffs can be considered. Held a joint meeting with WG 23 (Programming Language Vulnerabilities) to collaborate further on a document describing C++ vulnerabilities. This meeting’s discussion focused on buffer boundary conditions and type conversions between pointers.
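To illustrate the first SG 12 item above: the implicit object creation proposal targets widely written code like the following, which is formally undefined behaviour under the current wording because no Pixel object is ever explicitly created in the malloc'd storage. This is a sketch of the code pattern in question, not wording or an example taken from the proposal.

#include <cstdlib>

struct Pixel { int r, g, b; };

int main() {
    // Common C-style idiom: obtain raw storage and immediately use it as a Pixel.
    Pixel* p = static_cast<Pixel*>(std::malloc(sizeof(Pixel)));
    if (p == nullptr) return 1;
    p->r = 255;  // Strictly, UB today: no Pixel object has begun its lifetime here.
    p->g = 0;    // The proposal would have malloc-like functions implicitly create
    p->b = 0;    // objects of suitable type, making this well-defined.
    std::free(p);
    return 0;
}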
SG 15 (Tooling) The Tooling Study Group (SG 15) held its second meeting during an evening session this week. The meeting was heavily focused on dependency / package management in C++, an area that has been getting an increased amount of attention of late in the C++ community. SG 15 heard a presentation on package consumption vs. development, whose author showcased the Build2 build / package management system and its abilities. Much of the rest of the evening was spent discussing what requirements various segments of the user community have for such a system. The relationship between SG 15 and the committee is somewhat unusual; actually standardizing a package management system is beyond the committee’s purview, so the SG serves more as a place for innovators in this area to come together and hash out what will hopefully become a de facto standard, rather than advancing any proposals to change the standards text itself. It was observed that the heavy focus on package management has been crowding out other areas of focus for SG 15, such as tooling related to static analysis and refactoring; it was suggested that perhaps those topics should be split out into another Study Group. As someone whose primary interest in tooling lies in these latter areas, I would welcome such a move. Next Meetings The next full meeting of the Committee will be in San Diego, California, the week of November 8th, 2018. However, in an effort to work through some of the committee’s accumulated backlog, as well as to try to make a push for getting some features into C++20, three smaller, more targeted meetings have been scheduled before then: A meeting of the Library Working Group in Batavia, Illinois, the week of August 20th, 2018, to work through its backlog of wording review for library proposals. A meeting of the Evolution Working Group in Seattle, Washington, from September 20-21, 2018, to iterate on the merged Modules proposal. A meeting of the Concurrency Study Group (with Library Evolution Working Group attendance also encouraged) in Seattle, Washington, from September 22-23, 2018, to iterate on Executors. (The last two meetings are timed and located so that CppCon attendees don’t have to make an extra trip for them.) Conclusion I think this was an exciting meeting, and am pretty happy with the progress made. Highlights included: The entire Ranges TS being on track to be merged into C++20. C++20 gaining standard facilities for contract programming. Important progress on Modules, with a merged proposal that was very well-received. A pivot towards package management, including as a way to make graphical programming in C++ more accessible. Stay tuned for future reports from me! Other Trip Reports Some other trip reports about this meeting include Bryce Lelbach’s, Timur Doumler’s, and Guy Davidson’s. I encourage you to check them out as well!
In C++17, you can shorten namespace foo { namespace bar { namespace baz { to namespace foo::bar::baz {, but there is no way to shorten namespace foo { inline namespace bar { namespace baz {. This proposal allows writing namespace foo::inline bar::baz. The single-name version, namespace inline foo { is also valid, and equivalent to inline namespace foo {. There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above: Prohibit aggregate types with user-declared constructors. This addresses a confusing and unexpected hole in the language where some types that have deleted constructors can still be constructed (even using the same number / types of arguments as in the deleted constructor) via aggregate initialization. (An alternative, more targeted fix was considered but rejected.) Allowing virtual function calls in constant expressions. This lifts another restriction that’s unnecessary, since during constant evaluation, the compiler tracks the dynamic types of objects anyways. Integrating feature-test macros into the C++ working draft. This promotes feature-test macros (like __cpp_constexpr) from being listed in a Standing Document, to being listed in the standard itself. My understanding is that this is the “stamp of approval” Microsoft has been waiting for to implement these macros in MSVC. Proposals for which further work is encouraged: Generalizing alias declarations. The idea here is to generalize C++’s alias declarations (using a = b;) so that you can alias not only types, but also other entities like namespaces or functions. EWG was generally favourable to the idea, but felt that aliases for different kinds of entities should use different syntaxes. (Among other considerations, using the same syntax would mean having to reinstate the recently-removed requirement to use typename in front of a dependent type in an alias declaration.) The author will explore alternative syntaxes for non-type aliases and return with a revised proposal. Allow initializing aggregates from a parenthesized list of values. This idea was discussed at the last meeting and EWG was in favour, but people got distracted by the quasi-related topic of aggregates with deleted constructors. There was a suggestion that perhaps the two problems could be addressed by the same proposal, but in fact the issue of deleted constructors inspired independent proposals, and this proposal returned more or less unchanged. EWG liked the idea and initially approved it, but during Core Working Group review it came to light that there are a number of subtle differences in behaviour between constructor initialization and aggregate initialization (e.g. evaluation order of arguments, lifetime extension, narrowing conversions) that need to be addressed. The suggested guidance was to have the behaviour with parentheses match the behaviour of constructor calls, by having the compiler (notionally) synthesize a constructor to call when this notation is used. The proposal will return with these details fleshed out. Extensions to class template argument deduction. This paper proposed seven different extensions to this popular C++17 feature. EWG didn’t make individual decisions on them yet. Rather, the general guidance was to motivate the extensions a bit better, choose a subset of the more important ones to pursue for C++20, perhaps gather some implementation experience, and come back with a revised proposal. 
Deducing this. The type of the implicit object parameter (the “this” parameter) of a member function can vary in the same ways as the types of other parameters: lvalue vs. rvalue, const vs. non-const. C++ provides ways to overload member functions to capture this variation (trailing const, ref-qualifiers), but sometimes it would be more convenient to just template over the type of the this parameter. This proposal aims to allow that, with a syntax like this: template R foo(this Self&& self, /* other parameters */); EWG agreed with the motivation, but expressed a preference for keeping information related to the implicit object parameter at the end of the function declaration, (where the trailing const and ref-qualifiers are now), leading to a syntax more like this: template R foo(/* other parameters */) Self&& self (the exact syntax remains to be nailed down as the end of a function declaration is a syntactically busy area, and parsing issues have to be worked out). EWG also opined that in such a function, you should only be able to access the object via the declared object parameter (self in the above example), and not also using this (as that would lead to confusion in cases where e.g. this has the base type while self has a derived type). constexpr function parameters. The most ambitious constexpr-related proposal brought forward at this meeting, this aimed to allow function parameters to be marked as constexpr, and accordingly act as constant expressions inside the function body (e.g. it would be valid to use the value of one as a non-type template parameter or array bound). It was quickly pointed out that, while the proposal is implementable, it doesn’t fit into the language’s current model of constant evaluation; rather, functions with constexpr parameters would have to be implemented as templates, with a different instantiation for every combination of parameter values. Since this amounts to being a syntactic shorthand for non-type template parameters, EWG suggested that the proposal be reformulated in those terms. Binding returned/initialized objects to the lifetime of parameters. This proposal aims to improve C++’s lifetime safety (and perhaps take one step towards being more like Rust, though that’s a long road) by allowing programmers to mark function parameters with an annotation that tells the compiler that the lifetime of the function’s return value should be “bound” to the lifetime of the parameter (that is, the return value should not outlive the parameter). There are several options for the associated semantics if the compiler detects that the lifetime of a return value would, in fact, exceed the lifetime of a parameter: issue a warning issue an error extend the lifetime of the returned object In the first case, the annotation could take the form of an attribute (e.g. [[lifetimebound]]). In the second or third case, it would have to be something else, like a context-sensitive keyword (since attributes aren’t supposed to have semantic effects). The proposal authors suggested initially going with the first option in the C++20 timeframe, while leaving the door open for the second or third option later on. EWG agreed that mitigating lifetime hazards is an important area of focus, and something we’d like to deliver on in the C++20 timeframe. There was some concern about the proposed annotation being too noisy / viral. 
People asked whether the annotations could be deduced (not if the function is compiled separately, unless we rely on link-time processing), or if we could just lifetime-extend by default (not without causing undue memory pressure and risking resource exhaustion and deadlocks by not releasing expensive resources or locks in time). The authors will investigate the problem space further, including exploring ways to avoid the attribute being viral, and comparing their approach to Rust’s, and report back. Nameless parameters and unutterable specializations. In some corner cases, the current language rules do not give you a way to express a partial or explicit specialization of a constrained template (because a specialization requires repeating the constraint with the specialized parameter values substituted in, which does not always result in valid syntax). This proposal invents some syntax to allow expressing such specializations. EWG felt the proposed syntax was scary, and suggested coming back with better motivating examples before pursuing the idea further. How to catch an exception_ptr without even trying. This aims to allow getting at the exception inside an exception_ptr without having to throw it (which is expensive). As a side effect, it would also allow handling exception_ptrs in code compiled with -fno-exceptions. EWG felt the idea had merit, even though performance shouldn’t be the guiding principle (since the slowness of throw is technically a quality-of-implementation issue, although implementations seem to have agreed to not optimize it). Allowing class template specializations in associated namespaces. This allows specializing e.g. std::hash for your own type, in your type’s namespace, instead of having to close that namespace, open namespace std, and then reopen your namespace. EWG liked the idea, but the issue of which names — names in your namespace, names in std, or both — would be visible without qualification inside the specialization, was contentious. Rejected proposals: Define basic_string_view(nullptr). This paper argued that since it’s common to represent empty strings as a const char* with value nullptr, the constructor of string_view which takes a const char* argument should allow a nullptr value and interpret it as an empty string. Another paper convincingly argued that conflating “a zero-sized string” with “not-a-string” does more harm than good, and this proposal was accordingly rejected. Explicit concept expressions. This paper pointed out that if constrained-type-specifiers (the language machinery underlying abbreviated function templates) are added to C++ without some extra per-parameter syntax, certain constructs can become ambiguous (see the paper for an example). The ambiguity involves “concept expressions”, that is, the use of a concept (applied to some arguments) as a boolean expression, such as CopyConstructible, outside of a requires-clause. The authors proposed removing the ambiguity by requiring the keyword requires to introduce a concept expression, as in requires CopyConstructible. EWG felt this was too much syntactic clutter, given that concept expressions are expected to be used in places like static_assert and if constexpr, and given that the ambiguity is, at this point, hypothetical (pending what hapens to AFTs) and there would be options to resolve it if necessary. Concepts EWG had another evening session on Concepts at this meeting, to try to resolve the matter of abbreviated function templates (AFTs). 
Recall that the main issue here is that, given an AFT written using the Concepts TS syntax, like void sort(Sortable& s);, it’s not clear that this is a template (you need to know that Sortable is a concept, not a type). The four different proposals in play at the last meeting have been whittled down to two: An updated version of Herb’s in-place syntax proposal, with which the above AFT would be written void sort(Sortable{}& s); or void sort(Sortable{S}& s); (with S in the second form naming the concrete type deduced for this parameter). The proposal also aims to change the constrained-parameter syntax (with which the same function could be written template void sort(S& s);) to require braces for type parameters, so that you’d instead write template void sort(S& s);. (The motivation for this latter change is to make it so that ConceptName C consistently makes C a value, whether it be a function parameter or a non-type template parameter, while ConceptName{C] consistently makes C a type.) Bjarne’s minimal solution to the concepts syntax problems, which adds a single leading template keyword to announce that an AFT is a template: template void sort(Sortable& s);. (This is visually ambiguous with one of the explicit specialization syntaxes, but the compiler can disambiguate based on name lookup, and programmers can use the other explicit specialization syntax to avoid visual confusion.) This proposal leaves the constrained-parameter syntax alone. Both proposals allow a reader to tell at a glance that an AFT is a template and not a regular function. At the same time, each proposal has downsides as well. Bjarne’s approach annotates the whole function rather than individual parameters, so in a function with multiple parameters you still don’t know at a glance which parameters are concepts (and so e.g. in a case of a Foo&& parameter, you don’t know if it’s an rvalue reference or a forwarding reference). Herb’s proposal messes with the well-loved constrained-parameter syntax. After an extensive discussion, it turned out that both proposals had enough support to pass, with each retaining a vocal minority of opponents. Neither proposal was progressed at this time, in the hope that some further analysis or convergence can lead to a stronger consensus at the next meeting, but it’s quite clear that folks want something to be done in this space for C++20, and so I’m fairly optimistic we’ll end up getting one of these solutions (or a compromise / variation). In addition to the evening session on AFTs, EWG looked at a proposal to alter the way name lookup works inside constrained templates. The original motivation for this was to resolve the AFT impasse by making name lookup inside AFTs work more like name lookup inside non-template functions. However, it became apparent that (1) that alone will not resolve the AFT issue, since name lookup is just one of several differences between template and non-template code; but (2) the suggested modification to name lookup rules may be desirable (not just in AFTs but in all constrained templates) anyways. The main idea behind the new rules is that when performing name lookup for a function call that has a constrained type as an argument, only functions that appear in the concept definition should be found; the motivation is to avoid surprising extra results that might creep in through ADL. 
EWG was supportive of making a change along these lines for C++20, but some of the details still need to be worked out; among them, whether constraints should be propagated through auto variables and into nested templates for the purpose of applying this rule. Coroutines As mentioned above, EWG reviewed a modified Coroutines design called Core Coroutines, that was inspired by various concerns that some early adopters of the Coroutines TS had with its design. Core Coroutines makes a number of changes to the Coroutines TS design: The most significant change, in my opinion, is that it exposes the “coroutine frame” (the piece of memory that stores the compiler’s transformed representation of the coroutine function, where e.g. stack variables that persist across a suspension point are stored) as a first-class object, thereby allowing the user to control where this memory is stored (and, importantly, whether or not it is dynamically allocated). Syntax changes: To how you define a coroutine. Among other motivations, the changes emphasize that parameters to the coroutine act more like lambda captures than regular function parameters (e.g. for reference parameters, you need to be careful that the referred-to objects persist even after a suspension/resumption). To how you call a coroutine. The new syntax is an operator (the initial proposal being [<-]), to reflect that coroutines can be used for a variety of purposes, not just asynchrony (which is what co_await suggests). A more compact API for defining your own coroutine types, with fewer library customiztion points (basically, instead of specializing numerous library traits that are invoked by compiler-generated code, you overload operator [<-] for your type, with more of the logic going into the definition of that function). EWG recognized the benefits of these modifications, although there were a variety of opinions as to how compelling they are. At the same time, there were also a few concerns with Core Coroutines: While having the coroutine frame exposed as a first-class object means you are guaranteed no dynamic memory allocations unless you place it on the heap yourself, it still has a compiler-generated type (much like a lambda closure), so passing it across a translation unit boundary requires type erasure (and therefore a dynamic allocation). With the Coroutines TS, the type erasure was more under the compiler’s control, and it was argued that this allows eliding the allocation in more cases. There were concerns about being able to take the sizeof of the coroutine object, as that requires the size being known by the compiler’s front-end, while with the Coroutines TS it’s sufficient for the size to be computed during the optimization phase. While making the customization API smaller, this formulation relies on more new core-language features. In addition to introducing a new overloadable operator, the feature requires tail calls (which could also be useful for the language in general), and lazy function parameters, which have been proposed separately. (The latter is not a hard requirement, but the syntax would be more verbose without them.) As mentioned, the procedural outcome of the discussion was to encourage further work on the Core Coroutines, while not blocking the merger of the Coroutines TS into C++20 on such work. 
While in the end there was no consensus to merge the Coroutines TS into C++20 at this meeting, there remains fairly strong demand for having coroutines in some form in C++20, and I am therefore hopeful that some sort of joint proposal that combines elements of Core Coroutines into the Coroutines TS will surface at the next meeting. Modules As of the last meeting, there were two alternative Modules designs before the committee: the recently-published Modules TS, and the alternative proposal from the Clang Modules implementers called Another Take On Modules (“Atom”). Since the last meeting, the authors of the two proposals have been collaborating to produce a merged proposal that combines elements from both proposals. The merged proposal accomplishes Atom’s goal of providing a better mechanism for existing codebases to transition to Modules via modularized legacy headers (called legacy header imports in the merged proposal) – basically, existing headers that are not modules, but are treated as-if they were modules by the compiler. It retains the Modules TS mechanism of global module fragments, with some important restrictions, such as only allowing #includes and other preprocessor directives in the global module fragment. Other aspects of Atom that are part of the the merged proposal include module partitions (a way of breaking up the interface of a module into multiple files), and some changes to export and template instantiation semantics. EWG reviewed the merged proposal favourably, with a strong consensus for putting these changes into a second iteration of the Modules TS. Design guidance was provided on a few aspects, including tweaks to export behaviour for namespaces, and making export be “inherited”, such that e.g. if the declaration of a structure is exported, then its definition is too by default. (A follow-up proposal is expected for a syntax to explicitly make a structure definition not exported without having to move it into another module partition.) A proposal to make the lexing rules for the names of legacy header units be different from the existing rules for #includes failed to gain consensus. One notable remaining point of contention about the merged proposal is that module is a hard keyword in it, thereby breaking existing code that uses that word as an identifier. There remains widespread concern about this in multiple user communities, including the graphics community where the name “module” is used in existing published specifications (such as Vulkan). These concerns would be addressed if module were made a context-sensitive keyword instead. There was a proposal to do so at the last meeting, which failed to gain consensus (I suspect because the author focused on various disambiguation edge cases, which scared some EWG members). I expect a fresh proposal will prompt EWG to reconsider this choice at the next meeting. As mentioned above, there was also a suggestion to take a subset of the merged proposal and put it directly into C++20. The subset included neither legacy header imports nor global module fragments (in any useful form), thereby not providing any meaningful transition mechanism for existing codebases, but it was hoped that it would still be well-received and useful for new codebases. However, there was no consensus to proceed with this subset, because it would have meant having a new set of semantics different from anything that’s implemented today, and that was deemed to be risky. 
It’s important to underscore that not proceeding with the “subset” approach does not necessarily mean the committee has given up on having any form of Modules in C++20 (although the chances of that have probably decreased). There remains some hope that the development of the merged proposal might proceed sufficiently quickly that the entire proposal — or at least a larger subset that includes a transition mechanism like legacy header imports — can make it into C++20. Finally, EWG briefly heard from the authors of a proposal for modular macros, who basically said they are withdrawing their proposal because they are satisfied with Atom’s facility for selectively exporting macros via #export directives, which is being treated as a future extension to the merged proposal. Papers not discussed With the continued focus on large proposals that might target C++20 like Modules and Coroutines, EWG has a growing backlog of smaller proposals that haven’t been discussed, in some cases stretching back to two meetings ago (see the the committee mailings for a list). A notable item on the backlog is a proposal by Herb Sutter to bridge the two worlds of C++ users — those who use exceptions and those who not — by extending the exception model in a way that (hopefully) makes it palatable to everyone. Other Working Groups Library Groups Having sat in EWG all week, I can’t report on technical discussions of library proposals, but I’ll mention where some proposals are in the processing queue. I’ve already listed the library proposals that passed wording review and were voted into the C++20 working draft above. The following are among the proposals have passed design review and are undergoing (or awaiting) wording review: Notably, the merge of the Ranges TS into C++20. This includes various proposed enhancements: Improved range access customization points Contiguous ranges Deep integration of the Ranges TS Uninitialized memory algorithms Making std::vector constexpr. (This is the sort of thing that language enhancements to allow dynamic allocation in a constexpr context were meant to unblock.) constexpr comparisons for std::array constexpr in std::pointer_traits Tightening constaints on std::function Well-behaved interpolation for numbers and pointers Utility function to implement uses-allocator construction Allocator-aware basic_stringbuf decay_unwrap and unwrap_reference Vectorization policies (taken from the Parallelism TS v2) Utility enhancements for std::span function_ref: a non-owning reference to a Callable A coroutine task type The following proposals are still undergoing design review, and are being treated with priority: Text formatting A stack trace library Generic none() factories for Nullable types Monadic operations for std::optional The following proposals are also undergoing design review: Executors A friendlier tuple get() Fractional numeric type std::embed, which provides a mechanism to access program-external resources at compile time Fixing the partial_order comparison algorithm Fixed-point real numbers User-defined literals for std::filesystem::path split()/join() for string and string_view A call for a data persistence (“iostream v2”) study group. It looks like there is sufficient interest to proceed with creating a Study Group here. Sizes should only span unsigned. This is likely to prompt an evening session at the next meeting to have a more general discussion about the use of signed vs. unsigned types in library interfaces. Should span be regular? 
Safe integral comparisons Zero-overhead deterministic exceptions: throwing values. Feedback on this in Library Evolution Working Group has been positive. As usual, there is a fairly long queue of library proposals that haven’t started design review yet. See the committee’s website for a full list of proposals. (These lists are incomplete; see the post-meeting mailing when it’s published for complete lists.) Study Groups SG 1 (Concurrency) I’ve already talked about some of the Concurrency Study Group’s work above, related to the Parallelism TS v2, and Executors. The group has also reviewed some proposals targeting C++20. These are at various stages of the review pipeline: Proposals before the Library Evolution Working Group include latches and barriers, C atomics in C++, and a joining thread. Proposals before the Library Working Group include improvements to atomic_flag, efficient concurrent waiting, and fixing atomic initialization. Proposls before the Core Working Group include revising the C++ memory model. A proposal to weaken release sequences has been put on hold. SG 7 (Compile-Time Programming) It was a relatively quiet week for SG 7, with the Reflection TS having undergone and passed wording review, and extensions to constexpr that will unlock the next generation of reflection facilities being handled in EWG. The only major proposal currently on SG 7’s plate is metaclasses, and that did not have an update at this meeting. That said, SG 7 did meet briefly to discuss two other papers: PFA: A Generic, Extendable and Efficient Solution for Polymorphic Programming. This aims to make value-based polymorphism easier, using an approach similar to type erasure; a parallel was drawn to the Dyno library. SG 7 observed that this could be accomplished with a pure library approach on top of existing reflection facilities and/or metaclasses (and if it can’t, that would signal holes in the reflection facilities that we’d want to fill). Adding support for type-based metaprogramming to the standard library. This aims to standardize template metaprogramming facilities based on Boost.Mp11, a modernized version of Boost.MPL. SG 7 was reluctant to proceed with this, given that it has previously issued guidance for moving in the direction of constexpr value-based metaprogramming rather than template metaprogramming. At the same time, SG 7 recognized the desire for having metaprogramming facilities in the standard, and urged proponents on the constexpr approach to bring forward a library proposal built on that soon. SG 12 (Undefined and Unspecified Behaviour) SG 12 met to discuss several topics this week: Reviewed a proposal to allow implicit creation of objects for low-level object manipulation (basically the way malloc() is used), which aims to standardize existing practice that the current standard wording makes undefined behaviour. Reviewed a proposed policy around preserving undefined behaviour, which argues that in some cases, defining behaviour that was previously undefined can be a breaking change in some sense. SG 12 felt that imposing a requirement to preserve undefined behaviour wouldn’t be realistic, but that proposal authors should be encouraged to identify cases where proposals “break” undefined behaviour so that the tradeoffs can be considered. Held a joint meeting with WG 23 (Programming Language Vulnerabilities) to collaborate further on a document describing C++ vulnerabilities. This meeting’s discussion focused on buffer boundary conditions and type conversions between pointers. 
SG 15 (Tooling) The Tooling Study Group (SG 15) held its second meeting during an evening session this week. The meeting was heavily focused on dependency / package mangement in C++, an area that has been getting an increased amount of attention of late in the C++ community. SG 15 heard a presentation on package consumption vs. development, whose author showcased the Build2 build / package management system and its abilities. Much of the rest of the evening was spent discussing what requirements various segments of the user community have for such a system. The relationship between SG 15 and the committee is somewhat unusual; actually standardizing a package management system is beyond the committee’s purview, so the SG serves more as a place for innovators in this area to come together and hash out what will hopefully become a de facto standard, rather than advancing any proposals to change the standards text itself. It was observed that the heavy focus on package management has been crowding out other areas of focus for SG 15, such as tooling related to static analysis and refactoring; it was suggested that perhaps those topics should be split out into another Study Group. As someone whose primary interest in tooling lies in these latter areas, I would welcome such a move. Next Meetings The next full meeting of the Committee will be in San Diego, California, the week of November 8th, 2018. However, in an effort to work through some of the committee’s accumulated backlog, as well as to try to make a push for getting some features into C++20, three smaller, more targeted meetings have been scheduled before then: A meeting of the Library Working Group in Batavia, Illinois, the week of August 20th, 2018, to work through its backlog of wording review for library proposals. A meeting of the Evolution Working Group in Seattle, Washington, from September 20-21, 2018, to iterate on the merged Modules proposal. A meeting of the Concurrecy Study Group (with Library Evolution Working Group attendance also encouraged) in Seattle, Washington, from September 22-23, 2018, to iterate on Executors. (The last two meetings are timed and located so that CppCon attendees don’t have to make an extra trip for them.) Conclusion I think this was an exciting meeting, and am pretty happy with the progress made. Highlights included: The entire Ranges TS being on track to be merged into C++20. C++20 gaining standard facilities for contract programming. Important progress on Modules, with a merged proposal that was very well-received. A pivot towards package management, including as a way to make graphical progamming in C++ more accessible. Stay tuned for future reports from me! Other Trip Reports Some other trip reports about this meeting include Bryce Lelbach’s, Timur Doumler’s, and Guy Davidson’s. I encourage you to check them out as well! [Less]
Posted almost 6 years ago by Gregory Szorc
As of Firefox 60, the build environment for official Firefox Linux builds switched from CentOS to Debian. As part of the transition, we overhauled how the build environment for Firefox is constructed. We now populate the environment from ... [More] deterministic package snapshots and are much more stringent about dependencies and operations being deterministic and reproducible. The end result is that the build environment for Firefox is deterministic enough to enable Firefox itself to be built deterministically. Changing the underlying operating system environment used for builds was a risky change. Differences in the resulting build could result in new bugs or some users not being able to run the official builds. We figured a good way to mitigate that risk was to make the old and new builds as bit-identical as possible. After all, if the environments produce the same bits, then nothing has effectively changed and there should be no new risk for end-users. Employing the diffoscope tool, we identified areas where Firefox builds weren't deterministic in the same environment and where there was variance across build environments. We iterated on differences and changed systems so variance would no longer occur. By the end of the process, we had bit-identical Firefox builds across environments. So, as of Firefox 60, Firefox builds on Linux are deterministic in our official build environment! That being said, the builds we ship to users are using PGO. And an end-to-end build involving PGO is intrinsically not deterministic because it relies on timing data that varies from one run to the next. And we don't yet have continuous automated end-to-end testing that determinism holds. But the underlying infrastructure to support deterministic and reproducible Firefox builds is there and is not going away. I think that's a milestone worth celebrating. This milestone required the effort of many people, often working indirectly toward it. Debian's reproducible builds effort gave us an operating system that provided deterministic and reproducible guarantees. Switching Firefox CI to Taskcluster enabled us to switch to Debian relatively easily. Many were involved with non-determinism fixes in Firefox over the years. But Mike Hommey drove the transition of the build environment to Debian and he deserves recognition for his individual contribution. Thanks to all these efforts - and especially Mike Hommey's - we can now say Firefox builds deterministically! The fx-reproducible-build bug tracks ongoing efforts to further improve the reproducibility story of Firefox. (~300 bugs in its dependency tree have already been resolved!) [Less]
Posted almost 6 years ago by Nicholas D. Matsakis
I consider Rust’s RFC process one of our great accomplishments, but it’s no secret that it has a few flaws. At its best, the RFC offers an opportunity for collaborative design that is really exciting to be a part of. At its worst, it can devolve into ... [More] bickering without any real motion towards consensus. If you’ve not done so already, I strongly recommend reading aturon’s excellent blog posts on this topic. The RFC process has also evolved somewhat organically over time. What began as “just open a pull request on GitHub” has moved into a process with a number of formal and informal stages (described below). I think it’s a good time for us to take a step back and see if we can refine those stages into something that works better for everyone. This blog post describes a proposal that arose over some discussions at the Mozilla All Hands. This proposal represents an alternate take on the RFC process, drawing on some ideas from the TC39 process, but adapting them to Rust’s needs. I’m pretty excited about it. Important: This blog post is meant to advertise a proposal about the RFC process, not a final decision. I’d love to get feedback on this proposal and I expect further iteration on the details. In any case, until the Rust 2018 Edition work is complete, we don’t really have the bandwidth to make a change like this. (And, indeed, most of my personal attention remains on NLL at the moment.) If you’d like to discuss the ideas here, I opened an internals thread. TL;DR The TL;DR of the proposal is as follows: Explicit RFC stages. Each proposal moves through a series of explicit stages. Each RFC gets its own repository. These are automatically created by a bot. This permits us to use GitHub issues and pull requests to split up conversation. It also permits a RFC to have multiple documents (e.g., a FAQ). The repository tracks the proposal from the early days until stabilization. Right now, discussions about a particular proposal are scattered across internals, RFC pull requests, and the Rust issue tracker. Under this new proposal, a single repository would serve as the home for the proposal. In the case of more complex proposals, such as impl Trait, the repository could even serve as the home multiple layered RFCs. Prioritization is now an explicit part of the process. The new process includes an explicit step to move from the “spitballing” stage (roughly “Pre-RFC” today) to the “designing” stage (roughly “RFC” today). This step requires both a team champion, who agrees to work on moving the proposal through implementation and towards stabilization, and general agreement from the team. The aim here is two-fold. First, the teams get a chance to provide early feedback and introduce key constraints (e.g., “this may interact with feature X”). Second, it provides room for a discussion about prioritization: there are often RFCs which are good ideas, but which are not a good idea right now, and the current process doesn’t give us a way to specify that. There is more room for feedback on the final, implemented design. In the new process, once implementation is complete, there is another phase where we (a) write an explainer describing how the feature works and (b) issue a general call for evaluation. We’ve done this before – such as cramertj’s call for feedback on impl Trait, aturon’s call to benchmark incremental compilation, or alexcrichton’s push to stabilize some subset of procedural macros – but each of those was an informal effort, rather than an explicit part of the RFC process. 
The current process Before diving into the new process, I want to give my view of the current process by which an idea becomes a stable feature. This goes beyond just the RFC itself. In fact, there are a number of stages, though some of them are informal or sometimes skipped: Pre-RFC (informal): Discussions take place – often on internals – about the shape of the problem to be solved and possible proposals. RFC: A specific proposal is written and debated. It may be changed during this debate as a result of points that are raised. Steady state: At some point, the discussion reaches a “steady state”. This implies a kind of consensus – not necessarily a consensus about what to do, but a consensus on the pros and cons of the feature and the various alternatives. Note that reaching a steady state does not imply that no new comments are being posted. It just implies that the content of those comments is not new. Move to merge: Once the steady state is reached, the relevant team(s) can move to merge the RFC. This begins with a bunch of checkboxes, where each team member indicates that they agree that the RFC should be merged; in some cases, blocking concerns are raised (and resolved) during this process. FCP: Finally, once the team has assented to the merge, the RFC enters the Final Comment Period (FCP). This means that we wait for 10 days to give time for any final arguments to arise. Implementation: At this point, a tracking issue on the Rust repo is created. This will be the new home for discussion about the feature. We can also start writing code, which lands under a feature gate. Refinement: Sometimes, after implementation the feature, we find that the original design was inconsistent, in which case we might opt to alter the spec. Such alterations are discussed on the tracking issue – for significant changes, we will typically open a dedicated issue and do an FCP process, just like with the original RFC. A similar procedure happens for resolving unresolved questions. Stabilization: The final step is to move to stabilize. This is always an FCP decision, though the precise protocol varies. What I consider Best Practice is to create a dedicated issue for the stabilization: this issue should describe what is being stabilized, with an emphasis on (a) what has changed since the RFC, (b) tests that show the behavior in practice, and (c) what remains to be stabilized. (An example of such an issue is #48453, which proposed to stabilize the ? in main feature.) Proposal for a new process The heart of the new proposal is that each proposal should go through a series of explicit stages, depicted graphically here (you can also view this directly on Gooogle drawings, where the oh-so-important emojis work better): You’ll notice that the stages are divided into two groups. The stages on the left represent phases where significant work is being done: they are given “active” names that end in “ing”, like spitballing, designing, etc. The bullet points below describe the work that is to be done. As will be described shortly, this work is done on a dedicated repository, by the community at large, in conjunction with at least one team champion. The stages on the right represent decision points, where the relevant team(s) must decide whether to advance the RFC to the next stage. The bullet points below represent the questions that the team must answer. 
If the answer is Yes, then the RFC can proceed to the next stage – note that sometimes the RFC can proceed, but unresolved questions are added as well, to be addressed at a later stage. Repository per RFC Today, the “home” for an RFC changes over the course of the process. It may start in an internals thread, then move to the RFC repo, then to a tracking issue, etc. Under the new process, we would instead create a dedicated repository for each RFC. Once created, the RFC would serve as the “main home” for the new proposal from start to finish. The repositories will live in the rust-rfcs organization. There will be a convenient webpage for creating them; it will create a repo that has an appropriate template and which is owned by the appropriate Rust team, with the creator also having full permissions. These repositories would naturally be subject to Rust’s Code of Conduct and other guidelines. Note that you do not have to seek approval from the team to create a RFC repository. Just like opening a PR, creating a repository is something that anyone can do. The expectation is that the team will be tracking new repositories that are created (as well as those seeing a lot of discussion) and that members of the team will get involved when the time is right. The goal here is to create the repository early – even before the RFC text is drafted, and perhaps before there exists a specific proposal. This allows joint authorship of RFCs and iteration in the repository. In addition to create a “single home” for each proposal, having a dedicated RFC allows for a number of new patterns to emerge: One can create a FAQ.md that answers common questions and summarizes points that have already reached consensus. One can create an explainer.md that documents the feature and explains how it works – in fact, creating such docs is mandatory during the “implementing” phase of the process. We can put more than one RFC into a single repository. Often, there are complex features with inter-related (but distinct) aspects, and this allows those different parts to move through the stabilization process at a different pace. The main RFC repository The main RFC repository (named rust-rfcs/rfcs or something like that) would no longer contain content on its own, except possibly the final draft of each RFC text. Instead, it would primarily serve as an index into the other repositories, organized by stage (similar to the TC39 proposals repository). The purpose of this repository is to make it easy to see “what’s coming” when it comes to Rust. I also hope it can serve as a kind of “jumping off point” for people contributing to Rust, whether that be through design input, implementation work, or other things. Team champions and the mechanics of moving an RFC between stages One crucial role in the new process is that of the team champion. The team champion is someone from the Rust team who is working to drive this RFC to completion. Procedurally speaking, the team champion has two main jobs. First, they will give periodic updates to the Rust team at large of the latest developments, which will hopefully identify conflicts or concerns early on. The second job is that team champions decide when to try and move the RFC between stages. The idea is that it is time to move between stages when two conditions are met: The discussion on the repository has reached a “steady state”, meaning that there do not seem to be new arguments or counterarguments emerging. 
This sometimes also implies a general consensus on the design, but not always: it does however imply general agreement on the contours of the design space and the trade-offs involved. There are good answers to the questions listed for that stage. The actual mechanics of moving an RFC between stages are as follows. First, although not strictly required, the team champion should open an issue on the RFC repository proposing that it is time to move between stages. This issue should contain a draft of the report that will be given to the team at large, which should include summary of the key points (pro and con) around the design. Think of like a summary comment today. This issue can go through an FCP period in the same way as today (though without the need for checkmarks) to give people a chance to review the summary. At that point, the team champion will open a PR on the main repository (rust-rfcs/rfcs). This PR itself will not have a lot of content: it will mostly edit the index, moving the PR to a new stage, and – where appropriate – linking to a specific revision of the text in the RFC repository (this revision then serves as “the draft” that was accepted, though of course further edits can and will occur). It should also link to the issue where the champion proposed moving to the next stage, so that the team can review the comments found there. The PRs that move an RFC between stages are primarily intended for the Rust team to discuss – they are not meant to be the source of sigificant discussion, which ought to be taking place on the repository. If one looks at the current RFC process, they might consist of roughly the set of comments that typically occur once FCP is proposed. The teams should ensure that a decision (yay or nay) is reached in a timely fashion. Finding the best way for teams to govern themselves to ensure prompt feedback remains a work in progress. The TC39 process is all based around regular meetings, but we are hoping to achieve something more asynchronous, in part so that we can be more friendly to people from all time zones, and to ease language barriers. But there is still a need to ensure that progress is made. I expect that weekly meetings will continue to play a role here, if only to nag people. Making implicit stages explicit There are two new points in the process that I want to highlight. Both of these represents an attempt to take “implicit” decision points that we used to have and make them more explicit and observable. The Proposal point and the change from Spitballing to Designing The very first stage in the RFC is going from the Spitballing phase to the Designing phase – this is done by presenting a Proposal. One crucial point is that there doesn’t have to be a primary design in order to present a proposal. It is ok to say “here are two or three designs that all seem to have advantages, and further design is needed to find the best approach” (often, that approach will be some form of synthesis of those designs anyway). The main questions to be answered at the proposal have to do with motivation and prioritization. There are a few questions to answer: Is this a problem we want to solve? And, specifically, is this a problem we want to solve now? Do we think we have some realistic ideas for solving it? Are there major things that we ought to dig into? Are there cross-cutting concerns and interactions with other features? It may be that two features which are individually quite good, but which – taken together – blow the language complexity budget. 
We should always try to judge how a new feature might affect the language (or libraries) as a whole. We may want to extend the process in other ways to make identification of such “cross-cutting” or “global” concerns more first class. The expectation is that all major proposals need to be connected to the roadmap. This should help to keep us focused on the work we are supposed to be doing. (I think it is possible for RFCs to advance that are not connected to the roadmap, but they need to be simple extensions that could effectively work at any time.) There is another way that having an explicit Proposal step addresses problems around prioritization. Creating a Proposal requires a Team Champion, which implies that there is enough team bandwidth to see the project through to the end (presuming that people don’t become champions for more than a few projects at a time). If we find that there aren’t enough champions to go around (and there aren’t), then this is a sign we need to grow the teams (something we’ve been trying hard to do). The Proposal point also offers a chance for other team members to point out constraints that may have been overlooked. These constraints don’t necessarily have to derail the proposal, they may just add new points to be addressed during the Designing phase. The Candidate point and the Evaluating phase Another new addition to the process here is the Evaluation phase. The idea here is that, once implementation is complete, we should do two things: Write up an explainer that describes how the feature works in terms suitable for end users. This is a kind of “preliminary” documentation for the feature. It should explain how to enable the feature, what it’s good for, and give some examples of how to use it. For libraries, the explainer may not be needed, as the API docs serve the same purpose. We should in particular cover points where the design has changed significantly since the “Draft” phase. Propose the RFC for Candidate status. If accepted, we will also issue a general call for evaluation. This serves as a kind of “pre-stabilization” notice. It means that people should go take the new feature for a spin, kick the tires, etc. This will hopefully uncover bugs, but also surprising failure modes, ergonomic hazards, or other pitfalls with the design. If any significant problems are found, we can correct them, update the explainer, and repeat until we are satisfied (or until we decide the idea isn’t going to work out). As I noted earlier, we’ve done this before, but always informally: cramertj’s call for feedback on impl Trait; aturon’s call to benchmark incremental compilation; alexcrichton’s push to stabilize some subset of procedural macros. Once the evaluation phase seems to have reached a conclusion, we would move to stabilize the feature. The explainer docs would then become the preliminary documentation and be added to a kind of addendum in the Rust book. The docs would be expected to integrate the docs into the book in smoother form sometime after synchronization. Conclusion As I wrote before, this is only a preliminary proposal, and I fully expect us to make changes to it. Timing wise, I don’t think it makes sense to pursue this change immediately anyway: we’ve too much going on with the edition. But I’m pretty excited about revamping our RFC processes both by making stages explicit and adding explicit repositories. I have hopes that we will find ways to use explicit repositories to drive discussions towards consensus faster. 
It seems that having the ability, for example, to maintain "auxiliary" documents, such as lists of constraints and rationale, can help to ensure that people's concerns are both heard and met. In general, I would also like to start trying to foster a culture of "joint ownership" of in-progress RFCs. Maintaining a good RFC repository is going to be a fair amount of work, which is a great opportunity for people at large to pitch in. This can then serve as a kind of "mentoring on-ramp", getting people more involved in the lang team. Similarly, I think that having a list of RFCs that are in the "implementation" phase might be a way to help engage people who'd like to hack on the compiler.

Comments?

Please leave comments in the internals thread for this post.

Credit where credit is due

This proposal is heavily shaped by the TC39 process. This particular version was largely drafted in a big group discussion with wycats, aturon, ag_dubs, steveklabnik, nrc, jntrnr, erickt, and oli-obk, though earlier proposals also involved a few others.

Updates

(I made various simplifications shortly after publishing, aiming to keep the length of this blog post under control and to remove what seemed to be somewhat duplicated content.)
Posted almost 6 years ago by Air Mozilla
In part 2 of this two-part video, managers can use this Playbook to assess their employees' performance and make recommendations about bonus and merit.
Posted almost 6 years ago by Air Mozilla
In part 1 of this two-part video, managers can use this Playbook to help assess their employees' performance and make bonus and merit recommendations.
Posted almost 6 years ago by Bryce Van Dyk
Mozilla is rolling out Phabricator as part of our tooling. However, at the time of writing I was unable to find a straightforward setup to get the Phabricator tooling playing nice on Windows with MozillaBuild. Right now there are a couple of separate threads around how to interact with Phabricator on Windows:

- Unified front end for code reviews
- Shipping Arcanist and its deps in MozillaBuild

However, I have stuff waiting for me on Phabricator that I'd like to interact with now, so let's get a workaround in place! I started with the Arcanist Windows steps, but have adapted them to a MozillaBuild-specific environment.

PHP

Arcanist requires PHP. Grab a build from here. The docs for Arcanist indicate the type of build doesn't really matter, but I opted for a thread-safe one because that seems like a nice thing to have. I installed PHP outside of my MozillaBuild directory, but you can put it anywhere. For the sake of example, my install is in C:\Patches\Php\php-7.2.6-Win32-VC15-x64. We need to enable the curl extension: in the PHP install dir, copy php.ini-development to php.ini and uncomment (by removing the ;) the extension=curl line. Finally, enable PHP to find its extensions by uncommenting the extension_dir = "ext" line. The Arcanist instructions suggest setting a fully qualified path, but I found a relative path worked fine.

Arcanist

Create somewhere to store Arcanist and libphutil. Note that these need to be located in the same directory for arc to work.

$ mkdir somewhere/
$ cd somewhere/
somewhere/ $ git clone https://github.com/phacility/libphutil.git
somewhere/ $ git clone https://github.com/phacility/arcanist.git

For me this is C:\Patches\phacility\.

Wire everything into MozillaBuild

Since I want arc to be usable in MozillaBuild until this workaround is no longer required, we're going to modify the start-up settings. We can do this by changing ...mozilla-build/msys/etc/profile.d and adding to the PATH already being constructed. In my case I've added the paths mentioned earlier, but with MSYS syntax: /c/Patches/Php/php-7.2.6-Win32-VC15-x64:/c/Patches/phacility/arcanist/bin:. (A sketch of this edit follows at the end of this post.) Now arc should run inside newly started MozillaBuild shells.

Credentials

We still need credentials in order to use arc with mozilla-central. For this to work we need a Phabricator account; see here for that. After that's done, get your credentials by running arc install-certificate, navigating to the page as instructed, and copying your API key back to the command line.

Other problems

There was an issue with the evolve Mercurial extension that would cause "Unknown Mercurial log field 'instability'!". This should now be fixed in Arcanist. See this bug for more info. Finally, I had some issues with arc diff based on my Mercurial config. Updating my extensions and running a ./mach bootstrap seemed to help.

Ready to go

Hopefully everything is ready to go at this point. I found working through the Mozilla docs for how to use arc after setup helpful. If you have any comments, please let me know either via email or on IRC.
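For illustration, here is a minimal sketch (not from the original post) of what the PATH wiring step described above might look like. The script name under profile.d is hypothetical, the variable names are my own, and the install paths are just the examples used in the post; adjust them to wherever PHP and Arcanist actually live on your machine.

# Sketch: a script added under .../mozilla-build/msys/etc/profile.d/
# (hypothetical name, e.g. arc.sh) that prepends PHP and Arcanist to PATH.
PHP_DIR="/c/Patches/Php/php-7.2.6-Win32-VC15-x64"    # example PHP install from the post
ARC_DIR="/c/Patches/phacility/arcanist/bin"          # example Arcanist checkout from the post
export PATH="$PHP_DIR:$ARC_DIR:$PATH"

Opening a fresh MozillaBuild shell and running php --version and arc help is a quick way to check that the wiring took effect.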