Posted about 14 hours ago by John Whitlock
December is when Mozilla meets as a company for our biannual All-Hands, and we reflect on the past year and plan for the future. Here are some of the highlights of 2018.

The browser-compat-data (BCD) project required a sustained effort to convert MDN’s documentation to structured data. The conversion was 39% complete at the start of 2018, and ended the year at 98% complete. Florian Scholz coordinated a large community of staff and volunteers, breaking up the work into human-sized chunks that could be done in parallel. The community converted, verified, and refreshed the data, and converted thousands of MDN pages to use the new data sources. Volunteers also built tools and integrations on top of the data.

The interactive-examples project had a great year as well. Will Bamberg coordinated the work, including some all-staff efforts to write new examples. Schalk Neethling improved the platform as it grew to handle CSS, JavaScript, and HTML examples.

In 2018, MDN developers moved from MozMEAO to Developer Outreach, joining the content staff in Emerging Technologies. The organizational change in March was followed by a nine-month effort to move the servers to the new ET account. Ryan Johnson, Ed Lim, and Dave Parfitt completed the smoothest server transition in MDN’s history.

The strength of MDN is our documentation of fundamental web technologies. Under the leadership of Chris Mills, this content was maintained, improved, and expanded in 2018. It’s a lot of work to keep an institution running and growing, and there are few opportunities to properly celebrate that work. Thanks to Daniel Beck, Eric Shepherd, Estelle Weyl, Irene Smith, Janet Swisher, Rachel Andrew, and our community of partners and volunteers for keeping MDN awesome in 2018.

Kadir Topal led the rapid development of the payments project. We’re grateful to all the MDN readers who are supporting the maintenance and growth of MDN.
There’s a lot more that happened in 2018:

- January – Added a language preference dialog, and added rate limiting.
- February – Prepared to move developers to Emerging Technologies.
- March – Ran a Hack on MDN event for BCD, and tried Brotli.
- April – Moved MDN to a CDN, and started switching to SVG.
- May – Moved to ZenHub.
- June – Shipped Django 1.11.
- July – Decommissioned zones, and tried new CDN experiments.
- August – Started performance improvements, added section links, removed memcache from Kuma, and upgraded to ElasticSearch 5.
- September – Ran a Hack on MDN event for accessibility, and deleted 15% of macros.
- October – Completed the server migration, and shipped some performance improvements.
- November – Completed the migration to SVG, and updated the compatibility table header rows.

Shipped tweaks and fixes

There were 124 PRs merged in December, including 27 pull requests from 26 new contributors:

- 65 mdn/browser-compat-data PRs
- 22 mozilla/kuma PRs
- 20 mdn/interactive-examples PRs
- 4 mdn/bob PRs
- 3 mdn/data PRs
- 2 mdn/infra PRs
- 2 mdn/learning-area PRs
- 2 mdn/kumascript PRs
- 1 mdn/dom-examples PR
- 1 mdn/stumptown-experiment PR
- 1 mdn/html-examples PR
- 1 mdn/short-descriptions PR

This includes some important changes and fixes:

- Add the Accessibility Checker plugin to CKEditor (Kuma PR 4989), from Florian Scholz.
- Add Jest and True, and initial tests (Kuma PR 5162), from Schalk Neethling.
- Fix test_footer_language_selector test (Kuma PR 5163), and fix test_header_signin and test_edit_sign_in (Kuma PR 5166), from Ryan Johnson, part of the successful effort to get acceptance tests working reliably again.
- Add conic gradient examples (Interactive Examples PR 1265), from Estelle Weyl.

27 pull requests were from first-time contributors:

- Add compatibility data for the SVG paint-order attribute (PR 3074), and fix SVG text MDN URLs and textLength IE support (PR 3098), to BCD from Steven Kalt.
- Add Edge support for lastElementChild element of ParentNode (BCD PR 3099), from Andrew Stewart Gibson.
- Add Opera 36 support for class (BCD PR 3102), from Christian Sirolli.
- Add that the “Experimental JavaScript Features” preference is needed for Edge support of RegExp.flags (BCD PR 3142), from ulrichb.
- Fix typo in KeyboardEvent notes (BCD PR 3146), from Philipp Spiess.
- Samsung Browser does not support the CSS media feature display-mode (BCD PR 3153), from Sumurai8.
- Update desktop Edge compatibility data for URLSearchParams (BCD PR 3162), from Vitaly K.
- Add Node.js v11 support for flatMap (BCD PR 3163), from Artur Klesun.
- Document.hasFocus and ChildNode.remove are not supported by Opera 12.18 (BCD PR 3165), from Abradoks.
- Lookbehind has no Firefox support (BCD PR 3189), from StefanSchoof.
- Add support row for for await...of (BCD PR 3194), from Yuichi Nukiyama.
- Add Safari iOS support for animateMotion (BCD PR 3222), from Paul Masson.
- Add support for BigInt (BCD PR 3224), from VFDan.
- Opera still supports @keyframes (BCD PR 3227), from Tony Ross.
- Fix preference name for Firefox for the CSS property scrollbar-color (BCD PR 3234), from Josh Smith.
- Simplify Math.round example (Interactive Examples PR 1230), from Kevin Simper.
- Add Intl.RelativeDateFormat example (Interactive Examples PR 1245), from Romulo Cintra.
- Put the prefixed position: -webkit-sticky value before the standard value (Interactive Examples PR 1249), from Daniel Holbert.
- Use example name for employee (Interactive Examples PR 1259), from Osama Soliman.
- Change expected output of Math.trunc to negative 0 (Interactive Examples PR 1264), from Hugo Nogueira.
- Fix spelling of nonExistentFunction (Interactive Examples PR 1274), from Dale Harris.
- Drop from scrollbar-width (Data PR 334), from Emilio Cobos Álvarez.
- Add lang attribute to element (learning-area PR 113), from Alexey Filin.
- Add id attribute to element (learning-area PR 114), from lfzyx.
- Add travis-based markdown linting and spell checking (PR 6), from Ryan Johnson (first contribution to stumptown-experiment).
- Remove bulk preloading of all fonts (html-examples PR 3), from Vadim Makeev.

Planned for January

David Flanagan took a look at KumaScript, MDN’s macro rendering engine, and is proposing several changes to modernize it, including using await and Jest. These changes are performing well in the development environment, and we plan to get the new code into production in January.

The post MDN Changelog – Looking back at 2018 appeared first on Mozilla Hacks - the Web developer blog.
Posted about 22 hours ago
Support for a long-awaited GNU C extension, asm goto, is in the midst of landing in Clang and LLVM. We want to make sure that we release a high-quality implementation, so it’s important to test the new patches on real code and not just small test cases. When we hit compiler bugs in large source files, it can be tricky to find exactly what part of a potentially large translation unit is problematic. In this post, we’ll take a look at using C-Reduce, a multithreaded code-bisection utility for C/C++, to help narrow down a reproducer for a real compiler bug (potentially; it's in a patch that was posted, and will be fixed before it can ship in production) from a real code base (the Linux kernel). It’s mostly a post to myself in the future, so that I can remind myself how to run C-Reduce on the Linux kernel again, since this is now the third real compiler bug it’s helped me track down.

The bug I’m focusing on when trying to compile the Linux kernel with Clang is a linkage error, all the way at the end of the build:

```
drivers/spi/spidev.o:(__jump_table+0x74): undefined reference to `.Ltmp4'
```

Hmm… looks like the object file (drivers/spi/spidev.o) has a section (__jump_table) that references a non-existent symbol (.Ltmp4), which looks like a temporary label that should have been cleaned up by the compiler. Maybe it was accidentally left behind by an optimization pass?

To run C-Reduce, we need a shell script that returns 0 when it should keep reducing, and an input file. For the input file, it’s just way simpler to preprocess it; this helps cut down on the compiler flags that typically require paths (-I, -L).

Preprocess

First, let’s preprocess the source. For the kernel, if the file compiles correctly, the kernel’s Kbuild build process will create a file named in the form path/to/.file.o.cmd, in our case drivers/spi/.spidev.o.cmd.
(If the file doesn’t compile, then I’ve had success hooking make path/to/file.o with bear and then getting the compile_commands.json for the file.) I find it easiest to copy this file to a new shell script, then strip out everything but the first line. I then replace the -c -o .o with -E. chmod +x that new shell script, then run it (outputting to stdout) to eyeball that it looks preprocessed, then redirect the output to a .i file.

Now that we have our preprocessed input, let’s create the C-Reduce shell script.

Reproducer

I find it helpful to have a shell script in the form:

- remove previous object files
- rebuild object files
- disassemble object files and pipe to grep

For you, it might be some different steps. As the docs show, you just need the shell script to return 0 when it should keep reducing. From our previous shell script that preprocessed the source and dumped a .i file, let’s change it back to stop before linking rather than preprocessing (s/-E/-c/), and change the input to our new .i file. Finally, let’s add the test for what we want. Since I want C-Reduce to keep reducing until the disassembled object file no longer references anything Ltmp related, I write:

```
$ objdump -Dr -j __jump_table spidev.o | grep Ltmp > /dev/null
```

Now I can run the reproducer to check that it at least returns 0, which C-Reduce needs to get started:

```
$ ./spidev_asm_goto.sh
$ echo $?
0
```

Running C-Reduce

Now that we have a reproducer script and input file, let’s run C-Reduce.

```
$ time creduce --n 40 spidev_asm_goto.sh spidev.i
===< 144926 >===
running 40 interestingness tests in parallel
===< pass_includes :: 0 >===
===< pass_unifdef :: 0 >===
===< pass_comments :: 0 >===
===< pass_blank :: 0 >===
(0.7 %, 2393679 bytes)
(5.3 %, 2282207 bytes)
===< pass_clang_binsrch :: replace-function-def-with-decl >===
(12.6 %, 2107372 bytes)
...
===< pass_indent :: final >===
(100.0 %, 156 bytes)
===================== done ====================

pass statistics:
  method pass_clang_binsrch :: remove-unused-function worked 1 times and failed 0 times
  ...
  method pass_lines :: 0 worked 427 times and failed 998 times

******** /android0/kernel-all/spidev.i ********

a() {
  int b;
  c();
  if (c < 2)
    b = d();
  else {
    asm goto("1:.long b - ., %l[l_yes] - . \n\t" : : : : l_yes);
  l_yes:;
  }
  if (b)
    e();
}

creduce --n 40 spidev_asm_goto.sh spidev.i  1892.35s user 1186.10s system 817% cpu 6:16.76 total

$ wc -l spidev.i.orig
56160 spidev.i.orig
$ wc -l spidev.i
12 spidev.i
```

So it took C-Reduce just over 6 minutes to turn more than 56k lines of mostly irrelevant code into 12, when running 40 threads on my 48-core workstation.

It’s also highly entertaining to watch C-Reduce work its magic. In another terminal, I highly recommend running watch -n1 cat to see it pared down before your eyes.

Finally, we still want to bisect our compiler flags (the kernel uses a lot). I still do this process manually, and it’s not too bad. Having proper and minimal steps to reproduce compiler bugs is critical. That’s enough for a great bug report for now. In a future episode, we’ll see how to start pulling apart llvm to see where compilation is going amiss.
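The steps above can be collected into a single interestingness test. Here is a hedged sketch of what that script might look like; the file names, compiler flags, and the Ltmp pattern are illustrative, and in practice the compile command should be taken from the kernel's .spidev.o.cmd file:

```shell
# Create a hypothetical interestingness test for C-Reduce. C-Reduce treats
# exit status 0 as "interesting", so the script succeeds only when the
# reduced candidate still compiles AND the disassembled __jump_table
# section still references a leftover .Ltmp symbol.
cat > spidev_asm_goto.sh <<'EOF'
#!/bin/sh
rm -f spidev.o
# Real flags come from drivers/spi/.spidev.o.cmd; -O2 is a placeholder.
clang -O2 -c spidev.i -o spidev.o 2>/dev/null || exit 1
# grep -q's exit status becomes the script's exit status:
# 0 if a match is found (keep reducing), non-zero otherwise.
objdump -Dr -j __jump_table spidev.o | grep -q Ltmp
EOF
chmod +x spidev_asm_goto.sh
```

The script is deliberately quiet (stderr discarded, grep -q) because C-Reduce only looks at the exit status, not the output.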
Posted 1 day ago by Nick Cameron
Today is my last day as an employee of Mozilla. It's been almost exactly seven years - two years working on graphics and layout for Firefox, and five years working on Rust. Mostly remote, with a few stints in the Auckland office. It has been an amazing time: I've learnt an incredible amount, worked on some incredible projects, and got to work with some absolutely incredible people. BUT, it is time for me to learn some new things, and work on some new things with some new people. Nearly everyone I've had contact with at Mozilla has been kind and smart and fun to work with. I would have liked to give thanks and a shout-out to a long list of people I've learned from or had fun with, but the list would be too long and still incomplete. I'm going to be mostly stepping back from the Rust project too. I'm pretty sad about that (although I hope it will be worth it) - it's an extremely exciting, impactful project. As a PL researcher turned systems programmer, it really has been a dream project to work on. The Rust team at Mozilla and the Rust community in general are good people, and I'll miss working with you all terribly. Concretely, I plan to continue to co-lead the Cargo and IDEs and Editors teams. I'll stay involved with the Rustfmt and Rustup working groups for a little while. I'll be leaving the other teams I'm involved with, including the core team (although I'll stick around in a reduced capacity for a few months). I won't be involved with code and review for Rust projects day-to-day. But I'll still be around on Discord and GitHub if needed for mentoring or occasional review; I will probably take much longer to respond. None of the projects I've worked on are going to be left unmaintained. I'm very confident in the people working on them, on the teams I'm leaving behind, and in the Rust community in general (did I say you were awesome already?).
I'm very excited about my next steps (which I'll leave for another post), but for now I'm feeling pretty emotional about moving on from the Rust project and the Rust team at Mozilla. It's been a big part of my life for five years and I'm going to miss y'all. <3 P.S., it turns out that Steve is also leaving Mozilla - this is just a coincidence and there is no conspiracy or shared motive. We have different reasons for leaving, and neither of us knew the other was leaving until after we'd put in our notice. As far as I know, there is no bad blood between either of us and the Rust team.
Posted 1 day ago by Marco Castelluccio
Bugzilla is a noisy data source: bugs are used to track anything, from “Create a LDAP account for contributor X” to “Printing page Y doesn’t work”. This makes it hard to know which bugs are actually bugs and which bugs are not bugs but e.g. feature requests, or meta bugs, or refactorings, and so on. To ease reading the next paragraphs, I’ll refer to bugs that are actually bugs as bugbugs, to bugs that are not actually bugs as fakebugs, and to all Bugzilla bugs (bugbugs + fakebugs) as bugs.

Why do we need to tell if a bug is actually a bug? There are several reasons, the main two being:

- Quality metrics: to analyze the quality of a project, or to measure the churn of a given release, it can be useful to know, for example, how many bugbugs are filed in a given release cycle. If we don’t know which bugs are bugbugs and which are feature requests, we can’t precisely measure how many problems are found (= bugbugs filed) in a given component for a given release; we can only know the overall number, confusing bugbugs and feature work.
- Bug prediction: given the development history of the project, one can try to predict, with some measure of accuracy, which changes are risky and more likely to lead to regressions in the future. In order to do that, of course, you need to know which changes introduced problems in the past. If you can’t identify problems (i.e. bugbugs), then you can’t identify the changes that introduced them!

On BMO, we have some optional keywords to identify regressions vs features, but they are not used consistently (and, being optional, they can’t be. We can work on improving the practices, but we can’t reach perfection when there is human involvement). So, we need another way to identify them. One possibility is to use handwritten rules (‘mozregression’ in a comment → regression; ‘support’ in the title → feature), which can be precise up to a certain accuracy level, but any improvement over that requires hard manual labor.
Another option is to use machine learning techniques, leaving the hard work of extracting information from bug features to the machines! The bugbug project is trying to do just that, at first with a very simple ML architecture.

We have a set of 1913 bugs, manually labelled between the two possible classes (bugbug vs nobug). We augment this manually labelled set with Bugzilla bugs containing the keywords ‘regression’ or ‘feature’, which are basically labelled already. The augmented data set contains 10818 bugs. Unfortunately we can’t use all of them indiscriminately, as the dataset is unbalanced towards bugbugs, which would skew the results of the classifier, so we simply perform random under-sampling to reduce the number of bugbug examples. In the end, we have 1928 bugs, which we split into a training set of 1735 bugs and a test set of 193 bugs (90% - 10%).

We extract features both from bug fields (such as keywords, number of attachments, presence of a crash signature, and so on) and from the bug title and comments. To extract features from text (title and comments), we use a simple BoW model with 1-grams, using TF-IDF to lower the importance of very common words in the corpus, plus stop-word removal mainly to speed up the training phase (stop-word removal should not be needed for accuracy in our case, since we are using a gradient boosting model, but it can speed up the training phase and it eases experimenting with other models which would really need it). We then train a gradient boosting model (these models usually work quite well for shallow features) on top of the extracted features.

Figure 1: A high-level overview of the architecture.

This very simple approach, in a handful of lines of code, achieves ~93% accuracy. There’s a lot of room for improvement in the algorithm (it was, after all, written in a few hours…), so I’m confident we can get even better results.
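The pipeline described above (random under-sampling, TF-IDF bag-of-words with stop-word removal, then gradient boosting) can be sketched with scikit-learn. This is not bugbug's actual code; the toy data and every name below are illustrative:

```python
# Hedged sketch of a bugbug-style pipeline: balance the classes by random
# under-sampling, vectorize text with TF-IDF 1-grams (English stop words
# removed), and train a gradient boosting classifier.
import random

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split


def undersample(texts, labels, seed=0):
    """Randomly drop majority-class examples until classes are balanced."""
    rng = random.Random(seed)
    by_class = {}
    for text, label in zip(texts, labels):
        by_class.setdefault(label, []).append(text)
    n = min(len(items) for items in by_class.values())
    pairs = []
    for label, items in by_class.items():
        pairs.extend((text, label) for text in rng.sample(items, n))
    rng.shuffle(pairs)
    texts, labels = zip(*pairs)
    return list(texts), list(labels)


# Made-up stand-ins for bug titles + comments; 1 = bugbug, 0 = fakebug.
texts = (["crash when printing page", "regression in layout engine"] * 30
         + ["please add dark mode", "feature request: export to csv"] * 40)
labels = [1] * 60 + [0] * 80

texts, labels = undersample(texts, labels)
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.1, random_state=0, stratify=labels)

# 1-grams with TF-IDF weighting and stop-word removal, as in the post.
vec = TfidfVectorizer(ngram_range=(1, 1), stop_words="english")
model = GradientBoostingClassifier(random_state=0)
model.fit(vec.fit_transform(X_train).toarray(), y_train)
print("accuracy:", model.score(vec.transform(X_test).toarray(), y_test))
```

On this trivially separable toy corpus the accuracy is uninteresting; the point is the shape of the pipeline, not the number.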
This is just the first step: in the near future we are going to implement improvements in Bugzilla directly and in linked tooling so that we can stop guessing and have very accurate data. Since the inception of bugbug, we have also added additional experimental models for other related problems (e.g. detecting if a bug is a good candidate for tracking, or predicting the component of a bug), turning bugbug into a platform for quickly building and experimenting with new machine learning applications on Bugzilla data (and maybe soon VCS data too). We have many other ideas to implement; if you are interested, take a look at the open issues on our repo!
Posted 1 day ago by Nicholas Nethercote
I have used a variety of profiling tools over the years, including several I wrote myself. But there is one profiling tool I have used more than any other. It is capable of providing invaluable, domain-specific profiling data of a kind not obtainable by any general-purpose profiler. It’s a simple text processor implemented in a few dozen lines of code. I use it in combination with logging print statements in the programs I am profiling. No joke.

Post-processing

The tool is called counts, and it tallies line frequencies within text files, like an improved version of the Unix command chain sort | uniq -c. For example, given the following input:

```
a 1
b 2
b 2
c 3
c 3
c 3
d 4
d 4
d 4
d 4
```

counts produces the following output:

```
10 counts:
(  1)        4 (40.0%, 40.0%): d 4
(  2)        3 (30.0%, 70.0%): c 3
(  3)        2 (20.0%, 90.0%): b 2
(  4)        1 (10.0%,100.0%): a 1
```

It gives a total line count, and shows all the unique lines, ordered by frequency, with individual and cumulative percentages. Alternatively, when invoked with the -w flag, it assigns each line a weight, determined by the last integer that appears on the line (or 1 if there is no such integer). On the same input, counts -w produces the following output:

```
30 counts:
(  1)       16 (53.3%, 53.3%): d 4
(  2)        9 (30.0%, 83.3%): c 3
(  3)        4 (13.3%, 96.7%): b 2
(  4)        1 ( 3.3%,100.0%): a 1
```

The total and per-line counts are now weighted; the output incorporates both frequency and a measure of magnitude.

That’s it. That’s all counts does. I originally implemented it in 48 lines of Perl, then later rewrote it as 48 lines of Python, and then later again rewrote it as 71 lines of Rust. In terms of benefit-to-effort ratio, it is by far the best code I have ever written.

counts in action

As an example, I added print statements to Firefox’s heap allocator so it prints a line for every allocation that shows its category, requested size, and actual size.
A short run of Firefox with this instrumentation produced a 77 MB file containing 5.27 million lines. counts produced the following output for this file:

```
5270459 counts:
(  1)   576937 (10.9%, 10.9%): small 32 (32)
(  2)   546618 (10.4%, 21.3%): small 24 (32)
(  3)   492358 ( 9.3%, 30.7%): small 64 (64)
(  4)   321517 ( 6.1%, 36.8%): small 16 (16)
(  5)   288327 ( 5.5%, 42.2%): small 128 (128)
(  6)   251023 ( 4.8%, 47.0%): small 512 (512)
(  7)   191818 ( 3.6%, 50.6%): small 48 (48)
(  8)   164846 ( 3.1%, 53.8%): small 256 (256)
(  9)   162634 ( 3.1%, 56.8%): small 8 (8)
( 10)   146220 ( 2.8%, 59.6%): small 40 (48)
( 11)   111528 ( 2.1%, 61.7%): small 72 (80)
( 12)    94332 ( 1.8%, 63.5%): small 4 (8)
( 13)    91727 ( 1.7%, 65.3%): small 56 (64)
( 14)    78092 ( 1.5%, 66.7%): small 168 (176)
( 15)    64829 ( 1.2%, 68.0%): small 96 (96)
( 16)    60394 ( 1.1%, 69.1%): small 88 (96)
( 17)    58414 ( 1.1%, 70.2%): small 80 (80)
( 18)    53193 ( 1.0%, 71.2%): large 4096 (4096)
( 19)    51623 ( 1.0%, 72.2%): small 1024 (1024)
( 20)    45979 ( 0.9%, 73.1%): small 2048 (2048)
```

Unsurprisingly, small allocations dominate. But what happens if we weight each entry by its size? counts -w produced the following output:
```
2554515775 counts:
(  1) 501481472 (19.6%, 19.6%): large 32768 (32768)
(  2) 217878528 ( 8.5%, 28.2%): large 4096 (4096)
(  3) 156762112 ( 6.1%, 34.3%): large 65536 (65536)
(  4) 133554176 ( 5.2%, 39.5%): large 8192 (8192)
(  5) 128523776 ( 5.0%, 44.6%): small 512 (512)
(  6)  96550912 ( 3.8%, 48.3%): large 3072 (4096)
(  7)  94164992 ( 3.7%, 52.0%): small 2048 (2048)
(  8)  52861952 ( 2.1%, 54.1%): small 1024 (1024)
(  9)  44564480 ( 1.7%, 55.8%): large 262144 (262144)
( 10)  42200576 ( 1.7%, 57.5%): small 256 (256)
( 11)  41926656 ( 1.6%, 59.1%): large 16384 (16384)
( 12)  39976960 ( 1.6%, 60.7%): large 131072 (131072)
( 13)  38928384 ( 1.5%, 62.2%): huge 4864000 (4866048)
( 14)  37748736 ( 1.5%, 63.7%): huge 2097152 (2097152)
( 15)  36905856 ( 1.4%, 65.1%): small 128 (128)
( 16)  31510912 ( 1.2%, 66.4%): small 64 (64)
( 17)  24805376 ( 1.0%, 67.3%): huge 3097600 (3100672)
( 18)  23068672 ( 0.9%, 68.2%): huge 1048576 (1048576)
( 19)  22020096 ( 0.9%, 69.1%): large 524288 (524288)
( 20)  18980864 ( 0.7%, 69.9%): large 5432 (8192)
```

This shows that the cumulative count of allocated bytes (2.55GB) is dominated by a mixture of larger allocation sizes. This example gives just a taste of what counts can do.

(An aside: in both cases it’s good to see there isn’t much slop, i.e. the differences between the requested sizes and actual sizes are mostly 0. That 5432 entry at the bottom of the second table is curious, though.)

Other Uses

This technique is often useful when you already know something — e.g. a general-purpose profiler showed that a particular function is hot — but you want to know more. Exactly how many times are paths X, Y and Z executed? For example, how often do lookups succeed or fail in data structure D? Print an identifying string each time a path is hit. How many times does loop L iterate? What does the loop count distribution look like? Is it executed frequently with a low loop count, or infrequently with a high loop count, or a mix?
Print the iteration count before or after the loop. How many elements are typically in hash table H at this code location? Few? Many? A mixture? Print the element count. What are the contents of vector V at this code location? Print the contents. How many bytes of memory are used by data structure D at this code location? Print the byte size. Which call sites of function F are the hot ones? Print an identifying string at the call site. Then use counts to aggregate the data. Often this domain-specific data is critical to fully optimize hot code.

Worse is better

Print statements are an admittedly crude way to get this kind of information, profligate with I/O and disk space. In many cases you could do it in a way that uses machine resources much more efficiently, e.g. by creating a small table data structure in the code to track frequencies, and then printing that table at program termination. But that would require: writing the custom table (collection and printing); deciding where to define the table; possibly exposing the table to multiple modules; deciding where to initialize the table; and deciding where to print the contents of the table. That is a pain, especially in a large program you don’t fully understand.

Alternatively, sometimes you want information that a general-purpose profiler could give you, but running that profiler on your program is a hassle, because the program you want to profile is actually layered under something else, and setting things up properly takes effort.

In contrast, inserting print statements is trivial. Any measurement can be set up in no time at all. (Recompiling is often the slowest part of the process.) This encourages experimentation. You can also kill a running program at any point with no loss of profiling data. Don’t feel guilty about wasting machine resources; this is temporary code. You might sometimes end up with output files that are gigabytes in size.
But counts is fast because it’s so simple… and the Rust version is 3–4x faster than the Python version, which is nice. Let the machine do the work for you. (It does help if you have a machine with an SSD.)

Ad Hoc Profiling

For a long time I have, in my own mind, used the term ad hoc profiling to describe this combination of logging print statements and frequency-based post-processing. Wikipedia defines “ad hoc” as follows:

In English, it generally signifies a solution designed for a specific problem or task, non-generalizable, and not intended to be able to be adapted to other purposes.

The process of writing custom code to collect this kind of profiling data — in the manner I disparaged in the previous section — truly matches this definition of “ad hoc”. But counts is valuable specifically because it makes this type of custom profiling less ad hoc and more repeatable. I should arguably call it “generalized ad hoc profiling” or “not so ad hoc profiling”… but those names don’t have quite the same ring to them.

Tips

- Use unbuffered output for the print statements. In C and C++ code, use fprintf(stderr, ...). In Rust code use eprintln!. (Update: Rust 1.32 added the dbg! macro, which also works well.)
- Pipe the stderr output to a file, e.g. firefox 2> log.
- Sometimes programs print other lines of output to stderr that should be ignored by counts. (Especially if they include integer IDs that counts -w would interpret as weights!) Prepend all logging lines with a short identifier, and then use grep $ID log | counts to ignore the other lines. If you use more than one prefix, you can grep for each prefix individually or all together.
- Occasionally output lines get munged together when multiple print statements are present. Because there are typically many lines of output, having a few garbage ones almost never matters.
- It’s often useful to use both counts and counts -w on the same log file; each one gives different insights into the data.
- To find which call sites of a function are hot, you can instrument the call sites directly. But it’s easy to miss one, and the same print statements need to be repeated multiple times. An alternative is to add an extra string or integer argument to the function, pass in a unique value from each call site, and then print that value within the function.
- It’s occasionally useful to look at the raw logs as well as the output of counts, because the sequence of output lines can be informative. For example, I recently diagnosed an occurrence of quadratic behaviour in the Rust compiler by seeing that a loop iterated 1, 2, 3, …, 9000+ times.

The Code

counts is available here.

Conclusion

I use counts to do ad hoc profiling all the time. It’s the first tool I reach for any time I have a question about code execution patterns. I have used it extensively for every bout of major performance work I have done in the past few years, as well as in plenty of other circumstances. I even built direct support for it into rustc-perf, the Rust compiler’s benchmark suite, via the profile eprintln subcommand. Give it a try!
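To make the post-processing step concrete, here is a hedged sketch of a counts-style tallier in Python. It is not the real counts tool, just a minimal reimplementation of the behavior described above: tally line frequencies, and with weighting enabled, use the last integer on each line (or 1 if there is none) as the weight:

```python
# Minimal sketch of a counts-style line tallier (illustrative, not the
# real tool). tally() counts identical lines; with weighted=True each
# line contributes the last integer appearing on it (default 1).
import re
from collections import Counter


def tally(lines, weighted=False):
    counts = Counter()
    for line in lines:
        line = line.rstrip("\n")
        weight = 1
        if weighted:
            ints = re.findall(r"-?\d+", line)
            if ints:
                weight = int(ints[-1])
        counts[line] += weight
    return counts


def report(counts):
    """Print total, then unique lines by frequency with cumulative %."""
    total = sum(counts.values())
    print(f"{total} counts:")
    cumulative = 0
    for rank, (line, n) in enumerate(counts.most_common(), start=1):
        cumulative += n
        print(f"({rank:3}) {n:8} "
              f"({100 * n / total:4.1f}%,{100 * cumulative / total:5.1f}%): "
              f"{line}")


# The example input from the post: one a, two bs, three cs, four ds.
lines = ["a 1", "b 2", "b 2", "c 3", "c 3", "c 3", "d 4", "d 4", "d 4", "d 4"]
report(tally(lines))                  # total 10, "d 4" first with 4
report(tally(lines, weighted=True))   # total 30, "d 4" first with 16
```

Running it on the post's example input reproduces the totals shown earlier (10 unweighted, 30 weighted), though the exact column formatting of the real tool may differ.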
Posted 1 day ago by dklawren
https://github.com/mozilla-bteam/bmo/tree/release-20190116.4

The following changes have been pushed to bugzilla.mozilla.org:

- [1518522] phabbugz comments in bugs need to set is_markdown to true
- [1493253] Embed crash count table to bug pages
- [1518264] New non-monospace comments styled with way too small a font size
- [1500441] Make site-wide announcement dismissable
- [1519240] Markdown comments ruin links wrapped in <>
- [1519157] Linkification is disabled on , etc.
- [1518328] The edit comment feature should have a preview mode as well
- [1510996] Abandoned phabricator revisions should be hidden by default
- [1518967] Edit attachment as comment does markdown, which is very unexpected
- [1519659] Need to reload the page before being able to edit
- [1520221] Avoid wrapping markdown comments
- [1520495] Crash count table does not detect uplift links in Markdown comments
- [1519564] Add a mechanism for disabling all special markdown syntax

Discuss these changes on mozilla.tools.bmo.
Posted 1 day ago by Jennifer Davidson
Authors: Jennifer Davidson, Meridel Walkington, Emanuela Damiani, Philip Walmsley

Co-design workshops help designers learn first-hand the language of the people who use their products, in addition to their pain points, workflows, and motivations. With co-design methods [1], participants are no longer passive recipients of products. Rather, they are involved in the envisioning and re-imagination of them. Participants show us what they need and want through sketching and design exercises. The purpose of a co-design workshop is not to have a pixel-perfect design to implement; rather, it’s to learn more about the people who use or will use the product, and to involve them in generating ideas about what to design.

We ran a co-design workshop at Mozilla to inform our product design, and we’d like to share our experience with you.

Sketching exercises during the co-design workshop were fueled by coffee and tea.

Before the workshop

Our UX team was tasked with improving the Firefox browser extension experience. When people create browser extensions, they use a form to submit their creations. They submit their code and all the metadata about the extension (name, description, icon, etc.). The metadata provided in the submission form is used to populate the extension’s product page on addons.mozilla.org.

A cropped screenshot of the third step of the submission form, which asks for metadata like the name and description of the extension.

Screenshot of an extension product page on addons.mozilla.org.

The Mozilla Add-ons team (i.e., Mozilla staff who work on improving the extensions and themes experience) wanted to make sure that the process to submit an extension is clear and useful, yielding a quality product page that people can easily find and understand. Improving the submission flow for developers would lead to higher quality extensions for people to use.

We identified some problems by using test extensions to “eat our own dog food” (i.e. walk through the current process).
Our content strategist audited the submission flow experience to understand product page guidelines in the submission flow. Then some team members conducted a cognitive walkthrough [2] to gain knowledge of the process and identify potential issues.

After identifying some problems, we sought to improve our submission flow for browser extensions. We decided to run a co-design workshop that would identify more problem areas and generate new ideas. The workshop took place in London on October 26, one day before MozFest, an annual week-long “celebration for, by, and about people who love the internet.” Extension and theme creators were selected from our global add-ons community to participate in the workshop. Mozilla staff members were involved, too: program managers, a community manager, an engineering manager, and UX team members (designers, a content strategist, and a user researcher).

A helpful and enthusiastic sticky note on the door of our workshop room. Image: “Submission flow workshop in here!!” posted on a sticky note on a wooden door.

Steps we took to create and organize the co-design workshop

After the audit and cognitive walkthrough, we thought a co-design workshop might help us get to a better future. So we did the following:

1. Pitch the idea to management and get buy-in
2. Secure budget
3. Invite participants
4. Interview participants (remotely)
5. Analyze interviews
6. Create an agenda for the workshop. Our agenda included: ice breaker, ground rules, discussion of interview results, sketching (using this method [3]) and critique sessions, and creating a video pitch for each group’s final design concept.
7. Create workshop materials
8. Run the workshop!
9. Send out a feedback survey
10. Debrief with Mozilla staff
11. Analyze results (over three days) with the Add-ons UX team
12. Share results of the analysis (and ask for feedback) with Mozilla staff and participants

Lessons learned: What went well

Interview participants beforehand

We interviewed each participant before the workshop.
The participants relayed their experience about submitting extensions and their motivations for creating extensions. They told us their stories, their challenges, and their successes.

Conducting these interviews beforehand helped our team in a few ways:

- The interviews introduced the team and facilitators, helping to build rapport before the workshop.
- The interviews gave the facilitators context into each participant's experience. We learned about their motivations for creating extensions and themes as well as their thoughts about the submission process. This foundation of knowledge helped to shape the co-design workshop (including where to focus for pain points), and enabled us to prepare an introductory data summary to share at the workshop.
- We asked for participants' feedback about the draft content guidelines that our content strategist created to provide developers with support, examples, and writing exercises to optimize their product page content. Those guidelines were to be incorporated into the new submission flow, so it was very helpful to get early user feedback. It also gave the participants some familiarity with this deliverable so they could help incorporate it into the submission flow during the workshop.

A photo of Jennifer, user researcher, presenting interview results back to the participants near the beginning of the workshop.

Thoughtfully select diverse participants

The Add-ons team has an excellent community manager, Caitlin Neiman, who interfaces with the greater Add-ons community. Working with Mozilla staff, she selected a diverse group of community participants for the workshop. The participants hailed from several different countries, some were paid to create extensions and some were not, and some had attended Mozilla events before and some had not.
This careful selection of participants resulted in diverse perspectives, workflows, and motivations that positively impacted the workshop.

Create ground rules

Design sessions can benefit from a short introductory activity of establishing ground rules to get everyone on the same page and set the tone for the day. This activity is especially helpful when participants don't know one another.

Using a flip chart and markers, we asked the room of participants to volunteer ground rules. We captured and reviewed those as a group.

A photo of Emanuela, UX designer and facilitator, scribing ground rules on a flip chart.

Why are ground rules important?

Designing the rules together, with facilitators and participants, serves as a way to align the group with a set of shared values, detecting possible harmful group behaviors and proposing productive and healthy interactions. Ground rules help make everyone's experience a richer and more satisfying one.

Assign roles and create diverse working groups during the workshop

The Mozilla UX team in Taipei recently conducted a participatory workshop with older adults. In their blog post, they also highlight the importance of creating diverse working groups for workshops [4].

In our workshop, each group comprised:

- multiple participants (i.e., extension and theme creators)
- a Mozilla staff program manager, engineering manager, community manager, and/or engineer
- a facilitator who was either a Mozilla staff designer or program manager. As a facilitator, the designer was a neutral party in the group and could internalize participants' mental models, workflows, and vocabulary through the experience.

We also assigned roles during group critique sessions. Each group member chose to be a dreamer (responds to ideas with a "Why not?" attitude), a realist (responds to ideas with "How?"), or a spoiler (responds to ideas by pointing out their flaws).
This format is called the Walt Disney approach [5].

Post-its for each critique role: Realist, Spoiler, Dreamer.

Why are critique roles important?

Everyone tends to fit into one of the Walt Disney roles naturally. Being pushed to adopt a role that may not be their tendency gently gets participants to step out of their comfort zone. The roles help participants empathize with other perspectives.

We had other roles throughout the workshop as well: a "floater" who kept everyone on track and kept the workshop running, a timekeeper, and a photographer.

Ask for feedback about the workshop results

The "co" part of "co-design" doesn't have to end when the workshop concludes. Using what we learned during the workshop, the Add-ons UX team created personas and potential new submission flow blueprints. We sent those deliverables to the workshop participants and asked for their feedback. As UX professionals, it was useful to close the feedback loop and make sure the deliverables accurately reflected the people and workflows being represented.

Lessons learned: What could be improved

The workshop was too long

We flew from around the world to London to do this workshop. A lot of us were experiencing jet lag. We had breaks, coffee, biscuits, and lunch. Even so, going from 9 to 4, sketching for hours and iterating multiple times, was just too much for one day.

Jorge, a product manager, provided feedback about the workshop's duration. Image: "Jorge is done" text written above a skull and crossbones sketch.

We have ideas about how to fix this. One approach is to introduce a variety of tasks; in this workshop we mostly did sketching over and over again. Another idea is to extend the workshop across two days, doing a few hours each day. Another is to shorten the workshop and do fewer iterations.

There were not enough Mozilla staff engineers present

The workshop was developed by a user researcher, designers, and a content strategist.
We included a community manager and program managers, but we did not include engineers in the planning process (other than providing them with updates). One of the engineering managers said that it would have been great to have engineers present to help with ideation and to hear from creators first-hand. If we were to do a design workshop again, we would be sure to have a genuinely interdisciplinary set of participants, including more Mozilla staff engineers.

And with that…

We hope that this blog post helps you create a co-design workshop that is interdisciplinary, diverse, caring of participants' perspectives, and just the right length.

Acknowledgements

Much gratitude to our colleagues who created the workshop with us and helped us edit this blog post! Thanks to Amy Tsay, Caitlin Neiman, Jorge Villalobos, Kev Needham, Stuart Colville, Mike Conca, and Gemma Petrie.

References

[1] Sanders, Elizabeth B.-N., and Pieter Jan Stappers. "Co-creation and the new landscapes of design." CoDesign 4.1 (2008): 5–18.
[2] "How to Conduct a Cognitive Walkthrough." The Interaction Design Foundation, 2018, www.interaction-design.org/literature/article/how-to-conduct-a-cognitive-walkthrough.
[3] Gray, Dave. "6-8-5." Gamestorming, 2 June 2015, gamestorming.com/6-8-5s/.
[4] Hsieh, Tina. "8 Tips for Hosting Your First Participatory Workshop." Medium.com, Firefox User Experience, 20 Sept. 2018, medium.com/firefox-ux/8-tips-for-hosting-your-first-participatory-workshop-f63856d286a0.
[5] "Disney Brainstorming Method: Dreamer, Realist, and Spoiler." Idea Sandbox, idea-sandbox.com/blog/disney-brainstorming-method-dreamer-realist-and-spoiler/.

Reflections on a co-design workshop was originally published in Firefox User Experience on Medium.
Posted 1 day ago by mconley
Highlights

- We're moving to profile-per-install, starting in Firefox 67! This will help avoid nasty downgrade bugs. See these bugs for more detail.
- WebExtensions' keyboard shortcuts can now be managed and overridden from about:addons as of Firefox 66.
- There's a new WebExtension browser.tabs API for changing the order in which tabs are focused.
- The DevTools console now lets you invoke getters from the console (check out this demo). Slated to ship in Firefox 65!
- The Milestone 1 version of the long-requested CSS track changes feature is slated to ship in Firefox 65!
- Firefox Color now exports static themes ready for submission to AMO.
- A new about:debugging is being worked on. Nightly users can check out about:debugging-new right now for a preview.
- As of this writing, we're down to 27% of the original set of XBL bindings! We aim to eliminate XBL in Firefox entirely this year.
- We added some new improvements to our content blocking UI, including a new cookies subpanel in the site identity popup. Tasty!
- Baku and Johann eliminated some Evil Traps involving malicious use of window.open() in a loop!
- After 3 years of valuable service, Test Pilot is riding off into the sunset! More info here.
Friends of the Firefox team

Introductions

- New Student Project: Fluent Migrations (watch out for email)

Resolved bugs (excluding employees)

Fixed more than one bug:

- Florens Verschelde [:fvsch]
- Tim Nguyen [:ntim]

New contributors (🌟 = first patch):

- 🌟 andrewc.goupil simplified how we use media queries in our front-end CSS
- TomS [:toms] fixed a focus indicator glitch in about:addons
- 🌟 hereissophie simplified the code that we use to send users to SUMO from the Accessibility Indicator
- Adrian Kaczmarek switched a bunch of #main-window uses to :root in our front-end CSS
- 🌟 Irvin Ives Lau swapped out some old PNGs from our theme with a shiny new SVG
- rishabhjairath replaced some querySelector uses with a more direct getElementById
- 🌟 P Kausthubh S fixed RTL support in the theme selector in Customize Mode
- Toby Ward made sure that browser.tabs.warnOnOpen is respected when opening multiple items from the Library
- 🌟 haotian xiang made sure our "Learn more" links in about:preferences are in sentence case ("Learn more" instead of "Learn More")
- 🌟 dhyey35 removed some dead code from the all_files_referenced test
- 🌟 edward.i.wu got rid of a spurious console warning when running a WebExtension with background.persistent set to true
- 🌟 John Lin removed some unused rules from our global.css files
- 🌟 Kaio Augusto de Camargo made it so that we don't log a warning when referencing an expired Telemetry scalar probe
- 🌟 matthewacha removed "use strict"; from one of our ES6 modules, since those run in strict mode by default
- 🌟 mattheww improved the performance of loading breakpoints in the debugger for a page with many scripts!
Project Updates

Add-ons / Web Extensions

- Ongoing work on:
  - User opt-in for extensions in private browsing windows
  - Rebuilding about:addons in HTML
- Old Lightweight Themes (LWTs) on addons.mozilla.org will be converted to XPI-packaged themes next week

Browser Architecture

- `` has been converted to a Custom Element (source)
- Other XBL binding removals since the last meeting: `scrollbox`, `popup-scrollbars`, `numberbox`, `tabmodalprompt`, `richlistbox`, `autocomplete-richlistbox`, `categories-list`, `arrowscrollbox-clicktoscroll`, `datetime-popup`, `download-subview-toolbarbutton`

Developer Tools

- Huge shoutout to the debugger community for all of their work during the winter break!
- Console
  - Autocomplete custom properties on Array, Number and String literals
  - Export to Clipboard landed, thanks to Jefry Lagrange
- Debugger
  - Major cleanup pass done to remove old unused source-map logic and debugger actors
  - Mouse selection when the debugger is paused is fixed
  - Fixed "Nested event loops do not suspend scroll events"
  - Search within files is now cancelable
  - Worker debugging is coming! Enable devtools.debugger.features.windowless-workers to try it out. Working on debugging a worker over here. All of the debugger's worker, packages, and css sources are in MC
- Layout Tools
  - Polishing recently shipped features: Flexbox Inspector and Changes Panel
  - Alert()/prompt()/confirm() work again in RDM
- Remote Debugging
  - Landed: Support several USB runtimes simultaneously
  - Landed: Show USB devices before Firefox is ready
  - Landed: Disabling service worker debugging if multie10s is true
  - Landed: Stopping ADB when you close about:debugging
  - Soon: Worker support for remote devices
  - Current target is Fx68 for preffing on the new about:debugging
- Fission Support
  - The highlighter-utils module is gone!
- Other
  - Shader Editor to be removed

Fluent

New student project started up last week.
Goals are:

- convert more strings to Fluent
- increase tool support
- research porting the fluent-rs parser to wasm and replacing the JS Fluent parser with the wasm fluent-rs parser

Lint

- Work continues on enabling ESLint for more DOM directories and other core directories.

Performance

- In Q1 we are focusing our efforts on startup performance. This time we care both about first-paint performance (which already received optimization efforts previously and is close to parity with Chrome) and the time to render the home page.
- Doug landed his document splitting work, which should enable faster rendering, and is investigating creating a warmup service to preload Firefox files when the OS starts.
- Felipe's tab animation patch is going through review.
- The code of the old about:performance is gone (both front-end and back-end pieces) and some test coverage was added.
- Gijs continued his browser adjustment work (adding telemetry and about:support visibility), improved Pocket's startup behavior, and removed some more Feed code.
- mconley is unblocking enabling the background process priority manager by removing the HAL code left over from FxOS.
- Perf.html improvements deployed recently:
  - Tooltips in the thread activity graph indicating the meaning of colors. This was a top request from users of the Profiler (the most frequent request we got during the all-hands!)
  - Memory track. This should help us notice memory allocation patterns in profiles.
  - Category colors in the stack chart. This should help people narrow down slow code to the responsible components.

Privacy/Security

- We're eliminating some interesting performance regressions from enabling cookie restrictions by default.
- Erica is working on another Shield study on content blocking breakage.
- Baku refactored some of URL-Classifier to prepare it for future endeavors, including a neat new way to manually classify URLs on about:url-classifier.

Search and Navigation

Search

- Work continues on switching built-in search engines to WebExtensions, with the intention to land after the next merge
- Hand-off from content to the address bar in preparation for Private Browsing search

Quantum Bar

- Implemented switch-to-tab with mouse and keyboard
- Increased results size in touch mode
- Reimplemented performance telemetry
- Implementing results removal (through history) from the dropdown
- Cleaned up various pointless attributes and functionality from the urlbar bindings
- Implemented UrlbarInput::handleRevert and Escape key handling
- Implemented typeRestrictToken
- Results are now consistent with the legacy address bar
- Fixed handling of non-URL entries

Places

- Places will now properly replace corrupt databases on startup
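The Fluent migration work mentioned under Project Updates converts legacy localization strings (DTD entities and .properties keys) into Fluent's .ftl format. A small before/after sketch follows; the message ID and strings are made-up examples, not actual Firefox strings:

```fluent
# Legacy .properties (before):
#   tabCount=%S tabs
#
# Fluent .ftl (after): plural logic moves into the message itself,
# so localizers control the variants for their language.
tab-count =
    { $count ->
        [one] { $count } tab
       *[other] { $count } tabs
    }
```

The `*` marks the default variant; `$count` is an external argument supplied by the calling code.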
Posted 1 day ago by Julien Vehent
Over the past few years I've followed the rise of the BeyondCorp project, Google's effort to move away from perimeter-based network security to identity-based access controls. The core principle of BeyondCorp is to require strong authentication to access resources rather than relying on the source IP a connection originates from. "Don't trust the network, authenticate all accesses" is a requirement in a world where your workforce is highly distributed and connects to privileged resources from untrusted networks every day. It is also a defense against office and datacenter networks that are rarely secure enough for the data they have access to. BeyondCorp, and zero trust networks, are good for security.

This isn't new. Most modern organizations have completely moved away from trusting source IPs and rely on authentication to grant access to data. But BeyondCorp goes further by recommending that your entire infrastructure should have a foot on the Internet and protect access using strong authentication. The benefits of this approach are enormous: employees can be fully mobile and continue to access privileged resources, and compromising an internal network is no longer sufficient to compromise the entire organization.

As a concept, this is good. And if you're hosting on GCP, or are willing to proxy your traffic through GCP, you can leverage their Identity-Aware Proxy to implement these concepts securely. But what about everyone else? Should you throw away your network security and put all your security in the authentication layer of your applications? Maybe not...

At Mozilla, we long ago adopted single sign-on, first using SAML, nowadays using OpenID Connect (OIDC). Most of our applications, both public-facing and internal, require SSO to protect access to privileged resources. We never trust the network and always require strong authentication. And yet, we continue to maintain VPNs to protect our most sensitive admin panels.
"How uncool", I hear you object, "and here we thought you were all about DevOps and shit". And you would be correct, but I'm also pragmatic, and I can't count the number of times we've had authentication bugs that let our red team or security auditors bypass authentication. The truth is, even highly experienced programmers and operators make mistakes and will let a bug disable or fail to protect part of that one super sensitive page you never want to leave open to the internet. And I never blame them because SSO/OAuth/OIDC are massively complex protocols that require huge libraries that fail in weird and unexpected ways. I've never reached the point where I fully trust our SSO, because we find one of those auth bypass every other month. Here's the catch: they never lead to major security incidents because we put all our admin panels behind a good old VPN. Those VPN that no one likes to use or maintain (me included) also provide a stable and reliable security layer that simply never fails. They are far from perfect, and we don't use them to authenticate users or grant access to resources, but we use them to cover our butts when the real authentication layer fails. So far, real world experience continues to support this model. So, there, you have it: adopt BeyondCorp and zero trust networks, but also consider keeping your most sensitive resources behind a good old VPN (or an SSH jumphost, whatever works for you). VPNs are good at reducing your attack surface and adding an extra layer of protection to your infrastructure. You'll be thankful to have one the next time you find a bypass in your favorite auth library. [Less]
Posted 2 days ago by Nical
Hi everyone! This week's highlight is Glenn's picture caching work, which almost landed about a week ago and landed again a few hours ago. Fingers crossed! If you don't know what picture caching means and are interested, you can read about it in the introduction of this newsletter's season 01 episode 28.

On a more general note, the team continues focusing on the remaining list of blocker bugs, which grows and shrinks depending on when you look, but the overall trend is looking good. Without further ado:

Notable WebRender and Gecko changes

- Bobby fixed unbounded interner growth.
- Bobby overhauled the memory reporter.
- Bobby added a primitive highlighting debug tool.
- Bobby reduced code duplication around interners.
- Matt and Jeff continued investigating telemetry data.
- Jeff removed the minimum blob image size, yielding nice improvements on some talos benchmarks (18% on raptor-motionmark-animometer-firefox linux64-qr opt and 7% on raptor-motionmark-animometer-firefox windows10-64-qr opt).
- kvark fixed a crash.
- kvark reduced the number of vector allocations.
- kvark improved the chasing debugging tool.
- kvark fixed two issues with reference frames and scrolling.
- Andrew fixed an issue with SVGs that embed raster images not rendering correctly.
- Andrew fixed a mismatch between the size used when decoding images and the one we pass to WebRender.
- Andrew fixed a crash caused by an interaction between blob images and shared surfaces.
- Andrew avoided scene building caused by partially decoded images when possible.
- Emilio made the build system take care of generating the FFI bindings automatically.
- Emilio fixed some clipping issues.
- Glenn optimized how picture caching handles world clips.
- Glenn fixed picture caching tiles being discarded incorrectly.
- Glenn split primitive preparation into a separate culling pass.
- Glenn fixed some invalidation issues.
- Glenn improved display list correlation.
- Glenn re-landed picture caching.
- Doug improved the way we deal with document splitting to allow more than two documents.

Ongoing work

The team keeps going through the remaining blockers (14 P2 bugs and 29 P3 bugs at the time of writing).

Enabling WebRender in Firefox Nightly

In about:config, set the pref "gfx.webrender.all" to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in Bugzilla. Note that it is possible to log in with a GitHub account.
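If you want the pref flip above to survive profile resets, the standard user.js mechanism works too; a sketch (the pref name comes from the instructions above, and the file lives in your Firefox profile directory):

```
// user.js — read on every startup; values override about:config settings
user_pref("gfx.webrender.all", true);
```

Prefs set this way are reapplied at each launch, so remember to remove the line if you later want to toggle WebRender back off from about:config.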