Posted 2 days ago by Nicholas Nethercote
DMD is a heap profiler built into Firefox, best known for being the tool used to diagnose the sources of “heap-unclassified” memory in about:memory. It has been unusable on Win32 for a long time due to incredibly slow start-up times, and it recently became very slow on Mac due to a performance regression in libunwind. Fortunately I have been able to fix both cases (Win32, Mac) by using FramePointerStackWalk() instead of MozStackWalk() to do the stack tracing within DMD. (The Gecko Profiler likewise uses FramePointerStackWalk() on those two platforms, and it was my recent work on the profiler that taught me that there was an alternative stack walker available.) So DMD should be usable and effective on all Tier 1 platforms. I have tested it on Win32, Win64, Linux64 and Mac. I haven’t tested it on Linux32 or Android. Please let me know if you encounter any problems.
Posted 2 days ago by ehsan
It has been almost a month and a half since the last time I talked about our progress in fighting sync IPC issues, so I figured it’s time to prepare another Sync IPC Analysis report. Again, unfortunately only the latest data is available in the spreadsheet, but here are screenshots of the C++ and JS IPC message pie charts. As you can see, as we have made even more progress in fixing sync IPC issues, the document.cookie issue now makes up an even larger relative share of the pie, at 60%. That is followed by some JS IPC, PAPZCTreeManager::Msg_ReceiveMouseInputEvent (a fast sync IPC message used by the async pan/zoom component, which would be hard to replace), followed by more JS IPC, followed by PContent::Msg_GetBlocklistState (recently fixed), followed by PBrowser::Msg_NotifyIMEFocus, followed by more JS IPC and CPOW overhead before we get to the longer tail. If you look at the JS sync IPC chart, you will see that almost all the overhead there is due to add-ons. Hopefully none of this will be an issue after Firefox 57, with the new out-of-process WebExtensions for Windows users. The only message in this chart stemming from our code that shows up in the data is contextmenu. The rate of progress here has been really great to see, and this is thanks to the hard work of many people across many different teams. Some of these issues have required heroic efforts to fix, and it’s really great to see this much progress made in so little time. The development of Firefox 56 is coming to a close rapidly. Firefox 57 branches off on Aug 2, and we have about 9 weeks from now until Firefox 57 rides the trains to beta. So far, according to our burn-down chart, we have closed around 224 [qf:p1] bugs and have 110 more yet to fix.
Fortunately Quantum Flow is not one of those projects that needs all of those bugs to be fixed, because we may not end up having enough time to fix them all for the final release, especially since we usually keep adding new bugs to the list in our weekly triage sessions. Soon we will probably need to reassess the priority of some of these bugs as the eventual deadline approaches. It is now time for me to acknowledge the great work of everyone who helped by contributing performance improvements over the past two weeks. As usual, I hope I’m not forgetting any names!

- Perry Jiang prevented expensive periodic default-browser checks from running on a background timer when the preferences window is left open.
- Kris Maglione cached the add-on blocklist state property in the add-ons database for faster retrieval. Additionally, he switched from IndexedDB to a simple compressed binary flat file for storing the WebExtensions startup cache. He also ensured that Extension.jsm’s promiseLocales() method doesn’t perform main-thread I/O. Last but not least, he turned on out-of-process WebExtensions for Windows. This is a huge improvement to the responsiveness of the main process, eventually moving the code for all of a user’s extensions out of the main process in Firefox 57. Support for Mac and Linux is going to follow after 57.
- Stephen Pohl added support for remote layer trees in popups. This is one of the dependencies for out-of-process WebExtensions.
- Mohammed Yaseen Khan removed support for the -webide command line argument, and thereby removed one XPCOM component from the critical path to first paint.
- Olli Pettay made us avoid registering the visited callback when we have a pending link update, and also made our IME support less eager to flush layout.
- Doug Thayer removed a call to _tzset() on startup on Windows. He also made some speed improvements to nsNativeThemeWin::GetWidgetBorder() and nsNativeThemeWin::GetMinimumWidgetSize().
- Andrew Swan ensured that the telemetry component doesn’t query the add-ons database at startup.
- Will Wang made sure that the cookie-changed observer notification gets called much less frequently inside SessionCookies.jsm.
- Edouard Oger delayed the loading of the FxAccounts module during startup.
- Mike Taylor made it so that the Web Compatibility Reporter doesn’t load anything before first paint.
- Marco Bonardo delayed creating the database connection in the history service as much as possible.
- Josh Aas turned off CGEvent logging only on buggy versions of OS X to avoid delaying first paint.
- Gabor Krizsanits ensured that the preallocated process doesn’t get created before first paint.
- Florian Quèze ensured that RecentWindow.jsm doesn’t get loaded during startup. He also moved the initialization of some components out of _finalUIStartup (now known as _beforeUIStartup), and made sure that AsyncPrefs.jsm is lazily loaded in nsBrowserGlue.js.
- Dale Harvey removed Preferences.jsm usage from GMPUtils.jsm.
- Zibi Braniecki removed the main-thread I/O from UpdateUtils.jsm when reading the update.locale file.
- Nihanth Subramanya ensured that about:home doesn’t use the beforeunload event handler, which is a bit expensive to set.
- Dragana Damjanovic added support for link preloading. This allows websites to programmatically preload URLs that are important to the page, for improved page loading performance.
- Jon Coppeard reduced the minor GCs encountered in Speedometer. He also made the sweeping of the weak cache tables parallel and incremental, and optimized gray root buffering.
- Mark Banner enabled async Places transactions, which takes out a chunk of main-thread I/O that Places was doing, particularly with bookmarking.
- Brian Birtles reduced the hashtable lookups performed by KeyframeEffectReadOnly::NotifyAnimationTimingUpdated().
- Michael Layzell dramatically improved the performance of various HTMLTableElement.rows methods and properties on large tables by avoiding the internal creation of inefficient nsContentList objects.
- Alessio Placitelli changed telemetry scheduling to happen off the idle queue after sleep-wake or idle-active cycles, to move it out of the user’s way.
- David Baron prevented reflowing child frames repeatedly as dirty when they only needed to be reflowed dirty on the first pass. This helps improve reflow performance in cases where frames reflow their children multiple times, which can be quite common.
- Henry Chang added support for custom default segment buffer list sizes in IPDL messages, to control the buffer reallocation overhead for messages that are typically larger than the average IPDL message.
- Jan de Mooij improved the performance of the for-in loop’s slow path.
- Cătălin Badea optimized the URL API on worker threads by avoiding main-thread round-trips for HTTP and HTTPS URLs.
- C.J. Ku reduced the allocation overhead in nsDisplayListBuilder::MarkFramesForDisplayList().
- Nicolas B. Pierron added support for inlining functions that use arguments[x] to IonMonkey.
- Kyle Machulis avoided the usage of the PContent::Msg_GetBlocklistState sync IPC message in favor of sending the blocklist state on plugin list update.
Posted 2 days ago by Air Mozilla
On July 20, Jennifer Selby Long, an expert in the ethical use of the Myers-Briggs Type Indicator® (MBTI®), will lead us in an interactive session...
Posted 3 days ago by Air Mozilla
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
Posted 3 days ago by Andre Vrignaud
Over the last few years, Mozilla has worked closely with other browsers and the industry to advance the state of games on the Web. Together, we have enabled developers to deploy native code on the web, first via asm.js, and then with its successor WebAssembly. Now available in Firefox and Chrome, and also soon in Edge and WebKit, WebAssembly enables near-native performance of code in the browser, which is great for game development, and has also shown benefits for WebVR applications. WebAssembly code is able to deliver more predictable performance, since JIT compilation warm-up and garbage-collection pauses are avoided. Its wide support across all major browser engines opens up paths to near-native speed, making it possible to build high-performing plugin-free games on the web. “In 2017 Kongregate saw a shift away from Flash with nearly 60% of new titles using HTML5,” said Emily Greer, co-founder and CEO of Kongregate. “Developers were able to take advantage of improvements in HTML5 technologies and tools while consumers were able to enjoy games without the need for 3rd-party plugins. As HTML5 continues to evolve it will enable developers to create even more advanced games that will benefit the millions of gamers on Kongregate.com and the greater, still thriving, web gaming industry.” Kongregate’s data shows that, on average, about 55% of uploaded games are HTML5 games. And we can also see that these are high-quality games, with over 60% of HTML5 titles receiving a “great” score (better than a 4.0 out of 5 rating). In spite of this positive trend, opportunities for improvement exist. The web is an ever-evolving platform, and developers are always looking for better performance. One major request we have often heard is for multithreading support on the web. SharedArrayBuffer is a required building block for multithreading, as it enables concurrently sharing memory between multiple web workers.
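As a small sketch of the building block mentioned above: the snippet below creates one SharedArrayBuffer and mutates it through two typed-array views using Atomics. In a real application the second view would live in a Web Worker that received the buffer via postMessage(); this single-threaded sketch only illustrates the shared-memory model.

```javascript
// Minimal SharedArrayBuffer sketch. In a real page the buffer would be
// postMessage()'d to a Web Worker; both sides would then see the same bytes.
const sab = new SharedArrayBuffer(4);    // 4 bytes of shared memory
const view = new Int32Array(sab);        // main-thread view
const workerView = new Int32Array(sab);  // stands in for the worker's view

Atomics.store(view, 0, 1);               // main thread writes...
Atomics.add(workerView, 0, 41);          // ..."worker" updates the same cell

console.log(Atomics.load(view, 0));      // 42: both views share the memory
```

Atomics is the companion API that makes such concurrent updates race-free; plain reads and writes to the views would also see the shared bytes, but without ordering guarantees.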
The specification is finished, and Firefox intends to ship SharedArrayBuffer support in Firefox 55. Another common request is for SIMD support. SIMD is short for Single Instruction, Multiple Data. It’s a way for a CPU to parallelize math instructions, offering significant performance improvements for math-heavy workloads such as 3D rendering and physics. The WebAssembly Community Group is now focused on enabling hardware parallelism with SIMD and multithreading as the next major evolutionary steps for WebAssembly. Building on the momentum of shipping the first version of WebAssembly and continued collaboration, both of these new features should be stable and ready to ship in Firefox in early 2018. Much work has gone into optimizing runtime performance over the last few years, and with that we learned many lessons. We have collected many of these learnings in a practical blog post about porting games from native to web, and look forward to your input on other areas for improvement. As multithreading support lands in 2018, expect to see opportunities to further invest in improving memory usage. We again wish to extend our gratitude to the game developers, publishers, engine providers, and other browsers’ engine teams who have collaborated with us over the years. We could not have done it without your help — thank you!
Posted 3 days ago by Jukka Jylänki
The biggest improvement this year to web performance has been the introduction of WebAssembly. Now available in Firefox and Chrome, and coming soon in Edge and WebKit, WebAssembly enables the execution of code at a low, assembly-like level in the browser. Mozilla has worked closely with the games industry for several years to reach this stage, including milestones like the release of games built with Emscripten in 2013, the preview of Unreal Engine 4 running in Firefox (2014), bringing the Unity game engine to WebGL also in 2014, exporting an indie Unity game to WebVR in 2016, and most recently, the March release of Firefox 52 with WebAssembly. WebAssembly builds on Mozilla’s original asm.js specification, which was created to serve as a plugin-free compilation target for applications and games on the web. This work has accumulated a great deal of knowledge at Mozilla specific to the process of porting games and graphics technologies. If you are an engineer working on games and this sounds interesting, read on to learn more about developing games in WebAssembly.

Where Does WebAssembly Fit In?

By now web developers have probably heard about WebAssembly’s promise of performance, but for developers who have not actually used it, let’s set some context for how it works with existing technologies and what is feasible. Lin Clark has written an excellent introduction to WebAssembly. The main point is that unlike JavaScript, which is generally written by hand, WebAssembly is a compilation target, just like native assembly. Except perhaps for small snippets of code, WebAssembly is not designed to be written by humans. Typically, you’d develop the application in a source language (e.g. C/C++) and then use a compiler (e.g. Emscripten) that transforms the source code to WebAssembly in a compilation step. This means that this model does not apply to existing JavaScript code.
If your application is written in JavaScript, then it already runs natively in a web browser, and it is not possible to somehow transform it to WebAssembly verbatim. What is possible in these types of applications, however, is to replace certain computationally intensive parts of the JavaScript with WebAssembly modules. For example, a web application might replace its JavaScript-implemented file decompression routine or string regex routine with a WebAssembly module that does the same job, but with better performance. As another example, web pages written in JavaScript can use the Bullet physics engine compiled to WebAssembly to provide physics simulation. Another important property: individual WebAssembly instructions do not interleave seamlessly in between existing lines of JavaScript code; WebAssembly applications come in modules. These modules deal with low-level memory, whereas JavaScript operates on high-level object representations. This difference in structure means that data needs to undergo a transformation step—sometimes called marshalling—to convert between the two language representations. For primitive types, such as integers and floats, this step is very fast, but for more complex data types such as dictionaries or images, it can be time consuming. Therefore, replacing parts of a JavaScript application works best when applied to subroutines with large enough granularity to warrant replacement by a full WebAssembly module, so that frequent transitions across the language barrier are avoided. As an example, in a 3D game written in three.js, one would not want to implement a small Matrix*Matrix multiplication algorithm alone in WebAssembly. The cost of marshalling a matrix data type into a WebAssembly module and back would negate the performance gained by doing the operation in WebAssembly.
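The module boundary described above can be seen in a toy example. The hand-assembled WebAssembly module below exports a single add function, and JavaScript calls it across that boundary; passing the two integers is cheap precisely because they are primitive types, whereas a dictionary or an image would first have to be marshalled into the module's linear memory.

```javascript
// A hand-assembled WebAssembly module exporting add(a, b) -> a + b.
// The bytes encode: magic + version, a type section ((i32, i32) -> i32),
// one function, an export named "add", and a body of
// local.get 0 / local.get 1 / i32.add.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,                   // \0asm, version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,             // type section
  0x03, 0x02, 0x01, 0x00,                                           // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,             // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b  // code section
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5: i32s cross the boundary cheaply
```

In practice nobody writes these bytes by hand; a compiler such as Emscripten emits them, and JavaScript only sees the exports object.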
Instead, to reach performance gains, one should look at implementing larger collections of computation in WebAssembly, such as image or file decompression. On the other end of the spectrum are applications that are implemented as fully in WebAssembly as possible. This minimizes the need to marshal large amounts of data across the language barrier, and most of the application is able to run inside the WebAssembly module. Native 3D game engines such as Unity and Unreal Engine implement this approach, where one can deploy a whole game to run in WebAssembly in the browser. This will yield the best possible performance gain. However, WebAssembly is not a full replacement for JavaScript. Even if as much of the application as possible is implemented in WebAssembly, there are still parts that are implemented in JavaScript. WebAssembly code does not interact directly with the existing browser APIs that are familiar to web developers; instead, your program calls out from WebAssembly to JavaScript to interact with the browser. It is possible that this behavior will change in the future as WebAssembly evolves.

Producing WebAssembly

The largest audience currently served by WebAssembly is native C/C++ developers, who are often positioned to write performance-sensitive code. An open source community project supported by Mozilla, Emscripten is a GCC/Clang-compatible compiler toolchain for building WebAssembly applications for the web. The main scope of Emscripten is support for the C/C++ language family, but because Emscripten is powered by LLVM, it has the potential to allow other languages to compile as well. If your game is developed in C/C++ and it targets OpenGL ES 2 or 3, an Emscripten-based port to the web can be a viable approach. Mozilla has benefited from games industry feedback – this has been a driving force shaping the development of asm.js and WebAssembly.
As a result of this collaboration, Unity3D, Unreal Engine 4 and other game engines are already able to deploy content to WebAssembly. This support takes place largely under the hood in the engine, and the aim has been to make it as transparent as possible to the application.

Considerations For Porting Your Native Game

For the game developer audience, WebAssembly represents an addition to an already long list of supported target platforms (Windows, Mac, Android, Xbox, Playstation, …), rather than a brand new platform for which projects are developed from scratch. Because of this, we’ve placed a great deal of focus on development and feature parity with existing platforms in the development of Emscripten, asm.js, and WebAssembly. This parity continues to improve, although on some occasions the offered features differ noticeably, most often due to web security concerns. The remainder of this article focuses on the most important items that developers should be aware of when getting started with WebAssembly. Some of these are successfully hidden under an abstraction if you’re using an existing game engine, but native developers using Emscripten directly should most certainly be aware of the following topics.

Execution Model Considerations

Most fundamental are the differences in code execution and memory model. Asm.js and WebAssembly use the concept of a typed array (a contiguous linear memory buffer) that represents the low-level memory address space for the application. Developers specify an initial size for this heap, and the size of the heap can grow as the application needs more memory. Virtually all web APIs operate using events and an event queue mechanism to provide notifications, e.g. for keyboard and mouse input, file I/O and network events. These events are all asynchronous and delivered to event handler functions.
There are no polling-type APIs for synchronously asking the “browser OS” for events, such as those that native platforms often provide. Web browsers execute web pages on the main thread of the browser. This property carries over to WebAssembly modules, which are also executed on the main thread, unless one explicitly creates a Web Worker and runs the code there. Code on the main thread is not allowed to block execution for long periods of time, since that would also block the processing of the browser itself. For C/C++ code, this means that the main thread cannot synchronously run its own loop, but must tick simulation and animation forward based on an event callback, so that execution periodically yields control back to the browser. User-launched pthreads will not have this restriction, and they are allowed to run their own blocking main loops. At the time of writing, WebAssembly does not yet have multithreading support – this capability is currently in development. The web security model can be a bit more strict compared to other platforms. In particular, browser APIs constrain applications from gaining direct access to low-level information about the system hardware, to mitigate the generation of strong fingerprints that identify users. For example, it is not possible to query information such as the CPU model, the local IP address, the amount of RAM or the amount of available hard disk space. Additionally, many web features operate on web domain boundaries, and information traveling across domains is governed by cross-origin access control rules. Another programming technique that web security prevents is the dynamic generation and mutation of code on the fly. It is possible to generate WebAssembly modules in the browser, but after loading, WebAssembly modules are immutable: functions can no longer be added or changed.
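The main-loop inversion described above is worth a sketch. A native game's blocking while (running) { tick(); } loop has to be turned inside out on the web: each iteration runs as a callback and then yields back to the browser. Emscripten performs this rewriting for C/C++ code via its main-loop API; the names and step logic below are otherwise illustrative.

```javascript
// Callback-driven main loop: one simulation step per turn of the event
// loop, instead of a blocking while-loop that would freeze the page.
const state = { frame: 0, running: true };

// Advance the simulation by one tick (placeholder logic).
function step(s) {
  s.frame += 1;
  if (s.frame >= 3) s.running = false;
  return s;
}

function tick() {
  step(state);
  // In a browser this would be requestAnimationFrame(tick);
  // setTimeout(tick, 0) shows the same "yield, then continue" shape.
  if (state.running) setTimeout(tick, 0);
}
tick();
```

The key design point is that between two tick() calls, control returns to the browser, so input events, rendering, and garbage collection can run.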
When porting C/C++ code, standards-compliant code should compile easily, but native compilers relax certain behaviors on x86, such as unaligned memory accesses, overflowing float->int casts, and invoking function pointers via signatures that mismatch the actual type of the function. The ubiquity of x86 has made these kinds of nonstandard code patterns somewhat common in native code, but when compiling to asm.js or WebAssembly, these constructs can cause issues at runtime. Refer to the Emscripten documentation for more information about what kinds of code are portable. Another source of differences comes from the fact that code on a web page cannot directly access the native filesystem on the host computer, and so the filesystem solution that is provided looks a bit different than native. Emscripten defines a virtual filesystem space inside the web page, which backs onto the IndexedDB API for persistence across page visits. Browsers also store downloaded data in navigation caches, which sometimes is desirable but other times less so. Developers should be mindful in particular about content delivery. In native application stores, the model of upfront downloading and installing a large application is an expected standard, but on the web, this type of monolithic deployment model can be an off-putting user experience. Applications can download and cache a large asset package at first run, but that causes a sizable first-time download impact. Therefore, launching with a minimal amount of downloading, and streaming additional asset data as needed, can be critical for building a web-friendly user experience.

Toolchain Considerations

The first technical challenge for developers comes from adapting existing build systems to target the Emscripten compiler. To make this easier, the compiler (emcc & em++) is designed to operate closely as a drop-in replacement for GCC or Clang.
This eases migration of existing build systems that are already aware of GCC-like toolchains. Emscripten supports the popular CMake build system configuration generator, and emulates support for GNU Autotools configure scripts. A fact that is sometimes confused is that Emscripten is not an x86/ARM -> WebAssembly code transformation toolchain, but a cross-compiler. That is, Emscripten does not take existing native x86/ARM compiled code and transform it to run on the web; instead, it compiles C/C++ source code to WebAssembly. This means that you must have all the source available (or use libraries bundled with Emscripten or ported to it). Any code that depends on platform-specific (often closed source) native components, such as Win32 and Cocoa APIs, cannot be compiled, but will need to be ported to utilize other solutions.

Performance Considerations

One of the most frequently asked questions about asm.js/WebAssembly is whether it is fast enough for a particular purpose. Curiously, developers who have not yet tried out WebAssembly are the ones who most often doubt its performance; developers who have tried it rarely mention performance as a major issue. There are some performance caveats, however, which developers should be aware of. As mentioned earlier, multithreading is not available just yet, so applications that heavily depend on threads will not have the same performance available. Another feature that is not yet available in WebAssembly, but planned, is SIMD instruction set support. Certain instructions can be relatively slower in WebAssembly compared to native. For example, calling virtual functions or function pointers has a higher performance footprint due to sandboxing compared to native code. Likewise, exception handling is observed to cause a bigger performance impact compared to native platforms. The performance landscape can look a bit different, so paying attention to this when profiling can be helpful.
Web security validation is known to impact WebGL noticeably. It is recommended that applications using WebGL carefully optimize their WebGL API calls, especially by avoiding redundant API calls, which still pay the cost of driver security validation. Last, application memory usage is a particularly critical aspect to measure, especially when also targeting mobile. Preloading big asset packages on first run and uncompressing large amounts of audio assets are two known sources of memory bloat that are easy to introduce by accident. Applications will likely need to optimize specifically for this when porting, and this is an active area of optimization in the WebAssembly and Emscripten runtimes as well.

Summary

WebAssembly provides support for executing low-level code on the web at high performance, similar to how web plugins used to, except that web security is enforced. For developers using some of the most popular game engines, leveraging WebAssembly will be as easy as choosing a new export target in the project build menu, and this support is available today. For native C/C++ developers, the open source Emscripten toolchain offers a drop-in compatible way to target WebAssembly. There is a lively community of developers around Emscripten who contribute to its development, and a mailing list for discussion that can help you get started. Games that run on the web are accessible to everyone, independent of which computation platform they are on, without compromising portability, performance, or security, or requiring up-front installation steps. WebAssembly is only one part of a larger collection of APIs that power web-based games, so navigate on to the MDN games section to see the big picture. Hop right on in, and happy Emscriptening!
Posted 3 days ago by Barbara Bermes
Since the launch of Firefox Focus for Android less than a month ago, one million users have downloaded our fast, simple privacy browser app. Thank you for all your tremendous support for the Firefox Focus for Android app. This milestone reflects huge demand from users who want to be in the driver’s seat when it comes to their personal information and web browsing habits. When we initially launched Firefox Focus for iOS last year, we did so based on our belief that everyone has a right to protect their privacy. We created the Firefox Focus for Android app to support all our mobile users and give them the control to manage their online browsing habits across platforms. Within a week of the Firefox Focus for Android launch, we’ve had more than 8,000 comments, and the app is rated 4.5 stars. We’re floored by the response! Feedback from Firefox Focus users: “Awesome, the iconic privacy focused Firefox browser now is even more privacy and security focused.” “Excellent! It is indeed extremely lightweight and fast.” “This is the best browser to set as your “default”, hands down. Super fast and lightweight.” “Great for exactly what it’s built for, fast, secure, private and lightweight browsing.” New Features: we’re always looking for ways to improve, and your comments help shape our products. We huddled together to decide which features we could quickly add, and we’re happy to announce the following new features less than a month after the initial launch. Full Screen Videos: your comments let us know that this was a top priority. We understand that if you’re going to watch videos on your phone, it’s only worth it if you can expand to the full size of your phone’s screen. We added support for most video sites, with YouTube being the notable exception. YouTube support depends on a bug fix from Google, and we will roll it out as soon as this is fixed.
Supports Downloads: we use our mobile phones for entertainment – whether it’s listening to music, playing games, reading an ebook, or doing work. And for some of that, downloading a file is required. We updated the Firefox Focus app to support files of all kinds. Updated Notification Actions: no longer solely a reminder to erase your history, Notifications now features a shortcut to open Firefox Focus. Finally, a quick and easy way to access private browsing. We’re on a mission to make sure our products meet your needs. Responding to your feedback with quick, noticeable improvements is our way of saying thanks and letting you know, “Hey, we’re listening.” You can download the latest version of Firefox Focus on Google Play and in the App Store. Stay tuned for additional feature updates over the coming months! The post Firefox Focus for Android Hits One Million Downloads! Today We’re Launching Three New User-Requested Features appeared first on The Mozilla Blog.
Posted 3 days ago by Nick Nguyen
Here at Firefox, we’re always looking for ways for users to get the most out of their web experience. Today, we’re rolling out some improvements that will set the stage for what’s to come in the fall with Project Quantum. Together, these new features help enhance your mobile browsing experience and make a difference in how you use Firefox for iOS. What’s new in Firefox for iOS: New Tab Experience: we polished our new tab experience and will be gradually rolling it out, so you’ll see recently visited sites as well as highlights from previous web visits. Night Mode: for the times when you’re in a dark room and the last thing you want to do is light up your phone to check the time, we added Night Mode, which dims the brightness of the screen and eases the strain on your eyes. Now it’ll be easier to read, and you won’t get caught checking your email. https://blog.mozilla.org/wp-content/uploads/2017/07/NightMode12-2.mp4 QR Code Reader: trying to limit the number of apps on your phone? We’ve eliminated the need to download a separate QR code app with a built-in QR code reader that lets you quickly scan QR codes. Feature Recommendations: everyone loves shortcuts, and our Feature Recommendations will offer hints and timesavers to improve your overall Firefox experience. To start, this will be available in the US and Germany. To experience the newest features and use the latest version of Firefox for iOS, download the update and let us know what you think. We hope you enjoy it! The post Firefox for iOS Offers New and Improved Browsing Experience with Tabs, Night Mode and QR Code Reader appeared first on The Mozilla Blog.
Posted 3 days ago by sole
You might have heard that “*Utils” classes are a code smell. Lots of people have written about that before, but I tend to find the reasoning a bit vague, and some of us work better with examples. So here’s one I found recently while working on this bug: you can’t know what part of the Utils class is used when you require it, unless you do further investigation. Case in point: if you place a method in VariousUtils.js and then import it later…

var { SomeFunction } = require('VariousUtils');

it’ll be very difficult to actually pinpoint when VariousUtils.SomeFunction was used in the code base. Because you could also do this:

var VariousUtils = require('VariousUtils');
var SomeFunction = VariousUtils.SomeFunction;

or this:

var SomeFunction = require('VariousUtils').SomeFunction;

or even something like…

var SomeFunction;
lazyRequire('VariousUtils').then((res) => {
  SomeFunction = res.SomeFunction;
});

Good luck trying to write a regular expression to search for all possible variations of non-evident ways to include SomeFunction in your codebase. You want to be able to search for things easily because you might want to refactor later. Obvious requires make this (and other code manipulation tasks) easier. My suggestion is: if you are importing just that one function, place it in its own file. It makes things very evident:

var SomeFunction = require('SomeFunction');

And searching in files becomes very easy as well:

grep -lr "require('SomeFunction');" *

But I have many functions and it doesn’t make sense to have one function per file! I don’t want to load all of them individually when I need them!!!!111 Then find a common pattern and create a module which doesn’t have Utils in its name. Put the individual functions in a directory, and make a module that imports and exposes them.
For example, with an `equations` module and this directory structure:

    equations/
      linear.js
      cubic.js
      bezier.js

You would still have to require('equations').linear, or some other way of requiring just `linear` if that’s what you want (so the search is “complicated” again). But at least the module is cohesive, and it’s obvious what’s in it: equations. That would not be obvious if it had been called “MathUtils”. What kind of utilities is that? Formulas? Functions to normalise stuff? Matrix kernels? Constants? Who knows!

So: steer away from “assorted bag of tricks” modules, because they’ll make you (or your colleagues) waste time (“what was in that module again?”). You’ll eventually find yourself splitting them at some point, once they grow enough to not make any sense, with lots of mental context switching required to work on them: “ah, here’s this function for formatting text… now a function to generate UUIDs… and this one for making this low-level system call… and… *brainsplosion*”

An example that takes this decomposition into files to the “extreme” is lodash, which can then generate a number of different builds thanks to its extreme modularity.

Update: Another take: write code that is easy to delete. I love it!
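The “module that imports and exposes them” step can be sketched as follows. The function bodies here are invented stand-ins (the article never shows them), inlined into one file so the sketch is self-contained; in the real layout each would live in its own file under equations/.

```javascript
// Sketch of equations/index.js: a cohesive module that gathers the
// individual implementations and exposes them under one obvious name.

// equations/linear.js would export something like:
function linear(a, b, x) {
  // y = a * x + b
  return a * x + b;
}

// equations/cubic.js, evaluated via Horner's method:
function cubic(a, b, c, d, x) {
  return ((a * x + b) * x + c) * x + d;
}

// equations/index.js would then just re-export them, e.g.
// module.exports = { linear, cubic, bezier: require('./bezier') };
const equations = { linear, cubic };

// Callers do require('equations').linear, which is easy to grep for.
console.log(equations.linear(2, 1, 3)); // 7
```

The point is that the module name says what belongs inside it, so additions and refactors stay obvious.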
Posted 3 days ago
The Rust team is happy to announce the latest version of Rust, 1.19.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed, getting Rust 1.19 is as easy as:

    $ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.19.0 on GitHub.

What’s in 1.19.0 stable

Rust 1.19.0 has some long-awaited features, but first, a note for our Windows users. On Windows, Rust relies on link.exe for linking, which you can get via the “Microsoft Visual C++ Build Tools.” With the recent release of Visual Studio 2017, the directory structure for these tools has changed. As such, to use Rust, you had to stick with the 2015 tools or use a workaround (such as running vcvars.bat). In 1.19.0, rustc now knows how to find the 2017 tools, so they work without a workaround.

On to new features! Rust 1.19.0 is the first release that supports unions:

    union MyUnion {
        f1: u32,
        f2: f32,
    }

Unions are kind of like enums, but they are “untagged”. Enums have a “tag” that stores which variant is the correct one at runtime; unions elide this tag. Since we can interpret the data held in the union using the wrong variant, and Rust can’t check this for us, reading or writing a union’s field is unsafe:

    let mut u = MyUnion { f1: 1 };

    unsafe { u.f1 = 5 };

    let value = unsafe { u.f1 };

Pattern matching works too:

    fn f(u: MyUnion) {
        unsafe {
            match u {
                MyUnion { f1: 10 } => { println!("ten"); }
                MyUnion { f2 } => { println!("{}", f2); }
            }
        }
    }

When are unions useful? One major use case is interoperability with C. C APIs can (and depending on the area, often do) expose unions, so this makes writing API wrappers for those libraries significantly easier.
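As a sketch of the pattern such C wrappers often follow (the types below are invented for illustration, not taken from the release notes), the wrapper keeps a separate tag next to the union and funnels every unsafe read through a safe accessor that checks the tag first:

```rust
// Hypothetical mirror of a C value type: a tag field plus an untagged union.
union Payload {
    as_int: i32,
    as_float: f32,
}

enum Tag {
    Int,
    Float,
}

struct CValue {
    tag: Tag,
    payload: Payload,
}

impl CValue {
    fn from_int(v: i32) -> CValue {
        CValue { tag: Tag::Int, payload: Payload { as_int: v } }
    }

    // Safe wrapper: only reads `as_int` when the tag says it is the live field.
    fn as_int(&self) -> Option<i32> {
        match self.tag {
            // Safe because the tag records which field was last written.
            Tag::Int => Some(unsafe { self.payload.as_int }),
            _ => None,
        }
    }
}

fn main() {
    let v = CValue::from_int(7);
    assert_eq!(v.as_int(), Some(7));

    let f = CValue { tag: Tag::Float, payload: Payload { as_float: 2.5 } };
    assert_eq!(f.as_int(), None);
}
```

This is essentially a hand-rolled enum; the difference from a real Rust enum is that the layout matches what the C side expects.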
Additionally, from its RFC:

    A native union mechanism would also simplify Rust implementations of space-efficient or cache-efficient structures relying on value representation, such as machine-word-sized unions using the least-significant bits of aligned pointers to distinguish cases.

This feature has been long awaited, and there are still more improvements to come. For now, unions can only include Copy types and may not implement Drop. We expect to lift these restrictions in the future.

As a side note, have you ever wondered how new features get added to Rust? This feature was suggested by Josh Triplett, and he gave a talk at RustConf 2016 about the process of getting unions into Rust. You should check it out!

In other news, loops can now break with a value:

    // old code
    let x;
    loop { x = 7; break; }

    // new code
    let x = loop { break 7; };

Rust has traditionally positioned itself as an “expression-oriented language”, that is, most things are expressions that evaluate to a value, rather than statements. loop stuck out as strange in this way, as it was previously a statement.

What about other forms of loops? It’s not yet clear. See its RFC for some discussion around the open questions here.

A smaller feature: closures that do not capture an environment can now be coerced to a function pointer:

    let f: fn(i32) -> i32 = |x| x + 1;

We now produce xz-compressed tarballs and prefer them by default, making the data transfer smaller and faster. gzip’d tarballs are still produced in case you can’t use xz for some reason.

The compiler can now bootstrap on Android. We’ve long supported Android in various ways, and this continues to improve our support.

Finally, a compatibility note. Way back when we were running up to Rust 1.0, we did a huge push to verify everything that was being marked as stable and as unstable. We overlooked one thing, however: -Z flags. The -Z flag to the compiler enables unstable flags.
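To see why that coercion matters (the `apply_twice` helper below is my own illustration, not from the release notes), consider an API that insists on a plain function pointer rather than a generic closure, as C callback slots do:

```rust
// Takes a bare function pointer, e.g. something to hand off to a C callback.
fn apply_twice(f: fn(i32) -> i32, v: i32) -> i32 {
    f(f(v))
}

fn main() {
    // A closure that captures nothing now coerces straight to `fn(i32) -> i32`,
    // so no standalone named function is needed.
    let result = apply_twice(|x| x + 1, 5);
    assert_eq!(result, 7);
}
```

A closure that does capture its environment still cannot coerce this way, since a bare function pointer has nowhere to store the captured data.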
Unlike the rest of our stability story, you could still use -Z on stable Rust. Back in April of 2016, in Rust 1.8, we made the use of -Z on stable or beta produce a warning. Over a year later, we’re fixing this hole in our stability story by disallowing -Z on stable and beta. See the detailed release notes for more.

Library stabilizations

The largest new library feature is the eprint! and eprintln! macros. These work exactly the same as print! and println!, but instead write to standard error, as opposed to standard output.

Other new features:

- String now implements FromIterator<Cow<'a, str>> and Extend<Cow<'a, str>>
- Vec now implements From<&mut [T]>
- Box<[u8]> now implements From<Box<str>>
- SplitWhitespace now implements Clone

And some freshly stabilized APIs:

- OsString::shrink_to_fit
- cmp::Reverse
- Command::envs
- thread::ThreadId

See the detailed release notes for more.

Cargo features

Cargo mostly received small but valuable improvements in this release. The largest is possibly that Cargo no longer checks out a local working directory for the crates.io index. This should provide a smaller file size for the registry and improve cloning times, especially on Windows machines.

Other improvements:

- Build scripts can now add environment variables to the environment the crate is being compiled in. Example: println!("cargo:rustc-env=FOO=bar");
- Workspace members can now accept glob file patterns
- Added --all flag to the cargo bench subcommand to run benchmarks of all the members in a given workspace
- Added an --exclude option for excluding certain packages when using the --all option
- The --features option now accepts multiple comma- or space-delimited values
- Added support for custom target-specific runners

See the detailed release notes for more.

Contributors to 1.19.0

Many people came together to create Rust 1.19. We couldn’t have done it without all of you. Thanks!
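A quick sketch of the stdout/stderr split the new macros enable (the messages and the `diagnostic` helper are invented for illustration):

```rust
// eprintln! formats exactly like println!; only the destination differs.
// Build the message with ordinary formatting first, to show nothing special
// is required.
fn diagnostic(name: &str, code: i32) -> String {
    format!("error[{}]: {}", code, name)
}

fn main() {
    let msg = diagnostic("invalid input", 7);
    // Human-facing diagnostics go to stderr via the new macro...
    eprintln!("{}", msg);
    // ...leaving stdout clean for actual program output, so piping and
    // redirection keep working as expected.
    println!("done");
}
```

Before 1.19.0, getting this behavior meant writing to std::io::stderr() by hand; the macros make the common case one line.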