Posted 4 months ago
Background

As part of my work on the Stylo / Quantum CSS team at Mozilla, I needed to be able to test changes to Firefox that only affect Linux 32-bit builds. These days, I believe you essentially have to use a 64-bit host to build Firefox to avoid OOM issues during linking and potentially other steps, so this means some form of cross-compiling from a Linux 64-bit host to a Linux 32-bit target. I already had a Linux 64-bit machine running Ubuntu 16.04 LTS, so I set about attempting to make it build Firefox targeting Linux 32-bit.

I should note that I only use Linux occasionally at the moment, so there could certainly be a better solution than the one I describe. Also, I recreated these steps after the fact, so I might have missed something. Please let me know in the comments. This article assumes you are already set up to build Firefox when targeting 64-bit.

Multiarch Packages (Or: How It's Supposed to Work)

Recent versions of Debian and Ubuntu support the concept of "multiarch packages", which are intended to allow installing multiple architectures together to support use cases including... cross-compiling! Great, sounds like just the thing we need. We should be able to install¹ the core Gecko development dependencies with an extra :i386 suffix to get the 32-bit version on our 64-bit host:

```
(host) $ sudo apt install libasound2-dev:i386 libcurl4-openssl-dev:i386 libdbus-1-dev:i386 libdbus-glib-1-dev:i386 libgconf2-dev:i386 libgtk-3-dev:i386 libgtk2.0-dev:i386 libiw-dev:i386 libnotify-dev:i386 libpulse-dev:i386 libx11-xcb-dev:i386 libxt-dev:i386 mesa-common-dev:i386
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libgtk-3-dev:i386 : Depends: gir1.2-gtk-3.0:i386 (= 3.18.9-1ubuntu3.3) but it is not going to be installed
                     Depends: libatk1.0-dev:i386 (>= 2.15.1) but it is not going to be installed
                     Depends: libatk-bridge2.0-dev:i386 but it is not going to be installed
                     Depends: libegl1-mesa-dev:i386 but it is not going to be installed
                     Depends: libxkbcommon-dev:i386 but it is not going to be installed
                     Depends: libmirclient-dev:i386 (>= 0.13.3) but it is not going to be installed
 libgtk2.0-dev:i386 : Depends: gir1.2-gtk-2.0:i386 (= 2.24.30-1ubuntu1.16.04.2) but it is not going to be installed
                      Depends: libatk1.0-dev:i386 (>= 1.29.2) but it is not going to be installed
                      Recommends: python:i386 (>= 2.4) but it is not going to be installed
 libnotify-dev:i386 : Depends: gir1.2-notify-0.7:i386 (= 0.7.6-2svn1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
```

Well, that doesn't look good. It appears some of the Gecko libraries we need aren't happy about being installed for multiple architectures.

Switch Approaches to chroot

Since multiarch packages don't appear to be working here, I looked around for other approaches. Ideally, I would have something fairly self-contained so that it would be easy to remove when I no longer need 32-bit support. One approach to multiple architectures that has been around for a while is to create a chroot environment: effectively, a separate installation of Linux for a different architecture. A utility like schroot can then be used to issue the chroot(2) system call, which makes the current session believe this sub-installation is the root filesystem. Let's grab schroot so we'll be able to enter the chroot once it's set up:

```
(host) $ sudo apt install schroot
```

There are several different types of chroots you can use with schroot.
We'll use the directory type, as it's the simplest to understand (just another directory on the existing filesystem), and it will make it simpler to expose a few things to the host later on. You can place the directory wherever, but some existing filesystems are mapped into the chroot for convenience, so avoiding /home is probably a good idea. I went with /var/chroot/linux32:

```
(host) $ sudo mkdir -p /var/chroot/linux32
```

We need to update schroot.conf to configure the new chroot (using sudo tee, since a plain shell redirection would be performed without root privileges):

```
(host) $ sudo tee -a /etc/schroot/schroot.conf << EOF
[linux32]
description=Linux32 build environment
aliases=default
type=directory
directory=/var/chroot/linux32
personality=linux32
profile=desktop
users=jryans
root-users=jryans
EOF
```

In particular, personality is important to set for this multi-arch use case. (Make sure to replace the user names with your own!)

Firefox will want access to shared memory as well, so we'll need to add that to the set of mapped filesystems in the chroot:

```
(host) $ sudo tee -a /etc/schroot/desktop/fstab << EOF
/dev/shm /dev/shm none rw,bind 0 0
EOF
```

Now we need to install the 32-bit system inside the chroot. We can do that with a utility called debootstrap:

```
(host) $ sudo apt install debootstrap
(host) $ sudo debootstrap --variant=buildd --arch=i386 --foreign xenial /var/chroot/linux32 http://archive.ubuntu.com/ubuntu
```

This will fetch all the packages for a 32-bit installation and place them in the chroot. For a cross-arch bootstrap, we need to add --foreign to skip the unpacking step, which we will do momentarily from inside the chroot. --variant=buildd will help us out a bit by including common build tools.

To finish installation, we have to enter the chroot. You can enter the chroot with schroot, and it remains active until you exit. Any snippets that say (chroot) instead of (host) are meant to be run inside the chroot.
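The post doesn't show the exact invocation for entering the chroot; given the [linux32] entry added to schroot.conf above, it would presumably be:

```
(host) $ schroot -c linux32
```

Since the config also sets aliases=default, running schroot with no -c argument should select the same chroot; exiting the shell ends the session.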
So, inside the chroot, run the second stage of debootstrap to actually unpack everything:

```
(chroot) $ sudo /debootstrap/debootstrap --second-stage
```

Let's double-check that things are working like we expect:

```
(chroot) $ arch
i686
```

Great, we're getting closer!

Install packages

Now that we have a basic 32-bit installation, let's install the packages we need for development. The apt source list inside the chroot is pretty bare bones, so we'll want to expand it a bit to reach everything we need (again via sudo tee, since a plain redirection would run without root privileges):

```
(chroot) $ sudo tee /etc/apt/sources.list << EOF
deb http://archive.ubuntu.com/ubuntu xenial main universe
deb http://archive.ubuntu.com/ubuntu xenial-updates main universe
EOF
(chroot) $ sudo apt update
```

Let's grab the same packages from before (without :i386, since that's the default inside the chroot):

```
(chroot) $ sudo apt install libasound2-dev libcurl4-openssl-dev libdbus-1-dev libdbus-glib-1-dev libgconf2-dev libgtk-3-dev libgtk2.0-dev libiw-dev libnotify-dev libpulse-dev libx11-xcb-dev libxt-dev mesa-common-dev python-dbus xvfb yasm
```

You may need to install the 32-bit version of your graphics card's GL library to get reasonable graphics output when running in the 32-bit environment:

```
(chroot) $ sudo apt install nvidia-384
```

We'll also want to have access to the X display inside the chroot. The simple way to achieve this is to disable X security in the host and expose the same display in the chroot:

```
(host) $ xhost +
(chroot) $ export DISPLAY=:0
```

We can verify that we have accelerated graphics:

```
(chroot) $ sudo apt install mesa-utils
(chroot) $ glxinfo | grep renderer
OpenGL renderer string: GeForce GTX 1080/PCIe/SSE2
```

Building Firefox

In order for the host to build Firefox for the 32-bit target, it needs to access various 32-bit libraries and include files.
We already have these installed in the chroot, so let's cheat and expose them to the host via symlinks into the chroot's file structure:

```
(host) $ sudo ln -s /var/chroot/linux32/lib/i386-linux-gnu /lib/
(host) $ sudo ln -s /var/chroot/linux32/usr/lib/i386-linux-gnu /usr/lib/
(host) $ sudo ln -s /var/chroot/linux32/usr/include/i386-linux-gnu /usr/include/
```

We also need Rust to be able to target 32-bit from the host, so let's install support for that:

```
(host) $ rustup target add i686-unknown-linux-gnu
```

We'll need a specialized .mozconfig for Firefox to target 32-bit. Something like the following (note the quoted 'EOF', which keeps $CFLAGS and the backticks from being expanded while writing the file):

```
(host) $ cat << 'EOF' > ~/projects/gecko/.mozconfig
export PKG_CONFIG_PATH="/var/chroot/linux32/usr/lib/i386-linux-gnu/pkgconfig:/var/chroot/linux32/usr/share/pkgconfig"
export MOZ_LINUX_32_SSE2_STARTUP_ERROR=1
CFLAGS="$CFLAGS -msse -msse2 -mfpmath=sse"
CXXFLAGS="$CXXFLAGS -msse -msse2 -mfpmath=sse"
if test `uname -m` = "x86_64"; then
  CFLAGS="$CFLAGS -m32 -march=pentium-m"
  CXXFLAGS="$CXXFLAGS -m32 -march=pentium-m"
  ac_add_options --target=i686-pc-linux
  ac_add_options --host=i686-pc-linux
  ac_add_options --x-libraries=/usr/lib
fi
EOF
```

This was adapted from the mozconfig.linux32 used for official 32-bit builds. I modified the PKG_CONFIG_PATH to point at more 32-bit files installed inside the chroot, similar to the library and include changes above.

Now, we should be able to build successfully:

```
(host) $ ./mach build
```

Then, from the chroot, you can run Firefox and other tests:

```
(chroot) $ ./mach run
```

Footnotes

1. It's commonly suggested that people should use ./mach bootstrap to install the Firefox build dependencies, so feel free to try that if you wish. I dislike scripts that install system packages, so I've done it manually here. The bootstrap script would likely need various adjustments to support this use case. ↩
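As a final sanity check on the build itself, one option is to ask file(1) about the produced binary. The objdir path below is an assumption (it follows the usual obj-<target> pattern for the --target value used here; adjust it to your actual objdir):

```
(host) $ file obj-i686-pc-linux/dist/bin/firefox
```

For a successful cross-compile, file should report an "ELF 32-bit LSB executable" for Intel 80386 rather than an x86-64 binary.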
Posted 4 months ago by Bogdan Maris
As you may already know, last Friday – August 18th – we held a new Testday event for Firefox 56 Beta 4. Thank you Iryna Thompson for helping us make Mozilla a better place.

From India team: Fahima Zulfath A, Surentharan R.A, subash.M, Ponmurugesh.M, R.KRITHIKA SOWBARNIKA.

From Bangladesh team: Hossain Al Ikram, Maruf Rahman, Azmina Akter Papeya, Rahat Anwar, Saddam Hossain, Anika Alam, Iftekher Alam, Sajedul Islam, Tanjina Tonny, Kazi Nuzhat Tasnem, Tanvir Mazharul, Taseenul Hoque Bappi, Sontus Chandra Anik, Md. Rahimul Islam, Nafis Fuad, Saheda Reza Antora.

Results:
– several test cases executed for the Media Block Autoplay, Preferences Search [Photon] and Photon Preference reorg V2 features;
– 3 bugs verified: 1374972, 1387273 and 1375883.

Thanks for another successful Testday! We hope to see you all in our next events; all the details will be posted on QMO!
Posted 4 months ago by ehsan
We're now about mid-way through the Firefox 57 development cycle. The progress of Quantum Flow bugs has been steady; we now have 65 open [qf:p1] bugs at the time of this writing and 283 fixed bugs. There are still more bugs being flagged for triage constantly. I haven't really spoken much about the triage process lately, and the reason is that it has been working as per usual and the output should be fairly visible to everyone through our dashboard.

On the Speedometer front, if you are watching the main tracking bugs, the addition of new dependencies every once in a while should be an indication that we are still profiling the benchmark looking for more areas where we can think of speedup opportunities. Finding these new opportunities has become more and more difficult as we have been fixing more and more of the existing performance issues, which is exactly what you would expect when working on improving the performance of Firefox on such a benchmark workload. Of course, we still have ongoing work in the existing dependency tree (which is quite massive at this point), so more improvements should hopefully arrive as we keep landing more fixes on this front.

I realize that I have been quite inconsistent in having a performance story section in these newsletters, and I hope the readers will forgive me for that! But these past couple of weeks, Jan de Mooij's continued effort on removing getProperty/setProperty JSClass hooks from SpiderMonkey made me want to write a few sentences about some useful lessons we have learned from performance measurements, which can hopefully be used in the future when designing new components/subsystems. Oftentimes when we are thinking of how to design software, one can think of many extension points at various levels which consumers of the code can plug into in order to customize behavior. But many such extension points come at a runtime cost.
The cost is usually quite small: we may need to consume some additional memory to store more state, branch on some conditions, perform some more indirect/virtual calls, etc. The problem is that this cost is small enough to easily go unnoticed, yet it can occur in many places, and over time performance issues like this tend to creep in and hide in corners. Of course, usually when these extension points are added there are good reasons for creating them, but it may be a good idea to ask questions like "Is this mechanism too high-level a solution for this specific problem?", "Is the runtime cost paid for this over the years to come justified to solve the issue at hand?", "Could this issue be solved by adding an extension point in a more specialized place where the added cost would only affect a subset of the consumers?", etc. The reality of software engineering is that in a lot of cases we need to trade off having a generic, extensible architecture in our code against having efficient code, so if you end up choosing extensibility, it's a good idea to ensure you have kept the performance aspects in mind. It's even better if you document the performance concerns!

And since we touched on this, now may be a good time to also take a quick moment to call out another issue which I have seen come up in some of the performance issues we have been looking into in the past few months. That is the death-by-a-thousand-cuts performance problem. In my experience, many of the performance issues that we need to deal with, when profiled, turn out to be caused by only a few really badly performing parts of the code, or at least are due to a few underlying causes. But we also have no shortage of the other kind of performance issue, which is honestly much more difficult to deal with.
The way things work out in the opposite scenario is that you look at a profile from the badly performing case, you narrow down on the section of the profile which demonstrates the issue, and no matter how hard you squint, there are no major issues to be fixed. Rather, the profile shows many individual issues, each contributing a tiny portion of the time spent during the workload. These performance issues are much harder to analyze (since there are typically many ways you can start approaching them and it's unclear where a good place to start is), and they take a much longer time to result in measurable improvements, as you'd need to fix quite a few issues in order to be able to measure the resulting improvement. For a good example of this, please look at the saga of optimizing setting the value property of input elements. This project has been going on for a few months now, and during this time the workload has been made faster by more than an order of magnitude. Still, if you look at each of the individual patches that have landed, they look like micro-optimizations, and that's for a good reason, because they are. But overall they add up to significant improvements.

Before closing, it is worth mentioning that the ongoing performance work isn't suddenly going to stop with the release of Firefox 57! In fact, we have large performance projects which are going to be ready after Firefox 57, and that is a good thing, since I view Firefox 57 not as an ultimate performance goal, but as a solid performance foundation for us to start building upon. A great example is the Quantum Render project, which has been going on for a long time now. This project aims to integrate the WebRender component of Servo into Firefox. This project now has an exciting newsletter, and the first two issues are out! Please take a moment to check it out.

And now it is time to take a moment to thank the contributions of those who helped make Firefox faster last week.
As usual, I hope I'm not forgetting any names!

– Evelyn Hung made it so that when we tell the content process to load a URI, we first initiate a speculative connection in the parent process, so that we don't have to wait for the content process to request network access for the connection to be set up. In a similar vein, Evelyn also made us initiate a speculative connection upon the mousedown event on an awesomebar result entry, to start setting up the network connection even faster when using the mouse to pick an awesomebar result.
– Paolo Amadini made FileUtils.getFile() not do main-thread IO for the common case. You may remember some fixes to callers of this function were mentioned in previous newsletters to avoid this pattern of main-thread IO, and this fix will hopefully address the issue for most of the remaining callers.
– Paolo also got rid of some reflows we were doing when opening up the AwesomeBar panel.
– prasanthp96 deferred reading several preferences to reduce their impact on startup performance.
– Botond Ballo enabled support for asynchronous autoscrolling on the Nightly channel.
– André Bargull inlined IsCallable when called from MIRType::Value.
– Olli Pettay added a nursery to the cycle collector purple buffer in order to speed up AddRef/Release calls to cycle-collectible objects on the main thread. He also added a faster variant of TextEditor::GetDocumentIsEmpty() in order to speed up setting the value property of input elements.
– Ming-Chou Shih enabled coalescing mousemove events to once per refresh cycle. This feature helps performance by dispatching fewer mousemove events on pages which have expensive mousemove handlers, and recently shipped in Chrome. It is currently disabled behind a preference for testing.
– Kris Maglione cached some extension manifest data in the startup cache. He also converted FrameLoader bindings to WebIDL for improved performance of the JS code going through these bindings to access the underlying C++ code. He also made the WebExtension schema normalization code faster. Last but not least, Kris added a UI for notifying the user about long-running WebExtension content scripts and providing the option to stop them, similar to the existing UI we have for long-running content scripts. While this isn't strictly a performance improvement in itself, it is worthy of mention here because it allows the user to interrupt a badly behaving WebExtension content script causing performance issues.
– Masayuki Nakano optimized TextEditRules::CollapseSelectionToTrailingBRIfNeeded().
– Jessica Jong made sure we skip the potentially expensive pattern-matching code when validating input elements if the element has no pattern attribute set.
– Jan de Mooij devirtualized MNode::kind() and MDefinition::op().
– Bao Quan moved _saveStateAsync to the idle event queue.
– John Dai ensured that we avoid processing the custom element reactions stack when web components are disabled.
– Makoto Kato enabled lazy frame construction in editable regions of HTML documents. The original lazy frame construction optimization was enabled in 2010 and shipped in Firefox 4, but until now it never covered editable sections of HTML documents, such as the contents of input and textarea textboxes as well as contenteditable elements.
– Doug Thayer made it so that we avoid some main-thread IO when registering MIME-type handlers on start-up.
Posted 4 months ago
This summer I had the pleasure of implementing Custom Elements in Servo under the mentorship of jdm.

Introduction

Custom Elements are an exciting development for the Web Platform. They are a part of the Web Components APIs. The goal is to allow web developers to create reusable web components with first-class support from the browser. The Custom Element portion of Web Components allows for elements with custom names and behaviors to be defined and used via HTML tags. For example, a developer could create a custom element called fancy-button which has special behavior (for example, ripples from material design). This element is reusable and can be used directly in HTML: `<fancy-button>My Cool Button</fancy-button>`. For examples of cool web components, check out webcomponents.org.

While using these APIs directly is very powerful, new web frameworks are emerging that harness the power of Web Component APIs and give developers even more power. One major contender among frontend web frameworks is Polymer. The Polymer framework builds on top of Web Components, removes boilerplate, and makes using web components easier. Another exciting framework using Custom Elements is A-Frame (supported by Mozilla). A-Frame is a WebVR framework that allows developers to create entire Virtual Reality experiences using HTML elements and JavaScript. There has been some recent work in getting WebVR and A-Frame functional in Servo. Implementing Custom Elements removes the need for Servo to rely on a polyfill. For more information on what Custom Elements are and how to use them, I would suggest reading Custom Elements v1: Reusable Web Components.

Implementation

Before I began the implementation of Custom Elements, I broke down the spec into a few major pieces:

1. The CustomElementRegistry
2. Custom element creation
3. Custom element reactions

The CustomElementRegistry keeps track of all the defined custom elements for a single window.
The registry is where you go to define new custom elements, and later Servo will use the registry to look up definitions given a possible custom element name. The bulk of the work in this section of the implementation was validating custom element definitions.

Custom element creation is the process of taking a custom element definition and running the defined constructor on an HTMLElement (or the element it extends). This can happen either when a new element is created, or after an element has been created, via an upgrade reaction.

The final portion is triggering custom element reactions. There are two types of reactions:

1. Callback reactions
2. Upgrade reactions

Callback reactions fire when custom elements:

– are connected to the DOM tree
– are disconnected from the DOM tree
– are adopted into a new document
– have an attribute that is modified

When the reactions are triggered, the corresponding lifecycle method of the Custom Element is called. This allows the developer to implement custom behavior when any of these lifecycle events occur. Upgrade reactions are used to take a non-customized element and make it customized by running the defined constructor. There is quite a bit of trickery going on behind the scenes to make all of this work; I wrote a post about custom element upgrades explaining how they work and why they are needed.

I used Gecko's partial implementation of Custom Elements as a reference for a few parts of my implementation. This became extremely useful whenever I had to use the SpiderMonkey API.

Roadblocks

As with any project, it is difficult to foresee big issues until you actually start writing the implementation. Most parts of the spec were straightforward and did not yield any trouble while I was writing the implementation; however, there were a few difficulties and unexpected problems that presented themselves. One major pain point was working with the SpiderMonkey API. This was mostly due to my lack of experience with the SpiderMonkey API.
I had to learn how compartments work and how to debug panics coming from SpiderMonkey. bzbarsky was extremely helpful during this process; they helped me step through each issue and understand what I was doing wrong.

While I was in the midst of writing the implementation, I found out about the HTMLConstructor attribute. I had missed this part of the spec during the planning phase. The HTMLConstructor WebIDL attribute marks certain HTML elements that can be extended, and generates a custom constructor for each that allows custom element constructors to work (read more about this in custom element upgrades).

Notable Pull Requests

– Implement custom element registry
– Custom element creation
– Implement custom element reactions
– Custom element upgrades

Conclusions

I enjoyed working on this project this summer and hope to continue my involvement with the Servo project. I have a gsoc repository that contains a list of all my GSoC issues, PRs, and blog posts. I want to extend a huge thanks to my mentor jdm and to bzbarsky for helping me work through issues when using SpiderMonkey.
Posted 4 months ago by Jochai Ben-Avie
Mozilla is thrilled to see the Supreme Court of India’s decision declaring that the Right to Privacy is guaranteed by the Indian Constitution. Mozilla fights for privacy around the world as part of our mission, and so we’re pleased to see the Supreme Court unequivocally end the debate on whether this right even exists in India. Attention must move now to Aadhaar, which the government is increasingly making mandatory without meaningful privacy protections. To realize the right to privacy in practice, swift action is needed to enact a strong data protection law. The post Mozilla applauds India Supreme Court’s decision upholding privacy as a fundamental right appeared first on Open Policy & Advocacy.
Posted 4 months ago by Mike
Just like jaws did last week, I’m taking over for dolske this week to talk about stuff going on with Photon Engineering. So sit back, strap in, and absorb Photon Engineering Newsletter #14!

If you’ve got the release calendar at hand, you’ll note that Nightly 57 merges to Beta on September 20th. Given that there’s usually a soft freeze before the merge, this means that there are fewer than 4 weeks remaining for Photon development. That’s right – in less than a month’s time, folks on the Beta channel who might not be following Nightly development are going to get their first Photon experience. That’ll be pretty exciting!

So with the clock winding down, the Photon team has started to shift more towards polish and bug fixing. At this point, all of the major changes should have landed, and now we need to buff the code to a sparkling sheen.

The first thing you may have noticed is that, after a solid run of dogefox, the icon has shifted again:

We now return you to your regularly scheduled programming

The second big change is our new 60fps¹ loading throbbers in the tabs, coming straight to you from the Photon Animations team! I think it’s fair to say that Photon Animations are giving Firefox a turbo boost!

Other recent changes

Menus and structure
– A “Bookmarking Tools” subview has been added to the Library button so you can easily get to the bookmarks toolbar, sidebar, and bookmarks menu button. How convenient!
– You might have noticed that the downloads button will only appear when downloads exist and isn’t movable anymore. This is something we’re still tinkering with, so stay tuned.
– We made the sync animation prettier! Check it out!

Animations
– Did we mention the new tab loading throbber?

Preferences
– All MVP work is completed! The team is now fixing polish bugs. Outstanding!

Visual redesign
– The styles for the sidebar have been updated on Windows! The bookmarks sidebar finally gets some attention!
– New icons have landed for a number of Firefox functions. Can you find them all?
– The team also landed a slew of polish and bug fixes. Here’s the full list!

Onboarding
– The Firefox 57 tours have been enabled!
– The PageAction UITour highlight style has been updated to be more consistent with the rest of Photon. Nifty!
– A bunch of polish and bugfixes for the onboarding tour have landed!

Performance
– The performance team quickly diagnosed a regression in the FX_NEW_WINDOW_MS Telemetry probe and the regressing patch has been backed out.

1. The screen capturing software I used here is only capturing at 30fps, so it’s really not doing it justice. This tweet might capture it better. ↩
Posted 4 months ago by Air Mozilla
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
Posted 4 months ago by Air Mozilla
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
Posted 4 months ago by Potch
With Firefox’s move to a modern web-style browser extension API, it’s now possible to maintain one codebase and ship an extension in multiple browsers. However, since different browsers can have different capabilities, some extensions may require modification to be truly portable. With this in mind, we’ve built the Extension Compatibility Tester to give developers a better sense of whether their existing extensions will work in Firefox.

The tool currently supports Chrome extension bundle (.crx) files, but we’re working on expanding the types of extensions you can check. The tool generates a report showing any potential uses of APIs or permissions incompatible with Firefox, along with next steps on how to distribute a compatible extension to Firefox users. We will continue to participate in the Browser Extensions Community Group and support its goal of finding a common subset of extensible points in browsers and APIs that developers can use. We hope you give the tool a spin and let us know what you think! Try it out! >>

“The tool says my extension may not be compatible”

Not to worry! Our analysis only shows API and permission usage, and doesn’t have the full context. If the incompatible functionality is non-essential to your extension, you can use capability testing to only use the API when available:

```
// Causes an Error
browser.unavailableAPI(...);

// Capability Testing FTW!
if ('unavailableAPI' in browser) {
  browser.unavailableAPI(...);
}
```

Additionally, we’re constantly expanding the available extension APIs, so your missing functionality may be only a few weeks away!

“The tool says my extension is compatible!”

Hooray! That said, definitely try your extension out in Firefox before submitting to make sure things work as you expect. Common APIs may still have different effects in different browsers.

“I don’t want to upload my code to a 3rd party website.”

Understood!
The compatibility testing is available as part of our extension development command-line tool or as a standalone module. If you have any issues using the tool, please file an issue or leave a comment here. The hope is that this tool is a useful first step in helping developers port their extensions, so that we get a healthier, more interoperable extension ecosystem. Happy porting!
Posted 4 months ago by Ryan T. Harter
I just wrote up a style guide for our team's documentation. The documentation is rendered using Gitbook and hosted on GitHub Pages. You can find the PR here, but I figured it's worth sharing here as well.

Style Guide

– Articles should be written in Markdown (not AsciiDoc). Markdown is usually powerful enough and is a more common technology than AsciiDoc.
– Limit lines to 100 characters where possible. Try to split lines at the end of sentences. This makes it easier to reorganize your thoughts later.
– This documentation is meant to be read digitally. Keep in mind that people read digital content much differently than other media. Specifically, readers are going to skim your writing, so make it easy to identify important information:
  – Use visual markup like bold text, code blocks, and section headers.
  – Avoid long paragraphs. Short paragraphs that describe one concept each make finding important information easier.
– Please squash your changes into meaningful commits and follow these commit message guidelines.
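A rule like the 100-character limit above is easy to check mechanically. Here is a small illustrative sketch (the file name example.md is made up for the demo) that flags overly long lines with awk:

```shell
# Create a sample file: one short line, then one 120-character line.
printf '%s\n' "A short line." > example.md
awk 'BEGIN { s = ""; for (i = 0; i < 120; i++) s = s "x"; print s }' >> example.md

# Report any lines longer than 100 characters, with file name and line number.
awk 'length($0) > 100 {
  printf "%s:%d: line too long (%d chars)\n", FILENAME, FNR, length($0)
}' example.md
```

For the sample file this prints `example.md:2: line too long (120 chars)`; hooking such a check into CI keeps the limit from regressing without manual review.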