
News

Posted almost 2 years ago by John
It's been a while since I last blogged about "remoties", but it continues to be a very popular topic! In addition to Twilio in February, I've given presentations at Automattic (best known for WordPress), RiotGames (twice) and Haas, UCBerkeley (twice), as well as smaller private discussions with several other companies. You can get the slides in PDF format by clicking on the thumbnail of the first slide. (I'm happy to share the original very large keynote file; just let me know and we'll figure out a way to share without hammering my poor website.)

Remoties are clearly something that people care deeply about. Geo-distributed teams are becoming more commonplace, and yet the challenges continue to be very real. The interest before each presentation is cautiously high, while the Q+A discussions during and afterwards are very engaged and lively. Every time, I find myself tweaking, honing and refining the presentation again and again… yet the core principles remain the same:

- remoties / geo-distributed teams can be very effective, and can be sustained over time.
- remoties != compromise. In fact, a geo-distributed team means you can hire best-available, not "just" best-willing-to-relocate.
- easy-to-use, cheap technologies work just fine if used correctly (maybe even better than expensive systems?)
- crisp, careful organization of human processes is essential.
- in a geo-distributed team, *everyone* is a remotie, even people who happen to sit in an office. If you are remote from someone else, that makes you *both* remoties. Hence the working title "we are ALL remoties".

Given how this topic impacts people's jobs, and their lives, I'm not surprised by the passionate responses, and each time, the lively discussions encourage me to keep talking about this. As always, if you have any questions, suggestions or good/bad stories about working in a remote or geo-distributed team, please let me know – I'd love to hear them. Thanks, John.
ps: I noticed in my website logs that a lot of people were still downloading my original remoties slides, first posted in apr2012, even though I'd posted multiple revisions of the slides since. So, I've gone back and updated my earlier "remoties" blog posts to all point to these latest-and-greatest slides.
Posted almost 2 years ago
David Boswell wants to create a Volunteer Agreement for Mozilla. I propose creating a Non-Volunteer Agreement for Mozilla.
Posted almost 2 years ago
How little can one pack and still have a fully-functional chef's kitchen?
Posted almost 2 years ago
A summary of the most informed writing on the Brendan Eich controversy, and some perspective from my 15 years as a Mozillian.
Posted almost 2 years ago
Thoughts on the upcoming TAG election of 2012.
Posted almost 2 years ago by John
I was honored to give the opening keynote for USENIX URES14 East in Philadelphia in June 2014. The "Value of Release Engineering as a Force Multiplier" keynote built on top of the "RelEng as a Force Multiplier" presentation I gave at RelEngConf 2013 and then as a Google Tech Talk. (To get the slides in PDF format, click on the thumbnail. Happy to share the original 25MB keynote file; just let me know and we'll figure out a way to share without hammering my poor website.)

Anyone who has ever talked with me about RelEng knows I feel very strongly that Release Engineering is important to the success of every software project. Writing a popular v1.0 product is just the first step. If you want to keep your initial early-adopter users by shipping v1.0.1 fixes, or grow your user base by shipping new v2.0 features to your existing users, you need a reproducible pipeline for accurately delivering software in a repeatable manner. Otherwise, you are "only" delivering a short-lived, flash-in-the-pan, one-off project. In my opinion, this pipeline is another product that software companies need to develop, alongside their own unique product, if they want to stay in the marketplace and scale.

It's typical for Release Engineers to talk about the value of RelEng in terms that Release Engineers value – timely delivery, accurate builds, turnaround time, etc. I believe it's important to also describe Release Engineering in terms that people across an organization can understand. In my keynote, I specifically talked about the value of RelEng in terms that people-who-run-companies value – unique business opportunities, market / competitive advantages, new business models, reduced legal risk, etc.
Examples included: Mozilla's infrastructure improvements, which reduced turnaround time for delivering security fixes as well as helped deter future attacks; Hortonworks' ability to provide enterprise-grade support SLAs to customers running mission-critical production "big data" systems on 100% open source Apache Hadoop; and even NASA's remote software update of the Mars Rover. People seemed to enjoy the presentation, with lively questions during, afterwards… and even into the end-of-day panel session. Big thanks to the organizers (especially Dinah McNutt (RelEng at Google) and Gareth Bowles) – they did an awesome job putting together a unique and special event.

Oh, and one more thing! Next week, USENIX URES14 West will start on Monday 10nov2014 in Seattle. If you are in the area, or can get there for Monday, you should attend! And make sure to see Kmoir's presentation "Scaling Capacity While Saving Cash" – if you follow her blog, you know you can expect it to be well worth attending.

[Updated to include links to usenix recordings. joduinn 08nov2014]
Posted almost 2 years ago by sole
That is the title of the talk I gave yesterday at Full Frontal in Brighton. The video is not out yet, but here are the slides (and the source for the slides, with all the source for the examples). If you were in my Web Audio workshop in Berlin, this talk followed the same style, except I refined some points and sadly forgot a couple. I also showed the Web Audio Editor in Firefox DevTools, which I didn't in Berlin because Jordan was going to talk about it after me.

I had a little bit of a surprise at the end of the talk, when I "presented" for the first time a little project we've been working on for a while: OpenMusic. And I have quoted the word "presented" because the work has always been on GitHub in the open, so if you followed me on GitHub you might have seen all the repos popping up and wondered what the hell Sole has been doing lately. So, just in case you weren't at the conference: OpenMusic aims to be a nice collection of interoperable/reusable Web Audio modules and components. This is an idea that Angelina sort of had when they saw my audio tags talk last year, and it has been brewing in the back of our minds until a couple of months ago, when the A-HA! moment finally happened. And so I've been pulling apart components and pieces from my existing Web Audio-based code, because I realised I was doing the same thing over and over, and I wanted to do new things but I didn't want to do the same thing yet again. So, small npm-based modules it is. And a bunch of them!

I'm a bit short on time lately (and I'm being very generous with this description), so some of the modules are a bit too rushed and a tad obscure, but they should work and have some minimal documentation already, and they'll get better. Be kind while I deconstruct my hacks – or better yet, start deconstructing yours too! =) Thanks to Remy for inviting me to this ultra cool conference… and accidentally triggering the A-HA moment!
Posted almost 2 years ago by ahal
There's a good chance you've heard something about a new review tool coming to Mozilla and how it will change everything. There's an even better chance you've stumbled across one of gps' blog posts on how we use mercurial at Mozilla. With mozreview entering beta, I decided to throw out my old mq-based workflow and try to use all the latest and greatest tools. That means mercurial bookmarks, a unified mozilla-central, using mozreview and completely expunging mq from my workflow. Making all these changes at the same time was a little bit daunting, but the end result seems to be a much easier and more efficient workflow. I'm writing down the steps I took in case it helps someone else interested in making the switch. Everything in this post is either repeating the mozreview documentation or one of gps' blog posts, but I figured a step-by-step tutorial that puts all the pieces together, from someone who is also a mercurial noob, might help.

Setup Mercurial

Before starting you need to do a bit of setup. You'll need the mercurial reviewboard and firefoxtree extensions, and mercurial 3.0 or later. Luckily you can run:

$ mach mercurial-setup

and hitting 'yes' to everything should get you what you need. Make sure you at least enable the rebase extension. In my case, mercurial >= 3.0 didn't exist in my package repositories (Fedora 20), so I had to download and install it manually.

MozReview

There is also some setup required to use the mozreview tool. Follow the instructions to get started.

Tagging the Baseline

Because we enabled the firefoxtree extension, any time we pull a remote repo from hg.mozilla.org, a local tag will be created for us. So before proceeding further, make sure we have our baseline tagged:

$ hg pull https://hg.mozilla.org/mozilla-central
$ hg log -r central

Now we know where mozilla-central tip is. This is important because we'll be pulling mozilla-inbound on top later.

Create Path Aliases

Edit: Apparently the firefoxtree extension provides built-in aliases, so there's no need to do this step. The aliases follow the central, inbound, aurora convention.

Typing the url out each time is tiresome, so I recommend creating path aliases in your ~/.hgrc:

[paths]
m-c = https://hg.mozilla.org/mozilla-central
m-i = https://hg.mozilla.org/integration/mozilla-inbound
m-a = https://hg.mozilla.org/releases/mozilla-aurora
m-b = https://hg.mozilla.org/releases/mozilla-beta
m-r = https://hg.mozilla.org/releases/mozilla-release

Learning Bookmarks

It's a good idea to be at least somewhat familiar with bookmarks before starting. Reading this tutorial is a great primer on what to expect.

Start Working on a Bug

Now that we're all set up and we understand the basics of bookmarks, it's time to get started. Create a bookmark for the feature work you want to do:

$ hg bookmark my_feature

Make changes and commit as often as you want. Make sure at least one of the commits has the bug number associated with your work; this will be used by mozreview later:

... do some changes ...
$ hg commit -m "Bug 1234567 - Fix that thing that is broken"
... do more changes ...
$ hg commit -m "Only one commit message needs a bug number"

Maybe you want to pull central again and rebase your changes on top of it. No problem:

$ hg update central
$ hg pull central
$ hg rebase -b my_feature -d central

Pushing a Bookmark for Review

When you are ready for review, all you do is:

$ hg update my_feature
$ hg push review

Mercurial will automatically push the currently active bookmark to the review repository. This is equivalent (no need to update):

$ hg push -r my_feature review

At this point you should see some links being dumped to the console, one for each commit in your bookmark as well as a parent link to the overall review. Open this last link to see your review request.

At this stage the review is unpublished; you'll need to add some reviewers and publish it before anyone else can see it. Instead of explaining how to do this, I highly recommend reading the mozreview instructions carefully. I would have saved myself a lot of time if I had just paid closer attention to them. Once published, mozreview will automatically update the associated bug with the appropriate information.

Fixing Review Comments

If all went well, someone has received your review request. If you need to make some follow-up changes, it's super easy. Just activate the bookmark, make a new commit and re-push:

$ hg update my_feature
... fix review comments ...
$ hg commit -m "Address review comments"
$ hg push review

Mozreview will automatically detect which commits have been pushed to the review server and update the review accordingly. In the reviewboard UI, reviewers can see both the interdiff and the full diff by moving a commit slider around.

Pushing to Inbound

Once you've received the r+, it's time to push to mozilla-inbound. Remember that firefoxtree makes local tags when you pull from a remote repo on hg.mozilla.org, so let's do that:

$ hg update central
$ hg pull inbound
$ hg log -r inbound

Next we rebase our bookmark on top of inbound. In this case I want to use the --collapse argument to fold the review changes into the original commit:

$ hg rebase -b my_feature -d inbound --collapse

A file will open in your default editor where you can modify the commit message to whatever you want. In this case I'll just delete everything except the original commit message and add "r=". And now everything is ready! Verify you are pushing what you expect, then push:

$ hg outgoing -r my_feature inbound
$ hg push -r my_feature inbound

Pushing to Other Branches

The beauty of this system is that it is trivial to land patches on any tree you want. If I wanted to land my_feature on aurora:

$ hg pull aurora
$ hg rebase -b my_feature -d aurora
$ hg outgoing -r my_feature aurora
$ hg push -r my_feature aurora

Syncing Work Across Computers

You can use a remote clone of mozilla-central to sync bookmarks between computers. Instead of pushing with -r, push with -B. This will publish the bookmark on the remote server:

$ hg push -B my_feature <my remote mercurial server>

From another computer, you can pull the bookmark in the same way:

$ hg pull -B my_feature <my remote mercurial server>

WARNING: As of this writing, Mozilla's user repositories are publishing! This means that when you push a commit to them, they will mark the commit as public in your local clone, which means you won't be able to push it to either the review server or mozilla-inbound. If this happens, you need to run:

$ hg phase -f --draft <rev>

This is enough of a pain that I'd recommend avoiding user repositories for this purpose unless you can figure out how to make them non-publishing.

Conclusion

I'll need to play around with things a little more, but so far everything has been working exactly as advertised. Kudos to everyone involved in making this workflow possible!
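[Editor's note: for reference, the extension setup that `mach mercurial-setup` performs boils down to a few lines in ~/.hgrc along these lines. This is a minimal sketch, not the tool's exact output; in particular, the extension paths below assume version-control-tools was cloned under ~/.mozbuild, so adjust them to wherever your checkout actually lives.]

```
; Sketch of the relevant ~/.hgrc sections (paths are assumptions).
[extensions]
; rebase ships with mercurial; an empty value enables it.
rebase =
; firefoxtree and reviewboard come from the version-control-tools repo.
firefoxtree = ~/.mozbuild/version-control-tools/hgext/firefoxtree
reviewboard = ~/.mozbuild/version-control-tools/hgext/reviewboard/client.py
```

With this in place, `hg help firefoxtree` should confirm the extension loaded, and the tagging and push steps above work as described.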
Posted almost 2 years ago by Jim Chen
We use the adb logcat functionality a lot when developing or debugging Fennec. For example, outside of remote debugging, the quickest way to see JavaScript warnings and errors is to check the logcat, which the JS console redirects to. Sometimes, we catch a Java exception (e.g. JSONException) and log it, but otherwise ignore the exception. Unless you are actively looking at the logcat, it's easy to miss messages like these. In other cases, we simply want a way to check the logcat when away from a computer, or when a user is not familiar with adb or remote debugging.

The LogView add-on, available now on AMO, solves some of these problems. It continuously records the logcat output and monitors it. When it sees an error in the logcat, the error is displayed as a toast for visibility. You can also access the current logs through the new about:logs page. The add-on only supports Jelly Bean (4.1) and above, and only Fennec logs are included, rather than logs for all apps. Check out the source code or contribute on GitHub. Feature suggestions are also welcome! I think the next version will have the ability to filter logs in about:logs. It will also allow you to copy logs to the clipboard and/or post logs as a pastebin link.
Posted almost 2 years ago by smartin
While October 24-26 marked the fifth official MozFest celebration, it was an exhilarating first for the newly formed Policy & Advocacy track. Before we wrap up the event, the Policy & Advocacy Wranglers want to share our thoughts and observations on it. This year, we broadened our focus from 2013's Privacy track to involve the entire Policy & Advocacy community, celebrating the Web We Want and highlighting the global movement to protect the free and open web.

What We Planned

Our track featured more than 20 sessions spanning digital citizenship, kids' safety, net neutrality, privacy, security, and anti-surveillance. The advocacy sessions shared the secrets of successful campaigns, the tools of the trade, and how to use trouble to your advantage. One session invited people to conceptualize a new Internet Alert System. The track also featured talks about current events and issues, including the surveillance ecosystem, net neutrality, and Do Not Track. Those looking to use or gain technical skills had the opportunity to join four consecutive hackathons – ranging from creating mesh networks to creating data visualizations – and a 'Humane Cryptoparty', which emphasized a human-centered approach to privacy tools and offered practical advice and guides for self-hosting email.

Another unique session was our Privacy Learning Lab. The Learning Lab was an experiment to attract those who might want to learn about privacy in smaller, less intimidating chunks. Participants could join at any time and move through each of five tables, covering topics as diverse as location privacy, the Clean Data Movement, metadata, using Webmaker tools, and an eye-catching privacy game called OffGrid. Several of our Learning Lab participants also shared their ideas during Sunday night's closing party demos.

On the mainstage, we announced the Ford-Mozilla Open Web Fellows program, a new program recruiting tech leaders to work at nonprofit organizations that are protecting the open Web. The search is on for Fellows, who will have opportunities ranging from the ACLU, where the fellow will work with the team that is defending Edward Snowden, to Amnesty International, where the fellow will be at the center of human rights and the Internet, to the Open Technology Institute, where the fellow will work with the organization's M-Lab initiative and serve as a data scientist for the open Web movement. Applications for the 2015 Fellows are still open, and the deadline to apply is Wednesday, December 31, 2014.

Creating the Environment

At MozFest, the interactive feel leads with the physical environment. The Policy & Advocacy track was housed high on the 7th floor of Ravensbourne, a media campus in the heart of London. In designing the right environment for our community, we planned several interactive displays to entice people to climb those stairs and fill those elevators to come see what we were all about. Our entrance included a 'superhero photo booth', which celebrated that we are all heroes of the web. Throughout the festival, people dressed up in superhero costumes, took selfies, and tweeted them to their networks with #WebWeWant.

Continuing into our space, two thought-provoking walls invited interaction. At the colorful "Web We Want" 'chalkboard' (inspired by Candy Chang's iconic work), anyone could grab a chalkboard pen to express their thoughts about the web – a big hit with participants and videographers alike. Colorful responses ranged from "built by people, fun and open!" to "decentralized", "private", "empowering", "an explosion of creativity," and so much more. Another wall, based on a recent cross-cultural study on trust, invited people to write their personal definitions of transparency and privacy.

On a central kiosk, we just may have hosted the first-ever offline Reddit session (not intentionally, but when the Internet connection unexpectedly glitched, Reddit quickly adapted with an innovative offline AMA). Using colorful post-it notes, participants expressed a set of principles and values important to the open Web.

What We Learned

As this was the first year for the Policy & Advocacy track, we were in prototyping mode: testing what works and what doesn't, and optimizing on the fly. We learned so many lessons that we'll chew on for next year, but we'd also like to share a few here. We were incredibly inspired by what an AMAZING Policy & Advocacy community exists, and by the immeasurable value of face-to-face interaction for sharing ideas and solving problems together. For us Wranglers, the most difficult part of the planning process was having so many amazing proposals to choose from and not being able to include them all. Indeed, we may have created too many sessions, not giving people enough time to explore the rest of MozFest. Another thing we learned was the need to document what was happening in the sessions. We heard several requests for video (perhaps even Firefox phone) recordings, to enable people who couldn't attend the festival to participate, and to mitigate schedule overload for the people there. We'll pitch that idea to the organizers next year, along with additional Learning Labs as a way to share more ideas in smaller chunks.

All in all, this was a great MozFest and a terrific beginning for the Policy & Advocacy track. We'd love to hear your feedback – email us at advocacy@mozilla.com. We look forward to putting what we learned into practice next year.

Your Friendly Policy & Advocacy Space Wranglers,
Dave Steer, Alina Hua and Stacy Martin