Posted 8 days ago by Michael Henretty
Every year Mozilla hosts DinoTank, an internal pitch platform, and this year instead of pitching ideas we focused on pitching problems. To give each DinoTank winner the best possible start, we set up a design sprint for each one. This is the first installment of that series of DinoTank sprints.

The Problem

I work on the Internet of Things at Mozilla, but I am apprehensive about bringing most smart home products into my house. I don't want a microphone that is always listening to me. I don't want an internet-connected camera that could be used to spy on me. I don't want my thermostat, door locks, and light bulbs all collecting unknown amounts of information about my daily behavior. Suffice it to say that I have a vague sense of dread about all the new types of data being collected and transmitted from inside my home. So I pitched this problem to the judges at DinoTank. It turns out they saw this problem as important and relevant to Mozilla's mission, and so, to explore it further, we ran a 5-day product design sprint with the help of several field experts.

Brainstorming

A team of 8 staff members gathered in San Francisco for a week of problem refinement, insight gathering, brainstorming, prototyping, and user testing. Among us we had experts in Design Thinking, user research, marketing, business development, engineering, product, user experience, and design. The diversity of skill sets and backgrounds allowed us to approach the problem from multiple angles, and through our discussion several important questions arose, which we would seek to answer by building prototypes and putting them in front of potential consumers.

The Solution

After 3 days of exploring the problem, brainstorming ideas and then narrowing them down, we settled on a single product solution: a small physical device that plugs into the home's router to monitor the network activity of local smart devices. It would have a control panel that could be accessed from a web browser. It would allow the user to keep up to date through periodic status emails, and only in critical situations would it notify the user's phone with needed actions. We mocked up an end-to-end experience using clickable and paper prototypes, and put it in front of privacy-aware IoT homeowners.

What We Learned

Surprisingly, our test users saw the product as more of an all-inclusive internet security system than an IoT-only solution. One of our solutions focused more on "data protection", and here we clearly learned that there is a sense of resignation towards large-scale data collection, with comments like "Google already has all my data anyway." On the positive side, the mobile notifications really resonated with users. And interestingly, though not surprisingly, people became much more interested in the privacy aspects of our mock-ups when their children were involved in the conversation.

Next Steps

The big question we were left with was: is this a latent but growing problem, or was this never a problem at all? To answer this, we will tweak our prototypes to test different market positioning of the product, as well as explore potential audiences that have a larger interest in data privacy.

My Reflections

Now, if I had done this project without DinoTank's help, I probably would have started by grabbing a Raspberry Pi and writing some code. Instead, I learned how to take a step back and start by focusing on people: evaluating a user problem, sussing out a potential solution, and testing its usefulness in front of users. So regardless of what direction we now take, we didn't waste any time, because we learned about a problem and the people we could reach.

If you're looking for more details about my design sprint, you can find the full results in our report. If you would like to learn more about the other DinoTank design sprints, check out the tumblr. And if you are interested in learning more about the methodologies we are using, check out our Open Innovation Toolkit.

People I'd like to thank: Katharina Borchert, Bertrand Neveux, Christopher Arnold, Susan Chen, Liz Hunt, Francis Djabri, David Bialer, Kunal Agarwal, Fabrice Desré, Annelise Shonnard, Janis Greenspan, Jeremy Merle and Rina Jensen.

The Problem with Privacy in IoT was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.
Posted 8 days ago by Myk Melez
Recently the Node.js Foundation announced that Mozilla is joining forces with IBM, Intel, Microsoft, and NodeSource on the Node.js API. So what's Mozilla doing with Node? Actually, a few things… You may already know about SpiderNode, a Node.js implementation on SpiderMonkey, which Ehsan Akhgari announced in April. Ehsan, Trevor Saunders, Brendan Dahl, and other contributors have since made a bunch of progress on it, and it now builds successfully on Mac and Linux and runs some Node.js programs. Brendan additionally did the heavy lifting to build SpiderNode as a static library, link it with Positron, and integrate it with Positron's main process, improving that framework's support for running Electron apps. He's now looking at opportunities to expose SpiderNode to WebExtensions and to chrome code in Firefox. Meanwhile, I've been analyzing the Node.js API being developed by the API Working Group, and I've also been considering opportunities to productize SpiderNode for Node developers who want to use emerging JavaScript features in SpiderMonkey, such as WebAssembly and Shared Memory. If you're a WebExtension developer or Firefox engineer, would you use Node APIs if they were available to you? If you're a Node programmer, would you use a Node implementation running on SpiderMonkey? And if so, would you require Node.js Addons (i.e. native modules) to do so?
Posted 8 days ago by Air Mozilla
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
Posted 9 days ago by Daniel.Pocock
There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes. More and more people are coming to realize that there is a lack of security updates for these devices, and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately include spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks. On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too. For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes, and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.

Which boxes to start with? There are various considerations when going down this path:

- Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example when upgrading a broadband connection to Gigabit, or when an older NAS gets a noisy fan or struggles with SSD performance; in these cases the decision about what to buy can be limited to those devices that are optimal for replacing the OS.
- How will the device be supported? Can other non-technical users do troubleshooting?
- If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next-day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
- Is a completely silent/fanless solution necessary?
- Is it possible to completely avoid embedded microcode and firmware?
- How many other free software developers are using the same box, or will you be first?

To discuss these options I recently started threads on the debian-user mailing list about routers and home NAS boxes. A range of interesting suggestions has already appeared; it would be great to see any other ideas that people have about these choices.
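To illustrate the "same skills everywhere" point, Shorewall's zone-based configuration lives in a few plain-text files rather than a per-vendor web UI. A minimal, hypothetical sketch (file paths are standard, but the zone names and policies here are examples, not taken from any box described above):

```
# /etc/shorewall/zones -- declare the zones
# ZONE  TYPE
fw      firewall   # the router itself
net     ipv4       # the internet-facing side
loc     ipv4       # the local LAN

# /etc/shorewall/policy -- default policies between zones
# SOURCE  DEST  POLICY  LOG-LEVEL
loc       net   ACCEPT
net       all   DROP    info
all       all   REJECT  info
```

The same two files (plus interfaces and rules) work identically on a router, a NAS or a laptop, which is exactly the portability argument made above.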
Posted 9 days ago by Giorgos
While working on migrating support.mozilla.org away from Kitsune (which is a great community support platform that needs love; remember that, internet), I needed to convert about 4M database rows from a custom, Markdown-inspired format to HTML. The challenge is that it needs to happen as fast as possible, so we can dump the database, convert the data and load the database onto the new platform with the minimum possible time between the first and the last step. I started a fresh MySQL container and started hacking:

Load the database dump

Kitsune's database weighs about 35GiB, so creating and loading the dump is a lengthy procedure. I used some tricks taken from different places, most notably:

- Set innodb_flush_log_at_trx_commit = 2 for more speed. This should not be used in production as it may break ACID compliance, but for my use case it's fine.
- Set innodb_write_io_threads = 16.
- Set innodb_buffer_pool_size = 16G and innodb_log_file_size = 4G. I read that innodb_log_file_size is recommended to be 1/4th of innodb_buffer_pool_size, and I set the latter based on my available memory.

Loading the database dump takes about 60 minutes. I'm pretty sure there's room for improvement there.

Extra tip: when dumping such huge databases from production websites, make sure to use a replica host and mysqldump's --single-transaction flag to avoid locking the database.

Create a place to store the processed data

Kitsune being a Django project, I created extra fields named content_html on the models with Markdown content, generated the migrations and ran them against the db.

Process the data

An AWS m4.2xl gives me 8 cores and 32GiB of memory, of which I allocated 16 to MySQL earlier. I started with a basic single-core solution:

```python
for question in Question.objects.all():
    question.content_html = parser.wiki_to_html(question.content)
    question.save()
```

which obviously does the job, but it's super slow.
Transactions take a fair amount of time; what if we could bundle multiple saves into one transaction?

```python
def chunks(count, nn=500):
    """Yield successive (low, high) pairs of nn-sized chunks up to count."""
    offset = 0
    while offset < count:
        yield (offset, min(offset + nn, count))
        offset += nn

for low, high in chunks(Question.objects.count()):
    with transaction.atomic():
        for question in Question.objects.all()[low:high]:
            question.content_html = parser.wiki_to_html(question.content)
            question.save()
```

This is getting better. Increasing the chunk size to 20000 items, at the cost of more RAM, produces faster results; anything above that value seems to take about the same time to complete. I tried PyPy and didn't get better results, so I defaulted to CPython.

Let's add some more cores into the mix using Python's multiprocessing library. I created a Pool with 7 processes (always leave one core outside the Pool so the system remains responsive) and used apply_async to feed the chunks to the Pool:

```python
import multiprocessing as mp
from time import sleep

results = []
it = Question.objects.all()
number_of_rows = it.count()
pool = mp.Pool(processes=7)
for chunk in chunks(number_of_rows):
    pool.apply_async(process_chunk, (chunk,), callback=results.append)

sum_results = 0
while sum_results < number_of_rows:
    print 'Progress: {}/{}'.format(sum_results, number_of_rows)
    sum_results = sum(results)
    sleep(1)
```

The function process_chunk will process, save and return the number of rows processed. apply_async then appends this number to results, which the while loop uses to give me an overview of what's happening while I'm waiting. So far so good; this is significantly faster.

It took some tries before getting this right. Two things to remember when dealing with multiprocessing and Django:

- ./manage.py shell won't work. I don't know why, but I went ahead and wrote a standalone Python script, imported django and ran django.setup().
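Stripped of Django, the chunk/Pool/callback pattern above can be sketched as a self-contained Python 3 script. Here process_chunk is a stand-in that just counts its rows (in the real script it parses and saves a queryset slice), and the row count is a made-up number:

```python
import multiprocessing as mp

def chunks(count, nn=500):
    """Yield successive (low, high) offset pairs covering count rows."""
    offset = 0
    while offset < count:
        yield (offset, min(offset + nn, count))
        offset += nn

def process_chunk(chunk):
    """Stand-in worker: the real one converts and saves a queryset slice."""
    low, high = chunk
    return high - low  # number of rows "processed"

if __name__ == '__main__':
    number_of_rows = 4321  # hypothetical row count
    results = []
    pool = mp.Pool(processes=4)
    for chunk in chunks(number_of_rows):
        pool.apply_async(process_chunk, (chunk,), callback=results.append)
    pool.close()
    pool.join()  # the real script polls sum(results) in a loop instead
    print(sum(results))  # -> 4321, every row accounted for exactly once
```

Note the loop condition in chunks(): stopping as soon as offset reaches count avoids yielding an empty trailing chunk.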
- When a process forks, Django's database connection, which was already created by that time, needs to be closed and re-created for every process. The first thing process_chunk does is db.connections.close_all(); Django will take care of re-creating connections when needed.

OK, I'm good to hit the road (I thought), and I launched the process with all the rows that needed parsing. As time went by, I watched memory usage increase dramatically, until eventually the kernel would kill my process to free up memory. It seems the queries were taking too much memory. I set the Pool to shut down and start a new process on every chunk with maxtasksperchild=1, which helped a bit, but the farther into the run, the higher the memory usage. I tried to debug the issue with different Django queries and profiling (good luck with that on a multiprocess program) and failed. I needed to figure out a solution before it was too late, so back to the drawing board.

Process the data, take two

I read an interesting blog post the other day named Taco Bell Programming, where Ted claims that many times you can achieve the desired functionality just by rearranging the Unix tool set, much like Taco Bell produces its menu by rearranging basic ingredients. What you win with Taco Bell Programming is battle-tested tools and thorough documentation, which should save you time on actual coding and on debugging problems that are already solved. I took a step back and re-thought my problem. The single-core solution was working just fine and had no memory issues. What if I could find a program to parallelize multiple runs? That tool (obviously) exists: it's GNU Parallel. In the true spirit of other GNU tools, Parallel has a gazillion command-line arguments and can do a ton of things related to parallelizing the run of a list of commands.
I'll mention just the features most important to me at the moment:

- Read a list of commands from the command line
- Show progress and provide an ETA
- Limit the run to a given number of cores
- Retry failed jobs, resume runs and do overall bookkeeping
- Send jobs to other machines (I wish I had the time to test that; amazing)

Prepare the input to Parallel

I reverted to the original one-core Python script and refactored it a bit so I can call it using python -c. I also removed the chunk-generation code, since I'll do that elsewhere:

```python
def parse_to_html(it, from_field, to_field):
    with transaction.atomic():
        for p in it:
            setattr(p, to_field, parser.wiki_to_html(getattr(p, from_field)))
            p.save()
```

Now to process all questions I can call this thing using:

$ echo "import wikitohtml; it = wikitohtml.Question.objects.all(); wikitohtml.parse_to_html(it, 'content', 'content_html')" | python -

Then I wrote a small Python script to generate the chunks and print out the commands to be run later by Parallel:

```python
STEP = 10000

CMD = '''echo "import wikitohtml; it = wikitohtml.Question.objects.filter(id__gte={from_value}, id__lt={to_value}); wikitohtml.parse_to_html(it, 'content', 'content_html')" | python - > /dev/null'''

for i in range(0, 1200000, STEP):
    print CMD.format(from_value=i, to_value=i + STEP)
```

I wouldn't be surprised if Parallel can do the chunking itself, but in this case it's easier for me to fine-tune it using Python. Now I can process all questions in parallel using:

$ python generate-cmds.py | parallel -j 7 --eta --joblog joblog

So everything now works in parallel and the memory leak is gone! But I'm not done yet.

Deadlocks

I left the script running for half an hour and then started seeing MySQL abort transactions because they failed to grab a lock. OK, that should be an easy fix: increase the lock wait time with SET innodb_lock_wait_timeout = 5000; (up from 50). Later I added --retries 3 to Parallel to make sure that anything that failed would get retried.
That actually made things worse, as it introduced everyone's favorite issue in parallel programming: deadlocks. I reverted the MySQL change and looked deeper. Being unfamiliar with Kitsune's code, I was not aware that the models' save() methods do a number of different things, including saving other objects as well; e.g. Answer.save() also calls Question.save(). Since I'm only processing one field and saving the result into another field, unrelated to everything else, all the magic that happens in save() can be skipped. Besides dealing with the deadlock, this can actually get us a speed increase for free. I refactored the Python code to use Django's update(), which hits the database directly and does not go through save():

```python
def parse_to_html(it, from_field, to_field, id_field='id'):
    with transaction.atomic():
        for p in it:
            it.filter(**{id_field: getattr(p, id_field)}).update(
                **{to_field: parser.wiki_to_html(getattr(p, from_field))})
```

Everything works, and indeed update() sped things up a lot and solved the deadlock issue. The cores are 100% utilized, which means that throwing more CPU power at the problem would buy more speed. Processing all 4 million rows now takes about 30 minutes, down from many, many hours. Magic!
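The save()-versus-direct-UPDATE trade-off can be illustrated without Django or MySQL. A rough Python 3 sketch using sqlite3, where wiki_to_html is a trivial stand-in for the real parser and the table is a hypothetical miniature of the questions table:

```python
import sqlite3

def wiki_to_html(text):
    # stand-in for the real parser: just wrap the text in a paragraph
    return '<p>{}</p>'.format(text)

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE question '
             '(id INTEGER PRIMARY KEY, content TEXT, content_html TEXT)')
conn.executemany('INSERT INTO question (content) VALUES (?)',
                 [('hello',), ('world',)])

# One transaction, direct per-row UPDATEs: no ORM save() side effects,
# and only the content_html column is ever touched.
with conn:
    for row_id, content in conn.execute(
            'SELECT id, content FROM question').fetchall():
        conn.execute('UPDATE question SET content_html = ? WHERE id = ?',
                     (wiki_to_html(content), row_id))

print(conn.execute('SELECT content_html FROM question ORDER BY id').fetchall())
# -> [('<p>hello</p>',), ('<p>world</p>',)]
```

Because the UPDATE names exactly one column, there is no chance of it cascading into writes on other rows or tables, which is what made the concurrent runs deadlock-free.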
Posted 9 days ago by glazou
In December 1998, our comrade Bert Bos released a W3C Note: List of suggested extensions to CSS. I thought it could be interesting to see where we stand 18 years later...

| Id | Suggestion | active WD | CR, PR or REC | Comment |
|----|------------|-----------|---------------|---------|
| 1 | Columns | ✅ | ✅ | |
| 2 | Swash letters and other glyph substitutions | ✅ | ✅ | |
| 3 | Running headers and footers | ✅ | ❌ | |
| 4 | Cross-references | ✅ | ❌ | |
| 5 | Vertical text | ✅ | ✅ | |
| 6 | Ruby | ✅ | ✅ | |
| 7 | Diagonal text | ✅ | ❌ | through Transforms |
| 7 | Text along a path | ❌ | ❌ | |
| 8 | Style properties for embedded 2D graphics | ➡️ | ➡️ | through filters |
| 9 | Hyphenation control | ✅ | ❌ | |
| 10 | Image filters | ✅ | ❌ | |
| 11 | Rendering objects for forms | ✅ | ✅ | |
| 12 | :target | ✅ | ✅ | |
| 13 | Floating boxes to top & bottom of page | ❌ | ❌ | |
| 14 | Footnotes | ✅ | ❌ | |
| 15 | Tooltips | ❌ | ❌ | possible with existing properties |
| 16 | Maths | ❌ | ❌ | there was no proposal, only an open question |
| 17 | Folding lists | ❌ | ❌ | possible with existing properties |
| 18 | Page-transition effects | ❌ | ❌ | |
| 19 | Timed styles | ✅ | ❌ | Transitions & Animations |
| 20 | Leaders | ✅ | ❌ | |
| 21 | Smart tabs | ❌ | ❌ | not sure it belongs to CSS |
| 22 | Spreadsheet functions | ❌ | ❌ | does not belong to CSS |
| 23 | Non-rectangular wrap-around | ✅ | ❌ | Exclusions, Shapes |
| 24 | Gradients | ✅ | ✅ | Backgrounds & Borders |
| 25 | Textures/images instead of fg colors | ❌ | ❌ | |
| 26 | Transparency | ✅ | ✅ | opacity |
| 27 | Expressions | partly | ✅ | calc() |
| 28 | Symbolic constants | ✅ | ✅ | Variables |
| 29 | Mixed mode rendering | ❌ | ❌ | |
| 30 | Grids for TTY | ❌ | ❌ | |
| 31 | Co-dependencies between rules | ✅ | ✅ | Conditional Rules |
| 32 | High-level constraints | ❌ | ❌ | |
| 33 | Float: gutter-side/fore-edge-side | ❌ | ❌ | |
| 34 | Icons & minimization | ❌ | ❌ | |
| 35 | Namespaces | ✅ | ✅ | |
| 36 | Braille | ❌ | ❌ | |
| 37 | Numbered floats | ✅ | ❌ | GCPM |
| 38 | Visual top/bottom margins | ❌ | ❌ | |
| 39 | TOCs, tables of figures, etc. | ❌ | ❌ | |
| 40 | Indexes | ❌ | ❌ | |
| 41 | Pseudo-element for first n lines | ❌ | ❌ | |
| 42 | :first-word | ❌ | ❌ | |
| 43 | Corners | ✅ | ✅ | border-radius and border-image |
| 44 | Local and external anchors | ✅ | ❌ | Selectors level 4 |
| 45 | Access to attribute values | ➡️ | ❌ | access to arbitrary attributes hosted by arbitrary elements through a selector inside attr() was considered and dropped |
| 46 | Linked flows | ✅ | ❌ | Regions |
| 47 | User states | ❌ | ❌ | |
| 48 | List numberings | ✅ | ✅ | Counter Styles |
| 49 | Subtractive text-decoration | ❌ | ❌ | |
| 50 | Styles for map/area | ➡️ | ➡️ | never discussed AFAIK |
| 51 | Transliteration | ➡️ | ➡️ | discussed and dropped |
| 52 | Regexps in selectors | ❌ | ❌ | |
| 53 | Last-of... selectors | ✅ | ✅ | |
| 54 | Control over progressive rendering | ❌ | ❌ | |
| 55 | Inline-blocks | ✅ | ✅ | |
| 56 | Non-breaking inlines | ✅ | ✅ | white-space applies to all elements since CSS 2.0... |
| 57 | Word-spacing: none | ❌ | ❌ | |
| 58 | HSV or HSL colors | ✅ | ✅ | |
| 59 | Standardize X colors | ✅ | ✅ | |
| 60 | Copy-fitting/auto-sizing/auto-spacing | ✅ | ✅ | Flexbox |
| 61 | @page inside @media | ❌ | ❌ | |
| 62 | Color profiles | ✅ | ❌ | dropped from Colors level 3 but in level 4 |
| 63 | Underline styles | ✅ | ✅ | |
| 64 | BECSS | ➡️ | ➡️ | BECSS, dropped |
| 65 | // comments | ❌ | ❌ | |
| 66 | Replaced elements w/o intrinsic size | ✅ | ✅ | object-fit |
| 67 | Fitting replaced elements | ✅ | ✅ | object-fit |
Posted 9 days ago by Christopher Finke
Last November, I wrote an iPhone app called Reenact that helps you reenact photos. It worked great on iOS 9, but when iOS 10 came out in July, Reenact would crash as soon as you tried to select a photo. It turns out that in iOS 10, if you don't describe exactly why your app needs access to the user's photos, Apple will (intentionally) crash your app.

For a casual developer who doesn't follow every iOS changelog, this was shocking. Apple essentially broke every app that accesses photos (or 15 other restricted resources) if they weren't updated specifically for iOS 10 with this previously optional feature… and they didn't notify the developers! They have the contact information for the developer of every app, and they know what permissions every app has requested. When you make a breaking change that large, the onus is on you to proactively send some emails.

I added the required description, and when I tried to build the app, I ran into another surprise. The programming language I used when writing Reenact was version 2 of Apple's Swift, which had just been released two months prior. Now, one year later, Swift 2 is apparently a "legacy language version," and Reenact wouldn't even build without adding a setting that says, "Yes, I understand that I'm using an ancient 1-year-old programming language, and I'm ok with that."

After I got it to build, I spent another three evenings working through all of the new warnings and errors that the untouched and previously functional codebase had somehow started generating, but in the end, I didn't do the right combination of head-patting and tummy-rubbing, so I gave up. I'm not going to pay $99/year for an Apple Developer Program membership just to spend days debugging issues in an app I'm giving away, all because Apple isn't passionate about backwards-compatibility.

So today, one year from the day I uploaded version 1.0 to the App Store (and serendipitously, on the same day that my Developer Program membership expires), I'm abandoning Reenact on iOS.

…but I'm not abandoning Reenact. Web browsers on both desktop and mobile provide all of the functionality needed to run Reenact as a Web app — no app store needed — so I spent a few evenings polishing the code from the original Firefox OS version of Reenact, adding all of the features I put in the iOS and Android versions. If your browser supports camera sharing, you can now use Reenact just by visiting app.reenact.me. It runs great in Firefox, Chrome, Opera, and Amazon's Silk browser. iOS users are still out of luck, because Safari supports precisely 0% of the necessary features. (Because if web pages can do everything apps can do, who will write apps?)

One of these things just doesn't belong.

In summary: Reenact for iOS is dead. Reenact for the Web is alive. Both are open-source. Don't trust anyone over 30. Leave a comment below.
Posted 9 days ago by Daniel Veditz
At roughly 1:30pm Pacific time on November 30th, Mozilla released an update to Firefox containing a fix for a vulnerability reported as being actively used to deanonymize Tor Browser users. Existing copies of Firefox should update automatically over the next 24 hours; users may also download the updated version manually.

Early on Tuesday, November 29th, Mozilla was provided with code for an exploit using a previously unknown vulnerability in Firefox. The exploit was later posted to a public Tor Project mailing list by another individual. The exploit took advantage of a bug in Firefox to allow the attacker to execute arbitrary code on the targeted system by having the victim load a web page containing malicious JavaScript and SVG code. It used this capability to collect the IP and MAC address of the targeted system and report them back to a central server. While the payload of the exploit would only work on Windows, the vulnerability exists on Mac OS and Linux as well. Further details about the vulnerability and our fix will be released according to our disclosure policy.

The exploit in this case works in essentially the same way as the "network investigative technique" used by the FBI to deanonymize Tor users (as the FBI described it in an affidavit). This similarity has led to speculation that this exploit was created by the FBI or another law enforcement agency. As of now, we do not know whether this is the case. If this exploit was in fact developed and deployed by a government agency, the fact that it has been published and can now be used by anyone to attack Firefox users is a clear demonstration of how supposedly limited government hacking can become a threat to the broader Web.
Posted 9 days ago by Robert Helmer
While working on tracking down some tricky UI bugs in about:addons, I wondered what it would look like to rewrite it using web technologies. I've been meaning to learn React (which the Firefox devtools use), and it seems like a good choice for this kind of application:

1. Easy to create reusable components. XBL is used for this in the current about:addons, but it is a non-standard Mozilla-specific technology that we want to move away from, along with XUL.
2. Manage state transitions, undo, etc. There is quite a bit of code in the current about:addons implementation to deal with undoing various actions. React makes it pretty easy to track this sort of thing through libraries like Redux.

To explore this a bit, I made a simple React version of about:addons. It's actually installable as a Firefox extension which overrides about:addons. Note that it's just a proof-of-concept and almost certainly buggy; the way it hooks into the existing sidebar in about:addons needs some work, for instance. I'm also a React newb, so I'm pretty sure I'm doing it wrong. Also, as of this writing I've only implemented #1 above. I am finding React pretty easy to work with, and I suspect it'll take far less code to write something equivalent to the current implementation.
Posted 9 days ago by Robert Helmer
I've been playing with Rust lately, and since I mostly work on the Add-on Manager these days, I thought I'd combine the two into a toy Rust version. The Add-on Manager in Firefox is written in JavaScript. It uses a lot of ES6 features and has "chrome" (as opposed to "content") privileges, which means that it can access internal Firefox-only APIs to do things like download and install extensions, themes, and plugins. One of the core components is a class named AddonInstall, which implements a state machine to download, verify, and install add-ons. The main purpose of this toy Rust project so far has been to model the design and see what it looks like. So far it's mostly an exercise in how awesome Enum is compared to the JS equivalent (int constants), and how nice match is versus switch statements. It's possible to compile the Rust app to a native binary, or alternatively to asm.js/wasm, so one thing I'd like to try soon is loading a wasm version of this Rust app inside a Firefox JSM (which is the type of JS module used for internal Firefox code). There's a webplatform crate on crates.io that allows for easy DOM access; it'd be interesting to see if this works for Firefox chrome code too.