
News

Posted 8 days ago by Tarek Ziade
I don't know why, but I am a bit obsessed with load testing tools. I've tried dozens, and I've built or been involved in the creation of over ten of them in the past 15 years. I am talking about load testing HTTP services with a simple HTTP client. Three years ago I built Loads at Mozilla, which is still being used to load test our services - and it's still evolving. It was based on a few fundamental principles:

- A load test is an integration test that's executed many times in parallel against a server.
- Ideally, load tests should be built with vanilla Python and a simple HTTP client. There's no mandatory reason to rely on a Load Test class or things like that - the lighter the load test framework is, the better.
- You should be able to run a load test from your laptop without having to deploy a complicated stack, like a load testing server and clients, etc. Because when you start building a load test against an API, step #1 is to start with small loads from one box - not going nuclear from AWS on day 1.
- A massively distributed load test should not happen and be driven from your laptop. Your load test is one brick, and orchestrating a distributed load test is a problem that should be entirely solved by another piece of software that runs in the cloud on its own.

Since Loads was built, two major things happened in our little technical world: Docker is everywhere, and Python 3.5 & asyncio arrived - yay! Python 3.5+ and asyncio mean that, unlike my previous attempts at building a tool that would generate as many concurrent requests as possible, I don't have to worry anymore about key principle #2: we can do async code in vanilla Python now, and I don't have to force ad-hoc async frameworks on people. Docker means that for running a distributed test, a load test that runs from one box can be embedded inside a Docker image, and then a tool can orchestrate a distributed test that runs and manages those Docker images in the cloud. That's what we've built with Loads: "give me a Docker image of something that performs a small load test against a server, and I shall run it on hundreds of boxes." This Docker-based design was a very elegant evolution of Loads, thanks to Ben Bangert who had that idea. Asking people to embed their load test inside a Docker image also means that they can use whatever tool they want, as long as it performs HTTP calls on the server to stress, and optionally sends some info via statsd. But proposing a helpful, standard tool to build the load test script that will be embedded in Docker is still something we want to offer. And frankly, 90% of load tests happen from a single box; going nuclear doesn't happen that often.

Introducing Molotov

Molotov is a new tool I've been working on for the past few months - it's based on asyncio and aiohttp and tries to be as light as possible. Molotov scripts are coroutines that perform HTTP calls, and spawning a lot of them in a few processes can generate a fair amount of load from a single box. Thanks to Richard, Chris, Matthew and others - my Mozilla QA teammates - I had some great feedback while creating the tool, and I think it's almost ready to be used by more folks. It still needs to mature and the docs need to improve, but the design is settled and it works well already. I've pushed a release to PyPI and plan to push a first stable release this month once the test coverage is looking better and the docs are polished. But I think it's ready for a bit of community feedback.
That's why I am blogging about it today - if you want to try it or help build it, here are a few links:

- Docs: http://molotov.readthedocs.io
- Code: https://github.com/loads/molotov/

Try it with the console mode (-c), see if it fits your brain, and let us know what you think.
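To give a taste of the design described above, here is a minimal sketch of a Molotov script, following the coroutine-plus-decorator style in the project's docs; the target URL, endpoint and weight are illustrative assumptions, not taken from the post:

```python
# Minimal Molotov-style scenario: one coroutine per scenario, each
# receiving an aiohttp-managed session from the runner.
# The URL and weight below are illustrative placeholders.
from molotov import scenario

API = "http://localhost:8080"  # assumed local service under test

@scenario(weight=100)
async def hit_index(session):
    # One "virtual user" iteration: a plain HTTP GET plus an assertion.
    async with session.get(API + "/") as resp:
        assert resp.status == 200
```

Saved as, say, loadtest.py, this single file is the whole load test; the console mode the post mentions would then amount to something like `molotov -c loadtest.py`, and the same file can later be baked into a Docker image for the distributed case.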
Posted 9 days ago by Chris Heilmann
Last month I was very lucky to be invited to give the opening keynote of a brand new conference that can utterly go places: ScriptConf in Linz, Austria. What I liked most about the event was an utter lack of drama. The organisation for us presenters was just enough to let us relax and concentrate on our jobs rather than juggling ticket bookings. The diversity of people and subjects on stage was admirable. The catering and the location did the job, and there was not much waste left over. I’ve said it before: a great conference stands and falls with the passion of the organisers. And the people behind ScriptConf were endearingly scared and amazed by their own success. There were no outrageous demands, no problems that came up at the last moment, and above all there was a refreshing feeling of excitement and a massive drive to prove themselves as a new conference in a country where JavaScript conferences aren’t a dime a dozen. ScriptConf grew out of five different meetups in Austria. It had about 500 extremely well behaved and excited attendees. The line-up of the conference was diverse in terms of topics and people, and it was a great “value for money” show. As a presenter you got spoiled: the hotel was a 5 minute walk from the event and 15 minutes from the main train station. We had a dinner the day before and a tour of the local Ars Electronica Center before the event. It is important to point out that the schedule was slightly different: the event started at noon and ended at “whenever” (we went for “Leberkäse” at 3am, I seem to recall). Talks were 40 minutes, and there were short breaks between every two talks. As the opening keynote presenter I loved this. It is tough to give a rousing talk at 8am whilst people file slowly into the building and you’ve still got wet hair from the shower. You also have a massive lull in the afternoon when people get tired. It is a totally different thing to start well-rested at noon with an audience who had enough time to arrive and settle in. Presenters were from all around the world, from companies like Slack, NPM, Ghost, Google and Serverless.

The presentations

Here’s a quick roundup of who spoke on what:

- I gave the opening keynote, talking about how JavaScript is not a single thing but a full development environment now, and what that means for the community. I pointed out the importance of understanding different ways to use JavaScript and how they yield different “best practices”. I also made a call to arms to stop senseless arguing and following principles like “build more in shorter time” and “move fast and break things”, as they don’t help us as a market. I pointed out how my employer works with its engineers as an example of how you can innovate but also have a social life. It was also an invitation to take part in open source and bring more human, understanding communication to our pull requests.
- Raquel Vélez of NPM told the history of NPM and explained in detail how they built the web site and the NPM search.
- Nik Graf of Serverless covered the serverless architecture of AWS Lambda.
- Hannah Wolfe of Ghost showed how they moved their Kickstarter-funded NodeJS-based open blogging system from nothing to a ten-person company and their 1.0 release, explaining the decisions they took and the mistakes they made. She also announced their open journalism fund, “Ghost for journalism”.
- Felix Rieseberg of Slack is an ex-Microsoft engineer, and his talk was stunning. His slides about building apps with Electron are here and the demo code is on GitHub.
His presentation was a live demo of using Electron to build a clone of Visual Studio Code by embedding Monaco into an Electron web view. He coded it all live using Visual Studio Code, doing a great job explaining the benefits of the console in the editor and the debugging capabilities. I don’t like live code, but this was enjoyable and easy to follow. He also did an amazing job explaining that Electron is not there to embed a web site into an app frame, but to allow you to access native functionality from JavaScript. He also had lots of great insight into how Slack was built using Electron. A great video to look forward to.
- Franziska Hinkelmann of the Google V8 team gave a very detailed talk about performance debugging of V8, explaining what the errors shown in the Chrome profiler mean. It was an incredibly deep-tech talk, but insightful. Franziska made sure to point out that optimising your code for the performance tricks of one JavaScript engine is not a good idea, and gave ChakraCore several shout-outs.
- Mathieu Henri, from Microsoft Oslo and of JS1K fame, rounded off the conference with a mind-bending live-coding presentation, creating animations and sound with JavaScript and Canvas. He clearly got the most applause. His live coding session was a call to arms to play with technology, not care about code quality too much, and dare to be artsy. He also very much pointed out that in his day job writing TypeScript for Microsoft, this is not his mode of operation. He blogged about his session and released the code here.

This was an exemplary conference, showing how it should be done, and it reminded me very much of the great old conferences like Fronteers, @media and the first JSConf. The organisers are humble, very much engaged, and will do more great work given the chance. I am looking forward to re-living the event by watching the videos, and can safely recommend each of the presenters for any other conference. There was a great flow and lots of helping each other out on stage and behind the scenes. It was a blast.
Posted 9 days ago by Air Mozilla
mconley livehacks on real Firefox bugs while thinking aloud.
Posted 9 days ago
A while ago I wrote about the state of server-side session resumption implementations in popular web servers using OpenSSL. Neither Apache, nor Nginx, nor HAProxy purged stale entries from the session cache or rotated session tickets automatically, potentially harming the forward secrecy of resumed TLS sessions. Enabling session resumption is an important tool for speeding up HTTPS websites, especially in a pre-HTTP/2 world where a client may have to open concurrent connections to the same host to quickly render a page. Subresource requests would ideally resume the session that, for example, a GET / HTTP/1.1 request started. Let’s take a look at what has changed in over two years, and whether configuring session resumption securely has gotten any easier. With the TLS 1.3 spec about to be finalized, I will show what the future holds and how these issues were addressed by the WG.

Did web servers react?

No, not as far as I’m aware. None of the three web servers mentioned above has taken steps to make it easier to properly configure session resumption. But to be fair, OpenSSL didn’t add any new APIs or options to help them either. All popular TLS 1.2 web servers still don’t evict cache entries when they expire, keeping them around until a client tries to resume — for performance or ease of implementation. They generate a session ticket key at startup and will never automatically rotate it, so admins have to manually reload server configs and provide new keys.

The Caddy web server

I want to seize the chance and positively highlight the Caddy web server, a relative newcomer with the advantage of not having any historical baggage, that enables and configures HTTPS by default, including automatically acquiring and renewing certificates. Version 0.8.3 introduced automatic session ticket key rotation, thereby making session tickets mostly forward secure by replacing the key every ~10 hours. Session cache entries, though, aren’t evicted until access, just as with the other web servers. But even for “traditional” web servers all is not lost. The TLS working group has known about the shortcomings of session resumption for a while and addresses them in the next version of TLS.

1-RTT handshakes by default

One of the many great things about TLS 1.3 handshakes is that most connections should take only a single round-trip to establish. The client sends one or more KeyShareEntry values with the ClientHello, and the server responds with a single KeyShareEntry for a key exchange with ephemeral keys. If the client sends no or only unsupported groups, the server will send a HelloRetryRequest message with a NamedGroup selected from the ones supported by the client, and the connection will fall back to two round-trips. That means you’re automatically covered if you enable session resumption only to reduce network latency; a normal handshake is as fast as 1-RTT resumption in TLS 1.2. If you’re worried about the computational overhead of certificate authentication and key exchange, that still might be a good reason to abbreviate handshakes.

Pre-shared keys in TLS 1.3

Session IDs and session tickets are obsolete since TLS 1.3. They’ve been replaced by a more generic PSK mechanism that allows resuming a session with a previously established shared secret key. Instead of an ID or a ticket, the client will send an opaque blob it received from the server after a successful handshake in a prior session.
That blob might either be an ID pointing to an entry in the server’s session cache, or a session ticket encrypted with a key known only to the server.

```
enum { psk_ke(0), psk_dhe_ke(1), (255) } PskKeyExchangeMode;

struct {
    PskKeyExchangeMode ke_modes<1..255>;
} PskKeyExchangeModes;
```

Two PSK key exchange modes are defined, psk_ke and psk_dhe_ke. The first signals a key exchange using a previously shared key; it derives a new master secret from only the PSK and nonces. This basically is as (in)secure as session resumption in TLS 1.2 if the server never rotates keys or discards cache entries long after they expired. The second mode, psk_dhe_ke, additionally incorporates a key agreed upon using ephemeral Diffie-Hellman, thereby making it forward secure. By mixing a shared (EC)DHE key into the derived master secret, an attacker can no longer pull an entry out of the cache, or steal ticket keys, to recover the plaintext of past resumed sessions. Note that 0-RTT data cannot be protected by the DHE secret; the early traffic secret is established without any input from the server and is thus derived from the PSK only.

TLS 1.2 is surely here to stay

In theory, there should be no valid reason for a web client to be able to complete a TLS 1.3 handshake but not support psk_dhe_ke, as ephemeral Diffie-Hellman key exchanges are mandatory. An internal application talking TLS between peers would likely be a legitimate case for not supporting DHE. But also for TLS 1.3 it might make sense to properly configure session ticket key rotation and cache turnover in case the odd client supports only psk_ke. It especially makes sense for TLS 1.2, which will probably be around for longer than we wish and imagine.
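As a quick way to see whether a given server actually resumes sessions, here is a rough sketch using Python's standard ssl module (SSLSession support landed in Python 3.6). The host name is a placeholder, and this naive check is mainly meaningful for TLS 1.2 session IDs and tickets, since TLS 1.3 tickets may only arrive after the handshake completes:

```python
# Sketch: probe a server for TLS session resumption with Python's ssl
# module (requires Python 3.6+ for SSLSession support).
# "example.com" is a placeholder for the server under test.
import socket
import ssl

HOST = "example.com"
ctx = ssl.create_default_context()

def handshake(session=None):
    """Perform one TLS handshake, optionally offering a saved session."""
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST,
                             session=session) as tls:
            return tls.session, tls.session_reused

# The first connection establishes a session (ID or ticket) ...
session, _ = handshake()
# ... which the second connection offers back to the server.
_, reused = handshake(session=session)
print("server resumed the session:", reused)
```

If the second handshake never reports a resumed session, the server has either disabled resumption or its cache/ticket setup isn't working as intended.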
Posted 9 days ago by Daniel Stenberg
I got myself a new 27″ 4K screen for my work setup, a Dell P2715Q, and replaced one of my old trusty twenty-four inch friends with it. I now work with the “Thinkpad 13″ on the left as my video conference machine (it does nothing else, and it runs Windows!), the two middle screens are the 24″ and the new 27″, connected to my primary dev machine, while the rightmost thing is my laptop for when I need to move. Did everything run smoothly? Heck no. When I first inserted the 4K screen without modifying anything else in the setup, it was immediately obvious that I really needed to upgrade my graphics card, since it didn’t have muscles enough to drive the screen at 4K; the screen would instead upscale a 1920×1200 image in a slightly blurry fashion. I couldn’t have that!

New graphics card

So when I was out and about later that day, I more or less accidentally passed a Webhallen store, and I got myself a new card. I wanted to play it easy, so I stayed with an AMD processor and went with an ASUS Dual-RX460-O2G. The key feature I wanted was the ability to drive one 4K screen and one at 1920×1200; unfortunately that meant I had to give up on the cards with only passive cooling and instead pick what sounds like a gaming card. (I hate shopping for graphics cards.) As I was about to do surgery on the machine anyway, I checked and noticed that I could add more memory to the motherboard, so I bought 16 more GB for a total of 32GB.

Blow some fuses

Later that night, when the house was quiet and dark, I shut down my machine, inserted the new card and the new memory DIMMs, and powered it back up again. At least that was the plan. When I fired it back on, it went click, the lamps around me all went dark, and the machine didn’t light up at all. The fuse was blown! Man, wasn’t that totally unexpected? I did some further research on what exactly caused the fuse to blow, and blew a few more in the process, as I finally restored the former card and removed the memory DIMMs again and it still blew the fuse. Puzzled and slightly disappointed, I went to bed when I had no more spare fuses. I hate leaving the machine dead in parts on the floor with an uncertain future, but what could I do?

A new PSU

Tuesday morning I went to get myself a PSU replacement (a Plexgear PS-600 Bronze), and once I had that installed no more fuses blew and I could start the machine again! I put the new memory back in, and I could get into the BIOS config with both screens working on the new card (and it detected 32GB of RAM just fine). But as soon as I tried to boot Linux, the boot process halted after just 3-4 seconds and seemingly just froze. Hm. I tested a few different kernels and safety mode etc., but they all acted like that. Weird!

BIOS update

A little googling on the messages that appeared just before it froze gave me the idea that maybe I should see if there was an update for my BIOS available. After all, I had never upgraded it, and it had been a while since I got my motherboard (more than 4 years). I found a much updated BIOS image on the ASUS support site, put it on a FAT-formatted USB drive and upgraded. Now it booted. Of course the error messages I had googled for are still present, and I suppose they were there before too; I just hadn’t paid any attention to them when everything was working dandy!

DisplayPort vs HDMI

I had the wrong idea that I should use the DisplayPort to get 4K working, but it just wouldn’t work.
DP + DVI just showed up on one screen, and I even went as far as trying to download an Ubuntu Linux driver package for the Radeon RX460 that I found, but of course it failed miserably due to my Debian Unstable having a totally different kernel running and whatnot. In a slightly desperate move (I had now wasted quite a few hours on this and my machine still wasn’t working), I put back the old graphics card (with DVI + HDMI), only to note that it no longer worked like it did before (the DVI output didn’t find the correct resolution anymore). Presumably the BIOS upgrade or something shook the balance? Back on the new card, I booted with DVI + HDMI, leaving DP out entirely, and now suddenly both screens worked!

HiDPI + LoDPI

Once I had logged in, I could configure the 4K screen to show at its full 3840×2160 resolution glory. I was back. Now I only had to start fiddling with getting the two screens to somehow co-exist next to each other, which is a challenge in its own right. The large difference in DPI makes it hard to have one config that works across both screens. For example, I usually have terminals on both screens – which font size should I use? And I put browser windows on both screens… So far I’ve settled on increasing the font DPI in KDE, and I use two different terminal profiles depending on which screen I put the terminal on. Seems to work okayish. Some text on the 4K screen is still terribly small, so I guess it is good that I still have good eyesight!

24 + 27

So is it comfortable to combine a 24″ with a 27″? Sure, the size difference really isn’t that notable. The 27″ one is really just a few centimeters taller, and the difference in width isn’t an inconvenience. The photo below shows how similar they look, size-wise:
Posted 9 days ago by Tim Murray
Here, Michael Johnson (MJ), founder of johnson banks, and Tim Murray (TM), Mozilla creative director, have a long-distance conversation about the Mozilla open design process while looking in the rear-view mirror.

TM: We’ve come a long way from our meet-in-the-middle in Boston last August, when my colleague Mary Ellen Muckerman and I first saw a dozen or so brand identity design concepts that had emerged from the studio at johnson banks.

MJ: If I recall, we didn’t have the wall space to put them all up, just one big table – but by the end of the day, we’d gathered around seven broad approaches that had promise. I went back to London and we gave them a good scrubbing to put on public show. It’s easy to see, in retrospect, certain clear design themes starting to emerge from these earliest concepts. Firstly, the idea of directly picking up on Mozilla’s name in ‘The Eye’ (and, in a less overt way, in the ‘Flik Flak’). ‘The Eye’ also hinted at the dinosaur-slash-Godzilla iconography that had represented Mozilla at one time. We also see at this stage the earliest and most minimal version of the ‘Protocol’ idea.

TM: You explored several routes related to code, and ‘Protocol’ was the cleverest. Mary Ellen and I were both drawn to ‘The Eye’ for its suggestion that Mozilla would be opinionated and bold. It had a brutal power, but we were also a bit worried that it was too reminiscent of the Eye of Sauron or Monsters, Inc.

MJ: Logos can be a bit of a Rorschach test, can’t they? Within a few weeks, we’d come to a mutual conclusion as to which of these ideas needed to be left by the wayside, for various reasons. The ‘Open button’, whilst enjoyed by many, seemed to restrict Mozilla too closely to just one area of work. Early presentation favourites such as ‘The Impossible M’ turned out to be just a little too close to other things out in the ether, as did Flik Flak – the value, in a way, of sharing ideas publicly and doing an impromptu IP check with the world. ‘Wireframe World’ was to come back, in a different form, in the second round.

TM: This phase was when our brand advisory group, a regular gathering of Mozillians representing different parts of our organization, really came into play. We had developed a list of criteria by which to review the designs – Global Reach, Technical Beauty, Breakthrough Design, Scalability, and Longevity – and the group ranked each of the options. It’s funny to look back on this now, but ‘Protocol’ in its original form received some of the lowest scores.

MJ: One of my sharpest memories of this round of designs, once they became public, was how many online commentators critiqued the work for being ‘too trendy’ or said ‘this would work for a gallery, not Mozilla’. This was clearly going to be an issue because, rightly or wrongly, it seemed to me that the tech and coding community had set the bar lower than we had expected in terms of design.

TM: A bit harsh there, Michael? It was the tech and coding community that had the most affinity for ‘Protocol’ in the beginning. If it wasn’t for them, we might have let it go early on.

MJ: Ok, that’s a fair point. Well, we also started to glimpse what was to become another recurring theme – despite johnson banks having been at the vanguard of broadening out brands into complete and wide-ranging identity systems, we were going to have to get used to a TL;DR way of seeing/reading that simply judged a route by its logo alone.

TM: Right!
And no matter how many times we said that these were early explorations, we received feedback that they were too “unpolished”. Meanwhile, others felt that they were too polished, suggesting that this was the final design. We were whipsawed by feedback.

MJ: Whilst to some the second round seemed like a huge jump from the first, to us it was a logical development. All our attempts to develop The Eye had floundered, in truth – but as we did that work, a new way to hint at the name and its heritage had appeared. It was initially nicknamed ‘Chomper’, but was then swiftly renamed ‘Dino 2.0’. We see, of course, the second iteration of Protocol, this time with slab serifs. And two new approaches – Flame and Burst.

TM: I kind of fell in love with ‘Burst’ and ‘Dino 2.0’ in this round. I loved the idea behind ‘Flame’ — that it represented both our eternal quest to keep the Internet alive and healthy and the warmth of community that’s so important to Mozilla — but not this particular execution. To be fair, we’d asked you to crank it out in a matter of days.

MJ: Well, yes, that’s true. With ‘Flame’ we then suffered from too close a comparison to all the other flame logos out there in the ether, including Tinder. Whoops! ‘Burst’ was, and still is, a very intriguing thought – that we see Mozilla’s work through 5 key nodes and questions, and see the ‘M’ shape that appears as the link between statistical points.

TM: Web developers really rebelled against the moiré of Burst, didn’t they? We now had four really distinct directions that we could put into testing to see how people outside of Mozilla (and North America) might feel. The testing targeted both existing and desired audiences, the latter being ‘Conscious Choosers’, a group of people who make decisions based on their personal values.

MJ: We also put these four in front of a design audience in Nashville at the Brand New conference, and of course commentary lit up the Open Design blog. The results in person and on the blogs were pretty conclusive – Protocol and Dino 2.0 were the clear favourites. But one set of results was a curveball: the overall ‘feel’ of Burst was enjoyed by a key group of desired (and not current) supporters.

TM: This was a tricky time. ‘Dino’ had many supporters, but just about as many vocal critics. As soon as one person remarked “It looks like a stapler”, the entire audience of blog readers piled on, or so it seemed. At our request, you put the icon through a series of operations that resulted in the loss of his lower jaw. Ouch. Even then, we had to ask: as beloved as this dino character was for its historical reference to the old Mozilla, would it mean anything to newcomers?

MJ: Yes, this was an odd period. For many, it seemed that the joyful and playful side of the idea was just too much ‘fun’ and didn’t chime with the desired gravitas of a famous Internet not-for-profit wanting to amplify its voice, be heard and be taken seriously. Looking back, Dino died a slow and painful death.

TM: Months later, rogue stickers are still turning up of the ‘Dino’ favicon showing just the open jaws without the Mozilla word mark. A sticker seemed to be the perfect vehicle for this design idea. And meanwhile, ‘Protocol’ kept chugging along like a tortoise at the back of the race, always there, never dropping out.

MJ: We entered the autumn period in a slightly odd place. Dino had been effectively trolled out of the game. Burst had, against the odds, researched well, but more for the way it looked than what it was trying to say.
And Protocol, the idea that had sat in the background whilst all around it hogged the spotlight, was still there, waiting for its moment. It had always researched well, especially with the coding community, and was a nice reminder of Mozilla’s pioneering spirit with its nod to the http protocol.

TM: To me, though, the ‘Protocol’ logo was a bit of a one-trick pony, an inside joke for the coding community that didn’t have as much to offer the non-tech world. You and I came to the same conclusion at the same time, Michael. We needed to push ‘Protocol’ further, maybe even bring over some of the joy and color of routes such as ‘Burst’ and the fun that ‘Dino’ represented.

MJ: Our early reboot work wasn’t encouraging. Simply putting the two routes together, such as this, looked exactly like what it was: two routes mashed together. A boardroom compromise to please no one. But, gradually, some interesting work started to emerge from different parts of the studio. We began to think more about how Mozilla’s key messages could really flow from the Moz://a ‘start’, and started to explore how to use intentionally attention-grabbing copy lines next to the mark, without word spaces, as though written into a URL. We also experimented with the Mozilla wordmark used more like a ‘selected’ item in a design or editing programme – where that which is selected reverses out from that around it. From an image-led perspective, we also began to push the Mozilla mark much further, exploring whether it could be shattered, deconstructed or glitched, thereby becoming a much more expressive idea. In parallel, an image-led idea had developed, which placed the Mozilla name at the beginning of an imaginary toolbar, followed by imagery culled directly from the Internet in an almost haphazard way.

TM: This slide, #19 in the exploratory deck, was the one that really caught our attention. Without losing the ‘moz://a’ idea that coders loved, it added imagery suggesting freedom and exploration, which would appeal to a variety of audiences. It was eye-catching. And the thought that this imagery could be ever-changing made me excited about the possibilities for extending our brand as a representation of the wonder of the Internet.

MJ: When we sat back and took stock of this frenetic period of exploration, we realised that we could bring several of these ideas together, and this composite stage was shared with the wider team. What you see from this admittedly slightly crude ‘reboot’ of the idea are the roots of the final route – a dynamic toolkit of words and images, deliberately using bright, neon, early-internet hex colours and crazy collaged imagery. This was really starting to feel like a viable way forward to us. To be fair, we’d kept Mozilla slightly in the dark for several weeks by this stage. We had retreated to our design ‘bunker’, and weren’t going to come out until we had a reboot of the Protocol idea that we thought was worthy. That took a bit of explaining to the team at Mozilla towers.

TM: That’s a very comic-book image you have of our humble digs, Michael. But you’re right, our almost daily email threads went silent for a few weeks, and we tap-danced a bit back at the office when people asked when we’d see something. It was make-or-break time from all perspectives. As soon as we saw the new work arrive, we knew we had something great to work with.

MJ: The good news was that, yes, the Mozilla team were on board with the direction of travel.
But certain amends began almost immediately: we’d realised that the stand-out of the logo was much improved if it reversed out of a block, and we started to explore softening the edges of the type forms. And whilst the crazy image collages were, broadly, enjoyed, it was clear that we were going to need some clear rules on how these would be curated and used.

TM: We had the most disagreement over the palette. We found the initial neon palette to be breathtaking and a fun recollection of the early days of the Internet. On the other hand, it didn’t seem practical for use in the user interface for our web properties, especially when these colors were paired together. And our executive team found the neons to be quite polarizing.

MJ: We were asked to explore a more ‘pastel’ palette, which to our eyes lacked some of the ‘oomph’ of the neons. We’d also started to debate the black bounding box, and whether or not it should crop the type on the outer edges. We went back and forth on these details for a while.

TM: To us, it helped to know that the softer colors picked up directly from the highlight bar in Firefox and other browsers. We liked how distinctive the cropped bounding box looked and grew used to it quickly, but ultimately felt that it created too much tension around the logo.

MJ: And as we continued to refine the idea for launch, we also debated the precise typeforms of the logo. In the first stage of the Protocol ‘reboot’ we’d proposed a free slab-serif font called Arvo as the lead font, but as we used it more, we realised many key characters would have to be redrawn. That started a new search – for a type foundry that could both develop a slab-serif font for Mozilla and, at the same time, work on the final letterforms of the logo so there would be harmony between the two. We started discussions with Arvo’s creators, and also with Fontsmith in London, who had helped with some of the crafting of the interim logos and also had some viable typefaces.

TM: Meanwhile a new associate creative director, Yuliya Gorlovetsky, had joined our Mozilla team, and she had distinct ideas about typeface, wordmark, and logo based on her extensive typography experience.

MJ: Finally, after some fairly robust discussions, Typotheque was chosen to do this work, and above and below you can see the journey the final mark had been on, and the editing process down to the final logo, testing different ways to tackle the key characters, especially the ‘m’, ‘z’ and ‘a’. This work in turn, after many debates about letterspacing, led to the final logo and its first set of applications.

TM: So here we are. Looking back at this makes it seem a fait accompli somehow, even though we faced setbacks and dead ends along the way. You always got us over the roadblocks though, Michael, even when you disagreed profoundly with some of our decisions.

MJ: Ha! Well, our job was, and is, to make sure you stay true to your original intent with this brand, which was to be much bolder, more provocative, and to reach new audiences. I’ll admit that I sometimes feared that corporate forces or the online hordes were wearing down the distinctive elements of the systems we were proposing. But even though the iterative process was occasionally tough, I think it’s worked out for the best, and it’s gratifying that an idea that was there from the very first design presentation slowly but surely became the final route.

TM: It’s right for Mozilla. Thanks for being on this journey with us, Michael. Where shall we go next?
Posted 9 days ago by Eitan
I recently set the Compact Dark theme as my default in Firefox. Since we don’t yet have Linux client-side window decorations (when is that happening??), it looks kind of bad in GNOME. The window decorator shows up as a light band in a sea of darkness. It just looks bad, you know? I looked for an addon that would change the decorator to the dark-themed one, but I couldn’t find any. I ended up adapting the gtk-dark-theme Atom addon into a Firefox one. It was pretty easy; I did it over a remarkable infant sleep session on a Saturday morning. Here is the result: You can grab the yet-to-be-reviewed addon here.
Posted 9 days ago by Fernando Serrano
This is a small summary of some new features in the latest A-Frame Inspector version that may go unnoticed by some.

Image assets dialog

v0.5.0 introduces an assets management window to import textures into your scene without having to manually type URLs. The updated texture widget includes the following elements:

- Preview thumbnail: opens the image assets dialog.
- Input box: hover the mouse over it and it will show the complete URL of the asset.
- Open in a new tab: opens a new tab with the full-sized texture.
- Clear: clears the value of the attribute.

Once the image assets dialog is open, you’ll see the list of images currently being used in your project, with the previous selection for the widget, if any, highlighted. You can click on any image from this gallery to set the value of the map attribute you’re editing. If you want to add new images to this list, click on LOAD TEXTURE and you’ll see several options to include a new image in your project:

- Entering a URL.
- Opening an uploadcare dialog that will let you upload files from your computer, Google Drive, Dropbox and other sources (this currently uploads the images to our uploadcare account, so please be kind :) – we’re working on letting you define your own API key so you can use your own account).
- Dragging and dropping from your computer. This will upload to uploadcare too.
- Choosing one from the curated list of images we’ve included in the sample-assets repo: https://github.com/aframevr/sample-assets

Once you’ve added your image, you’ll see a thumbnail showing some information about the image and the name this texture will have in your project (the asset ID, which can be referenced as #name). After editing the name if needed, click on LOAD TEXTURE and it will add your texture to the list of assets available in your project, showing you the list of textures you saw when you opened the dialog. Now, just by clicking on the newly created texture, you’ll set the new value for the attribute you were editing.

New features in the scenegraph

Toggle panels. New shortcuts:

- 1: Toggle scenegraph panel
- 2: Toggle components panel
- TAB: Toggle both panels

Toggle visibility. Toggling the visibility of each entity in the scene is now possible by pressing the eye icon in the scenegraph.

Broader scenegraph filtering

In the previous version of the inspector we could filter by the tag name of an entity or by its ID. In the new version, the filter also takes into account the names of the components each entity has and the values of those components’ attributes. For example, if we write “red” it will return the entities whose name contains “red”, but also all of those with a red color in the material component. We could also filter by “geometry”, or directly by “sphere”, and so on. We’ve added the shortcut CTRL or CMD + f to set the focus on the filter input for faster filtering, and ESC to clear the filter.
Cut, copy and paste

Thanks to @vershwal, it’s now possible to cut, copy and paste entities using the expected shortcuts:

- CTRL or CMD + x: Cut selected entity
- CTRL or CMD + c: Copy selected entity
- CTRL or CMD + v: Paste the latest copied or cut entity

New shortcuts

The list of the new shortcuts introduced in this version:

- 1: Toggle scenegraph panel
- 2: Toggle components panel
- TAB: Toggle both scenegraph and components panels
- CTRL or CMD + x: Cut selected entity
- CTRL or CMD + c: Copy selected entity
- CTRL or CMD + v: Paste a new entity
- CTRL or CMD + f: Focus scenegraph filter

Remember that you can press h to show the list of all the shortcuts available:
Posted 9 days ago by Emma
Brian King was one of the first people I met at Mozilla. He is someone whose opinion, ideas, trust, support and friendship have meant a lot to me – and I know countless others would similarly describe Brian as someone who made collaborating, working and gathering together a highlight of their Mozilla experiences and personal success. Brian has been a part of the Mozilla community for nearly 18 years – and even though we are thrilled about his new adventures, we really wanted to find a megaphone to say thank you. Here are some highlights from my interview with him last week.

Finding Mozilla

Brian came to Mozilla all those years ago as a developer. He worked for a company that developed software promoting minority languages, including Basque, Catalan, Frisian, Irish and Welsh. As many did back in the day, he met people in newsgroups and on IRC, and slowly became immersed in the community, regularly attending developer meetups. Community, from the very beginning, was the reason Brian grew more deeply involved in and connected to Mozilla’s mission.

Shining Brightly

Early contributions were code – becoming involved with the HTML Editor, then part of the Mozilla Suite. He got a job at ActiveState in Vancouver, and worked on the Komodo IDE for dynamic languages. Skipping forward, he became more and more invested in add-on contribution and review – even co-authoring the book “Creating Applications with Mozilla” – which I did not know! Very cool. During this time he describes himself as being “very fortunate” to be able to make a living by working in the Mozilla and Web ecosystem while running a consultancy writing Firefox add-ons and other software.

Dear Community – “You had me at Hello”

Something Brian shared with me was that being part of the community essentially sustained his connection with Mozilla during times when he was too busy to contribute – and I think many other Mozillians feel the same way: it’s never goodbye, only see you soon. On Brian’s next adventure, I think we can take comfort that the open door of community will sustain our connection for years to come.

As Staff

Brian came on as Mozilla staff in 2012 as the European Community Manager, with success in this and in overseeing the evolution of the Mozilla Reps program. He was instrumental in successfully building Firefox OS launch teams all around the world. Most recently he has been sharpening that skillset of empowering individuals, teams and communities with support for various programs, regional support, and the Activate campaign.

Proud Moments

With a long string of accomplishments at Mozilla, I asked Brian what his proudest moments were. Some of those he listed were:

- Being an AMO editor for a few years, reviewing thousands of add-ons
- Building community in the Balkan area
- Building out the Mozilla Reps program, and being a founding council member
- Helping drive Mozilla’s success at FOSDEM
- Building Firefox OS launch teams

But he emphasized that in all of these, the opportunity to bring new people into the community, and to nurture and help individuals and groups reach their goals, provided an enormous sense of accomplishment and fulfillment. He didn’t mention it, but I also found this photo of Brian on TV in Transylvania, Romania that looks pretty cool.

Look North!
To wrap up, I asked Brian what he most wanted to see for Mozilla in the next 5 years, leaning on what he has learned over years of being part of, and leading, community: “My hope is that Mozilla finds its North Star for the next 5-10 years, doubles down on recent momentum, and as part of that bakes community participation into all parts of the organization. It must be a must-have, and not a nice-to-have.” Thank you, Brian King! You can give your thanks to Brian with #mozlove #brianking – share gratitude, laughs and stories.