
News

Posted 2 days ago by Chris Lord
Ever since the original iPhone came out, I’ve had several ideas about how they managed to achieve such fluidity with relatively mediocre hardware. I mean, it was good at the time, but Android still struggles on hardware that makes that look like a 486… It’s absolutely my fault that none of these have been implemented in any open-source framework I’m aware of, so instead of sitting on these ideas and trotting them out at the pub every few months as we reminisce over what could have been, I’m writing about them here. I’m hoping that either someone takes them and runs with them, or that they get thoroughly debunked and I’m made to look like an idiot. The third option is of course that they’re ignored, which I think would be a shame, but given I’ve not managed to get the opportunity to implement them over the last decade, that would hardly be surprising. I feel I should clarify that these aren’t all my ideas; they include a mix of observation of and conjecture about contemporary software. This somewhat follows on from the post I made 6 years ago(!). So let’s begin.

1. No main-thread UI

The UI should always be able to start drawing when necessary. As careful as you may be, it’s practically impossible to write software that will remain perfectly fluid when the UI can be blocked by arbitrary processing. This seems like an obvious one to me, but I suppose the problem is that legacy makes it very difficult to adopt this at a later date. That said, difficult but not impossible. All the major web browsers have adopted this policy, with caveats here and there. The trick is to switch from the idea of ‘painting’ to the idea of ‘assembling’, and then using a compositor to do the painting. Easier said than done, of course; most frameworks include the ability to extend painting in a way that would make it impossible to switch to a different thread without breaking things. But as long as it’s possible to block UI, it will inevitably happen.

2. Contextually-aware compositor

This follows on from the first point; what’s the use of having non-blocking UI if it can’t respond? Input needs to be handled away from the main thread also, and the compositor (or whatever you want to call the thread that is handling painting) needs to have enough context available that the first response to user input doesn’t need to travel to the main thread. Things like hover states, active states, animations, pinch-to-zoom and scrolling all need to be initiated without interaction on the main thread. Of course, main-thread interaction will likely eventually be required to update the view, but that initial response needs to be able to happen without it. This is another seemingly obvious one – how can you guarantee a response rate unless you have a thread dedicated to responding within that time? Most browsers are doing this, but not going far enough in my opinion. Scrolling and zooming are often catered for, but not hover/active states, or initialising animations (note: initialising animations; once they’ve been initialised, they are indeed run on the compositor, usually).
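To make the first idea concrete, here is a minimal sketch in plain Python; the queue-as-mailbox shape and every name in it are invented for illustration, not taken from any real framework:

```python
# A minimal sketch of idea 1: the main thread only *assembles* display
# lists; a dedicated compositor thread owns painting, so application work
# can never block a frame. All names here are hypothetical.
import queue
import threading
import time

frames = queue.Queue(maxsize=1)  # latest-wins mailbox, main -> compositor

def compositor():
    while True:
        display_list = frames.get()
        if display_list is None:            # shutdown sentinel
            return
        print("composited:", display_list)  # stand-in for GPU painting

t = threading.Thread(target=compositor)
t.start()

for n in range(3):
    time.sleep(0.05)            # arbitrary app work, off the painting path
    display_list = [f"rect {n}", f"text 'frame {n}'"]
    try:
        frames.get_nowait()     # drop a stale frame rather than block
    except queue.Empty:
        pass
    frames.put(display_list)

frames.put(None)
t.join()
```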
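And a companion sketch for the second idea, under the same caveats: the compositor caches just enough layer context (here, just scroll extents) that the first response to input never waits on the main thread.

```python
# Idea 2 in miniature: input is delivered to the compositor, which has
# enough context to respond immediately; the main thread is only notified
# asynchronously, never consulted first. Names and numbers are invented.
import queue

class ScrollLayer:
    """Compositor-side state for one scrollable layer (hypothetical)."""
    def __init__(self, content_height, viewport_height):
        self.offset = 0.0
        self.max_offset = max(0.0, content_height - viewport_height)
        self.to_main_thread = queue.Queue()  # fire-and-forget notifications

    def on_scroll(self, delta):
        # First response: clamp and apply the offset right here.
        self.offset = min(self.max_offset, max(0.0, self.offset + delta))
        # The main thread may eventually need to update content (e.g. fill
        # in newly visible list items), but we never wait for it.
        self.to_main_thread.put(("scrolled", self.offset))
        return self.offset  # used to transform the cached layer this frame

layer = ScrollLayer(content_height=5000, viewport_height=800)
for delta in (300.0, 300.0, -120.0):
    print("composited at offset", layer.on_scroll(delta))
```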
3. Memory bandwidth budget

This is one of the less obvious ideas and something I’ve really wanted to have a go at implementing, but never had the opportunity. A problem I saw a lot while working on the platform for both Firefox for Android and FirefoxOS is that given the work-load of a web browser (which is not entirely dissimilar to the work-load of any information-heavy UI), it was very easy to saturate memory bandwidth. And once you saturate memory bandwidth, you end up having to block somewhere, and painting gets delayed. We’re assuming UI updates are asynchronous (because of course – otherwise we’re blocking on the main thread). I suggest that it’s worth tracking frame time, and only allowing large asynchronous transfers (e.g. texture upload, scaling, format transforms) to take a certain amount of time. After that time has expired, it should wait on the next frame to be composited before resuming (assuming there is a composite scheduled). If the composited frame was delayed to the point that it skipped a frame compared to the last unladen composite, the amount of time dedicated to transfers should be reduced, or the transfer should be delayed until some arbitrary time (i.e. it should only be considered OK to skip a frame every X ms).

It’s interesting that you can see something very similar to this happening in early versions of iOS (I don’t know if it still happens or not) – when scrolling long lists with images that load in dynamically, none of the images will load while the list is animating. The user response was paramount, to the point that it was considered more important to present a consistent response than it was to present complete UI. This priority, I think, is a lot of the reason the iPhone feels ‘magic’ and Android phones felt like junk up until around 4.0 (where it’s better, but still not as good as iOS).
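A toy approximation of this budgeting scheme, with invented costs and thresholds; it is a sketch of the idea, not how any shipping compositor implements it:

```python
# Sketch of a transfer budget: large asynchronous transfers may only
# consume a slice of each frame, and if a frame overruns (i.e. we skipped
# one), the slice shrinks before transfers resume. Numbers are invented.
import time

FRAME_MS = 1000.0 / 60.0
budget_ms = 4.0                          # assumed starting slice per frame
uploads_ms = [3.0, 20.0, 2.0, 6.0, 1.0]  # fake texture-upload costs

while uploads_ms:
    frame_start = time.monotonic()
    spent = 0.0
    # Run transfers only while inside this frame's slice. A transfer larger
    # than the whole slice still runs alone (real code would tile it).
    while uploads_ms and (spent == 0.0 or spent + uploads_ms[0] <= budget_ms):
        cost = uploads_ms.pop(0)
        time.sleep(cost / 1000.0)        # stand-in for the actual upload
        spent += cost
    # ... composite the frame here, then check whether we blew the deadline.
    frame_ms = (time.monotonic() - frame_start) * 1000.0
    if frame_ms > FRAME_MS:              # skipped a frame: back off
        budget_ms = max(1.0, budget_ms / 2.0)
    print(f"frame took {frame_ms:5.2f} ms, budget now {budget_ms:.1f} ms")
```

Halving the slice on an overrun is an arbitrary back-off; the “only skip a frame every X ms” rule suggested above could replace it.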
4. Level-of-detail

This is something that I did get to partially implement while working on Firefox for Android, though I didn’t do such a great job of it, so its current implementation is heavily compromised from how I wanted it to work. This is another idea stolen from game development. There will be times, during certain interactions, where processing time will be necessarily limited. Quite often though, during these times, a user’s view of the UI will be compromised in some fashion. It’s important to understand that you don’t always need to present the full-detail view of a UI. In Firefox for Android, this took the form that when scrolling fast enough that rendering couldn’t keep up, we would render at half the resolution. This let us render more, and faster, giving the impression of a consistent UI even when the hardware wasn’t quite capable of it. I notice Microsoft doing similar things since Windows 8; notice how the quality of image scaling reduces markedly while scrolling or animations are in progress. This idea is very implementation-specific. What can be dropped and what you want to drop will differ between platforms, form-factors, hardware, etc. Generally though, some things you can consider dropping: sub-pixel anti-aliasing, high-quality image scaling, render resolution, colour depth, animations. You may also want to consider showing partial UI if you know that it will very quickly be updated. The Android web browser during the Honeycomb years did this, and I attempted (with limited success, because it’s hard…) to do this with Firefox for Android many years ago. (A toy sketch of such a policy appears at the end of this post.)

Pitfalls

I think it’s easy to read ideas like this and think it boils down to “do everything asynchronously”. Unfortunately, if you take a naïve approach to that, you just end up with something that can be inexplicably slow sometimes, and the only way to fix it is via profiling and micro-optimisations. It’s very hard to guarantee a consistent experience if you don’t manage when things happen. Yes, do everything asynchronously, but make sure you do your book-keeping and you manage when it’s done. It’s not only about splitting work up; it’s about making sure it’s done when it’s smart to do so.

You also need to be careful about how you measure these improvements, and to be aware that sometimes results in synthetic tests will even correlate to the opposite of the experience you want. A great example of this, in my opinion, is page-load speed on desktop browsers. All the major desktop browsers concentrate on prioritising the I/O and computation required to get the page to 100%. For heavy desktop sites, however, this means the browser is often very clunky to use while pages are loading (yes, even with out-of-process tabs – see the point about bandwidth above). I highlight this specifically on desktop because you’re quite likely not only to be browsing much heavier sites that trigger this behaviour, but also to have multiple tabs open. So as soon as you load a couple of heavy sites, your entire browsing experience is compromised. I wouldn’t mind the site taking a little longer to load if it didn’t make the whole browser chug while doing so.

Don’t lose sight of your goals. Don’t compromise. Things might take longer to complete, deadlines might be missed… But polish can’t be overrated. Polish is what people feel and what they remember, and the lack of it can have a devastating effect on someone’s perception. It’s not always conscious or obvious either, even when you’re the developer. Ask yourself “Am I fully satisfied with this?” before marking something as complete. You might still be able to ship if the answer is “No”, but make sure you don’t lose sight of that and make sure it gets the priority it deserves.

One last point I’ll make: I think to really execute on all of this, it requires buy-in from everyone. Not just engineers, not just engineers and managers, but visual designers, user experience, leadership… Everyone. It’s too easy to do a job that’s good enough, and it’s too much responsibility to put it all on one person’s shoulders. You really need to be on the ball to produce the kind of software that Apple does almost routinely, but as much as they’d say otherwise, it isn’t magic.
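As promised above, a toy level-of-detail policy; the velocity thresholds and the particular features dropped at each tier are invented for illustration:

```python
# A toy LOD policy: map scroll velocity to what gets dropped, in the spirit
# of Firefox for Android's half-resolution rendering during fast scrolls.
# Thresholds and tiers are invented; the point is only that detail is a
# budget decision, not a constant.
def choose_detail(scroll_px_per_s: float) -> dict:
    if scroll_px_per_s > 2000:    # rendering can't keep up: drop the most
        return {"render_scale": 0.5, "subpixel_aa": False, "hq_scaling": False}
    if scroll_px_per_s > 800:     # mild pressure: drop the cheap stuff
        return {"render_scale": 0.75, "subpixel_aa": False, "hq_scaling": True}
    return {"render_scale": 1.0, "subpixel_aa": True, "hq_scaling": True}

for velocity in (0.0, 1000.0, 3000.0):
    print(velocity, "->", choose_detail(velocity))
```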
Posted 2 days ago by Jochai Ben-Avie
This opinion piece by Mozilla Executive Chairwoman Mitchell Baker and Mozilla community member Ankit Gadgil first appeared in the Business Standard.

Imagine your government required you to consent to ubiquitous stalking in order to participate in society — to do things such as log into a wifi hotspot, register a SIM card, get your pension, or even obtain a food ration of rice. Imagine your government was doing this in ways your Supreme Court had indicated were illegal. This isn’t some dystopian future; this is happening in India right now.

The government of India is pushing relentlessly to roll out a national biometric identity database called Aadhaar, which it wants India’s billion-plus population to use for virtually all transactions and interactions with government services. The Indian Supreme Court has directed that Aadhaar is only legal if it’s voluntary and restricted to a limited number of schemes. Seemingly disregarding this directive, Prime Minister Narendra Modi’s government has made verification through Aadhaar mandatory for a wide range of government services, including vital subsidies that some of India’s poorest citizens rely on to survive. Vital subsidies aren’t voluntary.

Even worse, the government of India is selling access to this database to private companies to use and combine with other datasets as they wish. This would allow companies access to some of your most intimate details, and let them create detailed profiles of you, in ways you can’t necessarily see or control. The government can also share user data “in the interest of national security,” a term that remains dangerously undefined. There are little to no protections on how Aadhaar data is used, and certainly no meaningful user consent. Individual privacy and security cannot be adequately protected, and users cannot have trust in systems, when they do not have transparency or a choice in how their private information will be used.

This is all possible because India currently does not have any comprehensive national law protecting personal security through privacy. India’s Attorney General has recently cast doubt on whether a right to privacy exists in arguments before the Supreme Court, and has not addressed how individual citizens can enjoy personal security without privacy. We have long argued that enacting a comprehensive privacy and data protection law should be a national policy priority for India. While it is encouraging to see the Attorney General also indicate to the Supreme Court in a separate case that the government of India intends to develop a privacy and data protection law by Diwali, it is not at all clear that the draft law the government will put forward will contain the robust protections needed to ensure the security and privacy of individuals in India. At the same time, the government of India is still exploiting this vacuum in legal protections by continuing to push ahead with a massive initiative that systematically threatens individuals’ security and privacy.

The world is looking to India to be a leader on internet policy, but it is unclear if Prime Minister Modi’s government will seize this opportunity and responsibility for India to take its place as a global leader on protecting individual security and privacy. The protection of individual security and privacy is critical to building safe online systems. It is the lifeblood of the online ecosystem, without which online efforts such as Aadhaar and Digital India are likely to fail or become deeply dangerous.
One of Mozilla’s founding principles is the idea that security and privacy on the internet are fundamental and must not be treated as optional. This core value underlines and guides all of Mozilla’s work on online privacy and security issues — including our product development and design decisions and policies, and our public policy and advocacy work. The Mozilla Community in India has also long sought to empower Indians to protect their privacy themselves, including through national campaigns with privacy tips and tools. Yet we also need the government to do its part to protect individual security and privacy.

The Mozilla Community in India has further been active in promoting the use, development, and adoption of open source software. Aadhaar fails here as well. The government of India has sought to soften the image of Aadhaar by wrapping it in the veneer of open source. It refers to the Aadhaar API as an “Open API” and its corporate partners as “volunteers.” As executive chairwoman of and community contributor to Mozilla, one of the largest open source projects in the world, let us be unequivocally clear: there’s nothing open about this. The development was not open, the source code is not open, and companies that pay to get a license to access this biometric identity database are not volunteers. Moreover, requiring Indians to use Aadhaar to access so many services dangerously intensifies the already worrying trend toward centralisation of the internet. This is disappointing given the government of India’s previous championing of open source technologies and the open internet.

Prime Minister Modi and the government of India should pause the further rollout of Aadhaar until a strong, comprehensive law protecting individual security and privacy is passed. We further urge a thorough and open public process around these much-needed protections; India’s privacy law should not be passed in a rushed manner in the dead of night, as the original Aadhaar Act was. As an additional act of openness and transparency, and to enable an informed debate, the government of India should make Aadhaar actually open source, rather than use the language of open source for an initiative that has little if anything “open” about it. We hope India will take this opportunity to be a beacon to the world on how citizens should be protected.

The post Aadhaar isn’t progress — it’s dystopian and dangerous appeared first on Open Policy & Advocacy.
Posted 3 days ago by Jorge Villalobos
Up until a few weeks ago, AMO had “View Source” links that allowed users to inspect the contents of listed add-ons. Due to performance issues, the viewer didn’t work reliably, so we disabled it to investigate the issue. Unfortunately, the standard error message that was shown when people tried to click the links led to some confusion, and we decided to remove them altogether.

What’s Next for Source Viewing

The open issue has most of the background and some ideas on where to go from here. It’s likely we won’t support this feature again. Instead, we can make it easier for developers to point users to their code repositories. I think this is an improvement, since sites like GitHub can provide a much better source-viewing experience. If you still want to inspect the actual package hosted on AMO, you can download it, decompress it, and use your tool of preference to give it a look.

The post View Source links removed from listing pages appeared first on Mozilla Add-ons Blog.
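If you go the manual route, a few lines of Python are enough, since an .xpi is an ordinary ZIP archive; the URL below is a placeholder, not a real AMO endpoint:

```python
# Not an AMO feature, just a plain-Python sketch of the manual inspection
# route the post describes: download a listed add-on's .xpi (an ordinary
# ZIP archive) and unpack it locally. The URL is a placeholder.
import urllib.request
import zipfile

xpi_url = "https://addons.mozilla.org/.../some-addon.xpi"  # placeholder
urllib.request.urlretrieve(xpi_url, "addon.xpi")

with zipfile.ZipFile("addon.xpi") as xpi:
    print("\n".join(xpi.namelist()))   # quick look at the contents
    xpi.extractall("addon-src")        # then inspect with your tool of choice
```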
Posted 3 days ago by Patrick Cloke
I subscribe to a fair number of feeds for news, blogs, articles, etc. I’m currently subscribed to 122 feeds, some of which have tens of articles a day (news sites), some of which are dead. [1] Unfortunately there are still a few sites that I was visiting manually each …
Posted 3 days ago
The Ride to Conquer Cancer is a fundraising event by the BC Cancer Foundation. Each year thousands of riders go from Vancouver down to Seattle in the US, and over the years the ride has raised millions of dollars. This year Mozilla Vancouver has put together a team: Roland Tanglao, Eva Szekely and I from the Vancouver Mozilla office will be riding and raising money. If you’d like to support our cause, then please consider donating. We’ve got a fundraising goal and we are going to make it.

I attended the ride last year and found it moving to see so many people working to help a cause. This year I’m riding for family and friends who have been affected by cancer, but specifically for my father, who passed away a few years ago. If any Mozillians from the wider Mozilla community in Vancouver want to join in, please drop me a line.
Posted 3 days ago by erahm
Aside from some pangs of nostalgia, it is with great pleasure that I announce the retirement of areweslimyet.com, the areweslimyet github project, and its associated infrastructure (a sad computer in Mountain View under dvander’s desk and a possibly less sad computer running the website that’s owned by the former maintainer).

Wait, what?

Don’t worry! Are we slim yet, aka AWSY, lives on; it’s just moved in-tree and is run within Mozilla’s automated testing infrastructure. For equivalent graphs check out:
– Explicit
– RSS
– Miscellaneous
You can build your own graph from Perfherder. Just choose ‘+ Add test data’, ‘awsy’ for the framework, and the tests and platforms you care about.

Wait, why?

I spent a few years maintaining and updating AWSY, and some folks spent a fair amount of time before me. It was an ad hoc system that had bits and pieces bolted on over time. I brought it into the modern age, moving from the mozmill framework over to marionette, added support for e10s, and cleaned up some old, slightly busted code. I tried to reuse packages developed by Mozilla to make things a bit easier (mozdownload and friends). This was all pretty good, but things kept breaking. We weren’t in-tree, so breaking changes to marionette, mozdownload, etc. would cause failures for us, and it would take a while to figure out what happened. Sometimes the hard drive filled up. Sometimes the status file would get corrupted due to a poorly timed shutdown. It just required a lot of maintenance for a project with nobody dedicated to it. The final straw was the retirement of archive.mozilla.org for what we call tinderbox builds, builds that are done more or less per push. This completely broke AWSY back in January, and we decided it was just better to give in and go in-tree.

So is this a good thing?

It is a great thing. We’ve gone from 18,000 lines of code to 1,000 lines of code. That is not a typo. We now run on linux64, win32, and win64. Mac is coming soon. We turned on e10s. We have results on mozilla-inbound, autoland, try, mozilla-central, and mozilla-beta. We’re going to have automated crash analysis soon. We were able to use the project to give the greenlight for the e10s-multi project on memory usage. Oh and guess what? Developers can run AWSY locally via mach. That’s right, try this out:

mach awsy-test --quick

Big thanks go out to Paul Yang and Bob Clary who pulled all this together — all I did was do a quick draft of an awsy-lite implementation — they did the heavy lifting getting it in tree, integrated with Taskcluster, and integrated with mach.

What’s next?

Now that we’re in-tree we can easily add new tests. Imagine getting data points for running the AWSY test with a specific add-on enabled to see if it regresses memory across revisions. And anyone can do this, no crazy local setup. Just mach awsy-test.
Posted 3 days ago by Andreas
Disclaimer: I worked for 7 years at Mozilla and was Mozilla’s Chief Technology Officer before leaving 2 years ago to found an embedded AI startup.

Mozilla published a blog post two days ago highlighting its efforts to make the Desktop Firefox browser competitive again. I used to closely follow the browser market but haven’t looked in a few years, so I figured it’s time to look at some numbers. The chart above shows the percentage market share of the 4 major browsers over the last 6 years, across all devices. The data is from StatCounter, and you can argue that the data is biased in a bunch of different ways, but at the macro level it’s safe to say that Chrome is eating the browser market, and everyone else except Safari is getting obliterated.

Trend

I tried a couple different ways to plot a trendline, and an exponential fit seems to work best. This aligns pretty well with theories around the explosive diffusion of innovation and the slow decline of legacy technologies. If the 6-year trend holds, IE should be pretty much dead in 2 or 3 years. Firefox is not faring much better, unfortunately, and is headed towards a 2-3% market share. For both IE and Firefox these low market share numbers further accelerate the decline, because Web authors don’t test for browsers with a small market share. Broken content makes users switch browsers, which causes more users to depart. A vicious cycle. Chrome and Safari don’t fit as well as IE and Firefox. The explanation for Chrome is likely that its market share is so large that Chrome is running out of users to acquire. Some people are stuck on old operating systems that don’t support Chrome. Safari’s recent growth is underperforming its trend, most likely because iOS device growth has slowed.
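For the curious, an exponential fit of this kind takes only a few lines; the share numbers below are invented placeholders rather than StatCounter’s actual data:

```python
# A minimal version of the trend fit described above (numpy only).
# Fitting a line to log(share) is equivalent to fitting
# share = a * exp(b * t). The shares are illustrative placeholders.
import numpy as np

years = np.arange(2011, 2018, dtype=float)
share = np.array([25.0, 22.0, 19.0, 16.5, 14.0, 12.0, 10.5])  # illustrative %

b, log_a = np.polyfit(years - years[0], np.log(share), 1)
a = np.exp(log_a)

for t in range(7, 10):  # extrapolate a few years past the data
    print(int(years[0] + t), "->", round(float(a * np.exp(b * t)), 1), "%")
```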
Desktop market share

Looking at all devices blends mobile and desktop market shares, which can be misleading. Safari/iOS is dominant on mobile, whereas on Desktop Safari has a very small share. Firefox in turn is essentially not present on mobile. So let’s look at the Desktop numbers only. The Desktop-only graph unfortunately doesn’t predict a different fate for IE and Firefox either. The overall desktop PC market is growing slightly (most sales are replacement PCs, but new users are added as well). Despite an expanding market, both IE and Firefox are declining unsustainably.

Adding users?

Eric mentioned in the blog post that Firefox added users last year. The relative Firefox market share declined from 16% to 14.85% during that period. For comparison, Safari Desktop is relatively flat, which likely means Safari market share is keeping up with the (slow) growth of the PC/Laptop market. There are two possible explanations. Eric may have meant in his blog post that browser installs were added; people often re-install the browser on a new machine, which could be called an “added user”, but it usually comes at the expense of the previous machine becoming disused. It’s also possible that the absolute daily active user count has indeed increased due to the growth of the PC/laptop market, despite the steep decline in relative market share. Firefox ADUs aren’t public, so it’s hard to tell.

From these graphs it’s pretty clear that Firefox is not going anywhere. That means that the esteemed Fox will be around for many, many years, albeit with an ever-diminishing market share. It also, unfortunately, means that a turnaround is all but impossible.

With a CEO transition about 3 years ago there was a major strategic shift at Mozilla to re-focus efforts on Firefox and thus the Desktop. Prior to 2014, Mozilla heavily invested in building a Mobile OS to compete with Android: Firefox OS. I started the Firefox OS project and brought it to scale. While we made quite a splash and sold several million devices, in the end we were a bit too late, and we didn’t manage to catch up with Android’s explosive growth.

Mozilla’s strategic rationale for building Firefox OS was often misunderstood. Mozilla’s founding mission was to build the Web by building a browser. Mobile thoroughly disrupted this mission. On mobile, browsers are much less relevant – even more so third-party mobile browsers. On mobile, browsers are a feature of the Facebook and Twitter apps, not a product. To influence the Web on mobile, Mozilla had to build a whole stack with the Web at its core. Building mobile browsers (Firefox for Android) or browser-like apps (Firefox Focus) is unlikely to capture a meaningful share of use cases. Both Firefox for Android and Firefox Focus have a market share close to 0%.

The strategic shift in 2014, back to Firefox and with that back to Desktop, was significant for Mozilla. As Eric describes in his article, a lot of amazing technical work has gone into Firefox for Desktop in the last few years. The Desktop-focused teams were expanded, and mobile-focused efforts curtailed. Firefox Desktop today is technically competitive with Chrome Desktop in many areas, and even better than Chrome in some. Unfortunately, looking at the graphs, none of this has had any effect on market trends.

Browsers are a commodity product. They all pretty much look the same and feel the same. All browsers work pretty well, and being slightly faster or using slightly less memory is unlikely to sway users. If even Eric – who heads Mozilla’s marketing team – uses Chrome every day, as he mentioned in the first sentence, it’s not surprising that almost 65% of desktop users are doing the same.

What does this mean for the Web?

I started Firefox OS in 2011 because already back then I was convinced that desktops and browsers were dead. Not immediately – here we are 6 years later and both are still around – but both are legacy technologies that are not particularly influential going forward. I don’t think there will be a new browser war where Firefox or some other competitor re-captures market share from Chrome. It’s like launching a new and improved horse in the year 2017. We all drive cars now. Some people still use horses, and there is value to horses, but technology has moved on when it comes to transportation.

Does this mean Google owns the Web if they own Chrome? No. Absolutely not. Browsers are what the Web looked like in the first decades of the Internet. Mobile disrupted the Web, but the Web embraced mobile, and at the heart of most apps beats a lot of JavaScript and HTTPS and REST these days. The future Web will look yet again completely different. Much will survive, and some parts of it will get disrupted.

I left Mozilla because I became curious what the Web looks like once it consists predominantly of devices instead of desktops and mobile phones. At Silk we created an IoT platform built around open Web technologies such as JavaScript, and we do a lot of work around democratizing data ownership through embedding AI in devices instead of sending everything to the cloud.

So while Google won the browser wars, they haven’t won the Web. To stick with the transportation metaphor: Google makes the best horses in the world, and they clearly won the horse race. I just don’t think that race matters much going forward.
Update: There are a lot of good comments in a HackerNews thread here. My favorite was this one: “Mozilla won the browser war. Firefox lost the browser fight. But there’s many wars left to fight, and I hope Mozilla dives into a new one.” Couldn’t agree more.
Posted 3 days ago by Air Mozilla
These calls will be held in the Localization Vidyo room every second (14:00 UTC) and fourth (20:00 UTC) Thursday of the month and will be...
Posted 3 days ago by Peiying Mo
[Photo: In front of the iconic Taipei 101.]

In discussing our plans for this year’s event, the city of Taipei was on a short list of preferred locations. Peter from our Taipei community helped us solidify the plan. We set the date for April 21-22 in favour of cooler weather and to avoid typhoon season! This would be my third visit to Taiwan. Working with our community leaders, we developed nomination criteria and sent out invitations. In addition to contributing to localizing content, we also reviewed community activities in other areas such as testing Pontoon, leading and managing community projects, and active participation in community channels.

[Photo: 360° view of the meetup.]

In total, we invited representatives from 12 communities and all were represented at our event. We had a terrific response: more than 80% of the invitees accepted the invitation and were able to join us. It was a good mix of familiar faces and newcomers. We asked everyone to set personal goals in addition to team goals. Flod and Gary joined me for the second year in a row, while this was Axel’s first meeting with these communities in Asia.

Based on the experience and feedback from last year’s event, we switched things up, balancing discussion and presentation sessions with community-oriented breakout sessions throughout the weekend. These changes were well received. Our venue was the Mozilla Taipei office, right at the heart of the financial centre, a few minutes from Taipei 101.

On Saturday morning, Axel covered the removal of the Aurora branch and cross-channel, while later Flod talked about Quantum and Photon and their impact on localization. We then held a panel Q&A session with the localisers and l10n-drivers. Though we solicited questions in advance, most questions were spontaneous, both technical and non-technical. They covered a broad range of subjects including Firefox, the new brand design, vendor management, and crowd-sourcing practices by other companies. We hoped this new format would be interactive. And it was! We loved it, and from the survey, the response was positive too. In fact, we were asked to conduct another session the following day, so more questions could be answered.

[Photo: Localisers were briefed on product updates.]

The upcoming Firefox browser launch in autumn creates new challenges for our communities, including promoting the product in their languages. In anticipation, we are developing a Firefox l10n marketing kit for the communities. We took advantage of the event to collect input on local experiences that worked well and that didn’t. We covered communication channels, materials needed for organising an event, and key messages to promote the localised product.

[Photo: Flod shared the design of Photon, with a fun, new look and feel.]

On Sunday, Flod demonstrated all the new development on Pontoon, including how to utilise the tool to work more efficiently. He covered the basic activities for different roles: as a suggester, as a translator, and as a locale manager. He also covered advanced features such as batch processing, referencing other languages for inspiration, and filters, before describing future feature improvements. Though it was unplanned, many localisers tried their hands at the tool while they listened in attentively. It worked out better than expected!

Quality was the focus and theme for this year’s event. We shared test plans for desktop, mobile, and mozilla.org, then allowed the communities to spend the breakout sessions testing their localisation work.
Axel also made a laptop available to test the Windows installer. Each community worked on their group goals between sessions for the rest of the weekend.

[Photo: Last stop of the 貓空纜車 (Maokong Gondola ride).]

Of course, we found some time to play. Though the weather was not cooperative, we braved unseasonably cold, wet, and windy weather to take a gondola ride on 貓空纜車 (Taipei Maokong Gondola) over the Taipei Zoo in the dark. Irvin introduced the visitors to the local community contributors at 摩茲工寮 (Mozilla community space). Gary led a group to visit Taipei’s famed night markets. Others followed Joanna to her workplace at 三七茶堂 (7 Tea House) to get an informative session on tea culture. Many brought home some local teas, the perfect souvenir from Taiwan.

[Photo: Observing the making of the famous dumplings at 鼎泰豐 (Din Tai Fung at Taipei 101).]

We were also spoiled by the abundance of food Taipei had to offer. The local community put a lot of thought into the planning phase. Among the challenges were the size of the group, the diversity of dietary needs, and the desire to have a variety of cuisines. Flod and Axel had an eye-opening experience with all the possible food options! There was no shortage, between snacks, lunch and dinner. Many of us gained a few pounds before heading home.

All of us were pleased with the active participation of all the attendees and their collaboration within the community and beyond. We hope you achieved your personal goals. We are especially grateful for the tremendous support from Peter, Joanna and Lora, who helped with each step of the planning: hotel selection, transportation directions, the visa application process, food and restaurant selections, and cultural activities. We could not have done it without their knowledge, patience and advice in planning and execution. Behind the scenes, community veterans Bob and Irvin lent their support to make sure things went as seamlessly as possible. It was a true team effort to host a successful event of this size. Thanks to you all for creating this wonderful experience together.

We look forward to another event in Asia next year. In which country, using what format? We want to hear from you!
Posted 4 days ago by Nathan Froyd
Several days ago, somebody pointed me at “Why Amazon is eating the world” and the key idea has been rolling around in my head ever since:

[The reason that Amazon’s position is defensible is] that each piece of Amazon is being built with a service-oriented architecture, and Amazon is using that architecture to successively turn every single piece of the company into a separate platform — and thus opening each piece to outside competition. The most obvious example of Amazon’s [service-oriented architecture] structure is Amazon Web Services (Steve Yegge wrote a great rant about the beginnings of this back in 2011). Because of the timing of Amazon’s unparalleled scaling — hypergrowth in the early 2000s, before enterprise-class SaaS was widely available — Amazon had to build their own technology infrastructure. The financial genius of turning this infrastructure into an external product (AWS) has been well-covered — the windfalls have been enormous, to the tune of a $14 billion annual run rate. But the revenue bonanza is a footnote compared to the overlooked organizational insight that Amazon discovered: By carving out an operational piece of the company as a platform, they could future-proof the company against inefficiency and technological stagnation.

…Amazon has replaced useless, time-intensive bureaucracy like internal surveys and audits with a feedback loop that generates cash when it works — and quickly identifies problems when it doesn’t. They say that money earned is a reasonable approximation of the value you’re creating for the world, and Amazon has figured out a way to measure its own value in dozens of previously invisible areas.

Open source is the analogue of this strategy in the world of software. You have some small collection of code that you think would be useful to the wider world, so you host your own repository or post it on Github/Bitbucket/etc. You make an announcement in a couple of different venues where you expect to find interested people. People start using it, express appreciation for what you’ve done, and begin to generate ideas on how it could be made better, filing bug reports and sending you patches. Ideally, all of this turns into a virtuous cycle of making your internal code better as well as providing a useful service to external contributors. The point of the above article is that Amazon has applied an open-source-like strategy to its business relentlessly, and it’s paid off handsomely.

Google is probably the best (unintentional?) practitioner of this strategy, exporting countless packages of software, such as GTest, Go, and TensorFlow, not to mention tools like their collection of sanitizers. They also do software-related exports like their C++ style guide. Facebook opens up in-house-developed components with React, HHVM, and Buck, among others. Microsoft has been charging into this arena in the past couple of years, with examples like Visual Studio Code, TypeScript, and ChakraCore. Apple doesn’t really play the open source game; their opensource site and available software are practically the definition of “throwing code over the wall”, even if having access to the source is useful in a lot of cases. To the best of my knowledge, Amazon doesn’t really play in this space either.
I could also list examples of exported code from other smaller but still influential technology companies: Github, Dropbox, Twitter, and so forth, as well as companies that aren’t traditional technology companies but have still invested in open-sourcing some of their software.

Whither Mozilla in the above list? That is an excellent question. I think in many cases we haven’t tried, and in the Firefox-related cases where we tried, we decided (incorrectly, judging through the above lens) that the risks of the open source approach weren’t worth it. Two recent cases where we have tried exporting software and succeeded wildly are asm.js/WebAssembly and Rust, and it’d be worth considering how to translate those successes into Firefox-related ones. I’d like to make a follow-up post exploring some of those ideas soon.