
News

Posted about 7 years ago by Honza Bambas
Did you ever dream of debugging Firefox in Visual Studio with all its child processes attached automatically? And also when it is started externally from a test suite like mochitest or browsertest? Tired of finding the right pid and the right time to attach manually? Here is the solution for you!

A combination of the following two extensions to Visual Studio Community 2015 will do the trick:

Spawned Process Catcher X – attaches automatically to all child processes the debuggee (and its children) spawns.
Entrian Attach – attaches the IDE automatically to an instance of a process spawned FROM ANYWHERE, e.g. when running tests via mach where Firefox is started by a Python script – yes, magic happens ;)

Spawned Process Catcher X works automatically after installation without any configuration. Entrian Attach is easy to configure: in the IDE, go to TOOLS / Entrian Attach: Configuration… in the main menu and you'll get the configuration window.

UPDATE: It's important to enter the full path for the executable. The Windows API for capturing process spawning is stupid – it only takes the name of an executable, not a full path or wildcards. Hence you can only specify the names of the executable files you want Entrian Attach to automatically attach to. Obviously, when Visual Studio is running with Entrian Attach enabled and you start your regular browser, it will attach too. I've added a toolbar button, EntrianAttachEnableDisable, to the standard toolbar for a quick switch and status visibility.

The other important option is to set "Attach at process start when" to "I'm not already debugging its exe". Otherwise, when firefox.exe is started externally, a shim process is inserted between the parent and a child process, which breaks our security and other checks for expected pid == actual pid. You would just end up with a MOZ_CRASH.

Note that the extension configuration and the on/off switch are per-solution.

The Entrian Attach developer is very responsive. We've already cooked up the "I'm not already debugging its exe" option to allow child process attaching without the inserted shim process; it took just a few days to release a fixed version.

Entrian Attach is shareware with a 10-day trial. After that, a single-developer license is $29, and there are volume discounts available. Since this is so super-useful, Mozilla could consider buying a multi-license. Anyway, I believe it's money very well spent!

The post Automatically attaching child and test-spawned Firefox processes in Visual Studio IDE appeared first on mayhemer's blog.
Posted about 7 years ago by Giorgos
On the 24th of January we had the first Python meetup of 2017, as always in Hackerspace.gr. Following our typical setup, we started with two talks:

I kicked off the meetup with a talk on MicroPython on the ESP8266. I started with quick intros to MicroPython and the ESP8266, moved on to using the REPL over serial with minicom, and ended spectacularly with blinking NeoPixels.

Spyros followed, talking about Nameko-based microservices and his Tweetmark project to save tweets for later reading, and to do it in style. He moved on to a live demo and successfully fixed the obligatory demo bug, defeating the Demo Gods. He vanished walking towards the sunset.

In late February we plan the next meetup, with more thrilling talks on gevent and Python on AWS Lambda. Join our mailing list and meetup for exact dates and venue info.

See also: MicroPython, ESP8266, NeoPixels, Python Greece Mailing List, Athens Python Users Meetup
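The talk material itself isn't reproduced here, but as a rough illustration of the kind of NeoPixel demo described, here is a minimal MicroPython sketch for the ESP8266. The data pin and LED count are assumptions, not details from the talk.

```python
# Minimal MicroPython sketch for ESP8266: cycle colors on a NeoPixel strip.
# GPIO 4 and the 8-LED count are illustrative assumptions.
import time
import machine
import neopixel

np = neopixel.NeoPixel(machine.Pin(4), 8)   # data pin, number of LEDs

colors = [(32, 0, 0), (0, 32, 0), (0, 0, 32)]  # dim red, green, blue
while True:
    for color in colors:
        for i in range(np.n):               # np.n is the pixel count
            np[i] = color
        np.write()                          # push the buffer out to the strip
        time.sleep(0.5)
```

You can paste this straight into the REPL over serial (e.g. via minicom) and watch the strip cycle.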
Posted about 7 years ago by Ryan Harter
I'm working on a big overhaul of my team's documentation. I've noticed writing documentation is a difficult thing to get right, and I haven't seen any great example for a data product either. I don't have much experience in this area, so I decided to review what's already been written about creating great documentation. This is a summary of what I've found, both for my own reference and to help others understand my thought process.

Findings

I should note that all the literature I could find focused on documenting software products. I am willing to bet that a data product is going to have different documentation needs than most software products, but this is as good a place to start as any.

Structure & What to Write

Most seem to agree that a README is a critical piece of documentation. The README is usually comprised of two key parts:

A quick introduction explaining what this project is, why the reader should care, and whether it's worth investing time to understand it better.
A simple tutorial to get the reader started and give a feel for what the tool actually does.

If the reader decides they want to learn more, there should be a set of topical guides or tutorials which comprise the bulk of the documentation. Think of each of these guides as a class focused on teaching your student (reader) a single skill. Reading all of these guides should take "someone who has never seen your product and make them an expert user". [TDT] With that in mind, make sure there's some sense of order to these lessons (easy to hard).

If your reader gets this far, they are now very comfortable with your product. From here, they need high-quality reference material. In my experience, this is the most common documentation provided, but it is needed latest in the process and only by the most advanced users!

When I started this research, I was having a hard time figuring out how we were going to separate our prose documentation from our development notes. Now I see that these are just different stages in the learning process. First we explain what it is, then how to use it, and finally how to extend it.

Style

Most articles suggest adopting a style guide to make it easier for a user to read your documentation. The writing should pull you through the document and feel natural.

If you want your documentation to read naturally, you should try to become a better writer. This comes as cold solace to most folks, since I need my documentation now and I can't wait 10,000 hours to become an expert writer, but it's worth mentioning. The overwhelming consensus is that the best way to become a better writer is to write a lot. If you want to write great documentation, consider building habits that will make you a great writer.

As with programming, maintaining a consistent style will help readers understand your documentation naturally. The important word here is "consistent". Choose a style and stick with it. This sounds obvious, but I rarely find corporate documentation with consistent style across tutorials. Have a style guide and enforce it.

As you choose your style guide, be aware that most of the advice is focused on physical media. Your documentation is probably going to be read digitally, so your readers will have different expectations. Specifically, readers are going to skim your writing, so make it easy to identify important information. Use visual markup like bold text, code blocks, call-outs, and section headers. Similarly, avoid long paragraphs: short paragraphs that each describe one concept make it easier to find important information.

Most guides suggest keeping a conversational tone. This makes the guide more approachable and easier to read.

Everyone seems to agree that you should have an editor. In fact, Jacob Kaplan-Moss dedicated an entire article to this point [JKM 3]. If you don't have access to an editor, review your own work thrice, then ask for someone else's review before publishing. Try adjusting your margins to force the text to re-flow; it's a very effective way to catch spelling or grammatical mistakes.

Tools

I'll start this section with a warning: tools often receive an undue amount of attention, especially from programmers. With documentation, writing is the hard, important work. It's important to use good tools, but make sure you're not bikeshedding.

Your documentation should be stored in plain text and in version control. Most of your documentation is going to be written by programmers, and programmers have powerful tools for manipulating text. Using anything besides plain text is a frustration that makes it less likely they'll enjoy writing documentation.

You should have a process for reviewing changes to the documentation. Review will help maintain a consistent voice across your documentation and will provide useful feedback to the writer. Think of how useful code reviews are for improving your programming. I'd jump at the chance to get feedback from an expert writer.

You should not use a wiki for documentation. Wikis make documentation "everyone's responsibility", which really means it's nobody's responsibility. Without this responsibility, wikis tend to decay into a web of assorted links without any sense of order or importance. Wikis make it impossible to maintain a consistent voice throughout your documentation. Finally, it's difficult to get review for your work before publishing.

Recognize that automatically generated documentation isn't a replacement for hand-crafted prose. Remember that the bulk of your documentation should be tutorials meant to slowly ramp up your users to expert status. Docstrings have very little utility in this process.

Resources

Most of what I've summarized here came from very few sources. I highly recommend you read the following articles if you're interested in learning more:

[TDT]: Teach, Don't Tell (Steve Losh)
[JKM 1]: What to Write (Jacob Kaplan-Moss)
[JKM 2]: Technical Style (Jacob Kaplan-Moss)
[JKM 3]: You Need an Editor (Jacob Kaplan-Moss)

For later reference, I also reviewed these articles to form opinions about general consensus outside of the primary sources above:

The Science of Scientific Writing (George Gopen, Judith Swan): a good overview of how to structure a paper so readers find information where they expect it to be.
WriteTheDocs.org, specifically A Beginner's Guide to Writing Docs.
Art of README: an argument for writing good READMEs and a template to help you get started.
Scala Documentation Discussion: a discussion of why Scala's official documentation is so bad.
Vignettes (Hadley Wickham): Hadley is a rockstar in the R universe. This is an article from his style guide for writing R package documentation, and the closest I could come to finding documentation advice for data products.
Programming's Dirtiest Little Secret (Steve Yegge): Steve Yegge on why it's important to type well.
Writing Great Documentation: this article comments on documentation's propensity towards kippleization.
GNU Manual Style Guide
Posted about 7 years ago
This is part of a series of posts I'm writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. For an overview, see my earlier post.

My understanding of the overarching goal for the Connected Devices group within Mozilla is to have a tangible impact on the evolution of the Internet of Things so as to maintain the primacy of the user: their right to own their own data and experience, and to choose between products and organizations. We want Mozilla to be a guiding light, an example others can follow when developing technology in this new space that respects user privacy, implements good security and promotes open, common standards. In that context, the plan is to develop an IoT platform alongside a few carefully selected consumer products that will exercise and validate that platform and start building the exposure and experience for Mozilla in this space.

Over the last few months, the vision for this platform has aligned with the emerging Web of Things, which builds on patterns for attaching "Things" to the web. From one perspective, the web is just a network of interconnected content nodes. It follows that the scope for standardizing the evolution of the Internet of Things is to define a sensible architecture and build frameworks for incorporating these new devices and their capabilities so as to maintain interoperability, promote discoverability and so on. This maps well onto connected sensors, smart appliances and other physical objects whose attributes we want to query and set over the network. Give these things URLs and a RESTful interface and you get all the rich semantics of the web: addressability, tools, the developer talent pool - the list goes on and on, and it's all for "free". In one stroke you remove the need for a lot of wheel re-invention and proprietary-ness and nudge this whole movement in the direction of the interoperable, standardized web. It's a no-brainer.

In this context, however, the communication device envisaged by Project Haiku is orthogonal. While you can model it to give URLs to the people/devices and the private communication channel they share, the surface area of the resulting "API" is tiny and has limited value. It is conceptually powerful, as it brings along all the normal web best practices for RESTful API design, access control, caching and offline strategies and so on. Still, the Haiku device would be more web client than web resource and doesn't fit neatly into this story. This, and the relatively skinny overlap in shared functionality with the proposed Mozilla IoT platform, was one of the rocks on which Project Haiku foundered. And I agree that it would make little sense to have teams pulling in different directions and plotting courses that by design would not benefit each other much. But I'm also sad about a missed opportunity here. It's like we enlarged the stage but stopped short of taking the performance outside the theater.

I didn't set out with a personal ambition to create products from headless, embedded web clients. We asked questions and followed the answers and wound up in a place that seemed to make sense. With the emergence of cheap networking hardware, we can imagine using the web from devices other than the highly horizontal, multi-functional browsers on our desktop and mobile computers. When we could afford only a single computing device, it made sense to make highly flexible software capable of bringing us any experience the web could manage. Our web browser was a viewer for web content - any and all web content. It was left to users to figure out how to partition the different ways in which they used the web - the different hats they wore.

Now, we can dedicate a device to a single task - and in doing so remove layer upon layer of complexity in the user experience. Instead of toolbars and menus and scrolling and clicks, typing or even speaking to request some part of the web, we can have a single button. Or maybe not even that - it's just there as long as the power and network permit. We can give some piece of the web a tangible, physical space in our lives. It could be a screen on the office wall that displays my bug list. I don't need to context-switch as I switch tabs; instead, context changes naturally as I move from one room to another, and the information is in its proper place.

The product Project Haiku proposed follows the same philosophy. Yes, you could use a phone, or a tablet, or one of the many new and lovely VoIP devices to facilitate communication between two people. But they are behind an icon, tucked away under some menu - the device sits between you and them and allows you to speak through it. Contrast that with a device whose sole function is to keep a channel open to that one person. You can send a voice or emoji message at any time and - if they are available and nearby - talk in real time. The device is a proxy for the person, and they are represented by it in real space in your home. In this scenario the internet is just magic geography-defying tubes between one house and another.

I know Mozilla is not walking away from this entirely, and I hope we'll get to circle back and explore this some more. The same ideas have spontaneously emerged in one form or another too many times to not stick at some point. We already saw mobile apps packaging up content where the app is essentially a single-task browser without all the noise. In the app store duopoly, these apps represent gated communities, taking chunks of the web and building walls around them. In IoT we have another opportunity to fix this - to keep the benefits and maintain choice, freedom, privacy and security for the users of the technology rather than its keepers. We should attack it from both ends: the publishing and the requesting of content; both resource and client.
Posted about 7 years ago by Mike Kaply
Last year, Mozilla announced that support for NPAPI plugins (except for Flash) would be ending in March 2017. That date is approaching fast, so I wanted to give folks more information about what's happening. If you subscribe to my newsletter, this is the same information I gave there.

In short, Firefox 51 (which was released last week) is the last release of mainline Firefox that will support NPAPI plugins (except for Flash). Starting with Firefox 52, the only version of Firefox that will support plugins is the ESR:

Firefox 52 WILL NOT have plugin support (except for Flash).
Firefox 52 ESR WILL have plugin support.

That means that if your users are currently on Firefox 51 and you need plugin support, you need to switch Firefox so that it gets updates from the ESR channel. To do this, you need to change two files, channel-prefs.js and update-settings.ini.

In defaults/prefs/channel-prefs.js, change:
pref("app.update.channel", "release");
to
pref("app.update.channel", "esr");

In update-settings.ini, change:
ACCEPTED_MAR_CHANNEL_IDS=firefox-mozilla-release
to
ACCEPTED_MAR_CHANNEL_IDS=firefox-mozilla-esr

It is important that you make this change as close as possible to the release of Firefox 52 ESR (March 7, 2017); otherwise security updates to Firefox 51 will not be applied. Plugin support will continue in the 52 ESR line only, meaning Firefox 59 will not have plugin support.

Some folks may ask why Mozilla didn't wait until Firefox 53 to deprecate plugins, so that both versions of Firefox 52 would have the same capabilities. If they had done that, users who needed plugins would have had to downgrade to Firefox 52 ESR, and that could cause incompatibilities with profiles. It made more sense to encourage people to switch to the same version (52 to 52 ESR).
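For deployments with many machines, the two edits above are easy to script. Here is a minimal sketch in Python; the install directory is an assumption to adjust for your environment, while the relative file paths and replaced strings are the ones given above.

```python
# Sketch: switch an existing Firefox install from the release update channel
# to the ESR channel by rewriting the two files named in the post.
# FIREFOX_DIR is an assumption; point it at your actual install directory.
from pathlib import Path

FIREFOX_DIR = Path(r"C:\Program Files\Mozilla Firefox")  # assumed install path

def replace_in_file(path: Path, old: str, new: str) -> None:
    text = path.read_text(encoding="utf-8")
    if old not in text:
        raise SystemExit(f"{old!r} not found in {path}")
    path.write_text(text.replace(old, new), encoding="utf-8")

# Point the updater at the ESR channel.
replace_in_file(
    FIREFOX_DIR / "defaults" / "prefs" / "channel-prefs.js",
    'pref("app.update.channel", "release");',
    'pref("app.update.channel", "esr");',
)

# Accept ESR update packages (MARs).
replace_in_file(
    FIREFOX_DIR / "update-settings.ini",
    "ACCEPTED_MAR_CHANNEL_IDS=firefox-mozilla-release",
    "ACCEPTED_MAR_CHANNEL_IDS=firefox-mozilla-esr",
)
```

Run it with administrator rights on each machine (or push the two edited files through your deployment tooling) shortly before the 52 ESR release, as noted above.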
Posted about 7 years ago by Air Mozilla
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
Posted about 7 years ago
This is part of a series of posts I'm writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. We landed on a WebRTC-based implementation of a 1:1 communication device. For an overview of the project as a whole, see my earlier post.

This was one of those insights that seems obvious with hindsight. If you want to allow two people to communicate privately and securely, using non-proprietary protocols, and have no need or interest in storing or mediating this communication - you want WebRTC.

For Project Haiku we had a list of reasons for not wanting to write a lot of server software. Mozilla takes user privacy seriously, and the best way to protect user data is to not collect it in the first place. We also wanted to minimize lock-in and make the product easily portable to other providers. Cloud services' convenience comes at a price: it can be hard to move once you start investing time into using a service. Our product aimed to facilitate communication between a grandparent and grandchild. We didn't want to intrude into that by up-selling some premium service. There really wasn't much the server would need to do if we did this right.

Connect us and go away

Here's how it worked. Grandparent and grandchild want to talk more, so the grandparent (or parent) installs Device A in the child's home and connects it to WiFi. The grandparent can either install the app or their own Device B. An invitation to connect/pair can be generated from either side and sent to the other party. Once "paired" in this way, when both devices (peers) connect to the server, a secure channel is negotiated directly between the peers and the server's work is done. Actual messages and data are sent directly between the peers.

STUN, TURN and making it work

On the server side, we need to authenticate each incoming connection and shuttle the negotiation of offers and capabilities between two clients. In WebRTC terminology, this broker is called a signalling server. We built a simple proof of concept using node.js and WebSocket. The other necessary components of this system are a STUN and a TURN server - both well defined and with existing open source implementations.

The complexities associated with WebRTC kick in with multi-party conferencing, where different data streams might need to be composited together on the server or client or both. Then there's the need for real-time transcoding and re-sampling of audio and video streams to fit the capabilities of the different clients wanting to connect, and the networks they are connecting over. And interfacing with traditional telephony stacks and networks. In this landscape, the very limited set of parameters needed for Haiku's WebRTC use-case makes the solution relatively simple - we just don't need most of the things that bring along all that complexity.
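To give a feel for how little the signalling server has to do in this use-case, here is a minimal relay sketch. It is not the team's node.js proof of concept: it is written in Python, assumes the third-party websockets package (the classic two-argument handler API), uses the URL path as a toy room identifier, and omits the authentication and TLS a real deployment would need.

```python
# Minimal WebRTC signalling relay sketch (illustrative only, not the original
# node.js proof of concept). Assumes the third-party "websockets" package.
# Peers connect to ws://host:8765/<room>; anything one peer sends is relayed
# verbatim to the other peer in the same room. No auth, storage, or media.
import asyncio
import websockets

rooms = {}  # room id -> set of connected websockets

async def handler(websocket, path):
    room = rooms.setdefault(path, set())
    room.add(websocket)
    try:
        async for message in websocket:      # SDP offers/answers, ICE candidates
            for peer in room:
                if peer is not websocket:
                    await peer.send(message) # relay, never inspect or store
    finally:
        room.discard(websocket)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()               # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```

Everything of substance - the offers, answers, ICE candidates and the eventual media or data - stays opaque to the relay and flows peer to peer once negotiation completes, with STUN/TURN handling NAT traversal.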
The client and the catch

There is always a catch, isn't there? Almost all WebRTC client implementation effort to date has come from desktop browser vendors. Search the web and most of what you find about WebRTC assumes you are using a conventional browser like Firefox, Chrome/Chromium etc. That's no use for the Haiku device, where we are running in an embedded Linux environment, without the expected display or input methods and with limited system capabilities. Existing standalone headless web clients (such as curl, or node.js's built-in HTTP modules) do not yet speak WebRTC.

There is some useful work in the wrtc module, which provides native bindings for WebRTC, provided you can compile for your architecture. We were able to use this to put together a simple proof of concept, running on Debian on a BeagleBone Black. wrtc gives you the PeerConnection and DataChannel but no audio/video media streams. It was enough for us to taste sweet prototype success: a headless single-board computer securely contacting and authenticating at our signalling server, and conducting a P2P, fully opaque exchange of messages with a remote client.

Going from this smoke test of the concept to a complete and robust implementation is definitely doable, but it's not a trivial piece of work. Our user studies concluded that the asynchronous exchange of discrete messages was good for some scenarios, but the kids and grandparents also wanted to talk in real time. So to pick this back up means solving enough of the headless WebRTC client problem to enable audio streaming between devices - and, with the added need to support a mobile app as a client, likely transcoding audio too. Bug 156 on the wrtc module's repo discusses some options.

What next?

Putting the Haiku project on hold has meant walking away from this. I hope others will arrive at the same conclusions and we'll see WebRTC adoption expand beyond the browser. There are so many possibilities. Just stop for a moment and count the number of ways in which one device needs to talk securely to another using common protocols. Yet for reasons that suddenly seem unclear, this conversation is gated and channeled (and observed and logged) through a server in the cloud.

Both the desktop and mobile browser represent just one way to connect users to the Web. There are others; we should be looking into them. Although Mozilla exists to promote and protect the open web, it is historically a browser company. I can't tell you the number of long conversations I've had with colleagues which end with "wait, you mean this isn't happening in the browser?" Moving into IoT and Connected Devices means challenging this. We've set aside that challenge for now; I sincerely hope we'll come back to it.
Posted about 7 years ago
This is part of a series of posts I'm writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. By focusing on the user problem and not the business model, we quickly determined that we wanted as little data from our users as we could get away with. For context and an overview of the project, please see my earlier post.

When I was a kid, my brothers and I had wired walkie-talkies. Intercoms, really. Each unit was attached with about 100 feet of copper wire. One of us could be downstairs and, with the wire trailed dangerously under doors and up stairs, we could communicate between kitchen and bedroom. Later, in order to talk with a friend in the apartment block opposite us, we got a string pulled taut between our two balconies. With tin cans on each end of the string, you could just about hear what the other was saying.

RF-based wireless communication had existed for a long time already, but I bring these specific communication examples up because the connection we made was exclusive and private. We didn't need to agree on a frequency and hope no one else was listening in. The devices didn't just enable the connection, they were the connection. We didn't sign up for a service, didn't pay any subscription, and when we tired of it and it was given away, no contracts needed to be amended; the new owners simply picked up each end and started their own direct and private conversation.

In Project Haiku, when we thought about IoT and connecting people, this was the analogy we adopted. That doesn't sound like a very radical position to take. But look around you at some of the ways you communicate with friends and loved ones today when you are physically apart: cellular/SMS, Facebook, Skype, Facetime, Twitter, WhatsApp, Telegram… In each and every case you have accounts and arrangements with companies to make communication possible. And in each case that company can log at least the metadata if not the content of your conversations, change terms of agreement and policies, raise prices, or terminate the agreement entirely and disconnect you. They can be subpoenaed for their records and may even be obliged by law to retain and hand over data about their customers' transactions. In our tin can and string analogy, there's a large, locked black junction box sitting between you and your friend. You may own the equipment you use on your end, but it can be rendered effectively useless or even intrusive and hostile at any time, and there's probably not a thing you can do about it.

Everything about this situation is wrong for the direct, personal and private channel we wanted to establish to let kids and grandparents share moments and be a part of each other's lives from a distance. Clearly, some parts of this problem are more tractable than others. We're not in the ISP business, for example; how you get to the internet is outside our control. But keeping this analogy in mind provided a north star for our project. Whenever we were faced with a decision to make, it helped steer us. So when we thought about the unboxing and setup experience, we asked ourselves, "Do we actually need user accounts for these people?"

Here's the typical scenario when you install an app or take delivery of some shiny new tech. You plug it in or fire it up for the first time and you are asked to log in or sign up. You create a new account with company X, providing your name, address, email, maybe gender or age bracket, perhaps categories of interest, and an agreement to be spammed. If it's a paid service, they'll want your credit card info too. Furthermore, the app then requests a set of permissions giving it access to your address book.

Remember, the goal here is to allow a kid and their grandparent to exchange messages and chat from time to time. Which of these do we as Mozilla - the service provider - actually need?

Name? These two people are already in touch, so they don't need to find each other in a directory. Their invitations to connect/pair could take the form of a URL sent via text, or a QR code printed and sent via snail mail. These devices only connect these two people, so we don't have to identify who a message is from. And even if we did want that, they can configure it and send it from the device itself. We don't really need to store their names.

Address? Why would we care? We don't need to send them anything. If they do need to replace a device, they can provide a shipping address at that time.

Email? Most companies want to maintain a relationship with their customers. They'll email news of other services and offers of upgrades from time to time. It's called customer engagement, and that database of email addresses is one of a company's key assets. What if we didn't do that? What if we treated this product just like the tin cans, or the intercom, or any other thing you might purchase from a store? There's a single transaction to acquire the thing, and that's it.

In this scenario, we only care about the device itself; it's owned and used by whoever has it, and they can transfer it or sell it on and we don't need to know or care. In our grandparent/grandchild scenario, as one child grows up, maybe they pass it on to a younger sibling, or gift it to another family. All the users need is a way to break the "connection" that ties their devices together, and a way to start over with the invitation-to-connect process. The device itself needs to be uniquely identified to facilitate this, but not the user.

How this would shake out is one of the things we'll have to wait on, now that Project Haiku is on hold. Would it really have been practical to run a service like this with no visibility into who was using it? Would we be able to run the service at a low enough cost to allow us to support those devices indefinitely? Would this proposition have been understood and embraced by the market? The anonymity and opacity work both ways: we can't retrieve message histories for users, and we can't restore lost connections from the server side. If a device was stolen or even just picked up by a sibling, we can't filter or block connections and nuisance messages. Each connected/paired device can sever that pairing, but as long as they are connected, any message between the two is legitimate by definition.

We've grown accustomed to the need for user accounts, and for some part of our relationships to be owned and gated by third parties we maintain agreements with. If Project Haiku and its aspirations can serve to question these assumptions and provide some food for thought, it was time well spent.
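To make the invitation-to-connect idea above concrete, here is a tiny sketch of minting and redeeming a pairing token that identifies devices, not people. The host name, function names and in-memory store are illustrative assumptions, not details from Project Haiku's actual design or code.

```python
# Sketch of the "invitation, not account" idea: a device mints an opaque,
# single-use pairing link that identifies only the device pair being created,
# never a person. Names, host and storage here are illustrative assumptions.
import secrets

PAIRING_HOST = "https://pair.example.invalid"   # placeholder, not a real service
pending_invitations: dict[str, str] = {}        # token -> inviting device id

def new_invitation(device_id: str) -> str:
    """Return a URL (or QR payload) the owner can text or print for the other side."""
    token = secrets.token_urlsafe(16)           # unguessable, carries no identity
    # A real service would remember (token -> device_id) only long enough
    # for the other device to redeem it, then discard the record.
    pending_invitations[token] = device_id
    return f"{PAIRING_HOST}/pair/{token}"

def redeem(token: str, other_device_id: str) -> tuple[str, str]:
    """Pair two devices; afterwards the server knows only the device pair, no user data."""
    inviter = pending_invitations.pop(token)    # single use: the token is gone afterwards
    return (inviter, other_device_id)
```

Breaking the pairing is the inverse operation: forget the device pair, and either side can start over by minting a fresh invitation.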
Posted about 7 years ago by Amy Tsay
With last week's product update, we introduced a new look for the main and top-level listing pages of the Developer Hub. We went for a modern and friendly design, and surfaced more useful information and links.

The topmost button you see before you sign in takes you to documentation on how to build an add-on. Moving down the page, you see links to submit and manage your add-on, and information on porting an extension. The lower sections include a feed of our latest blog posts, validation and compatibility tools, and a much more organized and informational footer. We also added a new section on how to contribute to the add-on project.

We pinned commonly accessed links to the top and added an announcement area for people who are signed in. The list of your add-ons is easier to read, with pertinent information like Last Updated and Status more prominently displayed. It's also possible to click to submit a new theme directly from this page (previously, it only appeared on theme pages).

You'll see this new design on Android as well, although only the signed-out view is mobile-friendly for now. We are making steady improvements to the add-on submission and management experience throughout the year, and hope you enjoy this latest update.