Posted 1 day ago by Nathan Froyd
Several days ago, somebody pointed me at Why Amazon is eating the world and the key idea has been rolling around in my head ever since:

[The reason that Amazon's position is defensible is] that each piece of Amazon is being built with a service-oriented architecture, and Amazon is using that architecture to successively turn every single piece of the company into a separate platform — and thus opening each piece to outside competition. The most obvious example of Amazon's [service-oriented architecture] structure is Amazon Web Services (Steve Yegge wrote a great rant about the beginnings of this back in 2011). Because of the timing of Amazon's unparalleled scaling — hypergrowth in the early 2000s, before enterprise-class SaaS was widely available — Amazon had to build their own technology infrastructure. The financial genius of turning this infrastructure into an external product (AWS) has been well-covered — the windfalls have been enormous, to the tune of a $14 billion annual run rate. But the revenue bonanza is a footnote compared to the overlooked organizational insight that Amazon discovered: By carving out an operational piece of the company as a platform, they could future-proof the company against inefficiency and technological stagnation. …Amazon has replaced useless, time-intensive bureaucracy like internal surveys and audits with a feedback loop that generates cash when it works — and quickly identifies problems when it doesn't. They say that money earned is a reasonable approximation of the value you're creating for the world, and Amazon has figured out a way to measure its own value in dozens of previously invisible areas.

Open source is the analogue of this strategy in the world of software. You have some small collection of code that you think would be useful to the wider world, so you host your own repository or post it on GitHub, Bitbucket, etc.
You make an announcement in a couple of different venues where you expect to find interested people. People start using it, express appreciation for what you've done, and begin to generate ideas on how it could be made better, filing bug reports and sending you patches. Ideally, all of this turns into a virtuous cycle of making your internal code better as well as providing a useful service to external contributors. The point of the above article is that Amazon has applied an open-source-like strategy to its business relentlessly, and it's paid off handsomely.

Google is probably the best (unintentional?) practitioner of this strategy, exporting countless packages of software, such as GTest, Go, and TensorFlow, not to mention tools like their collection of sanitizers. They also do software-related exports like their C++ style guide. Facebook opens up in-house-developed components with React, HHVM, and Buck, among others. Microsoft has been charging into this arena in the past couple of years, with examples like Visual Studio Code, TypeScript, and ChakraCore. Apple doesn't really play the open source game; their open source site and available software are practically the definition of "throwing code over the wall", even if having access to the source is useful in a lot of cases. To the best of my knowledge, Amazon doesn't really play in this space either. I could also list examples of exported code from other smaller but still influential technology companies: GitHub, Dropbox, Twitter, and so forth, as well as companies that aren't traditional technology companies but have still invested in open-sourcing some of their software.

Whither Mozilla in the above list? That is an excellent question. I think in many cases, we haven't tried, and in the Firefox-related cases where we tried, we decided (incorrectly, judging through the above lens) that the risks of the open source approach weren't worth it.
Two recent cases where we have tried exporting software and succeeded wildly have been asm.js/WebAssembly and Rust, and it'd be worth considering how to translate those successes into Firefox-related ones. I'd like to make a follow-up post exploring some of those ideas soon.
Posted 1 day ago by Air Mozilla
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
Posted 1 day ago by mconley
Highlights

- Form Autofill is enabled by default in Nightly! Give it a try by manually creating an address profile in about:preferences. Nice!
- Photon Engineering Newsletter #1 went out last week, #2 coming later today!
- Flash is now Click-to-Play by default on Nightly! It also favors HTML5 video content over Flash content. Check out this video! We are running a Shield Study on this to fine-tune our blocklist, with the goal of sending this to the release audience in Firefox 55.
- A streamlined stub installer for Windows is shipping in 55: amazing things are now installing on your hard drive!
- A bunch of primary UI changes landed in Nightly as part of Photon in the last two weeks:
  - Dao detached the back, forward and reload/stop buttons from the urlbar
  - Dale implemented the new location and search bar design
  - Nihanth implemented the new toolbar button hover and active background styles and increased the vertical padding of the toolbar buttons
  - Nihanth also implemented a new back button design
- mconley has kicked off the first episode of The Joy of Profiling, a weekly video series on performance profile analysis. Got a slow Firefox? Submit Gecko Profiler profiles here! Also see ehsan's recording of the "Gecko And Native Profiler" talk.

Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

Resolved bugs (excluding employees), more than one bug fixed: https://mzl.la/2rwO12P

- Dan Banner
- Kevin Jones
- Milind L (:milindl)
- Swapnesh Kumar Sahoo [:swapneshks]
- tfe

New contributors ( = First Patch!)

- jomer14 got rid of some leftover l10n files that Firefox Accounts didn't need anymore!
- Pauline got rid of PlacesUtils.asyncGetBookmarkIds (which isn't needed anymore thanks to Bookmarks.jsm), which also reduced our memory footprint!
- Shashwat Jolly cleaned up some of our strings in about:license!
Project Updates

Activity Stream

- The 'Graduation' team is getting close to preffing on Activity Stream in Nightly and working on Search Suggestions, about:home, cleaning up tests, and perf telemetry. Aiming for more regular / weekly landings from GitHub to mozilla-central.
- Replaced custom React search suggestions from Test Pilot with the existing contentSearchUI used for about:home/about:newtab, simplifying tests.
- The Test Pilot team is finishing up customization (drag'n'drop, add/edit topsite), as well as beginning work on Sections.
- Activity Stream, when enabled on m-c, runs in the content process!
- Removed most of the Content Services / Suggested Tiles code from about:newtab, resulting in perf improvements and ~8x fewer intermittent test failures.

Electrolysis (e10s)

- The e10s-multi team is still looking over the data being gathered from the Beta population against the release criteria to determine whether e10s-multi will ship in Firefox 54 or will have to wait until Firefox 55.
- e10s-a11y support in Firefox 55 has been marked "at risk" due to stability issues. A go/no-go will happen no later than June 3rd.
- erahm has a blog post about how memory usage with 4 content processes continues to be the sweet spot.

Firefox Core Engineering

- Reminder: Firefox 55 is also installing 64-bit by default on 64-bit OSes; updates will come later.
- Resolved a race condition in the Rust runtime. This may have been bugging some tests (no pun intended).
- We have preliminary top-crasher lists from crash stacks sent via crash pings. Expanding that analysis is pending review of the data and correlation with crash reports.

Form Autofill

- Lots of focus was on a linux64 Talos startup regression, which is now resolved by making various pieces of autofill initialization lazier. This slowed down other landings in order to not add other noise.
Form Autofill bugs that were resolved this week:

- Enable Form Autofill by default on Nightly
- Rename formautofill preference prefix and address pref suffix
- Add a chrome-only API to preview the option to be auto-selected in a <select>
- [Form Autofill] autofill's autocomplete popup filtering is broken
- Support feature detection on autocomplete attribute
- [Form Autofill] Implement the credit-card storage
- profileStorage.get should return null instead of throwing, and .add should return the newly created guid
- Add SchemaVersion for each record in ProfileStorage
- [Form Autofill] Dismiss preview highlight if the filled fields is being changed

Photon Performance

- More rigorous reflow tests have landed for window opening, tab opening and tab closing. More tests coming up for windows and tabs, and the AwesomeBar.
- Kudos to the Structure / Menus team for making the subview animations smooth as silk! (Notice that Oh no! Reflow! is detecting no synchronous reflows in that video.)
- The big Task.jsm and Promise.defer() removal patches landed, as pre-announced during the last meeting. This covered the browser/ and toolkit/ folders; more patches coming soon for other folders.

Structure

- The page action menu has started taking shape and now has items to copy/email links, and will soon have a 'send to device' submenu.
- Tomorrow's Nightly will have Firefox Account and cut/copy/paste items in the main hamburger panel.
- Main work on the permanent overflow panel (as a replacement for the customizable bits of the existing hamburger panel) is done; working on polish and bugfixes.
- Work will start on the new library button this week.
- We'll be working to flip the Photon pref by default on Nightly in the next week or two.

Animation

- Patches going through review for the download animation and refresh/stop animations.
- Working on getting toolbarbutton icons to scale up on press.
- A patch to run more of our animations on the compositor bounced but should get landed again today.

Visuals

- We changed a bunch of stuff!
- See the highlights for details.
- Johann is working on compact and touch modes.

Onboarding

- The Automigration workflow screencast UX spec for Activity Stream is in progress. The UI part will be done separately in Activity Stream.
- The skeleton of the onboarding overlay system add-on is under review with Mossop. This is a skeleton without tours; tours will be landed in follow-up bugs. Screencast
- The onboarding tour's update between different versions is under discussion.

Preferences

- Search is taking shape on Nightly! It now comes with the right highlight color and tooltips for sub-dialog search results. With help from QA engineers, we are closing the gap between implementation and spec.
- The UX team asked us to revise the re-org. The change will likely postpone re-org shipping by one release (to 56); the good news is that the release population will be presented with the new search and re-org at the same time, if that happens.

Project Mortar (PDFium)

- peterv made some progress on the JSPlugin architecture – the last few pieces of work are in review. Hopefully this is the final round of review, and we will land all of them (8 patches!) soon.

Search

- One-off buttons in the location bar are ready to ride the trains in 55.
- Search suggestions are now enabled by default. Users who had explicitly opted out of search suggestions in the past will not see them.
- Hi-res favicon improvements: the default globe favicon is now hi-res (SVG) everywhere in the UI, and some ugly icon rescaling was fixed.

Sync / Firefox Accounts

- Form autofill sync engine in progress. The Sync team is working with MattN, seanlee, and lchang on profile reconciliation.
- New sync engine checkboxes for autofill profiles and credit card data are coming to Preferences; juwei is working on the opt-in UX.
- Removing nested event loops from Sync!

Test Pilot

- Containers experiment release 2.3.0, coming this week, adds a "site assignment" on-boarding panel to increase site assignments. It also removes SDK code!
- The Screenshots feature is now aiming for 55. We're hoping WebExtensions start-up will be performant enough by 55. Our backup plan: move the UI (toolbar button, context menu item) into bootstrap.js code and lazy-load the WebExtension on click.
- We're planning to start a Test Pilot blog (with help from Marketing).
- Test Pilot and experiments are moving away from the SDK. Looking into replacing Test Pilot add-on functionality with a WebExtension API Experiment (learn more).

Here are the raw meeting notes that were used to derive this list. Want to help us build Firefox? Get started here! Here's a tool to find some mentored, good first bugs to hack on.
Posted 2 days ago by Nicholas D. Matsakis
For my next post discussing chalk, I want to take kind of a different turn. I want to talk about the general structure of chalk queries and how chalk handles them right now. (If you've never heard of chalk, it's sort of a "reference implementation" for Rust's trait system, as well as an attempt to describe Rust's trait system in terms of its logical underpinnings; see this post for an introduction to the big idea.)

The traditional, interactive Prolog query

In a traditional Prolog system, when you start a query, the solver will run off and start supplying you with every possible answer it can find. So if I put something like this (I'm going to start adopting a more Rust-like syntax for queries, versus the Prolog-like syntax I have been using):

?- Vec<i32>: AsRef<?U>

The solver might answer:

Vec<i32>: AsRef<[i32]>
continue? (y/n)

This continue bit is interesting. The idea in Prolog is that the solver is finding all possible instantiations of your query that are true. In this case, if we instantiate ?U = [i32], then the query is true (note that the solver did not, directly, tell us a value for ?U, but we can infer one by unifying the response with our original query). If we were to hit y, the solver might then give us another possible answer:

Vec<i32>: AsRef<Vec<i32>>
continue? (y/n)

This answer derives from the fact that there is a reflexive impl (impl<T> AsRef<T> for T) for AsRef. If we were to hit y again, then we might get back a negative response:

no

Naturally, in some cases, there may be no possible answers, and hence the solver will just give me back no right away:

?- Box<i32>: Copy
no

In some cases, there might be an infinite number of responses. So for example if I gave this query, and I kept hitting y, then the solver would never stop giving me back answers:

?- Vec<?U>: Clone
Vec<?U>: Clone
continue? (y/n)
Vec<Box<?U>>: Clone
continue? (y/n)
Vec<Box<Box<?U>>>: Clone
continue? (y/n)
Vec<Box<Box<Box<?U>>>>: Clone
continue? (y/n)

As you can imagine, the solver will gleefully keep adding another layer of Box until we ask it to stop, or it runs out of memory. Another interesting thing is that queries might still have variables in them. For example:

?- Rc<?T>: Clone

might produce the answer:

Rc<?T>: Clone
continue? (y/n)

After all, Rc<?T>: Clone is true no matter what type ?T is.

Do try this at home: chalk has a REPL

I should just note that aturon recently added a REPL to chalk, which means that – if you want – you can experiment with some of the examples from this blog post. It's not really a "polished tool", but it's kind of fun. I'll give my examples using the REPL.

How chalk responds to a query

chalk responds to queries somewhat differently. Instead of trying to enumerate all possible answers for you, it is looking for an unambiguous answer. In particular, when it tells you the value for a type variable, that means that this is the only possible instantiation that you could use, given the current set of impls and where-clauses, that would be provable. Overall, chalk's answers have three parts:

- Status: Yes, No, or Maybe
- Refined goal: a version of your original query with some substitutions applied
- Lifetime constraints: these are relations that must hold between the lifetimes that you supplied as inputs. I'll come to this in a bit.

Future compatibility note: It's worth pointing out that I expect some of the particulars of a "query response" to change, particularly as aturon continues the work on negative reasoning. I'm presenting the current setup here, for the most part, but I also describe some of the changes that are in flight (and expected to land quite soon). Let's look at these three parts in turn.

The status and refined goal of a query response

The "status" tells you how sure chalk is of its answer, and it can be yes, maybe, or no. A yes response means that your query is uniquely provable, and in that case the refined goal that we've given back represents the only possible instantiation.
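The ambiguity in those Prolog-style answers can be reproduced in plain Rust. Here is a minimal sketch, assuming a made-up MyAsRef trait: the real libstd has no reflexive impl<T> AsRef<T> for T (the blog's libstd.chalk adds one), so we define our own trait with both a reflexive impl and a slice impl:

```rust
// Hypothetical stand-in for the blog's libstd.chalk setup: a toy
// AsRef-like trait with both a reflexive impl and a slice impl.
trait MyAsRef<T: ?Sized> {
    fn my_as_ref(&self) -> &T;
}

// The reflexive impl, like `impl<T> AsRef<T> for T` in libstd.chalk.
impl<T: ?Sized> MyAsRef<T> for T {
    fn my_as_ref(&self) -> &T {
        self
    }
}

// A slice impl, so `Vec<i32>: MyAsRef<?U>` has two distinct answers.
impl<T> MyAsRef<[T]> for Vec<T> {
    fn my_as_ref(&self) -> &[T] {
        &self[..]
    }
}

fn main() {
    let v: Vec<i32> = vec![1, 2, 3];
    // Both instantiations the solver enumerated are real:
    let s: &[i32] = v.my_as_ref(); // ?U = [i32]
    let w: &Vec<i32> = v.my_as_ref(); // ?U = Vec<i32> (reflexive impl)
    assert_eq!(s, w.as_slice());
    // With no annotation there is no unique answer, so this line,
    // much like a chalk `Maybe`, would not compile:
    // let ambiguous = v.my_as_ref();
}
```

Just as in the interactive Prolog session, the type annotation plays the role of the extra information that picks one instantiation of ?U over the other.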
In the examples we've seen so far, there was one case where chalk would have responded with yes:

> cargo run
?- load libstd.chalk
?- exists<T> { Rc<T>: Clone }
Solution {
    successful: Yes,
    refined_goal: Query {
        value: Constrained {
            value: [
                Rc<?0>: Clone
            ],
            constraints: []
        },
        binders: [
            U0
        ]
    }
}

(Since this is the first example using the REPL, a bit of explanation is in order. First, cargo run executes the REPL, naturally. The first command, load libstd.chalk, loads up some standard type/impl definitions. The next command, exists<T> { Rc<T>: Clone }, is the actual query. In the section of Prolog examples, I used the Prolog convention, which is to implicitly add the "existential quantifiers" based on syntax. chalk is more explicit: writing exists<T> { ... } here is saying "is there a T such that ... is true?". In future examples, I'll skip over the first two lines.)

You can see that the response here (which is just the Debug impl for chalk's internal data structures) included not only Yes, but also a "refined goal". I don't want to go into all the details of how the refined goal is represented just now, but if you skip down to the value field you will pick out the string Rc<?0>: Clone – here the ?0 indicates an existential variable. This is saying that the "refined" goal is the same as the query, meaning that Rc<?T>: Clone is true no matter what ?T is. (We saw the same thing in the Prolog case.)

So what about some of the more ambiguous cases? For example, what happens if we ask exists<T> { Vec<T>: Clone }. This case is trickier, because for Vec<T> to be clone, T must be Clone, so it matters what T is:

?- exists<T> { Vec<T>: Clone }
Solution {
    successful: Maybe,
    ... // elided for brevity
}

Here we get back maybe. This is chalk's way of saying that the query is provable for some instantiations of ?T, but we need more type information to find a unique answer. The idea is that we will continue type-checking or processing in the meantime, which may yield results that further constrain ?T; e.g., maybe we find a call to vec.push(22), indicating that the type of the values within is i32. Once that happens, we can repeat the query, but this time with a more specific value for ?T, so something like Vec<i32>: Clone:

?- Vec<i32>: Clone
Solution {
    successful: Yes,
    ...
}

Finally, sometimes chalk can decisively prove that something is not provable. This would occur if there is just no impl that could possibly apply (but see aturon's post, which covers how we plan to extend chalk to be able to reason beyond a single crate):

?- Box<i32>: Copy
`Copy` is not implemented for `Box<i32>` in environment `Env(U0, [])`

Refined goal in action

The refined goal so far hasn't been very important; but it's generally a way for the solver to communicate back a kind of substitution – that is, to communicate back what values the type variables have to have in order for the query to be provable. Consider this query:

?- exists<U> { Vec<i32>: AsRef<Vec<U>> }

Now, in general, a Vec<i32> implements AsRef twice:

- Vec<i32>: AsRef<Slice<i32>> (chalk doesn't understand the syntax [i32], so I made a type Slice for it)
- Vec<i32>: AsRef<Vec<i32>>

But here, we know we are looking for AsRef<Vec<?U>>. This implies then that ?U must be i32. And indeed, if we give this query, chalk tells us so, using the refined goal:

?- exists<U> { Vec<i32>: AsRef<Vec<U>> }
Solution {
    successful: Yes,
    refined_goal: Query {
        value: Constrained {
            value: [
                Vec<i32>: AsRef<Vec<i32>>
            ],
            constraints: []
        },
        binders: []
    }
}

Here you can see that there are no variables. Instead, we see Vec<i32>: AsRef<Vec<i32>>. If we unify this with our original query (skipping past the exists part), we can deduce that ?U = i32. You might imagine that the refined goal can only be used when the response is yes – but, in fact, this is not so. There are times when we can't say for sure if a query is provable, but we can still say something about what the variables must be for it to be provable.
Consider this example:

?- exists<U, V> { Vec<Vec<U>>: AsRef<Vec<V>> }
Solution {
    successful: Maybe,
    refined_goal: Query {
        value: Constrained {
            value: [
                Vec<Vec<?0>>: AsRef<Vec<Vec<?0>>>
            ],
            constraints: []
        },
        binders: [
            U0
        ]
    }
}

Here, we were asking if Vec<Vec<U>> implements AsRef<Vec<V>>. We got back a maybe response. This is because the AsRef impl requires us to know that U: Sized, and naturally there are many sized types that U could be, so we need to wait until we get more information to give back a definitive response. However, leaving aside concerns about U: Sized, we can see that Vec<V> must equal Vec<Vec<U>>, which implies that, for this query to be provable, Vec<U> = V must hold. And the refined goal reflects as much: Vec<Vec<?0>>: AsRef<Vec<Vec<?0>>>

Open vs closed queries

Queries in chalk are always "closed" formulas, meaning that all the variables that they reference are bound by either an exists or a forall binder. This is in contrast to how the compiler works, or a typical Prolog implementation, where a trait query occurs in the context of an ongoing set of processing. In terms of the current rustc implementation, the difference is that, in rustc, when you wish to do some trait selection, you invoke the trait solver with an inference context in hand. This defines the context for any inference variables that appear in the query. In chalk, in contrast, the query starts with a "clean slate". The only context that it needs is the global context of the entire program – i.e., the set of impls and so forth (and you can consider those part of the query, if you like).

To see the difference, consider this chalk query that we looked at earlier:

?- exists<U> { Vec<i32>: AsRef<Vec<U>> }

In rustc, such a query would look more like Vec<i32>: AsRef<Vec<?22>>, where we have simply used an existing inference variable (?22). Moreover, the current implementation simply gives back the yes/maybe/no part of the response, and does not have a notion of a refined goal. This is because, since we have access to the raw inference variable, we can just unify ?22 (e.g., with i32) as a side-effect of processing the query.

The new idea then is that when some part of the compiler needs to prove a goal like Vec<i32>: AsRef<Vec<?22>>, it will first create a canonical query from that goal (chalk code is in query.rs). This is done by replacing all the random inference variables (like ?22) with existentials. So you would get exists<U> { Vec<i32>: AsRef<Vec<U>> } as the output. One key point is that this query is independent of the precise inference variables involved: so if we have to solve this same query later, but with different inference variables (e.g., Vec<i32>: AsRef<Vec<?44>>), when we make the canonical form of that query, we'd get the same result.

Once we have the canonical query, we can invoke chalk's solver. The code here varies depending on the kind of goal, but the basic strategy is the same. We create a "fulfillment context", which is the combination of an inference context (a set of inference variables) and a list of goals we have yet to prove. (The compiler has a similar data structure, but it is set up somewhat differently; for example, it doesn't own an inference context itself.) Within this fulfillment context, we can "instantiate" the query, which means that we replace all the variables bound in an exists<> binder with an inference variable (here is an example of code invoking instantiate()). This effectively converts back to the original form, but with fresh inference variables. So exists<U> { Vec<i32>: AsRef<Vec<U>> } would become Vec<i32>: AsRef<Vec<?U>>. Next we can actually try to prove the goal, for example by searching through each impl, unifying the goal with the impl header, and then recursively processing the where-clauses on the impl to make sure they are satisfied. An advantage of the chalk approach where queries are closed is that they are much easier to cache.
We can solve the query once and then "replay" the result an endless number of times, so long as the enclosing context is the same.

Lifetime constraints

I've glossed over one important aspect of how chalk handles queries, which is the treatment of lifetimes. In addition to the refined goal, the response from a chalk query also includes a set of lifetime constraints. Roughly speaking, the model is that the chalk engine gives you back the lifetime constraints that would have to be satisfied for the query to be provable. In other words, if you had a full, lifetime-aware logic, you might say that the query is provable in some environment Env that also includes some facts about the lifetimes (i.e., which lifetime outlives which other lifetime, and so forth):

Env, LifetimeEnv |- Query

but in chalk we only give in Env, and the engine gives back to us a LifetimeEnv:

chalk(Env, Query) = LifetimeEnv

with the intention that we know that if we can prove that LifetimeEnv holds, then Query also holds.

One of the main reasons for this split is that we want to ensure that the results from a chalk query do not depend on the specific lifetimes involved. This is because, in part, we are going to be solving chalk queries in contexts where lifetimes have been fully erased, and hence we don't actually know the original lifetimes or their relationships to one another. (In this case, the idea is roughly that we will get back a LifetimeEnv with the relationships that would have to hold, but we can be sure that an earlier phase in the compiler has proven to us that this LifetimeEnv will be satisfied.) Anyway, I plan to write a follow-up post (or more…) focusing just on lifetime constraints, so I'll leave it at that for now. This is also an area where we are doing some iteration, particularly because of the interactions with specialization, which are complex.

Future plans

Let me stop here to talk a bit about the changes we have planned. aturon has been working on a branch that makes a few key changes. First, we will replace the notion of "refined goal" with a more straight-up substitution. That is, we'd like chalk to answer back with something that just tells you the values for the variables you've given. This will make later parts of the query processing easier. Second, following the approach that aturon outlined in their blog post, when you get back a "maybe" result, we are actually going to be considering two cases. The current code will return a refined substitution only if there is a unique assignment to your input variables that must be true for the goal to be provable. But in the newer code, we will also have the option to return a "suggestion" – something which isn't necessary for the goal to be provable, but which we think is likely to be what the user wanted. We hope to use this concept to help replicate, in a more structured and bulletproof way, some of the heuristics that are used in rustc itself. Finally, we plan to implement the "modal logic" operators, so that you can make queries that explicitly reason about "all crates" vs "this crate".
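The canonicalization step described earlier can be made concrete with a minimal sketch. The Ty type below is a toy model, not chalk's actual data structures: distinct inference variables are replaced by indices in order of first appearance, so queries that differ only in variable numbering produce the same canonical key:

```rust
use std::collections::HashMap;

// Toy model of types (not chalk's real representation): `Infer(22)`
// stands for an inference variable like ?22.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum Ty {
    Apply(String, Vec<Ty>), // e.g. Vec<?22> = Apply("Vec", [Infer(22)])
    Infer(u32),             // inference variable from the caller's context
    Bound(u32),             // canonical, existentially bound variable
}

// Replace each distinct inference variable with the next canonical
// index, in order of first appearance.
fn canonicalize(ty: &Ty, map: &mut HashMap<u32, u32>) -> Ty {
    match ty {
        Ty::Apply(name, args) => Ty::Apply(
            name.clone(),
            args.iter().map(|a| canonicalize(a, map)).collect(),
        ),
        Ty::Infer(v) => {
            let next = map.len() as u32;
            Ty::Bound(*map.entry(*v).or_insert(next))
        }
        Ty::Bound(i) => Ty::Bound(*i),
    }
}

fn main() {
    // Vec<?22> vs Vec<?44>, standing in for the two queries above.
    let with_22 = Ty::Apply("Vec".into(), vec![Ty::Infer(22)]);
    let with_44 = Ty::Apply("Vec".into(), vec![Ty::Infer(44)]);
    let a = canonicalize(&with_22, &mut HashMap::new());
    let b = canonicalize(&with_44, &mut HashMap::new());
    // Different inference variables, identical canonical query:
    assert_eq!(a, b);
    assert_eq!(a, Ty::Apply("Vec".into(), vec![Ty::Bound(0)]));
}
```

Because the canonical form is independent of the caller's inference context, it can serve directly as a cache key, which is exactly the caching advantage the post describes.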
Posted 2 days ago by erahm
Goal: Replace Gecko's XML parser, libexpat, with a Rust-based XML parser

Firefox currently uses an old, trimmed-down, and slightly modified version of libexpat, a library written in C, to support parsing of XML documents. These files include plain old XML on the web, XSLT documents, SVG images, XHTML documents, RDF, and our own XUL UI format. While it's served its purpose well, it has long been unmaintained and has been a source of many security vulnerabilities, a few of which I've had the pleasure of looking into. It's 13,000 lines of rather hard-to-understand code, and tracing through everything when looking into security vulnerabilities can take days at a time. It's time for a change.

I'd like us to switch over to a Rust-based XML parser to help improve our memory safety. We've done this already with at least two other projects: an mp4 parser and a URL parser. This seems to fit well into that mold: a standalone component with past security issues that can be easily swapped out.

There have been suggestions of adding full XML 1.0 v5 support, there's a 6-year-old proposal to rewrite our XML stack which doesn't include replacing expat, and there's talk of the latest and greatest, but not quite fully specced, XML5. These are all interesting projects, but they're large efforts. I'd like to see us make a reasonable change now.

What do we want?

In order to avoid scope creep and actually implement something in the short term, I just want a library we can drop in that has parity with the features of libexpat that we currently use. That means:

- A streaming, SAX-like interface that generates events as we feed it a stream of data
- Support for DTDs and external entities
- XML 1.0 v4 (possibly v5) support
- A UTF-16 interface. This isn't a firm requirement; we could convert from UTF-16 -> UTF-8 -> UTF-16, but that's clearly sub-optimal
- As fast as expat, with a low memory footprint

Why do we need UTF-16? Short answer: that's how our current XML parser stack works.
Slightly longer answer: In Firefox, libexpat is wrapped by nsExpatDriver, which implements nsITokenizer. nsITokenizer uses nsScanner, which exposes the data it wraps as UTF-16 and takes in nsAString, which as you may have guessed is a wide string. It can also read in C strings, but internally it performs a character conversion to UTF-16. On the other side, all tokenized data is emitted as UTF-16, so all consumers would need to be updated as well. This extends further out, but hopefully that's enough to explain why a drop-in replacement should support UTF-16.

What don't we need?

We can drop the complexity of our parser by excluding parts of expat, or of more modern parsers, that we don't need. In particular:

- Character conversion (other parts of our engine take care of this)
- XML 1.1 and XML5 support
- Output serialization
- A full rewrite of our XML handling stack

What are our options?

There are three Rust-based parsers that I know of, none of which quite fit our needs:

xml-rs
- StAX based; we prefer SAX
- Doesn't support DTDs, entities
- UTF-8 only
- Doesn't seem very active

RustyXML
- Is SAX-like
- Doesn't support DTDs, entities
- Seems to only support UTF-8
- Doesn't seem to be actively developed

xml5ever
- Used in Servo
- Only aims to support XML5
- Permissive about malformed XML
- Doesn't support DTDs, entities

Where do we go from here?

My recommendation is to implement our own parser that fits the needs and use cases of Firefox specifically. I'm not saying we'd necessarily start from scratch; it's possible we could fork one of the existing libraries or just take inspiration from a little bit of all of them, but we have rather specific requirements that need to be met.
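To make the "streaming, SAX-like interface" requirement concrete, here is a minimal sketch under simplifying assumptions (hypothetical names, UTF-8 &str input rather than the UTF-16 we'd actually need, and no attributes, DTDs, or entities) of the push-based shape such a parser would have:

```rust
/// Events a streaming parser pushes to its consumer, SAX style.
#[derive(Debug, PartialEq)]
enum XmlEvent {
    StartElement(String),
    EndElement(String),
    Text(String),
}

trait XmlSink {
    fn handle(&mut self, event: XmlEvent);
}

/// Toy push parser: feed it chunks of input; it buffers across chunk
/// boundaries and emits events as complete tokens appear. A real
/// replacement would take UTF-16 and handle attributes, DTDs, and
/// external entities.
struct Parser {
    buf: String,
}

impl Parser {
    fn new() -> Self {
        Parser { buf: String::new() }
    }

    fn feed(&mut self, chunk: &str, sink: &mut dyn XmlSink) {
        self.buf.push_str(chunk);
        loop {
            match self.buf.find('<') {
                Some(start) => {
                    if start > 0 {
                        // Flush text that precedes the tag.
                        let text: String = self.buf.drain(..start).collect();
                        sink.handle(XmlEvent::Text(text));
                    }
                    match self.buf.find('>') {
                        Some(end) => {
                            let tag: String = self.buf.drain(..=end).collect();
                            let name = tag[1..tag.len() - 1].trim();
                            match name.strip_prefix('/') {
                                Some(n) => sink.handle(XmlEvent::EndElement(n.into())),
                                None => sink.handle(XmlEvent::StartElement(name.into())),
                            }
                        }
                        None => break, // incomplete tag; wait for more data
                    }
                }
                None => break, // no tag yet; keep buffering
            }
        }
    }
}

struct Collect(Vec<XmlEvent>);

impl XmlSink for Collect {
    fn handle(&mut self, event: XmlEvent) {
        self.0.push(event);
    }
}

fn main() {
    let mut parser = Parser::new();
    let mut sink = Collect(Vec::new());
    // Events are emitted correctly even when a token spans two chunks.
    parser.feed("<greeting>he", &mut sink);
    parser.feed("llo</greeting>", &mut sink);
    assert_eq!(
        sink.0,
        vec![
            XmlEvent::StartElement("greeting".into()),
            XmlEvent::Text("hello".into()),
            XmlEvent::EndElement("greeting".into()),
        ]
    );
}
```

The key property this sketch illustrates is the one the requirements list asks for: the caller drives the parser with arbitrary chunks of data, and the parser pushes events to a sink rather than handing back a document tree.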
Posted 2 days ago by Air Mozilla
The Bugzilla Project developers meeting.
Posted 2 days ago by Air Mozilla
We'll be answering questions and chatting about the 2017 Global Sprint, Mozilla's 2-day, world-wide collaboration party for the open web.
Posted 2 days ago by Air Mozilla
This is the sumo weekly call
Posted 3 days ago by Giorgos
Over at MozMeao we are using APScheduler to schedule the execution of periodic tasks, like Django management tasks to clear sessions or to fetch new job listings for the Mozilla Careers website. A couple of services provide monitoring of cron job execution, including HealthChecks.io and DeadManSnitch. The idea is that you ping a URL after the successful run of the cron job. If the service does not receive a ping within a predefined time window, then it triggers notifications to let you know.

With shell scripts this is as simple as running curl after your command:

$ ./manage.py clearsessions && curl https://hchk.io/XXXX

For Python-based scripts like APScheduler's, I created a tool to help with that: Babis provides a function decorator that will ping monitor URLs for you. It will ping a URL before the start or after the end of the execution of your function. With both the before and after options combined, the time required to complete the run can be calculated.

You can also rate-limit your pings. So if you're running a cron job every minute but your check window is every 15 minutes, you can play nicely and avoid DOSing your monitor by defining a rate of at most one request per 15 minutes with 1/15m.

In some cases network hiccups or monitor service maintenance can make Babis fail. With the silent_failures flag you can ignore any failures to ping the defined URLs.

The most common use of Babis is to ping a URL after the function has returned without throwing an exception:

@babis.decorator(ping_after='https://hchk.io/XXXX')
def cron_job():
    pass

Babis is available on PyPI and just a pip install away. Learn more at the Babis GitHub page.