
News

Posted over 4 years ago by Marco
Over the past year and a half, I have ventured time and again into the federated Mastodon social network. In those ventures, I have contributed bug reports both to the main Mastodon web client and to some alternative clients on the web, iOS, and Android. One of those clients, a single-page progressive web app, is Pinafore by Nolan Lawson. He set out to create a fast, light-weight, and accessible client from the ground up. When I started to use Pinafore, I immediately noticed that a lot of thought and effort had already gone into the client, and I could start using it right away. I then began contributing bug reports, and over time, Nolan has tremendously improved what was already very good: more keyboard support, so that even a screen reader user can drive Pinafore without virtual buffers; various light and dark themes; support for reducing animations; and much, much more. And now, Nolan has shared what he has learned about accessibility in the process. His post is an excellent recollection of the challenges of building a cross-platform SPA that accounts for screen readers, keyboard users, styling, and more, and of how to overcome those obstacles. It is an excellent read which contains suggestions and food for thought for many web developers. Enjoy the read!
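One of the improvements mentioned above, support for reducing animations, maps onto a standard web primitive. As a rough illustration only (this is not Pinafore's actual code; the CSS class name and helper function are hypothetical), an SPA can gate its animations on the user's OS-level preference:

```typescript
// Hypothetical sketch: honor the OS-level "reduce motion" setting in an SPA.
// Not taken from Pinafore's source.
const reducedMotionQuery = window.matchMedia('(prefers-reduced-motion: reduce)');

function shouldAnimate(): boolean {
  // Respect the OS/browser setting; a real app might additionally offer
  // an in-app override, as the post suggests Pinafore does.
  return !reducedMotionQuery.matches;
}

// React if the user changes the setting while the app is open.
reducedMotionQuery.addEventListener('change', (event) => {
  document.body.classList.toggle('reduce-motion', event.matches);
});
```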
Posted over 4 years ago by TWiR Contributors
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts
- Completing the transition to the new borrow checker.
- Rust support for Windows Runtime in the works by the author of C++ WinRT.
- You probably didn't want .into_iter().cloned().
- Clippy is removing its plugin interface.
- Rust concurrency patterns: condvars and locks.
- How to make your C codebase rusty: rewriting keyboard firmware keymap in Rust.
- When writing a bump allocator, always bump downwards.
- Adventures in motion control: initial motion system.
- 2019-10-24 compiler team triage meeting.

#Rust2020
Find all #Rust2020 posts at Read Rust.

Crate of the Week
This week's crate is displaydoc, a procedural derive macro to implement Display by string-interpolating the doc comment. Thanks to Willi Kappler for the suggestion! Submit your suggestions and votes for next week!

Call for Participation
Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available; visit the task page for more information.
- Announcing safety-dance: removing unnecessary unsafe code from popular crates.
- RFC: make Cargo embed dependency versions in the compiled binary.
- [good first issue] cargo-sweep: Could cargo-sweep work without rustup?
- [good first issue] Rubble: Add a function for reading the device address to rubble-nrf52.
- [good first issue] Rubble: Don't give up when missing the initial transmit window.
- [good first issue] Rubble: LLCP updates are not applied when the event is missed.
- [good first issue] Rubble: Log buffer overflow on nrf52832.
- [good first issue] Rubble: Try out scroll or zerocopy for de/encoding of PDUs.
- [good first issue] Rubble: Only reply to LL_VERSION_IND once.
If you are a Rust project owner and are looking for contributors, please submit tasks here.
Updates from Rust Core
217 pull requests were merged in the last week:
- Allow foreign exceptions to unwind through Rust code and Rust panics to unwind through FFI
- expand: Feature gate out-of-line modules in proc macro input
- Lint ignored #[inline] on function prototypes
- Improve the "try using a variant of the expected type" hint
- Use heuristics to recover parsing of missing ;
- Point at local similarly named element and tweak references to variants
- Custom lifetime error for impl item doesn't conform to trait
- Add lint and tests for unnecessary parens around types
- Correct handling of type flags with ConstValue::Placeholder
- Use structured suggestion for unnecessary bounds in type aliases
- save-analysis: Account for async desugaring in async fn return types
- Switch CrateMetadata's source_map_import_info from RwLock to Once
- Don't use eval_always for miri queries used from codegen
- rustc: use IndexVec instead of Vec
- Make promote_consts emit the errors when required promotion fails
- Implement ordered/sorted iterators on BinaryHeap
- Make <*{const, mut} T>::offset_from const fn
- Stabilize float_to_from_bytes feature
- hashbrown: Introduce ahash-compile-time-rng feature
- cargo: Add --filter-platform to cargo metadata
- cargo: Fix cargo fix not showing colors
- chalk: Remove delayed literals
- chalk: Add TypeName::Error variant
- chalk: Output multiple solutions
- rustdoc: Stabilize cfg(doctest)

Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- No RFCs were approved this week.

Final Comment Period
Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
- [disposition: merge] Announcing the FFI-unwinding Project Group.
- [disposition: postpone] Signing registry index commits.

Tracking Issues & PRs
- [disposition: merge] Stabilize --extern flag without a path.
- [disposition: merge] Fully integrate derive helpers into name resolution.
- [disposition: merge] Make the semantics of Vec::truncate(N) consistent with slices.
- [disposition: merge] Use ptr::drop_in_place for VecDeque::truncate and VecDeque::clear.

New RFCs
- Add method Result::into_ok.
- Make Cargo embed dependency versions in the compiled binary.
- Vec::recycle.
- Target tier policy.
- [T]::rejoin.

Upcoming Events

Asia Pacific
- Nov 13. Selangor, MY - Rust Malaysia Meetup November 2019.

Europe
- Nov 9 & 10. Barcelona, ES - RustFest Barcelona 2019.
- Nov 12. Hamburg, DE - Rust Hack & Learn November 2019.
- Nov 13. Wrocław, PL - Rust Wrocław Meetup #14.
- Nov 13. Berlin, DE - OpenTechSchool Berlin - Rust Hack and Learn.
- Nov 14. Zurich, CH - Rust Zurich - RustFest Decompression Zürich.
- Nov 14. Moscow, RU - Rust Moscow November 2019 Meetup.
- Nov 15. Barcelona, ES - Rust GTK/GStreamer Workshop at Linux Application Summit 2019.
- Nov 21. Turin, IT - Mozilla Torino - Gruppo di studio Rust.

North America
- Nov 12. Seattle, WA, US - Seattle Rust Meetup - Monthly meetup.
- Nov 13. Atlanta, GA, US - Grab a beer with fellow Rustaceans.
- Nov 13. Vancouver, BC, CA - Vancouver Rust meetup.
- Nov 14. San Diego, CA, US - San Diego Rust November Meetup.
- Nov 14. Lehi, UT, US - Utah Rust - November 2019 Regular Meetup.
- Nov 14. Columbus, OH, US - Columbus Rust Society - Monthly Meeting.
- Nov 14. Montreal, QC, CA - Montreal Rust Meetup - November 2019 RustMTL: November Common Traits & Causal Profiling.
- Nov 20. Portland, OR, US - PDXRust - Hack Night.

If you are running a Rust event please add it to the calendar to get it mentioned here.
Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs
- Rust Engineer at Commure, Inc. (San Francisco, Boston, Montreal).
- Data Analysis Software Engineer at Swift Navigation, San Francisco, US.
- Rust/Core Developer at Parity, Berlin, DE (Remote available).
Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I did manage to get this compile in the end - does anyone else find that the process of asking the question well on a public forum organizes their thoughts well enough to solve the problem?

– David Mason on rust-users

Thanks to Daniel H-M for the suggestion! Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz. Discuss on r/rust.
Posted over 4 years ago
In Tracking Diaries, we invited people from all walks of life to share how they spent a day online while using Firefox’s privacy protections to keep count of the trackers … The post Tracking Diaries with Matt Navarra appeared first on The Firefox Frontier.
Posted over 4 years ago by Mozilla
In April we announced our intent to reduce the amount of annoying permission prompts for receiving desktop notifications that our users are seeing on a daily basis. To that effect, we ran a series of studies and experiments around restricting these prompts. Based on these studies, we will require user interaction on all notification permission prompts, starting in Firefox 72. That is, before a site can ask for notification permission, the user will have to perform a tap, click, or press a key. In this blog post I will give a detailed overview of the study results and further outline our plans for the future.

Experiments

As previously described, we designed a measurement using Firefox Telemetry that allows us to report details around when and how a user interacts with a permission prompt without revealing personal information. The full probe definition can be seen in our source code. It was enabled for a randomly chosen pool of study participants (0.1% of the user population) on Firefox Release, as well as for all users of Firefox Nightly. The Release study additionally differentiated between new users and existing users, to account for an inherent bias of existing users towards denying permission requests (because they usually already “have” the right permissions on sites relevant to them). We further enabled requiring user interaction for notification permission prompts in Nightly and Beta.

Results

Most of the heavy lifting here was done by Felix Lawrence, who performed a thorough analysis of the data we collected. You can read his full report for our Firefox Release study. I will highlight some of the key takeaways:

- Notification prompts are very unpopular. On Release, about 99% of notification prompts go unaccepted, with 48% being actively denied by the user. This is even worse than what we’ve seen on Nightly, and it paints a dire picture of the user experience on the web. To add from related telemetry data, during a single month of the Firefox 63 Release, a total of 1.45 billion prompts were shown to users, of which only 23.66 million were accepted. That is, for each prompt that is accepted, sixty are denied or ignored. In about 500 million cases during that month, users actually spent the time to click on “Not Now”.
- Users are unlikely to accept a prompt when it is shown more than once for the same site. We had previously given websites the ability to ask users for notification permission every time they visit a site in a new tab. The underlying assumption that users would want to take several visits to make up their minds turns out to be wrong. As Felix notes, around 85% of prompts were accepted without the user ever having previously clicked “Not Now”.
- Most notification prompts don’t follow user interaction. Especially on Release, the overall number of prompts that are already compatible with this intervention is very low.
- Prompts that are shown as a result of user interaction have significantly better interaction metrics. This is an important takeaway. Along with the significant decrease in overall volume, we can see a significantly better rate of first-time allow decisions (52%) after enforcing user interaction on Nightly. The same can be observed for prompts with user interaction in our Release study, where existing users accepted 24% of first-time prompts with user interaction and new users accepted a whopping 56%.
Changes

Based on the outlined results we have decided to enact two new restrictions:

- Starting from Firefox 70, we will replace the default “Not Now” option with “Never”, which will effectively hide the prompt on a page forever.
- Starting from Firefox 72, we will require user interaction to show notification permission prompts. Prompts without interaction will only show a small icon in the URL bar.

When a permission prompt is denied by Firefox, the user still has the ability to override this automatic decision by clicking the small permission icon that appears in the address bar. This lets users still use the feature on websites that prompt without user interaction.

Besides the clear improvements in user interaction rates that our study has shown, these restrictions were derived from a few other considerations:

- Easy to upgrade. Requiring user interaction allows for an easy upgrade path for affected websites, while hiding annoying “on load” prompts.
- Transparent. Unlike other heuristics (such as “did the user visit this site a lot in the past”), interaction is easy to understand for both developers and users.
- Encourages pre-prompting. We want websites to use in-content controls to enable notifications, as long as they have an informative style and do not try to mimic native browser UI. Faking (“spoofing”) browser UI is considered a security risk and will be met with stronger enforcement in the future. A good pre-prompt follows the style of the page and adds additional context to the request. Pre-prompting, when done well, will increase the chance of users opting to receive notifications. Annoying users, as our data shows, will lead to churn.

We will release additional information and resources for web developers on our Mozilla Hacks blog. We hope that these restrictions will lead to an overall better and less annoying user experience for all our users while retaining the functionality for those that need notifications. The post Restricting Notification Permission Prompts in Firefox appeared first on Future Releases.
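To make the user-interaction requirement concrete, here is a minimal sketch of the pattern the post encourages: an in-content pre-prompt (here just a plain button) whose click handler, being a user gesture, may trigger the native permission prompt. The element id and notification text are invented for illustration:

```typescript
// Minimal sketch: request notification permission only from a user gesture,
// after an in-content pre-prompt. The id "enable-notifications" is hypothetical.
const button = document.getElementById('enable-notifications');

button?.addEventListener('click', async () => {
  // Requesting inside a click handler satisfies the user-interaction
  // requirement; a request fired "on load" would instead be reduced to
  // a small icon in the URL bar.
  const permission = await Notification.requestPermission();
  if (permission === 'granted') {
    new Notification('Notifications are now enabled.');
  }
});
```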
Posted over 4 years ago by mkohler
Please join us in congratulating Shina Dhingra, Rep of the Month for October 2019! Shina is from Pune, Maharashtra, India. Her journey started with the Mozilla Pune community while she was in college in 2017, with Localization in Hindi and quality assurance bugs. She’s been an active contributor to the community and since then has helped a lot of newcomers with their onboarding, helping them better understand what the Mozilla Community is all about. She joined the Reps Program in February 2019 and since then she has actively participated and contributed to Common Voice, A-Frame, Localization, Add-ons, and other open source projects. She built her own project as a mentee under the Open Leaders Program, and will be organizing and hosting her own cohort called “Healthier AI”, which she launched at MozFest this year. Congratulations and keep rocking the open web! To congratulate her, please head over to Discourse!
Posted over 4 years ago by Nathan Froyd
In our last post, we highlighted some of the advantages that Bazel would bring. The remote execution and caching benefits Bazel brings look really attractive, but it’s difficult to tell exactly how much they would benefit Firefox. I looked for projects that had switched to Bazel, and a brief summary of each project’s experience is written below.

The Bazel rules for nodejs highlight Dataform’s switch to Bazel, which took about 2 months. Their build involves some combination of “NPM packages, Webpack builds, Node services, and Java pipelines”. Switching plus enabling remote caching reduced the average time for a build in CI from 30 minutes to 5 minutes; incremental builds for local development have been “reduced to seconds from minutes”. It’s not clear whether the local development experience is also hooked up to the caching infrastructure.

Pinterest recently wrote about their switch to Bazel for iOS. While they call out remote caching leading to “build times [dropping] under a minute and as low as 30 seconds”, they state their “time to land code” only decreased by 27%. I wasn’t sure how to reconcile such fast builds with (relatively) modest decreases in CI time. Tests have gotten a lot faster, given that test results can be cached and reused if the tests in question have their transitive dependencies unchanged.

One of the most complete (relatively speaking) descriptions I found was Redfin’s switch from Maven to Bazel, for building a large amount of JavaScript modules and Java code, nearly 30,000 files in all. Their CI builds went from 40-90 minutes to 5-6 minutes; in fairness, it must be mentioned that their Maven builds were not parallelized (for correctness reasons) whereas their Bazel builds were. But it’s worth highlighting that they managed to do this incrementally, by generating Bazel build definitions from their Maven ones, and that the quoted build times did not enable caching. The associated tech talk slides/video indicate builds would be roughly in the 1-2 minute range with caching, although they hadn’t deployed that yet.

None of the above accounts talked about how long the conversion took, which I found peculiar. Both Pinterest and Redfin called out how much more reliable their builds were once they switched to Bazel; Pinterest said, “we haven’t performed a single clean build on CI in over a year.”

In some negative results, which are helpful as well, Dropbox wrote about evaluating Bazel for their Android builds. What’s interesting here is that other parts of Dropbox are heavily invested in Bazel, so there’s a lot of in-house experience, and that Bazel was significantly faster than their current build system (assuming caching was turned on; Bazel was significantly slower for clean builds without caching). Yet Dropbox decided not to switch to Bazel due to tooling and development experience concerns. They did leave open the possibility of switching in the future once the ecosystem matures.

The oddly-named Bazel Fawlty describes a conversion to Bazel from Go’s native tooling, and then a switch back after a litany of problems, including slower builds (but faster tests), a poor development experience (especially on OS X), and various things not being supported in Bazel, leading to the native Go tooling still being required in some cases. This post was also noteworthy for calling out the amount of porting effort required to switch: eight months plus “many PR’s accepted into the bazel go rules git repo”. I haven’t used Go, but I’m willing to discount some of the negative experience here due to the native Go tools being so good.

Neither of these negative experiences translates exactly to Firefox: different languages/ecosystems, different concerns, different scales. But both of them cite the developer experience specifically, suggesting that not only is there a large investment required to actually do the switchover, but you also need to write tooling around Bazel to make it more convenient to use.

Finally, a 2018 BazelCon talk discusses two Google projects that made the switch to Bazel and specifically to use remote caching and remote execution on Google’s public-facing cloud infrastructure: Android Studio and TensorFlow. (You may note that this is the first instance where somebody has called out supporting remote execution as part of the switch; I think that implies getting a build to the point of supporting remote execution is more complicated than just supporting remote caching, which makes a certain amount of sense.) Android Studio increased their test presubmit coverage by 4x, presumably by being able to run more than 4x as many test jobs as previously, due to remote execution. In the same vein, TensorFlow decreased their build and test times by 80%, and they could use significantly less powerful machines to actually run the builds, given that large machines in the cloud were doing the actual heavy lifting.

Unfortunately, I don’t think expecting those same reductions in test time, were Firefox to switch to Bazel, is warranted. I can’t speak to Android Studio, but TensorFlow has a number of unit tests whose test results can be cached. In the Firefox context, these would correspond to cppunittests, which a) we don’t have that many of and b) don’t take that long to run. The bulk of our tests depend in one way or another on kitchen-sink-style artifacts (e.g. libxul, the JS shell, omni.ja) which essentially depend on everything else. We could get some reductions for OS-specific modifications; Windows-specific changes wouldn’t require re-running OS X tests, for instance, but my sense is that these sorts of changes are not common enough to lead to an 80% reduction in build + test time. I suppose it’s also possible that we could teach Bazel that e.g. devtools changes don’t affect, say, non-devtools mochitests/reftests/etc. (presumably?), which would make more test results cacheable.

I want to believe that Bazel + remote caching (+ remote execution if we could get there) will bring Firefox build (and maybe even test) times down significantly, but the above accounts don’t exactly move the needle from belief to certainty.
Posted over 4 years ago
On Friday 2019-10-25 I participated in Redecentralize Conference 2019, a one-day unconference in London, England on the topics of decentralisation, privacy, autonomy, and digital infrastructure. I gave a 3 minute lightning talk, helped run an IndieWeb standards & methods session in the first open slot of the day, and participated in two more sessions. The second open session had no Etherpad notes, so this post is from my one-week-ago memory recall.

Decentralized lunch

After the first open session of the day, the Redecentralize conference provided a nice informal buffet lunch for participants. Though we picked up our eats from a centralized buffet, people self-organized into their own distributed groups. There were a few folks I knew or had recently met, and many more that I had not. I sat with a few people who looked like they had just started talking, and that’s when I met Kate. I asked if she was running a session and she said yes, in the next time slot, on decentralized identity and rethinking reputation. She also noted that she wanted to approach it from a human exploration perspective rather than a technical perspective, and was looking to learn from participants. I decided I’d join, looking forward to a humans-first (rather than technology-plumbing-first) conversation and discussion.

Discussion circle

After lunch everyone found their way to various sessions or corners of the space to work on their own projects. The space for Kate’s session was an area in the middle of a large room, without a whiteboard or projector. About a half dozen of us assembled chairs in a rough oval to get started. As we informally chatted a few more people showed up and we broadened our circle. The space was a bit noisy with chatter drifting in from other sessions, yet we could hear each other if we leaned in a little. Kate started us off by asking about our opinions of the subject matter, our experiences, and existing approaches, in contrast to letting any one company control identity and reputation.

Gaming of centralized systems

We spent quite a bit of time discussing existing online or digital reputation systems, and how portable or not these were. China was a subject of discussion, along with the social reputation system that had been put in place there and was starting to be used for various purposes. Someone provided the example of people putting their phones into little shaker machines to fake an increased step count to increase their reputation in that way. Apparently lots of people are gaming the Chinese systems in many ways.

Portability and resets

Two major concerns were brought up about decentralized reputation systems:

- Reputation portability. If you build reputation in one system or service, how do you transfer that reputation to another?
- Reset abuse. If you develop a bad reputation in a system, what is to stop you from deleting that identity and creating a new one to reset your reputation?

No one had good answers for either. I offered one observation for the latter: as reputation systems evolve over time, the lack of reputation, i.e. someone just starting out (or a reset), is seen as having a default negative reputation that they have to prove otherwise. For example, the old Twitter “eggs”, so called due to the default icons that Twitter (at some point) assigned to new users, which were a white cartoon egg on a pastel background.
Another subsequent thought: Twitter’s profile display of when someone joined has also reinforced some of this “default negative” reputation, as people are suspicious of accounts that seem to have just recently joined Twitter and all of a sudden are posting forcefully (especially about political or breaking news stories). Are they bots or state operatives pretending to be someone they’re not? Hard to tell.

Session dynamics

While Kate did a good job keeping discussions on topic, prompting with new questions when the group appeared to rathole in some area, there were a few challenging dynamics in the group. It looked like no one was using a laptop to take notes (myself included), emergently so (no one was told not to use their laptop). While “no laptop” meetings are often praised for focus & attention, they do have several downsides. First, no one is writing anything down, so follow-up discussions become difficult, or rather, it becomes likely that past discussions will be repeated without any new information. Caught in a loop. History repeating. Second, with only speaking and no writing or note-taking, conversations tend to become more reactive, less thoughtful, and more about the individuals & personalities than about the subject matter. I noticed that one participant in particular was much more forceful and spoke a lot more than anyone else in the group, asserting all kinds of domain knowledge (usually without citation or reasoning). Normally I tend to question this kind of behavior, but this time I decided to listen and observe instead. In a session about reputation, how would this person’s behavior affect their dynamic reputation in this group? Eventually Kate was able to ask questions and prompt others who were quiet to speak up, which was good to see.

Decentralized identity

We did not get into any deep discussions of any specific decentralized identity systems, and that was perhaps ok. Mostly there was discussion about the downsides of centrally controlled identity, and how each of us wanted more control over various aspects of our online identities. For anyone who asked, I posited that a good way to start with decentralized identity is to buy and use a personal domain name for your primary online presence, set it up to sign in to sites, and build a reputation using that. Since you can pick the domain name, you can pick whatever facet(s) of your identity you wish to represent. It may not be perfectly distributed, however it does work today, and is a good way to explore a lot of the questions and challenges of decentralized identity.

The Nirvana Fallacy

Another challenge in discussing various systems, both critically and aspirationally, was the inability to really assess how “real” any examples were, how applicable they were to any of us, their usability, or even whether they were deployed in even an experimental way instead of just being a white paper proposal. This was a common theme in several sessions: comparing the downsides of real existing systems with the aspirational features of conceived but unimplemented systems. I had just recently come across a name for this phenomenon, and like many things you learn about, was starting to see it a lot: The Nirvana Fallacy. I didn’t bring it up in this session but rather tried to keep it in mind as a way to assess various comparisons.

Distributed reputation

After-lunch sessions are always a bit of a challenge. People are full or tired. I myself was already feeling a bit spent from the lightning talk and the session Kevin and I had led right after that.
All in all it was a good discussion, even though we couldn’t point to any notes or conclusions. It felt like everyone walked away having learned something from someone else, and in general people got to know each other in a semi-distributed way, starting to build reputation for future interactions. Watching that happen in person made me wonder if there was some way to apply a similar kind of semi-structured group discussion dynamic as a method for building reputation in the online world. Could there be some way to parse out the dynamics of individual interactions in comments or threads and reflect that back to users in the form of customized per-person-pair reputations that they could view as a recent summary or as trends over the years?

Previous #Redecentralize 2019 posts:
- IndieWeb Decentralized Standards and Methods
- Lightning talk: Showing redecentralization by example with my personal web site
Posted over 4 years ago by Kevin Jacobs
At Mozilla we are well aware of how fragile the Web Public Key Infrastructure (PKI) can be. From fraudulent Certification Authorities (CAs) to implementation errors that leak private keys, users, often unknowingly, are put in a position where their ability to establish trust on the Web is compromised. Therefore, in keeping with our mission to create a Web where individuals are empowered, independent and safe, we welcome ideas that are aimed at making the Web PKI more robust. With initiatives like our Common CA Database (CCADB), CRLite prototyping, and our involvement in the CA/Browser Forum, we’re committed to this objective, and this is why we embraced the opportunity to partner with Cloudflare to test Delegated Credentials for TLS in Firefox, which is currently undergoing standardization at the IETF.

As CAs are responsible for the creation of digital certificates, they dictate the lifetime of an issued certificate, as well as its usage parameters. Traditionally, end-entity certificates are long-lived, exhibiting lifetimes of more than one year. For server operators making use of Content Delivery Networks (CDNs) such as Cloudflare, this can be problematic because of the potential trust placed in CDNs regarding sensitive private key material. Of course, Cloudflare has architectural solutions for such key material, but these add unwanted latency to connections and present operational difficulties. To limit exposure, a short-lived certificate would be preferable for this setting. However, constant communication with an external CA to obtain short-lived certificates could result in poor performance or, even worse, lack of access to a service entirely.

The Delegated Credentials mechanism decentralizes the problem by allowing a TLS server to issue short-lived authentication credentials (with a validity period of no longer than 7 days) that are cryptographically bound to a CA-issued certificate. These short-lived credentials then serve as the authentication keys in a regular TLS 1.3 connection between a Firefox client and a CDN edge server situated in a low-trust zone (where the risk of compromise might be higher than usual and perhaps go undetected). This way, performance isn’t hindered and the compromise window is limited. For further technical details see this excellent blog post by Cloudflare on the subject.

See How The Experiment Works

We will soon test Delegated Credentials in Firefox Nightly via an experimental addon, called TLS Delegated Credentials Experiment. In this experiment, the addon will make a single request to a Cloudflare-managed host which supports Delegated Credentials. The Delegated Credentials feature is disabled in Firefox by default, but depending on the experiment conditions the addon will toggle it for the duration of this request. The connection result, including whether Delegated Credentials was enabled or not, gets reported via telemetry to allow for comparative study. Out of this we’re hoping to gain better insights into how effective and stable Delegated Credentials are in the real world, and, more importantly, into any negative impact on user experience (for example, increased connection failure rates or slower TLS handshake times). The study is expected to start in mid-November and run for two weeks. For specific details on the telemetry and how measurements will take place, see bug 1564179.

See The Results In Firefox

You can open a Firefox Nightly or Beta window and navigate to about:telemetry.
From here, in the top-right is a Search box, where you can search for “delegated” to find all telemetry entries from our experiment. If Delegated Credentials have been used and telemetry is enabled, you can expect to see the count of Delegated Credentials-enabled handshakes as well as the time-to-completion of each. Additionally, if the addon has run the test, you can see the test result under the “Keyed Scalars” section.

(Screenshot: Delegated Credentials telemetry in Nightly 72)

You can also read more about telemetry, studies, and Mozilla’s privacy policy by navigating to about:preferences#privacy.

See It In Action

If you’d like to enable Delegated Credentials for your own testing or use, this can be done by:

1. In a Firefox Nightly or Beta window, navigate to about:config.
2. Search for the “security.tls.enable_delegated_credentials” preference – the preference list will update as you type, and “delegated” is itself enough to find the correct preference.
3. Click the Toggle button to set the value to true.
4. Navigate to https://dc.crypto.mozilla.org/

If needed, toggling the value back to false will disable Delegated Credentials. Note that currently, use of Delegated Credentials doesn’t appear anywhere in the Firefox UI. This will change as we evolve the implementation.

We would sincerely like to thank Christopher Patton, fellow Mozillian Wayne Thayer, and the Cloudflare team, particularly Nick Sullivan and Watson Ladd, for helping us to get to this point with the Delegated Credentials feature. The Mozilla team will keep you informed on the development of this feature for use in Firefox, and we look forward to sharing our results in a future blog post. The post Validating Delegated Credentials for TLS in Firefox appeared first on Mozilla Security Blog.
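For intuition only, here is a hedged sketch of the kind of client-side acceptance check that the 7-day cap implies. The type and field names below are invented for illustration and do not reflect the IETF draft’s wire format or Firefox’s NSS implementation:

```typescript
// Conceptual sketch only: invented names, not the real protocol structures.
interface DelegatedCredential {
  expiry: Date;            // when the short-lived credential stops being valid
  publicKey: ArrayBuffer;  // short-lived key used in the TLS 1.3 handshake
  signature: ArrayBuffer;  // binding signature by the CA-certified key
}

// The mechanism caps a credential's validity period at 7 days.
const MAX_VALIDITY_MS = 7 * 24 * 60 * 60 * 1000;

// Accept a credential only if it has not expired and is not valid for
// longer than the cap. A real client would also verify `signature`
// against the server's CA-issued certificate.
function credentialAcceptable(dc: DelegatedCredential, now: Date): boolean {
  const msUntilExpiry = dc.expiry.getTime() - now.getTime();
  return msUntilExpiry > 0 && msUntilExpiry <= MAX_VALIDITY_MS;
}
```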