
News

Posted about 5 years ago by Owen Bennett
We've written a lot recently about the dangers that the EU Terrorist Content regulation poses to internet health, user rights, and efforts to combat violent extremism. One aspect that's particularly concerning is the rule that all online hosts must remove 'terrorist content' within 60 minutes of notification. Here we unpack why that obligation is so problematic, and put forward a more nuanced approach to content takedowns for EU lawmakers.

Since the early days of the web, 'notice & action' has been the cornerstone of online content moderation. Because there is so much user-generated content online, and because it is incredibly challenging for an internet intermediary to have oversight of each and every user activity, the best way to tackle illegal or harmful content is for online intermediaries to take 'action' (e.g. remove it) once they have been 'notified' of its existence by a user or another third party. Despite the fast-changing nature of internet technology and policy, this principle has shown remarkable resilience. While it often works imperfectly and there is much that could be done to make the process more effective, it remains a key tool for online content control.

Unfortunately, the EU's Terrorist Content regulation stretches this tool beyond its limit. Under the proposed rules, all hosting services, regardless of their size, nature, or exposure to 'terrorist content', would be obliged to put in place technical and operational infrastructure to remove content within 60 minutes of notification. There are three key reasons why this is a major policy error:

Regressive burden: Not all internet companies are the same, and it is reasonable to suggest that in terms of online content control, those who have more should do more. More concretely, it is intuitive that a social media service with billions in revenue and users should be able to remove notified content more quickly than a small family-run online service with a far narrower reach. Unfortunately, however, this proposal forces all online services – regardless of their means – to implement the same ambitious 60-minute takedown timeframe. This places a disproportionate burden on those least able to comply, giving an additional competitive advantage to the handful of already dominant online platforms.

Incentivises over-removal: A crucial aspect of the notice & action regime is the post-notification review and assessment. Regardless of whether a notification of suspected illegal content comes from a user, a law enforcement authority, or a government agency, it is essential that online services review the notification to assess its validity and conformity with basic evidentiary standards. This 'quality assurance' aspect is essential given how often notifications are inaccurate, incomplete, or in some instances, bogus. However, a hard deadline of 60 minutes to remove notified content makes it almost impossible for most online services to do the kind of content moderation due diligence that would minimise this risk. What's likely to result is the over-removal of lawful content. Worryingly, the risk is especially high for 'terrorist content', given its context-dependent nature and the thin line between intentionally terroristic material and good-faith public interest reporting.

Little proof that it actually works: Most troubling about the European Commission's 60-minute takedown proposal is that there doesn't seem to be any compelling reason why 60 minutes is an appropriate or necessary timeframe.
To date, the Commission has produced no research or evidence to justify this approach – a surprising state of affairs given how radically this obligation departs from existing policy norms. At the same time, a 'hard' 60-minute deadline strips the content moderation process of strategy and nuance, allowing for no distinction between the type of terrorist content, its likely reach, or the likelihood that it will incite terrorist offences. With no distinction there can be no prioritisation. For context, the decision by the German government to mandate a takedown deadline of 24 hours for 'obviously illegal' hate speech in its 2017 'NetzDG' law sparked considerable controversy on the basis of the risks outlined above. The Commission's proposal brings a whole new level of risk.

Ultimately, the 60-minute takedown deadline in the Terrorist Content regulation is likely to undermine the ability of new and smaller internet services to compete in the marketplace, and creates an enabling environment for interference with user rights. Worse, there is nothing to suggest that it will help reduce the terrorist threat or the problem of radicalisation in Europe. From our perspective, the deadline should be replaced by a principle-based approach, which ensures the notice & action process is scaled according to different companies' exposure to terrorist content and their resources. For that reason, we welcome amendments suggested in some European Parliament committees that call for terrorist content to be removed 'expeditiously' or 'without undue delay' upon notification. This approach would ensure that online intermediaries make the removal of terrorist content from their services a key operational objective, but in a way that reflects their exposure, their technical architecture, their resources, and the risk such content is likely to pose.

As we've argued consistently, one of the EU Terrorist Content regulation's biggest flaws is its lack of any proportionality criterion. Replacing the hard 60-minute takedown deadline with a principle-based approach would go a long way towards addressing that. While this won't fix everything – there are still major concerns with regard to upload filtering, the unconstrained role of government agencies, and the definition of terrorist content – it would be an important step in the right direction.

The post One hour takedown deadlines: The wrong answer to Europe's content regulation question appeared first on Open Policy & Advocacy.
Posted about 5 years ago by Tina Hsieh
I had a brilliant idea! How do I get stakeholders to understand whether the market sees it the same way?

People in startups try hard to avoid spending time and money on building a product that doesn't achieve product/market fit, and so do tech companies. Resources are always limited. Making the right decision about where to put those resources is a serious matter in an organization, and sometimes it's even harder there than in a startup.

ChecknShare, an experimental product idea from Mozilla Taipei for improving Taiwanese seniors' online sharing experience, has learned a lot after doing several rounds of validation. In our retrospective meeting, we found the process can be polished to be more efficient when we validate our ideas and communicate with our stakeholders at the same time.

Here are 3 steps that I suggest for validating your idea:

Step 1: Define hypotheses with stakeholders

Having hypotheses in the planning stage is essential, but never forget to include stakeholders when making your beautiful list of hypotheses. Share your product ideas with stakeholders, and ask them if they have any questions. Take their questions into consideration to plan for a method which can cover them all.

Your stakeholders might be too busy to participate in the process of defining the hypotheses. That's understandable; you just need to be sure they all agree on the hypotheses before you start validating.

Step 2: Identify the purpose of validating your idea

Are you just trying to get some feedback for further iteration? Or do you need to show some results to your stakeholders in order to get engagement or resources from them? The purpose might influence how you select the validation methods.

There are two types of validation methods, qualitative and quantitative. Quantitative methods focus on finding "what the results look like", while qualitative methods focus on "why/how these results came about". If you're trying to get insights for design iteration, knowing "why users have trouble falling in love with your idea" could be your first priority in the validation stage. Nevertheless, things might be different when you're trying to get your stakeholders to agree.

From the path that ChecknShare has gone through, quantitative results were much easier to use to influence stakeholders, as concrete numbers were interpreted as a representation of a real-world situation. I'm not saying quantitative methods are "must-dos" during the validation stage, but be sure to select a method that speaks your stakeholders' language.

Step 3: Select validation methods that validate the hypotheses precisely

With hypotheses that have been acknowledged by your stakeholders, and a clear purpose behind the validation, you can select methods wisely without wasting time on inconsequential work.

In the following, I'm going to introduce the 5 validation methods that we conducted for ChecknShare and the lessons we learned from each of them. I hope these shared lessons can help you find your perfect one. Starting with the qualitative methods:

Qualitative Validation Methods

1. Participatory Workshop

The participatory workshop was an approach for us to validate the initial ideas generated from the design sprint. During the co-design process, we had 6 participants who matched our target user criteria. We prioritized the scenario, got first-hand feedback on the ideas, and did quick iterations with our participants.
(For more details on how we hosted the workshop, please look at the blog post I wrote previously.)

Although hosting a workshop externally can be challenging due to logistical work like recruiting relevant participants and finding a large space to accommodate people, we see the participatory workshop as a fast and effective approach for having early interactions with our target users.

2. Physical pitching survey

[Photo: The pitching session in a local learning center]

In order to see how our target market would react to the idea at an early stage, we hosted a pitching session in a local learning center that offered free courses for seniors on how to use smartphones. During the pitching session, we handed out paper questionnaires to investigate their smartphone behaviors, their interest in the idea, and their willingness to participate in our future user testing.

It was our first time experimenting with a physical survey instead of sitting in the office and deploying surveys through virtual platforms. A physical survey isn't the best approach to get a massive number of responses in a short time. However, we got a chance to talk to real people, see their emotional expressions while pitching the idea, recruit user testing participants, and pilot test a potential channel for our future go-to-market strategy.

Moreover, we invited our stakeholders to attend the pitching session. It gave them a chance to be immersed in the environment and feel more empathy for our target users. That priceless experience made our subsequent conversations with stakeholders more realistic when we were evaluating the risk and potential of a target group the team wasn't very familiar with.

[Photo: Our stakeholders chatting with seniors during the pitching session]

3. User Testing

During user testing, we focused on the satisfaction level of the product features and the usability of the UI flow. For the usability testing, we provided several pairs of paper prototypes for A/B testing participants' understanding of the copy and UI design, and an interactive prototype to see if they could accomplish the tasks we assigned. The feedback indicated the areas that needed to be tweaked in the following iteration.

[Photo: A/B testing the product feature using paper prototypes]

User testing can produce very different results depending on how you design it. From our experience of conducting a user test that combined concept testing and usability testing, we learned that the usability testing could be postponed to the production stage, since detailed design polish came too early before the production stage was officially kicked off by stakeholders.

Quantitative Validation Methods

When we realized that qualitative results didn't speak our stakeholders' language, we went back over our stakeholders' questions holistically and applied quantitative methods to answer them. Here are the 2 methods we applied:

4. Online Survey

To understand the potential market size and the product value proposition, which our stakeholders consider of great importance, we designed an online survey that investigated current sharing behavior and feature preferences across different ages. It helped us to see whether there were other user segments similar to seniors, and the priority of the features.

[Chart: The pie chart and bar chart reveal the portion of our target users]

[Image: The EDM we sent out to spread the online survey]

The challenge of conducting an online survey is to find an efficient deployment channel with minimal bias.
Since the age range of our target respondents was quite wide (from age 21 to 65, 9 segments), conducting the online survey became more time-consuming than we expected. To get at least 50 responses from each age bracket, we delivered survey invitations through Mozilla Taiwan's social account, sent out an EDM in collaboration with our media partner, and also bought responses from SurveyMonkey.

When we reviewed the entire survey results with our stakeholders, we had a constructive discussion and made progress on defining our target audience and the value proposition based on solid numbers. An online survey can be an easier approach if the survey scope uses a narrower age range. To make constructive discussions happen earlier, we'd suggest running a quick survey once the product concept is settled.

5. Landing Page Test

We couldn't just use a survey to investigate a participant's willingness to download the app, since it's very hard to avoid leading questions. Therefore, the team decided to run a landing page test and see how the real market reacted to the product concept. We designed a landing page which contained a key message, a product introduction of the top 3 features, several CTA buttons for email signup, and a hidden email-collecting section that only showed when a participant clicked on a CTA button. We intentionally kept the page structure similar to a common landing page. (Have no idea what a landing page test is? Scott McLeod published a thorough landing page test guide which might be very helpful for you.) Along with the landing page, we had an ad banner consistent with our landing page design.

We ran our ad on the Google Display Network for 5 days and got 10x more visitors than the previous online survey had responses – the largest number of participants of all the validations we conducted. The CTR and conversion rate were quite persuasive, so ChecknShare finally got support from our stakeholders and the team was able to start thinking about more details around design implementation.

Landing page tests are uncommon in Taiwan's software industry, not to mention for testing product concepts aimed at seniors. We weren't very confident about getting reliable results at the beginning, but the test ended up reaching more seniors than anything else in our long validation journey. Here are some suggestions for running a landing page test:

Set success criteria with stakeholders before running the test. Finding a reasonable benchmark target is essential. There's no such thing as an absolute number for setting a KPI, because it can vary depending on the region, the acquisition channels, and the product category.

Make sure your copy can deliver the key product values in a 5–10 second read. The copy on both the ad and the landing page should be simple, clear, and touching. Simply pilot testing the copy with fresh eyes can be very insightful for copy iterations.

Reduce any factors that might influence the reading experience. Don't let the website design ruin your test results. Remember to check the accessibility of your website (especially text size and contrast ratio). Pairing comprehensible illustrations, UI screens, or even some animation of the UI flow with your copy can be very helpful in making it easier to understand.

The endless quantitative-qualitative dilemma

"What if I don't have sufficient time to do both qualitative and quantitative testing?" you might ask.

We believe that having both qualitative and quantitative results is important. Each supports the other.
If you don't have time to do both, take a step back, talk with your stakeholders, and think about the most important criteria that have to be true for the product to be successful. There's no perfect method to validate all types of hypotheses precisely. Keep asking yourself why you need to do this validation, and be creative.

References:
1. 8 tips for hosting your first participatory workshop — Tina Hsieh
2. How to setup a landing page for testing a business or product idea. — Scott McLeod
3. How to Test and Validate Startup Ideas — Mitch Robinson

"How to validate an idea when you're not working in a startup." was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.
Posted about 5 years ago by Fabien Benetou
This is the story of our lucky encounter at FOSDEM, the largest free and open source software event in Europe. We are two developers, focused on different domains, who saw an opportunity to continue our technical conversation by building a proof of concept. Fabien Benetou is a developer focused on virtual reality and augmented reality. Philippe Coval works on the Internet of Things. Creating a prototype gave the two of us a way to explore some ideas we'd shared at the conference.

WebXR meets the Web of Things

Today we'll report on the proof of concept we built in half a day, after our lucky meeting of minds at FOSDEM. Our prototype applies 3D visualisation to power an IoT interface. It demonstrates how open, accessible web technologies make it possible to combine software from different domains to create engaging new interactive experiences.

Our proof of concept, illustrated in the video below, shows how a sensor connected to the Internet brings data from the real world to the virtual world. The light sensor reads colors from cardboard cards and changes the color of entities in virtual reality. The second demo shows how actions in the virtual world can affect the real world. In this next video, we turn on LEDs with colors that match their virtual reality counterparts.

We'll show you how to do a similar experiment yourself:
Build a demo that goes from IoT to WoT, showing the value of connecting things to the web.
Connect your first thing and bring it online.
Make a connection between the Web of Things and WebXR. Once your thing is connected, you'll be able to display it, and interact with it in VR and AR.

Here's a bit more of the context: Fabien Benetou organized the JavaScript devroom track at FOSDEM, and presented High end augmented reality using JavaScript. Philippe Coval from the Samsung OpenSource group joined his colleague Ziran Sun to present Bring JavaScript to the Internet of Things on the same track. Philippe demonstrated a remote "SmartHome in a Box", using a live webcam stream. It was a demo he'd shared the day before in the Mozilla devroom, in a joint presentation with Mozilla Tech Speaker Dipesh Monga. The demo showed interactions of different kinds of sensors, including a remote sensor from the OpenSenseMap project, a community website that lets contributors upload real-time sensor data.

The Followup to FOSDEM

In Rennes, a city in Brittany in the northwest of France, the Ambassad'Air project is doing community air-quality tracking using luftdaten software on super cheap microcontrollers. Fabien had already made plans to visit Rennes the following week (to breathe fresh air and enjoy local baked delicacies like the delightful kouign amann). So we decided to meet again in Rennes, and involve the local community. We proposed a public workshop bridging "Web of Things" and "XR" using FLOSS. Big thanks to Gulliver, the local GNU/Linux group, who offered to host our last-minute hacking session. Thanks also to the participants in Rennes for their curiosity and their valuable input. In the sections ahead we offer an overview of the different concepts that came together in our project.

From IoT to the Web of Things

The idea of the Internet of Things existed before it got its name. Some fundamental IoT concepts have a lot in common with the way the web works today. As the name suggests, the Web of Things offers an efficient way to connect any physical object to the world wide web. Let's start with a light bulb.
Usually, we use a physical switch to turn the bulb on or off. Now imagine if your light bulb could have its own web page. If your light bulb or any smart device is web friendly, it would be reachable at a URL like https://mylamp.example.local. The light bulb vendor could implement a web server in the device, and a welcome page for the user. The manufacturer could provide another endpoint for a machine-readable status that would indicate "ON" or "OFF". Even better, that endpoint could be read using an HTTP GET query or set using an HTTP POST operation with ON or OFF. All this is simply an API to manage a boolean, making it possible to use the mobile browser as a remote control for the light bulb (a minimal sketch of this idea follows at the end of this section).

Although this model works, it's not the best way to go. A standardized API should respect REST principles and use common semantics to describe Things (the Thing Description, or TD). The W3C is pushing for standardization – a smooth, interoperable web language that can be implemented by any project, such as Mozilla's Project Things. Newcomers can start with a virtual adapter and play with simulated things. These things appear on the dashboard but do not exist in reality. Actuators or sensors can be implemented using web thing libraries for any language. Useful hint: it's much simpler to practice on a simulator before working with real hardware and digging into hardware datasheets. For curious readers, check out the IoT.js code in Philippe's webthing-iotjs guide on GitHub, and explore color sensor code that's been published to NPM as color-sensor-js.

Connect your first thing

How do you make a web-friendly smart home? You can start by setting up a basic local IoT network. Here's how:
You'll need a computer with a network interface to use as a gateway.
Add devices to your network and define a protocol to connect them to the central gateway.
Build a user interface to let the user control all connected devices from the gateway.
Later you can develop custom web apps that can also connect to the gateway.

To avoid reinventing the wheel, look at existing free software. That's where Mozilla's Things Gateway comes in. You won't need network engineering or electronics expertise to get started. You can rely on a low-cost, low-power single-board computer, for instance the Raspberry Pi, to install the operating system image provided by Mozilla. Then you can create virtual things like light bulbs, or connect real hardware like sensors to the gateway itself. You'll be able to control your device(s) from the web through the tunneling service provided by the "things cloud". Your data is reachable at a custom domain, stays on your local network, and is never sent to a 3rd party in the cloud.

In order to make the process efficient and also safe, the gateway takes care of authentication by generating a token. The gateway can also generate code snippets in several languages (including JavaScript) that can be used in other applications: you can build on top of existing code that should just work when you copy/paste it into your application, and developers can focus on exploring novel applications and use cases for the technology.

For your next step, we recommend testing the simplest example: list all the things connected to your gateway. In our example, we use a light bulb, a thing composed of several properties. Make sure that the thing displayed on the gateway web interface matches the real-world thing. Use the browser's console with the provided code snippets to check that the behavior matches the device.
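To make the light-bulb idea above concrete, here is a minimal sketch of "an API to manage a boolean", written with nothing but Node's built-in http module. It is not a real web thing implementation: the port, the /status path and the ON/OFF payload are our own illustrative choices, and a production device would expose the standardized Web Thing API instead.

// Minimal sketch: a pretend light bulb exposed as an HTTP API that manages one boolean.
// GET  /status                       -> { "on": true | false }
// POST /status with body "ON"/"OFF"  -> updates the bulb state.
const http = require('http');

let on = false; // the single boolean behind the whole "API"

const server = http.createServer((req, res) => {
  if (req.url === '/status' && req.method === 'GET') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ on }));
  } else if (req.url === '/status' && req.method === 'POST') {
    let body = '';
    req.on('data', chunk => { body += chunk; });
    req.on('end', () => {
      on = body.trim().toUpperCase() === 'ON'; // "ON" turns the bulb on, anything else turns it off
      // A real device would drive the actual bulb hardware here.
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ on }));
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8888, () => console.log('Pretend light bulb at http://localhost:8888/status'));

With something like this running, a mobile browser really can act as a remote control: a GET shows the current state, and a POST flips it.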
Get to know your Things Gateway

Once this is running, the fun begins. Since you can access the gateway with code, you can:
List all things, including the schema, to understand their capabilities (properties, values, available actions).
Read a property value (e.g. the current temperature of a sensor).
Change a property (e.g. control the actuator or set the light bulb color).
Get the coordinates of a thing on a 2D floor plan.
And much more! (A sketch of reading and changing a single property follows at the end of this section.)

Using a curl command, you can query the whole tree to identify all things registered by the gateway:

gateway="https://sosg.mozilla-iot.org"
token="B4DC0DE..."
curl \
  -H "Authorization: Bearer $token" \
  -H 'Accept: application/json' \
  "$gateway/things" \
  | jq -M .

The result is a JSON structure of all the things. Each thing has its own endpoint, like:

{
  "name": "ColorSensor",
  ...
  "properties": {
    "color": {
      "type": "string",
      "@type": "ColorProperty",
      "readOnly": true,
      "links": [
        { ... "href": "/things/http---localhost-58888-/properties/color" ... }
      ]
    }
  }
  ...
}

User devices are private and not exposed to the world wide web, so no one else can access or control your light bulb. This is made possible by the REST architecture of the gateway.
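The snippets generated by the gateway are the authoritative reference, but as a rough sketch, here is what reading and changing a single property can look like from a browser console with fetch, reusing the endpoints from the curl example above. The lamp's thing id ('my-lamp') and the exact shape of the PUT payload are assumptions for illustration; check the snippets your own gateway generates.

// Sketch: read one property, then change another, over the gateway's REST API.
const gateway = 'https://sosg.mozilla-iot.org';
const headers = {
  Accept: 'application/json',
  Authorization: 'Bearer B4DC0DE...',       // token generated by the gateway
  'Content-Type': 'application/json'
};

// Read a property value (the color reported by the color sensor shown above).
fetch(`${gateway}/things/http---localhost-58888-/properties/color`, { headers })
  .then(res => res.json())
  .then(property => console.log('current color:', property.color));

// Change a writable property on another (hypothetical) thing, e.g. a lamp's color.
fetch(`${gateway}/things/my-lamp/properties/color`, {   // 'my-lamp' is a placeholder thing id
  method: 'PUT',
  headers,
  body: JSON.stringify({ color: '#00ff00' })             // payload shape assumed for illustration
}).then(res => console.log('property update status:', res.status));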
From WoT to WebXR

Introducing A-Frame for WebVR

Once we were able to programmatically get property values using a single HTTP GET request, we could use those values to update the visual scene, e.g. changing the geometry or color of a cube. This is made easier with a framework like A-Frame, which lets you describe simple 3D scenes using HTML. For example, to define that cube in A-Frame, we use the <a-box> tag. Then we change its color by adding the color attribute. The beauty behind the declarative code is that these 3D objects, or entities, are described clearly, yet their shape and behavior can be extended easily with components.

A-Frame has an active community of contributors. The libraries are open source, and built on top of three.js, one of the most popular 3D frameworks on the web. Consequently, scenes that begin with simple shapes can develop into beautiful, complex scenes. This flexibility allows developers to work at the level of the stack where they feel comfortable, from HTML to writing components in JavaScript, to writing complex 3D shaders. By staying within the boundaries of the core of A-Frame you might never even have to write JavaScript. If you want to write JavaScript, documentation is available to do things like manipulating the underlying three.js object. A-Frame itself is framework agnostic. If you are a React developer, you can rely on React. Prefer Vue.js? Not a problem. Vanilla HTML & JS is your thing? These all work. Want to use VR in data visualisation? You can let D3 handle the data bindings. Using a framework like A-Frame which targets WebXR means that your experience will work on all VR and AR devices which have access to a browser that supports WebXR, from the smartphone in your pocket to high-end VR and professional AR headsets.

Connecting the Web of Things to Virtual Reality

In our next step we change the color value on the 3D object to the thing's actual value, derived from its physical color sensor. Voila! This connects the real world to the virtual. Here's the A-Frame component we wrote that can be applied to any A-Frame entity.

var token = 'Bearer SOME_CODE_FOR_AUTH' // The token is used to manage access, granted only to selected users
var baseURL = 'https://sosg.mozilla-iot.org/'
var debug = false // used to display content in the console

AFRAME.registerComponent('iot-periodic-read-values', { // Registering an A-Frame component later used in VR/AR entities
  init: function () {
    this.tick = AFRAME.utils.throttleTick(this.tick, 500, this); // check for a new value every 500ms
  },
  tick: function (t, dt) {
    fetch(baseURL + 'things/http---localhost-58888-/properties/color', {
      headers: { Accept: 'application/json', Authorization: token }
    }).then(res => {
      return res.json();
    }).then(property => {
      this.el.setAttribute("color", property.color); // the request went through: update the color of the VR/AR entity
    });
  }
})

The short video above shows real-world color cards causing colors to change in the virtual display. Here's a brief description of what we're doing in the code. We generate a security token (JWT) to gain access to our Things Gateway. Next we register a component that can be used in A-Frame in VR or AR to change the display of a 3D entity. Then we fetch the property value of a Thing and display it on the current entity.

In the same way we can get information with an HTTP GET request, we can send a command with an HTTP PUT request. We use A-Frame's cursor component to allow for interaction in VR. Once we look at an entity, such as another cube, the cursor can then send an event. When that event is captured, a command is issued to the Things Gateway. In our example, when we aim at a green sphere (or "look" with our eyes through the VR headset), we toggle the green LED, and likewise for the red sphere (red LED) and blue sphere (blue LED). A sketch of such a component follows below.
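The write-direction component itself isn't reproduced in the post, so the following is only a sketch of what it could look like, assuming the token and baseURL variables from the snippet above are in scope. The thing id ('my-led') and the {"on": ...} payload shape are placeholders, not taken from the actual demo; use the snippets your gateway generates as the source of truth.

// Sketch of the opposite direction: clicking (or gazing at) an entity sends an HTTP PUT to the gateway.
// Attach it to a sphere, e.g. <a-sphere iot-toggle-on-click color="green"></a-sphere>,
// inside a scene that has a cursor, e.g. <a-camera><a-cursor></a-cursor></a-camera>.
AFRAME.registerComponent('iot-toggle-on-click', {
  init: function () {
    this.on = false; // local guess of the LED state
    this.el.addEventListener('click', () => {
      this.on = !this.on;
      fetch(baseURL + 'things/my-led/properties/on', { // 'my-led' is a placeholder thing id
        method: 'PUT',
        headers: {
          Accept: 'application/json',
          'Content-Type': 'application/json',
          Authorization: token // same token variable as in the component above
        },
        body: JSON.stringify({ on: this.on }) // payload shape assumed; check your gateway's generated snippets
      });
    });
  }
})

Because A-Frame's cursor turns a gaze (or controller trigger) into an ordinary 'click' event on the intersected entity, a plain event listener is all the component needs.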
Going from Virtual Reality to Augmented Reality

The objective of our demo was two-fold: to bring real-world data into a virtual world, and to act on the real world from the virtual world. We were able to display live sensor data such as temperature and light intensity in VR. In addition, we were able to turn LEDs on and off from the VR environment. This validates our proof of concept. Sadly, the day came to an end, and we ran out of time to try our proof of concept in augmented reality (AR) with a Magic Leap device.

Fortunately, the end of the day didn't end our project. Fabien was able to tunnel to Philippe's demo gateway, registered under the mozilla-iot.org subdomain, and access it as if it were on a local network, using Mozilla's remote access feature. The project was a success! We connected the real world to AR as well as to VR. The augmented reality implementation proved easy. Aside from removing the scene background so it wouldn't cover our field of view, we didn't have to change our code. We opened our existing web page on the Magic Leap ML1 thanks to exokit, a new open-source browser specifically targeting spatial devices (as presented during Fabien's FOSDEM talk). It just worked! As you can see in the video, we briefly reproduced the gateway's web interface.

We have a few ideas for next steps. By making those spheres interactive we could activate each thing or get more information about them. Imagine using the gateway floorplan to match the spatial information of a thing to the physical layout of a flat. There are A-Frame components that make it straightforward to generate simplified building parts like walls and doors. You don't need a Magic Leap device to explore AR with the Web of Things. A smartphone will work with its traditional RGB camera: Mozilla's XR Viewer on an iPhone, or an experimental build of Chromium on Android.

From the Virtual to the Immersive Web

The transition from VR/AR to XR takes two steps. The first step is the technical aspect, which is where relying on A-Frame comes in. Although the specifications for VR and AR on the web are still works in progress within the W3C's "Immersive Web" standardization process, we can target XR devices today. By using a high-level framework, we can begin development even though the spec is still in progress, because a polyfill maintained by browser vendors and the community at large tracks the spec. The promise of having one code base for all VR and AR headsets is one of the most exciting aspects of WebXR. Using A-Frame, we are able to start today and be ready for tomorrow.

The second step involves you, as reader and user. What would you like to see? Do you have ideas for use cases that create interactive spatial content for VR and AR?

Conclusion

The hack session in Rennes was fascinating. We were able to get live data from the real world and interact with it easily in the virtual world. This opens the door to many possibilities: from our simplistic prototype to artistic projects that challenge our perception of reality. We also foresee pragmatic use cases, for instance in hospitals and laboratories filled with sensors and modern instrumentation (IIoT, or Industrial IoT).

This workshop and the resulting videos and code are simple starting points. If you start work on a similar project, please do get in touch (@utopiah and @[email protected]/@RzrFreeFr). We'll help however we can! There's also work in progress on a webthings webapp with A-Frame support. Want to get involved in testing or reviewing code? You're invited to help with the design or suggest some ideas of your own. What Things will YOU bring to the virtual world? We can't wait to hear from you.

Resources
Meeting page (in French): http://gulliver.eu.org/calendrier:2019:02:09
PoC sources (kind of documented and tested but still very basic): http://gulliver-webxr-iot.glitch.me/ and https://glitch.com/edit/#!/gulliver-webxr-iot
Video: WebThing sensor to VR + VR actuators (LEDs): https://social.samsunginter.net/@rzr/101564201618024415
Guide to building webthings using IoT.js and more: https://github.com/rzr/webthing-iotjs/wiki
Color sensor driver for IoT.js or Node.js: https://www.npmjs.com/package/color-sensor-js and https://github.com/rzr/color-sensor-js/
Using objects in the real world from a virtual reality setup: https://youtu.be/sPaSd1eRTGE
Activating objects in the real world from a virtual reality setup: https://youtu.be/-kSUS4yeJBk
Getting live IoT sensor data straight to augmented reality: https://www.youtube.com/watch?v=5XaD6eS6OUE
Reality Editor 2.0 (earlier exploration): https://www.media.mit.edu/projects/reality-editor-20/overview/
Where brain, body, and world collide: https://www.sciencedirect.com/science/article/pii/S1389041799000029
Isolated A-Frame component (to be updated once the schema is implemented): https://github.com/Utopiah/aframe-webthings-component
Webthing-Webapp (experimental PWA supporting various UI toolkits, Tizen, etc.): https://github.com/samsunginternet/webthings-webapp

The post Real virtuality: connecting real things to virtual reality using web technologies appeared first on Mozilla Hacks - the Web developer blog.
Posted about 5 years ago by Mark Surman
Last year the Mozilla team asked itself: what concrete improvements to the health of the internet do we want to tackle over the next 3–5 years? We looked at a number of different areas we could focus on. Making the ad economy more ethical. Combating online harassment. Countering the rush to biometric everything. All worthy topics. As my colleague Ashley noted in her November blog post, we settled in the end on the topic of 'better machine decision making'. This means we will focus a big part of our internet health movement building work on pushing the world of AI to be more human – and more humane.

Earlier this year, we looked in earnest at how to get started. We have now mapped out a list of first steps we will take across our main program areas – and we're digging in. Here are some of the highlights of the tasks we've set for ourselves this year:

Shape the agenda
Bring the 'better machine decision making' concept to life by leaning into a focus on AI in the Internet Health Report, MozFest and press pitches about our fellows.
Shake up the public narrative about AI by promoting – and funding – artists working on topics like automated censorship, behavioural manipulation and discriminatory hiring.
Define a specific (policy) agenda by bringing in senior fellows to ask questions like: 'how do we use GDPR to push on AI issues?'; or 'could we turn platforms into info fiduciaries?'

Connect Leaders
Highlight the role of AI in areas like privacy and discrimination by widely promoting the work of fellowship, host org and MozFest alumni working on these issues.
Promote ethics in computer science education through a $3.5M award fund for professors, knowing we need to get engineers thinking about ethics issues to create better AI.
Find allies working on AI + consumer tech issues by heavily focusing our 'hosted fellowships' in this area – and then building a loose coalition amongst host orgs.

Rally citizens
Show consumers how pervasive machine decision making is by growing the number of products that include AI covered in the Privacy Not Included buyers guide.
Shine a light on AI, misinformation and tech platforms through a high-profile EU election campaign, starting with a public letter to Facebook on political ad transparency.
Lend a hand to developers who care about ethics and AI by exploring ideas like the Union of Concerned Technologists and an 'ethics Q+A' campaign at campus recruiting fairs.

We're also actively refining our definition of 'better machine decision making' – and developing a more detailed theory of how we make it happen. A first step in this process was to update the better machine decision making issue brief that we first developed back in November. This process has proven helpful and gives us something crisper to work from. However, we still have a ways to go in setting out a clear impact goal for this work.

As a next step, I'm going to post a series of reflections that came to me in writing this document. I'm going to invite other people to do the same. I'm also going to work with my colleague Sam to look closely at Mozilla's internet health theory of change through an AI lens – poking at the question of how we might change industry norms, government policy and consumer demand to drive better machine decision making. The approach we are taking is: 1. dive in and take action; 2. reflect and refine our thinking as we go; 3. engage our community and allies as we do these things; and 4. rinse and repeat.
Figuring out where we go — and where we can make concrete change on how AI gets made and used — has to be an iterative process. That’s why we’ll keep cycling through these steps as we go. With that in mind, others from the Mozilla team and I will start providing updates and reflections on our blogs. We’ll also be posting invitations to get involved as we go. And we will track it all on the nascent Mozilla AI wiki, which you can use to follow along — and get involved. The post Mozilla, AI and internet health: an update appeared first on Mark Surman.
Posted about 5 years ago by mkohler
Please join us in congratulating Viswaprasath KS, our Rep of the Month for November 2018! Viswaprasath KS, also known as iamvp7, is a long-time Mozillian from India who joined the Mozilla Rep program in June 2013. By profession he works as a software developer. He initially started contributing with designs and SUMO (Army of Awesome). He was also part of the Firefox Student Ambassador E-Board and helped students to build exciting Firefox OS apps. In May 2014 he became one of the Firefox OS app reviewers. Currently he is an active Mozilla TechSpeaker and loves to evangelise about WebExtensions and Progressive Web Apps. He has been an inspiration to many and loves to keep working towards a better web. He has worked extensively on Rust and WebExtensions, conducting many informative sessions on these topics recently. Together with other Mozillians he also wrote “Building Browser Extension”. Thanks Viswaprasath, keep rocking the Open Web! To congratulate him, please head over to Discourse!
Posted about 5 years ago by mconley
Highlights

- Firefox Account is experimenting with putting an avatar next to the hamburger menu. It will give users visibility on their account and sync status, as well as links to manage the account. Targeting landing soon! Take Firefox with you!
- We have added support for blocking known fingerprinters and cryptominers with content blocking! This is currently enabled in Nightly, is still experimental, and might break some sites.
- Lots of DevTools goodies this week!
  - In the DevTools Debugger, the XHR breakpoint type (ANY, GET, PUT, POST, etc.) can now be specified through new UI. This was done by a volunteer contributor, Jaril!
  - Log points UX has been improved (including syntax highlighting, context menu and markers), thanks to contributors Bomsy and Florens. Log points are different from breakpoints – they don’t break JS execution, they just create a log when hit.
  - It is now possible to copy all collected CSS changes done through the DevTools UI. Thanks to Razvan Caliman!
  - Auto-discovery of layout CSS properties (done by Micah Tigley). Hold Shift and mouse over any defined property in the box-model widget (in the Layout sidebar); this will highlight the corresponding CSS property in the rule-view.
- The Password Manager team has added a “View Saved Logins” footer to the password manager autocomplete popup (disabled until the follow-up is resolved).
- Tim Huang and Tom Ritter added letterboxing (an anti-fingerprinting technique) to Firefox. Note the gray margin in the content area.

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug:
- Hemakshi Sachdev [:hemakshis]
- Manish [:manishkk]
- Oriol Brufau [:Oriol]
- Rainier Go [:rfgo]
- Tim Nguyen [:ntim]
- Trishul

New contributors (🌟 = first patch):
- 🌟 Adam Czyzewski updated a sync tooltip to make its purpose clearer.
- 🌟 Tony Ross [:antross] made one of our DevTools WebExtension APIs more consistent with Chrome’s implementation, making it easier to port add-ons over.
- The following new contributors switched us from hand-rolled promiseWaitForCondition methods to TestUtils.waitForCondition:
  - 🌟 Carolina Jimenez Gomez, for one of our WebRTC tests
  - 🌟 Neha, for one of our browser UI tests
- chengy12, one of our MSU students, ported a bunch of printing-related strings to Fluent.
- freychr3, another of our MSU students, ported the Page Info dialog to Fluent.
- 🌟 Ivan Yung made it so that about:policies shows a message when there are no active policies.
- Martin Koroknay fixed a “Failed prop type” error in the DevTools Console.
- 🌟 l.khadka de-duplicated some code for about:telemetry.
- 🌟 msfr-develop fixed a bug where the DevTools inspector wouldn’t scroll to an already-selected element when clicking on its closing tag.
- 🌟 Maximilian Schütte fixed a 12-year-old SessionStore bug where minimized windows wouldn’t have their un-minimized sizemodes recorded!
- Rainier Go [:rfgo]:
  - Fixed a bug where the DevTools Storage Inspector sometimes didn’t show items after clearing local storage
  - Added a tooltip for the “Refresh” button in the DevTools Storage Inspector
  - Removed some unneeded prefs
- 🌟 Sonia fleshed out the DevTools contributor documentation with a paragraph on what to do if things go wrong with ./mach bootstrap.

Project Updates

Activity Stream
- Landed and uplifted MVP for experiments; Beta smoke test started Monday.
- Preparing to run 16 layout experiments in the Release 66 cycle for better engagement, e.g., large Hero articles vs. a list of articles.
- The team is helping Pocket engineers transition to increased ownership of new tab.
- CFR for Pinned Tabs will be our next recommendation experiment!
  - The first experiment recommends add-ons, e.g., Facebook Container, Google Translate.
  - The current experiment will suggest pinning tabs, e.g., Gmail, productivity / messaging sites.

Add-ons / Web Extensions
- Work continues on long-term projects:
  - Supporting migration of search engines to WebExtensions
  - Handling extensions in private browsing windows
  - Rewriting about:addons in HTML
- Rob fixed a bug in which optional permissions were not cleared when an extension was uninstalled.
- Luca got rid of a synchronous reflow in extension popups.

Applications

Screenshots
- The latest server release is on the stage environment for testing prior to release (Changelog).
- We have now exposed server shutdown strings to web localizers. In case anyone asks, Screenshots is not being removed from Firefox, just the ability to upload shots. This upcoming server release will include tools to help users download their saved shots.

Lockbox
- This past sprint continued the focus on foundational work:
  - [lorchard] “Reveal Password” functionality (#84)
  - [loines] Define & document telemetry metrics (#82)
  - [lorchard] Expect a complete Login when updating in addon Logins API (#80)
  - [6a68] Re-style the list view on the management page (#76)
- Our work is tracked as the ‘desktop’ repository within the Lockbox waffle board.
- We don’t yet have any good-first-bugs filed, but swing by #lockbox if you want to contribute ^_^

Services (Firefox Accounts / Sync / Push)
- The new FxA device pairing flow landed in Nightly, but is pref’d off for now. You’ll soon be able to sign in to FxA on Android and iOS by scanning a QR code, instead of typing your password! Check out this Lucidchart sketch to see the flow if you’re curious to learn more!

Developer Tools
- Network: Resizeable columns – our Outreachy #17 intern Lenka Pelechova is finishing support for resizeable columns in the Network panel. Currently focusing on performance (bug).
- Layout Tools: Our UX Designer Victoria Wang published a survey for CSS Layout Debugging. You can help us build better CSS debugging tools (quick single-page survey).
- Technical debt: Firefox 67 will soon display a removal notice (in the Options panel) about the Shader Editor, Canvas and Web Audio panels, which are going to be removed in 68. Work done by Yulia Startsev. Until the MDN page is up, you can look at the intent-to-unship post on the mailing list.
- Remote Debugging:
  - Showing backward compatibility warnings in about:debugging (bug)
  - Added a checkbox to enable local add-on debugging (bug)
  - Open the Profiler for remote runtimes in about:debugging (bug)

Fission
- MattN’s work to lazy-load FormAutofillContent and convert it to actors landed and resulted in a 2.19% base JS memory improvement \o/
- M1 is on track, with just a couple of remaining bugs.
- M1’s main focus is displaying an iframe in a separate content process.

Performance
- Firefox Front-end Performance Update #13 posted.
- dthayer
  - Rounding out the end of WebRender document splitting work.
  - Reduced how much we paint when restoring sessions with pinned tabs (backing out for session restore problems).
  - Adding Telemetry for startup cache hits and misses.
- felipe
  - New tab animations (still off by default) have surfaced at least one serious bug, but otherwise all quiet.
  - Reminder: you can test this out by setting browser.tabs.newanimations to true.
  - UX has gotten back with some feedback, which will be addressed soon.
  - Working on making the Hidden Window lazier for a start-up win on Windows and Linux.
  - Improving ContentSearch performance, which sends data from the Search service to about:newtab in the content process.
- Gijs
  - The latest browser adjustment patch was tested by vchin’s team and found to actually slow perceived page load, since frames were painted later. We might have stumbled on an interesting way of saving battery power, though.
  - vchin’s team is now testing the original patch, which lowered the frame rate globally for low-end hardware.
  - Making newtab preloading occur on idle.
- mconley
  - Landed a new Talos test to measure time to initial about:home paint.
  - Filed a bug to make PageStyleChild lazier / inert for the about:home case.
  - Locally prototyped a patch to launch the content process sooner, but no wins yet.

Performance tools
- Perf-html.io moved to profiler.firefox.com, and perf.html is now called “Firefox Profiler”.
- I/O markers are now visible in the timeline. I/O marker stacks are visible when hovering them, and in lots of cases the path of the file that was touched is shown. When capturing a profile, to get I/O markers you need to check the “Main Thread IO” checkbox in the Gecko profiler add-on, or enable the “mainthreadio” feature using the MOZ_PROFILER_STARTUP_FEATURES environment variable when profiling startup (see the sketch after these notes). We are investigating optionally collecting markers for off-main-thread I/O, and enabling main-thread I/O markers by default.
  - A FileIO marker with operation, source, filename and stack information.
- We improved shutdown profiling: it’s now compatible with mainthreadio markers, and shows content process shutdowns. Here’s a profile with startup + shutdown, on a fresh profile, with I/O markers.
- We have markers for …
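For anyone who wants to try the startup I/O profiling mentioned in the notes above, here is a minimal sketch of launching Firefox with the profiler running from startup and the “mainthreadio” feature enabled. The MOZ_PROFILER_STARTUP_FEATURES variable is the one referenced in the notes; MOZ_PROFILER_STARTUP and the Firefox binary path are assumptions you would adjust for your own setup.

```python
# Minimal sketch: launch Firefox with the Gecko profiler active from startup
# and main-thread I/O markers enabled. Assumptions: MOZ_PROFILER_STARTUP
# turns on startup profiling, and the binary path below is a placeholder.
import os
import subprocess

env = os.environ.copy()
env["MOZ_PROFILER_STARTUP"] = "1"                      # start profiling as early as possible
env["MOZ_PROFILER_STARTUP_FEATURES"] = "mainthreadio"  # collect main-thread I/O markers

# Replace with the path to your Firefox (Nightly) binary.
subprocess.run(["/path/to/firefox", "-no-remote"], env=env)
```

Once Firefox is up, capturing the profile as usual should then include the FileIO markers described above; this is only an illustrative launcher, not part of the original post.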
Posted about 5 years ago by Amba Kak
On Thursday, the Indian government approved an ordinance — an extraordinary procedure allowing the government to enact legislation without Parliamentary approval — that threatens to dilute the impact of the Supreme Court’s decision last September. The Court had placed fundamental limits on the otherwise ubiquitous use of Aadhaar, India’s biometric ID system, including the requirement of an authorizing law for any private sector use. While the ordinance purports to provide this legal backing, its broad scope could dilute both the letter and intent of the judgment. As per the ordinance, companies will now be able to authenticate using Aadhaar as long as the Unique Identification Authority of India (UIDAI) is satisfied that “certain standards of privacy and security” are met. These standards remain undefined, and especially in the absence of a data protection law, this raises serious concerns. The swift movement to foster expanded use of Aadhaar is in stark contrast to the lack of progress on advancing a data protection bill that would safeguard the rights of Indians whose data is implicated in this system. Aadhaar continues to be effectively mandatory for a vast majority of Indian residents, given its requirement for the payment of income tax and various government welfare schemes. Mozilla has repeatedly warned of the dangers of a centralized database of biometric information and authentication logs. The implementation of these changes with no public consultation only exacerbates the lack of public accountability that has plagued the project. We urge the Indian government to consider the serious privacy and security risks from expanded private sector use of Aadhaar. The ordinance will need to gain Parliamentary approval in the upcoming session (and within six months) or else it will lapse. We urge Parliament not to push through this law, which clearly dilutes the Supreme Court’s diktat; any subsequent proposals must be preceded by wide public consultation and debate. The post Indian government allows expanded private sector use of Aadhaar through ordinance (but still no movement on data protection law) appeared first on Open Policy & Advocacy.