
News

Posted about 5 years ago by Will Kahn-Greene
Summary: Socorro is the crash ingestion pipeline for Mozilla's products, like Firefox. When Firefox crashes, the crash reporter collects data about the crash, generates a crash report, and submits that report to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports. January was a good month. This blog post summarizes activities.
Posted about 5 years ago by Mozilla
After calls for increased transparency and accountability from Mozilla and partners in civil society, Facebook announced it would open its Ad Archive API next month. While the details are still limited, this is an important first step to increase transparency of political advertising and help prevent abuse during upcoming elections.

"We're committed to a new level of transparency for ads on Facebook (1/3)" https://t.co/A9WeKGYHXO — Rob Leathern (@robleathern), February 11, 2019

Facebook's commitment to make the API publicly available could provide researchers, journalists and other organizations the data necessary to build tools that give people a behind-the-scenes look at how and why political advertisers target them. It is now important that Facebook follows through on these statements and delivers an open API that gives the public the access it deserves.

The decision by Facebook comes after months of engagement by the Mozilla Corporation through industry working groups and government initiatives and, most recently, an advocacy campaign led by the Mozilla Foundation. This week, the Mozilla Foundation was joined by a coalition of technologists, human rights defenders, academics, and journalists demanding that Facebook take action and deliver on the commitments made to put users first and deliver increased transparency.

"In the short term, Facebook needs to be vigilant about promoting transparency ahead of and during the EU Parliamentary elections," said Ashley Boyd, Mozilla's VP of Advocacy. "Their action — or inaction — can affect elections across more than two dozen countries. In the long term, Facebook needs to sincerely assess the role its technology and policies can play in spreading disinformation and eroding privacy."

And in January, Mozilla penned a letter to the European Commission underscoring the importance of a publicly available API. Without the data, Mozilla and other organizations are unable to deliver products designed to pull back the curtain on political advertisements.

"Industry cannot ignore its potential to either strengthen or undermine the democratic process," said Alan Davidson, Mozilla's VP of Global Policy, Trust and Security. "Transparency alone won't solve misinformation problems or election hacking, but it's a critical first step. With real transparency, we can give people more accurate information and powerful tools to make informed decisions in their lives."

This is not the first time Mozilla has called on the industry to prioritize user transparency and choice. In the wake of the Cambridge Analytica news, the Mozilla Foundation rallied tens of thousands of internet users to hold Facebook accountable for its post-scandal promises. And the Mozilla Corporation took action by pausing advertising of our products on Facebook and providing users with Facebook Container for Firefox, a product that keeps Facebook from tracking people around the web when they aren't on the platform.

While the announcement from Facebook indicates a move toward transparency, it is critical that the company follows through and delivers not only on this commitment but also on the other promises made to European lawmakers and voters.

The post Facebook Answers Mozilla's Call to Deliver Open Ad API Ahead of EU Election appeared first on The Mozilla Blog.
Posted about 5 years ago by [email protected] (Rabimba Karanjai)
In our last blog post (part 1), we took a look at how algorithms detect keypoints in camera images. These form the basis of our world tracking and environment recognition. But for Mixed Reality, that alone is not enough: we have to be able to calculate the 3D position in the real world, which is often derived from the spatial distance between the device and multiple keypoints. This is called Simultaneous Localization and Mapping (SLAM), and it is what is responsible for all the world tracking we see in ARCore/ARKit.

What we will cover today:

- How ARCore and ARKit do their SLAM/Visual-Inertial Odometry
- Whether we can D.I.Y. our own SLAM with reasonable accuracy, to understand the process better

Sensing the world: as a computer

When we start any augmented reality application, on mobile or elsewhere, the first thing it tries to do is detect a plane. When you first start any MR app in ARKit or ARCore, the system doesn't know anything about the surroundings. It starts processing data from the camera and pairs it up with other sensors. Once it has that data, it tries to do two things:

- Build a point cloud mesh of the environment by building a map
- Assign a relative position of the device within that perceived environment

From our previous article, we know it's not always easy to build this map from unique feature points and maintain it. It becomes easier in certain scenarios, however, if you have the freedom to place beacons at known locations. That is something we did at MozFest 2016, back when Mozilla still had the Magnets project, which we used as our beacons. A similar approach is used in a few museums to provide turn-by-turn navigation to points of interest as their indoor navigation system. Augmented Reality systems don't have this luxury.

A little saga about relationships

We will start with a map... about relationships. Or rather, "A Stochastic Map for Uncertain Spatial Relationships" by Smith et al. In the real world, you have precise and correct information about the exact location of every object. In the AR world, that is not the case. To understand the problem, let's assume we are in an empty room, our mobile has detected a reliable unique anchor (A) (or a stationary beacon), and our position is (B). In a perfect situation, we know the distance between A and B, and if we want to move towards a point C we can infer exactly how we need to move. Unfortunately, in the world of AR and SLAM we have to work with imprecise knowledge about the positions of A and C. This results in uncertainties and the need to continually correct the locations. The points have relative spatial relationships with each other, which gives us a probability distribution over every possible position. Common methods to deal with the uncertainty and correct positioning errors are the Kalman filter (this is what we used at MozFest), maximum a posteriori estimation, and bundle adjustment. Since these estimations are never perfect, every new sensor update also has to update the estimation model.
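To make that correction step concrete, here is a minimal sketch (my illustration, not code from the original post) of a one-dimensional Kalman filter update: a predicted position with some uncertainty is fused with a noisy measurement, and the variances decide how much to trust each side.

```
// Minimal 1D Kalman filter update: fuse a predicted position with a
// noisy sensor measurement. Illustrative only; real SLAM systems work
// with full state vectors and covariance matrices.
function kalmanUpdate(est, meas) {
  // Kalman gain: how much we trust the measurement vs. our estimate
  const K = est.variance / (est.variance + meas.variance);
  return {
    value: est.value + K * (meas.value - est.value),
    variance: (1 - K) * est.variance, // fusing always shrinks uncertainty
  };
}

// The device believes it is 2.0m from anchor A, but is quite unsure.
let position = { value: 2.0, variance: 0.5 };
// A new (noisy) range reading arrives from the sensors.
position = kalmanUpdate(position, { value: 2.3, variance: 0.1 });
console.log(position); // estimate moves toward 2.3, variance drops
```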
Aligning the Virtual World

To map our surroundings reliably in Augmented Reality, we need to continually update our measurement data, under the assumption that every sensory input contains some inaccuracies. We can turn to Lu and Milios and their paper "Globally Consistent Range Scan Alignment for Environment Mapping" to understand the issue.

Image credits: Lu, F., & Milios, E. (1997), Globally Consistent Range Scan Alignment for Environment Mapping

In figure (a), we see how going from position P1 ... Pn accumulates small measurement errors over time, until the resulting environment map is wrong. But when we align the scans, as in figure (b), the result is considerably improved. To do that, the algorithm keeps track of all local frame data and the network of spatial relations among them. A common problem at this point is how much data to store to keep doing this correctly. To reduce complexity, the algorithm often reduces the number of keyframes it stores.

Let's build the map, a.k.a. SLAM

To make Mixed Reality feasible, SLAM has the following challenges to handle:

- Monocular camera input
- Real-time operation
- Drift

Skeleton of SLAM

How do we deal with these in a Mixed Reality scene? We start with the principles laid out by Cadena et al. in their paper "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age". From that paper, we can see the standard architecture of SLAM.

Image credit: Cadena et al.

If we deconstruct the diagram, we get the following four modules:

- Sensor: On mobile, this is primarily the camera, augmented by the accelerometer, the gyroscope and, depending on the device, a light sensor. Apart from Project Tango-enabled phones, no Android device had a depth sensor.
- Front end: Feature extraction and anchor identification happen here, as described in the previous post.
- Back end: Performs error correction to compensate for drift, and also takes care of localizing the pose model and the overall geometric reconstruction.
- SLAM estimate: The result, containing the tracked features and locations.

To better understand it, we can take a look at one of the open-source implementations of SLAM.

D.I.Y. SLAM: taking a peek at ORB-SLAM

To get hands-on with how SLAM works, let's take a look at a recent algorithm by Montiel et al. called ORB-SLAM. We will use the code of its successor, ORB-SLAM2. The algorithm is available on GitHub under GPLv3, and I found an excellent blog post which goes into the nifty details of how to run ORB-SLAM2 on your own computer. I highly encourage you to read it to avoid problems during setup. The author's talk is also available to watch and is very interesting. ORB-SLAM uses only the camera, without any gyroscope or accelerometer inputs, but the result is still impressive.

1. Detecting features: ORB-SLAM, as the name suggests, uses ORB to find keypoints and generate binary descriptors. Internally, ORB is based on the same method for finding keypoints and generating binary descriptors that we discussed in part 1 for BRISK. In short, ORB-SLAM analyzes each picture to find keypoints and then stores them with a reference to the keyframe in a map. These are used later to correct historical data.

2. Keypoint → 3D landmark: The algorithm looks for new frames from the camera, and when it finds one it performs keypoint detection on it. The keypoints are then matched against those of the previous frame to get a spatial distance. This gives a good idea of where the same keypoints can be found again in a new frame, and provides the initial camera pose estimation.

3. Refine camera pose: The algorithm repeats step 2 by projecting the estimated initial camera pose into the next camera frame to search for more keypoints that correspond to the ones it already knows. If it is certain it can find them, it uses the additional data to refine the pose and correct any spatial measurement error.

Green squares = tracked keypoints; blue boxes = keyframes; red box = camera view; red points = local map points. Image credits: ORB-SLAM video by Raúl Mur-Artal
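As a toy illustration of the matching in step 2 (my own sketch, not code from ORB-SLAM): ORB descriptors are binary strings, so candidate keypoints are compared by the Hamming distance between their descriptors, and the closest match under a threshold wins.

```
// Toy brute-force matcher for binary descriptors (ORB/BRISK style).
// Real implementations use 256-bit descriptors and smarter search
// structures; here each descriptor is a plain array of bytes.
function hammingDistance(a, b) {
  let dist = 0;
  for (let i = 0; i < a.length; i++) {
    let x = a[i] ^ b[i];                  // differing bits in this byte
    while (x) { dist += x & 1; x >>= 1; } // count them
  }
  return dist;
}

// For each descriptor in the new frame, find its closest match in the
// previous frame. The matched pairs feed the pose estimation.
function matchDescriptors(prevFrame, newFrame, maxDist = 30) {
  const matches = [];
  newFrame.forEach((desc, i) => {
    let best = { index: -1, dist: Infinity };
    prevFrame.forEach((prevDesc, j) => {
      const d = hammingDistance(desc, prevDesc);
      if (d < best.dist) best = { index: j, dist: d };
    });
    if (best.dist <= maxDist) matches.push({ newIdx: i, prevIdx: best.index });
  });
  return matches;
}
```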
Returning home, a.k.a. loop closing

One of the goals of MR is that when you walk back to your starting point, the system should understand that you have returned. The inherent inefficiency and the accumulated error make this hard to predict accurately. In SLAM, this is called loop closing. ORB-SLAM handles it by defining a threshold: it tries to match the keypoints in a frame against subsequent frames, and if the matching percentage against previously detected frames exceeds that threshold, it knows you have returned.

Loop closing performed by the ORB-SLAM algorithm. Image credits: Mur-Artal, R., Montiel

To account for the error, the algorithm has to propagate a coordinate correction throughout the whole frame graph with the updated knowledge that the loop should be closed.

The reconstructed map before (top) and after (bottom) loop closure. Image credits: Mur-Artal, R., Montiel

SLAM today

Google: ARCore's documentation describes its tracking method as "concurrent odometry and mapping", which is essentially SLAM plus sensor inputs. Their patent also indicates they have included inertial sensors in the design.

Apple: Apple is also using Visual-Inertial Odometry, which it acquired by buying Metaio and FlyBy. I learned a lot about what they are doing by watching this video from WWDC18.

Additional read: I found the paper "A Comparative Analysis of Tightly-coupled Monocular, Binocular, and Stereo VINS" to be a nice read on how different IMUs are used and compared. IMUs are the devices that provide all this sensory data to our devices today, and their calibration is supposedly crazy difficult.

I hope this post, along with the previous one, provides a better understanding of how our world is tracked inside ARCore/ARKit. In a few days, I will start another blog series on how to build Mixed Reality applications, using experimental as well as some stable WebXR APIs to build Mixed Reality application demos. As always, feedback is welcome.

References/Interesting Reads:

- A Stochastic Map for Uncertain Spatial Relationships
- Globally Consistent Range Scan Alignment for Environment Mapping
- ORB-SLAM
- Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age
- ORB feature tracking algorithm
- A Comparative Analysis of Tightly-coupled Monocular, Binocular, and Stereo VINS
- Semantic Visual Localization (this might just be the future)

The first part of this series lives here: https://blog.rabimba.com/2018/10/arcore-and-arkit-what-is-under-hood.html
Posted about 5 years ago by Sylvestre Ledru
Firefox fights for people online: for control and choice, for privacy, for safety. We do this because it is our mission to keep the web open and accessible to all. No other tech company has people's back like we do.

Part of keeping you covered is ensuring that our Firefox browser and the other tools and services we offer are running at top performance. When we make an update or add a new feature, the experience should be as seamless and smooth as possible for the user. That's why Mozilla just partnered with Ubisoft to start using Clever-Commit, an artificial intelligence coding assistant developed by Ubisoft La Forge that will make the Firefox code-writing process faster and more efficient. Thanks to Clever-Commit, Firefox users will get even more stable versions of Firefox and have even better browsing experiences.

We don't spend a ton of time regaling our users with the ins and outs of how we build our products, because the most important thing is making sure you have the best experience when you're online with us. But building a browser is no small feat. A web browser plays audio and video, manages various network protocols, secures communications using advanced cryptographic algorithms, and handles content running in parallel across multiple processes, all to render the content that people want to see on the websites they visit. And underneath all of this is a complex body of code that includes millions of lines written in various programming languages: JavaScript, C++, Rust.

The code is regularly edited, released, and updated on Firefox users' machines. Every Firefox release is an investment, with an average of 8,000 software edits loaded into the browser's code by hundreds of Firefox staff and contributors for each release. It has a huge impact, touching hundreds of millions of internet users. With a new release every 6 to 8 weeks, making sure the code we ship is as clean as possible is crucial to the performance people experience with Firefox.

The Firefox engineering team will start using Clever-Commit in its code-writing, testing, and release process. We will initially use the tool during the code review phase and, if the results are conclusive, at other stages of the code-writing process, in particular during automation. We expect to save hundreds of hours of bug riskiness analysis and detection. Ultimately, the integration of Clever-Commit into the full Firefox developer workflow could help catch 3 to 4 out of 5 bugs before they are introduced into the code.

By combining data from the bug tracking system and the version control system (that is, changes in the code base), Clever-Commit uses artificial intelligence to detect patterns of programming mistakes based on the history of the development of the software. This allows us to address bugs at a stage when fixing a bug is a lot cheaper and less time-consuming than after release.

Mozilla will contribute to the development of Clever-Commit by providing programming language expertise in Rust, C++ and JavaScript, as well as expertise in C++ code analysis and analysis of bug tracking systems.

The post Making the Building of Firefox Faster for You with Clever-Commit from Ubisoft appeared first on Future Releases.
Posted about 5 years ago by Josh Marinacci
This is part 2 of my series on how I built Jingle Smash, a block-smashing WebVR game. The key to a physics-based game like Jingle Smash is, of course, the physics engine. In the JavaScript world there are many to choose from. My requirements were full 3D collision simulation, working with ThreeJS, and being fairly easy to use. This narrowed it down to CannonJS, AmmoJS, and Oimo.js. I chose the CannonJS engine because AmmoJS is a compiled port of a C++ library that I worried would be harder to debug, and Oimo appeared to be abandoned (though there was a recent commit, so maybe not?).

CannonJS

CannonJS is not well documented in terms of tutorials, but it does have quite a bit of demo code and I was able to figure it out. The basic usage is quite simple: you create a Body object for everything in your scene that you want to simulate and add these to a World object. On each frame you call world.step(), then read back the positions and orientations of the simulated bodies and apply them to the ThreeJS objects on screen.

While working on the game I started building an editor for positioning blocks, changing their physical properties, testing the level, and resetting it. Combined with physics, this means a whole lot of syncing data back and forth between the Cannon and ThreeJS sides. In the end I created a Block abstraction which holds the single source of truth and keeps the other objects updated. The blocks are managed entirely from within the BlockService.js class, so that all of this is completely isolated from the game graphics and UI.
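The post doesn't show the Block class itself, so here is a hypothetical, condensed sketch of what that kind of single-source-of-truth abstraction could look like. This is my own illustration, not the real BlockService.js; only the THREE and CANNON calls are real APIs.

```
// Hypothetical sketch: setters mutate the authoritative properties,
// then both the ThreeJS mesh and the CannonJS body are regenerated
// from them, so the two sides can never drift apart.
class Block {
  constructor(world, scene, props) {
    this.world = world;   // CANNON.World
    this.scene = scene;   // THREE.Scene
    this.props = props;   // { position: {x,y,z}, width, height, depth }
    this.rebuild();
  }
  set(key, value) {
    this.props[key] = value;
    this.rebuild();       // any change regenerates both representations
  }
  rebuild() {
    const { position: p, width, height, depth } = this.props;
    if (this.body) this.world.removeBody(this.body);
    if (this.obj) this.scene.remove(this.obj);
    // ThreeJS mesh: full extents
    this.obj = new THREE.Mesh(
      new THREE.BoxGeometry(width, height, depth),
      new THREE.MeshLambertMaterial({ color: 'white' }));
    this.obj.position.set(p.x, p.y, p.z);
    this.scene.add(this.obj);
    // CannonJS body: half extents (see the note about boxes below)
    this.body = new CANNON.Body({
      mass: 1,
      position: new CANNON.Vec3(p.x, p.y, p.z),
      shape: new CANNON.Box(new CANNON.Vec3(width / 2, height / 2, depth / 2)),
    });
    this.world.addBody(this.body);
  }
}
```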
Physics Bodies

When a Block is created or modified, it regenerates both the ThreeJS objects and the Cannon objects. Since ThreeJS is documented everywhere, I'll only show the Cannon side.

```
let type = CANNON.Body.DYNAMIC
if(this.physicsType === BLOCK_TYPES.WALL) {
    type = CANNON.Body.KINEMATIC
}

this.body = new CANNON.Body({
    mass: 1, //kg
    type: type,
    position: new CANNON.Vec3(this.position.x, this.position.y, this.position.z),
    shape: new CANNON.Box(new CANNON.Vec3(this.width/2, this.height/2, this.depth/2)),
    material: wallMaterial,
})
this.body.quaternion.setFromEuler(this.rotation.x, this.rotation.y, this.rotation.z, 'XYZ')
this.body.jtype = this.physicsType
this.body.userData = {}
this.body.userData.block = this
world.addBody(this.body)
```

Each body has a mass, type, position, quaternion, and shape. For mass I've always used 1 kg. This works well enough, but if I ever update the game in the future I'll make the mass configurable for each block, which would enable more variety in the levels.

The type is either dynamic or kinematic. Dynamic means the body can move and tumble in all directions. A kinematic body is one that does not move, but other blocks can hit and bounce against it.

The shape is the actual shape of the body. For blocks this is a box; for the ball that you throw I used a sphere. It is also possible to create interactive meshes, but I didn't use them for this game.

An important note about boxes: in ThreeJS, the BoxGeometry constructor takes the full width, height, and depth. In CannonJS you use the extent from the center, which is half of the full width, height, and depth. I didn't realize this when I started, only to discover my cubes wouldn't fall all the way to the ground. :)

The position and quaternion (orientation) properties use the same values in the same order as ThreeJS. The material refers to how that block will bounce against others. In my game I use only two materials: wall and ball. For each pair of materials you create a contact material, which defines the friction and restitution (bounciness) to use when that particular pair collides.

```
const wallMaterial = new CANNON.Material()
// …
const ballMaterial = new CANNON.Material()
// …
world.addContactMaterial(new CANNON.ContactMaterial(
    wallMaterial, ballMaterial,
    {
        friction: this.wallFriction,
        restitution: this.wallRestitution
    }
))
```

Gravity

All of these bodies are added to a World object with a hard-coded gravity property set to match Earth's gravity (9.8 m/s²), though individual levels may override this. The last three levels of the current game have gravity set to 0 for a different play experience.

```
const world = new CANNON.World();
world.gravity.set(0, -9.82, 0);
```

Once the physics engine is set up and simulating the objects, we need to update the on-screen graphics after every world step. This is done by just copying the properties out of the body and back to the ThreeJS object.

```
this.obj.position.copy(this.body.position)
this.obj.quaternion.copy(this.body.quaternion)
```
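The per-frame loop itself isn't shown in the post. Under a typical CannonJS/ThreeJS setup it might look roughly like this sketch, where world, blocks, renderer, scene, and camera are assumed to already exist, and the fixed timestep value is my own choice:

```
// Hypothetical render loop: step the physics world at a fixed rate,
// then copy each body's transform back onto its ThreeJS mesh.
const FIXED_TIMESTEP = 1 / 60;
let lastTime;
function animate(time) {
  requestAnimationFrame(animate);
  if (lastTime !== undefined) {
    // step(fixedStep, elapsedSeconds, maxSubSteps)
    world.step(FIXED_TIMESTEP, (time - lastTime) / 1000, 3);
  }
  lastTime = time;
  blocks.forEach(block => {
    block.obj.position.copy(block.body.position);
    block.obj.quaternion.copy(block.body.quaternion);
  });
  renderer.render(scene, camera);
}
requestAnimationFrame(animate);
```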
Collision Detection

There is one more thing we need: collisions. The engine handles colliding all of the boxes and making them fall over, but the goal of the game is that the player must knock over all of the crystal boxes to complete the level. This means I had to define what "knock over" means. At first I just checked whether a block had moved from its original orientation, but this proved tricky. Sometimes a box would be very gently knocked and tip slightly, triggering a 'knock over' event; other times you could smash into a block at high speed but it wouldn't tip over because there was a wall behind it.

Instead I added a collision handler, so that my code is called whenever two objects collide. The collision event includes a method to get the velocity at the impact, which lets me ignore any collisions that aren't strong enough. You can see this in player.html.

```
function handleCollision(e) {
    if(game.blockService.ignore_collisions) return

    // ignore tiny collisions
    if(Math.abs(e.contact.getImpactVelocityAlongNormal()) < 1.0) return

    // when the ball hits a wall or a block, just make the thunk sound
    if(e.body.jtype === BLOCK_TYPES.BALL) {
        if(e.target.jtype === BLOCK_TYPES.WALL) {
            game.audioService.play('click')
        }
        if(e.target.jtype === BLOCK_TYPES.BLOCK) {
            game.audioService.play('click')
        }
    }

    // if a crystal hits anything and the impact was strong enough
    if(e.body.jtype === BLOCK_TYPES.CRYSTAL || e.target.jtype === BLOCK_TYPES.CRYSTAL) {
        if(Math.abs(e.contact.getImpactVelocityAlongNormal()) >= 2.0) {
            return destroyCrystal(e.target)
        }
    }
    // console.log(`collision: body ${e.body.jtype} target ${e.target.jtype}`)
}
```

The collision event handler was also the perfect place to add sound effects for when objects hit each other. Since the event includes which objects were involved, I can use different sounds for different objects, like the crashing-glass sound for the crystal blocks.

Firing the ball is similar to creating the block bodies, except that it needs an initial velocity based on how much force the player slingshotted the ball with. If you don't pass a velocity to the Body constructor, it defaults to 0.

```
fireBall(pos, dir, strength) {
    this.group.worldToLocal(pos)
    dir.normalize()
    dir.multiplyScalar(strength * 30)
    const ball = this.generateBallMesh(this.ballRadius, this.ballType)
    ball.castShadow = true
    ball.position.copy(pos)
    const sphereBody = new CANNON.Body({
        mass: this.ballMass,
        shape: new CANNON.Sphere(this.ballRadius),
        position: new CANNON.Vec3(pos.x, pos.y, pos.z),
        velocity: new CANNON.Vec3(dir.x, dir.y, dir.z),
        material: ballMaterial,
    })
    sphereBody.jtype = BLOCK_TYPES.BALL
    ball.userData.body = sphereBody
    this.addBall(ball)
    return ball
}
```

Next Steps

Overall, CannonJS worked pretty well. I would like it to be faster, since it costs me about 10fps to run, but other things in the game had a bigger impact on performance. If I ever revisit this game I will try to move the physics calculations to a worker thread, as well as redo the syncing code; I'm sure there is a better way to sync objects quickly. Perhaps JS Proxies would help. I would also move the graphics and styling code outside, so that the BlockService can really focus just on physics.

While some more powerful solutions are coming with WASM, today I definitely recommend CannonJS for the physics in your WebVR games. The ease of working with the API (despite it being under-documented) meant I could spend more time on the game and less time worrying about math.
Posted about 5 years ago by Mozilla
Mozilla and our allies are asking four major retailers to adopt our Minimum Security Guidelines.

Today, Mozilla, Consumers International, the Internet Society, and eight other organizations are urging Amazon, Target, Walmart, and Best Buy to stop selling insecure connected devices.

Why? As the Internet of Things expands, a troubling pattern is emerging:

[1] Company X makes a "smart" product — like connected stuffed animals — without proper privacy or security features

[2] Major retailers sell that insecure product widely

[3] The product gets hacked, and consumers are the ultimate losers

This has been the case with smart dolls, webcams, doorbells, and countless other devices. And the consequences can be life-threatening: "Internet-connected locks, speakers, thermostats, lights and cameras that have been marketed as the newest conveniences are now also being used as a means for harassment, monitoring, revenge and control," the New York Times reported last year. Compounding this: it is estimated that by 2020, 10 billion IoT products will be active.

Last year, in an effort to make connected devices on the market safer for consumers, Mozilla, the Internet Society, and Consumers International published our Minimum Security Guidelines: the five basic features we believe all connected devices should have. They include encrypted communications; automatic updates; strong password requirements; vulnerability management; and an accessible privacy policy.

Now, we're calling on four major retailers to publicly endorse these guidelines, and also commit to vetting all connected products they sell against them. Mozilla, Consumers International, and the Internet Society have sent a sign-on letter to Amazon, Target, Walmart, and Best Buy. The letter is also signed by 18 Million Rising, Center for Democracy and Technology, ColorOfChange, Consumer Federation of America, Common Sense Media, Hollaback, Open Media & Information Companies Initiative, and Story of Stuff.

Currently, there is no shortage of insecure products on shelves. In our annual holiday buyer's guide, which ranks popular devices' privacy and security features, about half the products failed to meet our Minimum Security Guidelines. And in the Valentine's Day buyer's guide we released last week, nine out of 18 products failed.

Why are we targeting retailers, and not the companies themselves? Mozilla can and does speak with the companies behind these devices. But by talking with retailers, we believe we can have an outsized impact. Retailers don't want their brands associated with insecure goods. And if retailers drop a company's product, that company will be compelled to improve its product's privacy and security features.

We know this approach works. Last year, Mozilla called on Target and Walmart to stop selling CloudPets, an easily hackable smart toy. Target and Walmart listened, and stopped selling the toys. In the short term, we can get the most insecure devices off shelves. In the long term, we can fuel a movement for a more secure, privacy-centric Internet of Things.

Read the full letter, here or below.

Dear Target, Walmart, Best Buy and Amazon,

The advent of new connected consumer products offers many benefits. However, as you are aware, there are also serious concerns regarding standards of privacy and security with these products. These require urgent attention if we are to maintain consumer trust in this market. It is estimated that by 2020, 10 billion IoT products will be active.
The majority of these will be in the hands of consumers. Given the enormous growth of this space, and because so many of these products are entrusted with private information and conversations, it is incredibly important that we all work together to ensure that internet-enabled devices enhance consumers' trust.

CloudPets illustrated the problem, yet we continue to see connected devices that fail to meet basic privacy and security thresholds. We are especially concerned about how these issues affect children, in the case of connected toys and other devices that children interact with.

That's why we're asking you to publicly endorse these minimum security and privacy guidelines, and commit publicly to use them to vet any products your company sells to consumers. While many products can and should be expected to meet a high set of privacy and security standards, these minimum requirements are a strong start that every reputable consumer company must be expected to meet. These minimum guidelines require all IoT devices to have:

1) Encrypted communications

The product must use encryption for all of its network communications functions and capabilities. This ensures that all communications are not eavesdropped on or modified in transit.

2) Security updates

The product must support automatic updates for a reasonable period after sale, and they must be enabled by default. This ensures that when a vulnerability is known, the vendor can make security updates available to consumers, which are verified (using some form of cryptography) and then installed seamlessly. Updates must not make the product unavailable for an extended period.

3) Strong passwords

If the product uses passwords for remote authentication, it must require that strong passwords are used, including having password strength requirements. Any non-unique default passwords must also be reset as part of the device's initial setup. This helps protect the device from guessable-password attacks, which could result in device compromise.

4) Vulnerability management

The vendor must have a system in place to manage vulnerabilities in the product. This must include a point of contact for reporting vulnerabilities and an internal vulnerability handling process to fix them once reported. This ensures that vendors are actively managing vulnerabilities throughout the product's lifecycle.

5) Privacy practices

The product must have a privacy policy that is easily accessible and written in language that is easily understood and appropriate for the person using the device or service at the point of sale. At a minimum, users should be notified about substantive changes to the policy. If data is being collected, transmitted or shared for marketing purposes, that should be clear to users and, in line with the EU's General Data Protection Regulation (GDPR), there should be a way to opt out of such practices. Users should also have a way to delete their data and account. Additionally, as in GDPR, this should include a policy setting standard retention periods wherever possible.

We've seen headline after headline about privacy and security failings in the IoT space, and it is often the same mistakes that have led to people's private moments, conversations, and information being compromised. Given the value and trust that consumers place in your company, you have a uniquely important role in addressing this problem and helping to build a more secure, connected future.
Consumers can and should be confident that, when they buy a device from you, that device will not compromise their privacy and security. Signing on to these minimum guidelines is the first step to turn the tide and build trust in this space.

Yours,

Mozilla, Internet Society, Consumers International, ColorOfChange, Open Media & Information Companies Initiative, Common Sense Media, Story of Stuff, Center for Democracy and Technology, Consumer Federation of America, 18 Million Rising, Hollaback

The post Retailers: All We Want for Valentine's Day is Basic Security appeared first on The Mozilla Blog.
Posted about 5 years ago by Dan Brown
Here at Mozilla, we are big fans of Glitch. In early 2017 we made the decision to host our A-Frame content on their platform. The decision was easy: Glitch makes it easy to explore and remix live code examples for WebVR. We also love the people behind Glitch. They have created a culture and a community that is kind, encouraging, and champions creativity. We share their vision for a web that is creative, personal, and human.

The ability to deliver immersive experiences through the browser opens a whole new avenue for creativity. It allows us to move beyond screens and keyboards. It is exciting, and new, and sometimes a bit weird (but in a good way).

Building a virtual reality experience may seem daunting, but it really isn't. WebVR and frameworks like A-Frame make it really easy to get started. This is why we worked with Glitch to create a WebVR starter kit: a free, 5-part video course with interactive code examples that will teach you the fundamentals of WebVR using A-Frame. Our hope is that this starter kit will encourage anyone who has been on the fence about creating virtual reality experiences to dive in and get started.

Check out part one of the five-part series below. If you want more, I'd encourage you to check out the full starter kit here, or use the link at the bottom of this post.

In the Glitch viewer embedded below, you can see how to make a WebVR planetarium in just a few easy-to-follow steps. You learn interactively (and painlessly) by editing and remixing the working code in the viewer.

Ready to keep going? Click below to view the full series on Glitch.

View WebVR Starter Kit

The post Anyone can create a virtual reality experience with this new WebVR starter kit from Mozilla and Glitch appeared first on Mozilla Hacks - the Web developer blog.
Posted about 5 years ago by TWiR Contributors
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions. This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

- Learning Rust in 2019.
- A quick look at trait objects in Rust.
- Allocations in Rust: An introduction to the memory model.
- Custom exit status codes with ? in main.
- Rust: a unique perspective.
- Rust on STM32: Blinking an LED.
- Generators I: Toward a minimum viable product.
- Aturon retires from the Core Team (but not from Rust).
- Rewriting stackcollapse-xdebug in Rust.
- Are you still using 'println' in Rust for debugging?
- UCG+Miri All-Hands 2019 Recap.

Crate of the Week

This week's crate is sysinfo, a system handler to get information and interact with processes. Thanks to GuillaumeGomez for the suggestion! Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available; visit the task page for more information.

- raft: Convert Storage::entries's max_size argument to Option.
- TiKV: Convert trait objects to dyn syntax for Rust 2018.
- TiKV: Remove all the extern crates for Rust 2018.
- TiKV: Add tcmalloc support to the tikv_alloc crate.
- rand: Standard should be implemented for NonZero types.
- Tarpaulin: Test coveralls with other CI services.
- Inferno: Multiple good first issues.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

236 pull requests were merged in the last week:

- Initial addition of the Embedded Rust Book.
- Add const generics to the AST.
- Error on duplicate matcher bindings.
- libc: RFC 2235 - Implement PartialEq, Eq, Hash, Debug for all types.
- Lower constant patterns with ascribed types.
- Make intern_lazy_const actually intern its argument.
- Avoid committing to autoderef in object method probing.
- Add #[must_use] to core::task::Poll.
- Add #[must_use] message to Fn* traits.
- Avoid some bounds checks in binary_heap::{PeekMut, Hole}.
- Make -Zdump-mir dump shims.
- Cargo: Bail when trying to run "test --doc --no-run".
- Improve error message and docs for non-UTF-8 bytes in stdio on Windows.
- Move privacy checking later in the pipeline and make some passes run in parallel.
- Overhaul syntax::fold::Folder.
- Factor out error reporting from the smart_resolve_path_fragment fn.
- Do not ICE in codegen when using an extern_type static.
- hir: add more HirId methods.
- Implement more detailed self profiling.
- Add a forever unstable opt-out of const qualification checks.
- Initial implementation of the rustfixable unused_imports lint.
- Add a query type which is always marked as red if it runs.
- Don't try to clean predicates involving ReErased.
- Deduplicate mismatched delimiter errors.
- Add suggestion for duplicated import.
- Allow #[repr(align(x))] on enums.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision.
Express your opinions now.

RFCs

- [disposition: postpone] Generic integers.
- [disposition: postpone] Accept semicolons as item-like.

Tracking Issues & PRs

- [disposition: merge] Stabilize TryFrom and TryInto with a convert::Infallible empty enum.
- [disposition: merge] Tracking issue for str::as_mut_ptr.
- [disposition: merge] Stabilize slice_sort_by_cached_key.
- [disposition: merge] Deprecate before_exec in favor of unsafe pre_exec.
- [disposition: merge] Stabilize linker-plugin based LTO (aka cross-language LTO).
- [disposition: merge] Tracking issue for std::iter::successors.
- [disposition: merge] Tracking issue for Option::copied.
- [disposition: merge] Tracking issue for std::ptr::hash.
- [disposition: merge] Rename MaybeUninit to MaybeUninitialized.
- [disposition: merge] Tracking issue for std::iter::from_fn.

New RFCs

- Changing the overflow behavior for usize in release builds to panic.
- #[ffi_returns_twice].

Upcoming Events

Online

- Feb 20. Rust Community Team Meeting on Discord.
- Feb 25. Rust Community Content Subteam Meeting on Discord.
- Feb 27. Rust Events Team Meeting on Telegram.

Asia Pacific

- Feb 16. Chennai, IN - Rust Chennai meetup.

Europe

- Feb 18. Karlsruhe, DE - Karlsruhe Rust Hack and Learn.
- Feb 20. Berlin, DE - Berlin Rust Hack and Learn.

North America

- Feb 14. Columbus, US - Columbus Rust Society.
- Feb 20. Sacramento, US - Sacramento Rust Inaugural Meetup.
- Feb 20. Chicago, US - Chicago Rust Meetup - Property-Based Testing in Rust.
- Feb 20. Vancouver, CA - Vancouver Rust meetup.
- Feb 21. San Diego, US - San Diego Rust.
- Feb 21. Arlington, US - Rust DC—Learn+Try: Custom Redis Datastructures.
- Feb 25. Durham, US - Triangle Rustaceans.
- Feb 27. Ann Arbor, US - Ann Arbor Rust Meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

- Software Developer at Finhaven, Vancouver, CA.
- Software Engineer at Discord, San Francisco, US.
- Network Engineer at NearProtocol, San Francisco, US.
- Navitia Software Engineer at Kisio Digital, Paris, FR.
- Rust web developer at Impero, Denmark/remote.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Once again, we have two quotes for the price of one:

"I love Rust because it reduces bugs by targeting its biggest source… me." – ObliviousJD on Twitter

"Say the same thing about seatbelts in a car. If you don't plan to have accidents, why do you need seatbelts? Car accidents, like mistakes in programming, are a risk with a non-zero likelihood. A seatbelt might be a little bit annoying when things go well, but much less so when they don't. Rust is there to stop you in most cases when you try to accidentally shoot yourself in the leg, unless you deliberately do so without knowing what you are doing while yelling 'hold my beer' (unsafe). And contrary to popular belief, even in unsafe blocks many of Rust's safety guarantees hold, just not all. … Just like with the seatbelt, there will always be those who don't wear one for their very subjective reasons (e.g. because of edge cases where a seatbelt could trap you in a burning car, or because it is not cool, or because they hate the feeling and think accidents only happen to people who can't drive)." – atoav on HN, comparing Rust's safety guarantees with seatbelts.

Thanks to Kornel and pitdicker for the suggestions! Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.
Posted about 5 years ago by Mozilla
Mozilla, Access Now, Reporters Without Borders, and 35 other organizations have published an open letter to Facebook. Our ask: make good on your promises to provide more transparency around political advertising ahead of the 2019 EU Parliamentary Elections.

Is Facebook making a sincere effort to be transparent about the content on its platform? Or is the social media platform neglecting its promises? Facebook promised European lawmakers and users it would increase the transparency of political advertising on the platform to prevent abuse during the elections. But in the very same breath, it took measures to block access to transparency tools that let users see how they are being targeted.

With the 2019 EU Parliamentary Elections on the horizon, it is vital that Facebook take action to address this problem. So today, Mozilla and 37 other organizations — including Access Now and Reporters Without Borders — are publishing an open letter to Facebook.

"We are writing you today as a group of technologists, human rights defenders, academics, journalists and Facebook users who are deeply concerned about the validity of Facebook's promises to protect European users from targeted disinformation campaigns during the European Parliamentary elections," the letter reads.

"Promises and press statements aren't enough; instead, we'll be watching for real action over the coming months and will be exploring ways to hold Facebook accountable if that action isn't sufficient," the letter continues.

Individuals may sign their name to the letter, as well. Sign here.

Read the full letter, here or below. The letter will also appear in the Thursday print edition of POLITICO Europe.

The letter urges Facebook to make good on its promise to EU lawmakers. Last year, Facebook signed the EU's Code of Practice on disinformation and pledged to increase transparency around political advertising. But since then, Facebook has made political advertising more opaque, not more transparent: the company recently blocked access to third-party transparency tools.

Specifically, our open letter urges Facebook to:

- Roll out a functional, open Ad Archive API that enables advanced research and development of tools that analyse political ads served to Facebook users in the EU
- Ensure that all political advertisements are clearly distinguished from other content and are accompanied by key targeting criteria such as sponsor identity and amount spent on the platform in all EU countries
- Cease all harassment of good faith researchers who are building tools to provide greater transparency into the advertising on Facebook's platform

To safeguard the integrity of the EU Parliament elections, Facebook must be part of the solution. Users and voters across the EU have the right to know who is paying to promote the political ads they encounter online, whether they are being targeted, and why.

The full letter:

Dear Facebook:

We are writing you today as a group of technologists, human rights defenders, academics, journalists and Facebook users who are deeply concerned about the validity of Facebook's promises to protect European users from targeted disinformation campaigns during the European Parliamentary elections. You have promised European lawmakers and users that you will increase the transparency of political advertising on the platform to prevent abuse during the elections.
But in the very same breath, you took measures to block access to transparency tools that let your users see how they are being targeted.

In the company's recent Wall Street Journal op-ed, Mark Zuckerberg wrote that the most important principles around data are transparency, choice and control. By restricting access to advertising transparency tools available to Facebook users, you are undermining transparency, eliminating the choice of your users to install tools that help them analyse political ads, and wielding control over good faith researchers who try to review data on the platform. Your alternative to these third-party tools provides simple keyword search functionality and does not provide the level of data access necessary for meaningful transparency.

Actions speak louder than words. That's why you must take action to meaningfully deliver on the commitments made to the EU institutions, notably the increased transparency that you've promised. Promises and press statements aren't enough; instead, we need to see real action over the coming months, and we will be exploring ways to hold Facebook accountable if that action isn't sufficient.

Specifically, we ask that you implement the following measures by 1 April 2019 to give developers sufficient lead time to create transparency tools in advance of the elections:

- Roll out a functional, open Ad Archive API that enables advanced research and development of tools that analyse political ads served to Facebook users in the EU
- Ensure that all political advertisements are clearly distinguished from other content and are accompanied by key targeting criteria such as sponsor identity and amount spent on the platform in all EU countries
- Cease harassment of good faith researchers who are building tools to provide greater transparency into the advertising on your platform

We believe that Facebook and other platforms can be positive forces that enable democracy, but this vision can only be realized through true transparency and trust. Transparency cannot just be on the terms with which the world's largest, most powerful tech companies are most comfortable. We look forward to the swift and complete implementation of these transparency measures that you have promised to your users.

Sincerely,

Mozilla Foundation

and also signed by: Access Now, AlgorithmWatch, All Out, Alto Data Analytics, ARTICLE 19, Aufstehn, Bits of Freedom, Bulgarian Helsinki Committee, BUND – Friends of the Earth Germany, Campact, Campax, Center for Democracy and Technology, CIPPIC, Civil Liberties Union for Europe, Civil Rights Defenders, Declic, doteveryone, Estonian Human Rights Center, Free Press Unlimited, GONG Croatia, Greenpeace, Italian Coalition for Civil Liberties and Rights (CILD), Mobilisation Lab, Open Data Institute, Open Knowledge International, OpenMedia, Privacy International, PROVIDUS, Reporters Without Borders, Skiftet, SumOfUs, The Fourth Group, Transparent Referendum Initiative, Uplift, Urgent Action Fund for Women's Human Rights, WhoTargetsMe, Wikimedia UK

Note: This blog post has been updated to reflect additional letter signers.

The post Open Letter: Facebook, Do Your Part Against Disinformation appeared first on The Mozilla Blog.
Posted about 5 years ago
When building websites or web apps, creating a "Download as file" link is quite useful. For example, you may want to allow users to export some data as JSON, CSV or plain text files so they can open them in external programs or load them back later. Usually this requires a web server to format the file and serve it. But you can actually export an arbitrary JavaScript variable to a file entirely on the client side. I implemented that function in one of my projects, MozApoy, and here I'll explain how I did it.

First, we create a link in HTML. It will look something like this (the value of the download attribute is whatever filename you want the exported file to get):

```
<a id="download_link" download="data.txt" href="">Download as Text File</a>
```

The download attribute will be the filename for your file. Notice that we keep the href attribute blank. Traditionally we fill this attribute with a server-generated file path, but this time we'll assign it dynamically, generating the link with JavaScript.

Then, if we want to export the content of the text variable as a text file, we can use this JavaScript code:

```
var text = 'Some data I want to export';
var data = new Blob([text], {type: 'text/plain'});
var url = window.URL.createObjectURL(data);
document.getElementById('download_link').href = url;
```

The magic happens on the third line: the window.URL.createObjectURL() API takes a Blob and returns a URL to access it. The URL lives as long as the document in the window on which it was created. Notice that you can assign the type of the data in the new Blob() constructor. If you assign the correct format, the browser can handle the file better. Other commonly seen formats include application/json and text/csv. For example, if we name the file as *.csv and give it type: 'text/csv', Firefox will recognize it as a "CSV document" and suggest you open it with LibreOffice Calc.

In the last line we assign the URL to the element's href attribute, so when the user clicks on the link, the browser will initiate a download action (or another default action for the specific file type).

Every time you call createObjectURL(), a new object URL is created, which will use up memory if you call it many times. So if you don't need the old URL anymore, you should call the revokeObjectURL() API to free it:

```
var url = window.URL.createObjectURL(data);
window.URL.revokeObjectURL(url);
```

This is a simple trick to let your users download files without setting up any server. If you want to see it in action, you can check out this CodePen.
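As a closing example (mine, not from the original post), the same pattern works for structured data: build a CSV string, wrap it in a Blob with type 'text/csv', and point the link at it. The data and filename below are placeholders.

```
// Sketch: export an array of objects as a CSV download,
// reusing the same link element from the example above.
var rows = [
  { name: 'apple', price: 10 },
  { name: 'banana', price: 20 },
];
var header = 'name,price';
var lines = rows.map(function(r) { return r.name + ',' + r.price; });
var csv = header + '\n' + lines.join('\n');

var blob = new Blob([csv], { type: 'text/csv' });
var link = document.getElementById('download_link');
link.download = 'prices.csv';                  // suggested filename
link.href = window.URL.createObjectURL(blob);  // clicking now downloads the CSV
```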