Posted 1 day ago by Luis Cuende
In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla’s mission to keep the web open and accessible for all.

While many of the projects we’ve covered build on the web as we know it or operate like the browsers we’re familiar with, the Aragon project has a broader vision: Give people the tools to build their own autonomous organizations with social mores codified in smart contracts. I hope you enjoy this introduction to Aragon from project co-founder Luis Cuende. – Dietrich Ayala

Introducing Aragon

I’m Luis. I cofounded Aragon, which allows for the creation of decentralized organizations. The principles of Aragon are embodied in the Aragon Manifesto, and its format was inspired by the Mozilla Manifesto! Here’s a quick summary.

We are in a key moment in history: Technology either oppresses or liberates us. That outcome will depend on common goods being governed by the community, and not just by nation states or corporate conglomerates. For that to happen, we need technology that allows for decentralized governance.

Thanks to crypto, decentralized governance can provide new means of organization that don’t entail violence or surveillance, therefore providing more freedom to the individual and increasing fairness.

With Aragon, developers can create new apps, such as voting mechanisms, that use smart contracts to leverage decentralized governance and allow peers to control resources like funds, membership, and code repos.

Aragon is built on Ethereum, which is a blockchain for smart contracts. Smart contracts are software that is executed in a trustless and transparent way, without having to rely on a third-party server or any single point of failure.
Aragon is at the intersection of social, app platform, and blockchain.

Architecture

The Aragon app is one of few truly decentralized apps. Its smart contracts and front end are upgradeable thanks to aragonOS and the Aragon Package Manager (APM). You can think of APM as a fully decentralized and community-governed NPM. The smart contracts live on the Ethereum blockchain, and APM takes care of storing a log of their versions. APM also keeps a record of arbitrary data blobs hosted on decentralized storage platforms like IPFS, which in our case we use for storing the front end for the apps.

The Aragon app allows users to install new apps into their organization, and those apps are embedded using sandboxed iframes. All the apps use Aragon UI, so users don’t even know they are interacting with apps made by different developers. Aragon has a very rich permission system that allows users to set what each app can do inside their organization. An example would be: Up to $1 can be withdrawn from the funds if there’s a vote with 51% support.

Hello World

To create an Aragon app, you can go to the Aragon Developer portal. Getting started is very easy. First, install IPFS if you don’t have it already installed. Second, run the following commands:

```
$ npm i -g @aragon/cli
$ aragon init foo.aragonpm.eth
$ cd foo
$ aragon run
```

Here we will show a basic counter app, which allows members of an organization to count up or down if a democratic vote happens, for example. This would be the smart contract (in Solidity) that keeps track of the counter in Ethereum:

```solidity
contract Counter is AragonApp {
    /**
     * @notice Increment the counter by 1
     */
    function increment() auth(INCREMENT_ROLE) external {
        // ...
    }

    /**
     * @notice Decrement the counter by 1
     */
    function decrement() auth(DECREMENT_ROLE) external {
        // ...
    }
}
```

This code runs in a web worker, keeping track of events in the smart contract and caching the state in the background:

```javascript
// app/script.js
import Aragon from '@aragon/client'

// Initialize the app
const app = new Aragon()

// Listen for events and reduce them to a state
const state$ = app.store((state, event) => {
  // Initial state
  if (state === null) state = 0

  // Build state
  switch (event.event) {
    case 'Decrement':
      state--
      break
    case 'Increment':
      state++
      break
  }

  return state
})
```

Some basic HTML (not using Aragon UI, for simplicity):

```html
...
```

And the JavaScript that updates the UI:

```javascript
// app/app.js
import Aragon, { providers } from '@aragon/client'

const app = new Aragon(
  new providers.WindowMessage(window.parent)
)

const view = document.getElementById('view')

app.state().subscribe(
  function (state) {
    view.innerHTML = `The counter is ${state || 0}`
  },
  function (err) {
    view.innerHTML = 'An error occurred, check the console'
    console.log(err)
  }
)
```

aragon run takes care of updating your app on APM and uploading your local webapp to IPFS, so you don’t need to worry about it!

Learn More

You can go to Aragon’s website or the Developer Portal to learn more about Aragon. If you are interested in decentralized governance, you can also check out our research forum. If you would like to contribute, you can look at our good first issues. If you have any questions, please join the Aragon community chat!

The post Dweb: Creating Decentralized Organizations with Aragon appeared first on Mozilla Hacks - the Web developer blog.
Posted 1 day ago by Owen Bennett
The US Supreme Court recently released a landmark ruling in Carpenter vs. United States, which held that law enforcement authorities must secure a warrant in order to access citizens’ cell-site location data. At the upcoming 40th International Conference of Data Protection and Privacy Commissioners, we’re hosting a panel discussion to unpack what Carpenter means in a globalised world.

Event blurb: The Court’s judgement in Carpenter rested on the understanding that communications metadata can reveal sensitive information about individuals, and that citizens had a reasonable expectation of privacy with respect to that metadata. This panel discussion will seek to unpack what Carpenter says about users’ expectations of privacy in the fully-connected world. It will make this assessment through both a legal and ethical lens, and compare the notion of expectation of privacy in Carpenter to other jurisdictions where data protection legislation is currently debated. Finally, the panel will examine the types of metadata implicated by the Carpenter ruling: how sensitive is that data, and what legal standards should be applied given that sensitivity?

Speakers:
- Pam Dixon, Founder and Executive Director, World Privacy Forum
- Malavika Jayaram, Executive Director, Digital Asia Hub
- Marshall Erwin, Director, Trust & Security, Mozilla Corporation
- European Commission, TBC

Moderator: Owen Bennett, Mozilla

Logistics: Thursday 25 October, 14:30-15:50, The Stanhope Hotel, Rue du Commerce 9, 1000 Brussels, Belgium

The post Lessons from Carpenter – Mozilla panel discussion at ICDPPC appeared first on Open Policy & Advocacy.
Posted 1 day ago by Raegan MacDonald
At the upcoming 40th International Conference of Data Protection and Privacy Commissioners, we’re convening a timely high-level panel discussion on the future of advertising in an open and sustainable internet ecosystem.

Event title: Online advertising is broken: Can ethics fix it?

Description: There’s no doubt that advertising is the dominant business model online today – and it has allowed a plethora of platforms, services, and publishers to operate without direct payment from end users. However, there is clearly a crisis of trust among these end users, driving skepticism of advertising, annoyance, and a sharp increase in adoption of content blockers. Ad fraud, adtech centralization, and bad practices like cryptojacking and pervasive tracking have made the web a difficult – and even hostile – environment for users and publishers alike. While advertising is not the only contributing factor, it is clear that the status quo is crumbling. This workshop will bring together stakeholders from across the online ecosystem to examine the role that ethics, policy, and legislation (including the GDPR) play in increasing online trust, improving end user experience, and bolstering sustainable economic models for the web.

Speakers:
- Katharina Borchert, Chief Innovation Officer, Mozilla
- Catherine Armitage, Head of Digital Policy, World Federation of Advertisers
- David Gehring, Co-founder and CEO, Distributed Media Lab
- Matt Rogerson, Head of Public Policy, the Guardian

Moderator: Raegan MacDonald, Head of EU Public Policy, Mozilla

Logistics: Tuesday 23 October 2018, 16:15-17:30, The Hotel, Boulevard de Waterloo 38, 1000 Brussels

Register here.

The post The future of online advertising – Mozilla panel discussion at ICDPPC appeared first on Open Policy & Advocacy.
Posted 2 days ago by Daniel.Pocock
Ever since I started blogging about my role in FSFE as Fellowship representative, I've been receiving communications and queries from various people, both in public and in private, about the relationship between FSF and FSFE. I've written this post to try and document my own experiences of the issue; maybe some people will find it helpful. These comments have also been shared on the LibrePlanet mailing list for discussion (subscribe here).

Being the elected Fellowship representative means I am both a member of FSFE e.V. and also possess a mandate to look out for the interests of the community of volunteers and donors (they are not members of FSFE e.V.). In both capacities, I feel uncomfortable about the current situation due to the confusion it creates in the community and the risk that volunteers or donors may be misled.

The FSF has a well-known name associated with a distinctive philosophy. Whether people agree with that philosophy or not, they usually know what FSF believes in. That is the power of a brand. When people see the name FSFE, they often believe it is a subsidiary or group working within the FSF. The way that brands work, people associate the philosophy with the name, just as somebody buying a Ferrari in Berlin expects it to do the same things that a Ferrari does in Boston.

To give an example, when I refer to "our president" in any conversation, people not knowledgeable about the politics believe I am referring to RMS. More specifically, if I say to somebody "would you like me to see if our president can speak at your event?", some people think it is a reference to RMS. In fact, FSFE was set up as a completely independent organization with distinct membership and management and therefore a different president. When I try to explain this to people, they sometimes lose interest and the conversation can go cold very quickly.
FSFE leadership have sometimes diverged from FSF philosophy. For example, it is not hard to find some quotes about "open source", and one fellow recently expressed concern that some people behave like "FSF Light". But given that FSF's crown jewels are the philosophy, how can an "FSF Light" mean anything? What would "Ferrari Light" look like, a red lawnmower? Would it be a fair use of the name Ferrari?

Some concerned fellows have recently gone as far as accusing the FSFE staff of effectively domain squatting or trolling the FSF (I can't link to that because of FSFE's censorship regime). When questions appear about the relationship in public, there is sometimes a violent response with no firm details. (I can't link to that either because of FSFE's censorship regime.)

The FSFE constitution calls on FSFE to "join forces" with the FSF, and sometimes this appears to happen, but I feel this could be taken further. FSF people have also produced vast amounts of code (the GNU Project) and some donors appear to be contributing funds to FSFE in gratitude for that or in the belief they are supporting that. However, it is not clear to me that funds given to FSFE support that work. As Fellowship representative, a big part of my role is to think about the best interests of those donors, and so the possibility that they are being confused concerns me.

Given the vast amounts of money and goodwill contributed by the community to FSFE e.V., including a recent bequest of EUR 150,000, and the direct questions about this issue, I feel it is becoming more important for both organizations to clarify the issue. FSFE has a transparency page on the web site and this would be a good place to publish all documents about their relationship with FSF. For example, FSFE could publish the documents explaining their authorization to use a name derived from FSF and the extent to which they are committed to adhere to FSF's core philosophy and remain true to that in the long term.
FSF could also publish some guidelines about the characteristics of a sister organization, especially when that organization is authorized to share the FSF's name. In the specific case of sister organizations who benefit from the tremendous privilege of using the FSF's name, could it also remove ambiguity if FSF mandated the titles used by officers of sister organizations? For example, the "FSFE President" would be referred to as "FSFE European President", or maybe the word president could be avoided in all sister organizations.

People also raise the question of whether FSFE can speak for all Europeans given that it only has a large presence in Germany and other organizations are bigger in other European countries. Would it be fair for some of those other groups to aspire to sister organization status and name-sharing rights too? Could dozens of smaller FSF sister organizations dilute the impact of one or two who go off-script?

Even if FSFE was to distance itself from FSF or even start using a new name and philosophy, as a member, representative and also volunteer I would feel uncomfortable with that, as there is a legacy of donations and volunteering that have brought FSFE to the position the organization is in today. That said, I would like to emphasize that I regard RMS and the FSF, as the original FSF, as having the final authority over the use of the name, and I fully respect FSF's right to act unilaterally, negotiate with sister organizations or simply leave things as they are.

If you have questions or concerns about this topic, I would invite you to raise them on the LibrePlanet-discuss mailing list or feel free to email me directly.
Posted 2 days ago by nore...@blogger.com (K Lars Lohn)
A smart home is a lot more than just lights, switches, and thermostats that you can control remotely from your phone. To truly make a smart home, the devices must be reactive and work together. This is generally done with a rule system: a set of maxims that automate actions based on conditions. It is automation that makes a home smart.

There are a couple of options for a rule system with the Things Gateway from Mozilla. First, there is a rule system built into the Web GUI, accessed via the Rules option in the drop down menu. Second, there is the Web Thing API that allows programs external to the Things Gateway to automate the devices that make up a smart home. Most people will gravitate to the former built-in system, as it is the most accessible to those without a predilection for writing software. This blog post is going to focus on that rule system native to the Things Gateway.

The Rule System is an example of a graphical programming system. Icons representing physical devices in the smart home are dragged onto a special display area and attached together to form a rule. Rules are composed of two parts: a predicate and an action.

The predicates are logical conditions like "the bedroom light is on" or "somebody pushed the button marked 'do not press'". These logical conditions can be chained together using operations like "AND" and "OR": "somebody is in the room AND the television is on".

The actions consist of telling a set of devices to take on specific states. These actions can be as simple as "turn the heater on" or "turn the light on and set the color to red".

Throughout the history of graphical user interfaces, there have been many attempts to create graphical, drag and drop, programming environments. Unfortunately, most fail when the programming goal rises above a certain threshold of complexity. From the perspective of a programmer, that threshold is depressingly low.
As such, the Things Gateway Rules System doesn't try to exceed that threshold and is suitable only for simple rules with a restricted compound predicate and a set of actions. Other, more complex programming constructs such as loops, variables, and functions are intentionally omitted.

If a desired predicate/action is more complex than the Rules System GUI can express, there is always the option of falling back to the Web Thing API and any programming language that can speak HTTP and/or Web Sockets. Some of my previous blog posts use that Web Thing API: see the Tide Light or Bonding Lights Together for examples.

Let's start with a simple example: we've got four Philips HUE light bulbs. We'll create a rule that designates bulb #1 as the leader and bulbs #2, #3, and #4 as followers.

We start by navigating to the rules page (≡ ⇒ Rules) and making a new rule by pressing the "+" button on the rules page. Then drag and drop the first bulb to the left side of the screen. This is where the predicates will live. Then select the property "ON". Notice that at the top of the screen, in the red area, a sentence is forming: "If Philips HUE 01 is on, ???". This is an English translation of the selections that you've made to create your rule. As you create your rule, use this sentence as a sanity check to make sure that your rule does what you want it to do.

Next, drag each of the three other lights onto the right half of the screen and select their "ON" properties. Notice how the sentence in the upper area changes to read out the rule in an understandable sentence.

Finally, give your rule a name. I'm choosing "01 leads, others follow". Make sure you hit "Enter" after typing the name. Now click the "←" to return to the rules page. Then return to the "Things" page (≡ ⇒ Things). Turn on "Philips HUE 01" by clicking on the bulb. All four of the bulbs will light up. Now click on the "Philips HUE 01" bulb again to turn the light off and watch what happens.

The other lights stayed on.
If you've used older versions of the Things Gateway rules, this will surprise you. With the latest release (0.5.x), there are now two types of rules: "If" rules and "While" rules. The "If" rules are just a single shot: if the predicate is true, do the action once. There is no automatic undo when the predicate is no longer true.

"While" rules, on the other hand, will automatically undo the action when the predicate is no longer true. This can be best understood by reading the rule sentence out loud and imagining giving it as a command to a servant. "If light 01 is on, turn on the other lights" implies doing a single thing with no follow up. A "While" rule, though, implies sticking around to undo the action when the predicate is no longer true. Say it out loud and the difference becomes clear immediately. Paraphrasing: "While light 01 is on, turn on the other lights". The word "While" implies passing time.

The Things Gateway rules system can do both kinds of rules. Let's go back and make our rule into a "While" rule. Return to the Rules page (≡ ⇒ Rules), move your mouse over the rule, then press the "Edit Rule" button.

Take a close look at the sentence at the top of the screen. The symbol under the word "If" is an indication of a word selection drop down menu. Click on the word "If" and you'll see that you can change the word "If" to "While". Do it.

Exit from the rule and go back to the Things page. Turn all the lights off. Then turn on the leader light, in my case "Philips HUE 01". All the lights turn on. Turn off the leader, and the action is undone: the rest of the lights go off.

Here's a video demonstrating the difference in behavior between the "If" and "While" forms of the rules.

Earlier I stated that the Things Gateway Rule System doesn't try to exceed the complexity threshold where visual programming paradigms start to fail. However, the system as it stands right now is not without some troubles.

Consider a rule that uses the clock in the predicate.
That could result in a rule that reads like this: "If the time of day is 10:00, turn Philips HUE 01 on". The interpretation of this is straightforward.

However, what if you change the "If" to "While"? "While the time of day is 10:00, turn Philips HUE 01 on." Since the resolution of the clock is only to the minute, the clock considers it to be 10:00 for sixty seconds. The light stays on for one minute. It is not particularly useful to use the clock with the "While" form of a rule. The Rule System needs some sort of timer object so a duration can be set on a rule.

How would you make a rule to turn a light on at dusk and then turn it off at 11PM? Currently, the clock does not support the concepts of dawn and dusk, so that rule just can't be done within the Things Gateway. However, with some programming, it would be possible to add a NightThing that could accomplish the task.

In many of these blog posts, I predict what I'm going to talk about in the next posting. I've got a completely rotten record of actually following through with my predictions. However, I hope in my next posting to write about how to implement the rules above using the Web Thing API and the Python language.
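In the meantime, here is a minimal sketch in Python of what that Web Thing API fallback can look like. It only builds the HTTP PUT request that sets a thing's "on" property; the gateway address, access token, and thing ID below are placeholder assumptions, not values from any real gateway.

```python
import json
import urllib.request

# Placeholders -- substitute your own gateway's address and an access
# token created in your gateway's settings.
GATEWAY = "http://gateway.local:8080"
TOKEN = "YOUR_ACCESS_TOKEN"

def build_set_property(thing_id, prop, value):
    # The Web Thing API exposes each device property as a small JSON
    # document; a PUT to the property's URL sets it.
    return urllib.request.Request(
        "%s/things/%s/properties/%s" % (GATEWAY, thing_id, prop),
        data=json.dumps({prop: value}).encode(),
        headers={"Authorization": "Bearer " + TOKEN,
                 "Content-Type": "application/json"},
        method="PUT")

req = build_set_property("philips-hue-01", "on", True)
# urllib.request.urlopen(req)  # sending this would turn the bulb on
```

A NightThing-style workaround could build and send a request like this at dusk and another one, with value False, at 11PM.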
Posted 2 days ago by Sandy Sage
Today Firefox Lockbox 1.3 gives you the ability to automatically fill your username and password into apps and websites. This is available to anyone running the latest iOS 12 operating system.

How do I set it up?

If you just downloaded Firefox Lockbox, you'll start with a screen which includes "Set Up Autofill", which takes you directly to your device settings. Here you can select Firefox Lockbox to autofill logins for you. You also want to make sure that "AutoFill Passwords" is green and toggled on.

If you're already using Firefox Lockbox, you can set Lockbox to autofill your logins by navigating through the device: Settings > Passwords & Accounts > AutoFill Passwords. While you're here, unselect iCloud Keychain as an AutoFill provider. If you leave this enabled, it may be confusing when signing into apps and web forms.

If you haven't yet signed in to Lockbox, you will be prompted to do so in order to authenticate the app to automatically fill passwords. Your setup is now complete. You can now start using your saved logins in Lockbox.

NOTE: You can only have one third-party AutoFill provider enabled, in addition to iCloud Keychain.

How does it work?

When you need to log into an app or an online account in a browser, tap in one of the entry fields. This will display the username and password you have saved in Lockbox. From there, you can tap the information to enter it into the app or website's login form.

If you can't find the saved login you need, tap on the key icon, then select Lockbox. There you can see all the accounts you have saved and can choose your desired entry to populate the login form.

How do I know this is secure?

Every time you invoke Lockbox to fill a form, you will need to confirm your identity with either Face ID or Touch ID to enter a password. This is to ensure that you are in fact asking Lockbox to fill in the username and password and unlocking the app to do so.

Where can I autofill passwords?

You can now easily use a Firefox saved login to get into a third-party app like Twitter or Instagram. Or you can use those Firefox saved logins to fill in website forms. You may recognize this; it's something that until today was only available to iCloud Keychain users!

AutoFill your passwords with Firefox Lockbox in iOS was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.
Posted 3 days ago by Reuben Morais
The Machine Learning team at Mozilla Research continues to work on an automatic speech recognition engine as part of Project DeepSpeech, which aims to make speech technologies and trained models openly available to developers. We're hard at work improving performance and ease-of-use for our open source speech-to-text engine. The upcoming 0.2 release will include a much-requested feature: the ability to do speech recognition live, as the audio is being recorded. This blog post describes how we changed the STT engine's architecture to allow for this, achieving real-time transcription performance. Soon, you'll be able to transcribe audio at least as fast as it's coming in.

When applying neural networks to sequential data like audio or text, it's important to capture patterns that emerge over time. Recurrent neural networks (RNNs) are neural networks that "remember" — they take as input not just the next element in the data, but also a state that evolves over time, and use this state to capture time-dependent patterns. Sometimes, you may want to capture patterns that depend on future data as well. One of the ways to solve this is by using two RNNs, one that goes forward in time and one that goes backward, starting from the last element in the data and going to the first element. You can learn more about RNNs (and about the specific type of RNN used in DeepSpeech) in this article by Chris Olah.

Using a bidirectional RNN

The current release of DeepSpeech (previously covered on Hacks) uses a bidirectional RNN implemented with TensorFlow, which means it needs to have the entire input available before it can begin to do any useful work. One way to improve this situation is by implementing a streaming model: Do the work in chunks, as the data is arriving, so when the end of the input is reached, the model is already working on it and can give you results more quickly. You could also try to look at partial results midway through the input.
This animation shows how the data flows through the network. Data flows from the audio input to feature computation, through three fully connected layers. Then it goes through a bidirectional RNN layer, and finally through a final fully connected layer, where a prediction is made for a single time step.

In order to do this, you need to have a model that lets you do the work in chunks. Here's the diagram of the current model, showing how data flows through it. As you can see, on the bidirectional RNN layer, the data for the very last step is required for the computation of the second-to-last step, which is required for the computation of the third-to-last step, and so on. These are the red arrows in the diagram that go from right to left.

We could implement partial streaming in this model by doing the computation up to layer three as the data is fed in. The problem with this approach is that it wouldn't gain us much in terms of latency: Layers four and five are responsible for almost half of the computational cost of the model.

Using a unidirectional RNN for streaming

Instead, we can replace the bidirectional layer with a unidirectional layer, which does not have a dependency on future time steps. That lets us do the computation all the way to the final layer as soon as we have enough audio input.

With a unidirectional model, instead of feeding the entire input in at once and getting the entire output, you can feed the input piecewise. Meaning, you can input 100ms of audio at a time, get those outputs right away, and save the final state so you can use it as the initial state for the next 100ms of audio.

An alternative architecture that uses a unidirectional RNN in which each time step only depends on the input at that time and the state from the previous step.
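Before looking at the actual TensorFlow graph, it may help to see why a unidirectional network makes streaming possible at all. The toy NumPy sketch below (not DeepSpeech code; the shapes and random weights are made up for illustration) runs a simple RNN over a sequence in one full pass, then again in two chunks with the state saved in between, and gets identical outputs:

```python
import numpy as np

# Made-up sizes: 8 input features, 16 hidden units.
rng = np.random.default_rng(0)
W_x = rng.normal(size=(8, 16))   # input-to-hidden weights
W_h = rng.normal(size=(16, 16))  # hidden-to-hidden weights

def run_rnn(inputs, state):
    # A plain tanh RNN: each step sees its input and the previous state.
    outputs = []
    for x in inputs:
        state = np.tanh(x @ W_x + state @ W_h)
        outputs.append(state)
    return outputs, state

audio_features = rng.normal(size=(100, 8))  # 100 time steps of features

# One pass over the whole input at once.
full_out, _ = run_rnn(audio_features, np.zeros(16))

# Streaming: two 50-step chunks, carrying the state across the boundary.
out1, state = run_rnn(audio_features[:50], np.zeros(16))
out2, _ = run_rnn(audio_features[50:], state)

# The chunked run reproduces the full run exactly.
assert np.allclose(full_out, out1 + out2)
```

A bidirectional layer has no equivalent trick: its backward pass starts from the last time step, so no output can be produced until the whole input has arrived.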
Here's code for creating an inference graph that can keep track of the state between each input window:

```python
import tensorflow as tf

def create_inference_graph(batch_size=1, n_steps=16, n_features=26, width=64):
    input_ph = tf.placeholder(dtype=tf.float32,
                              shape=[batch_size, n_steps, n_features],
                              name='input')
    sequence_lengths = tf.placeholder(dtype=tf.int32,
                                      shape=[batch_size],
                                      name='input_lengths')

    previous_state_c = tf.get_variable(dtype=tf.float32,
                                       shape=[batch_size, width],
                                       name='previous_state_c')
    previous_state_h = tf.get_variable(dtype=tf.float32,
                                       shape=[batch_size, width],
                                       name='previous_state_h')

    previous_state = tf.contrib.rnn.LSTMStateTuple(previous_state_c,
                                                   previous_state_h)

    # Transpose from batch major to time major
    input_ = tf.transpose(input_ph, [1, 0, 2])

    # Flatten time and batch dimensions for feed forward layers
    input_ = tf.reshape(input_, [batch_size * n_steps, n_features])

    # Three ReLU hidden layers
    layer1 = tf.contrib.layers.fully_connected(input_, width)
    layer2 = tf.contrib.layers.fully_connected(layer1, width)
    layer3 = tf.contrib.layers.fully_connected(layer2, width)

    # Restore the time dimension: the fused LSTM expects time-major input
    layer3 = tf.reshape(layer3, [n_steps, batch_size, width])

    # Unidirectional LSTM
    rnn_cell = tf.contrib.rnn.LSTMBlockFusedCell(width)
    rnn, new_state = rnn_cell(layer3, initial_state=previous_state)
    new_state_c, new_state_h = new_state

    # Final hidden layer
    layer5 = tf.contrib.layers.fully_connected(rnn, width)

    # Output layer
    output = tf.contrib.layers.fully_connected(layer5,
                                               ALPHABET_SIZE + 1,
                                               activation_fn=None)

    # Automatically update previous state with new state
    state_update_ops = [
        tf.assign(previous_state_c, new_state_c),
        tf.assign(previous_state_h, new_state_h)
    ]
    with tf.control_dependencies(state_update_ops):
        logits = tf.identity(output, name='logits')

    # Create state initialization operations
    zero_state = tf.zeros([batch_size, width], tf.float32)
    initialize_c = tf.assign(previous_state_c, zero_state)
    initialize_h = tf.assign(previous_state_h, zero_state)
    initialize_state = tf.group(initialize_c, initialize_h,
                                name='initialize_state')

    return {
        'inputs': {
            'input': input_ph,
            'input_lengths': sequence_lengths,
        },
        'outputs': {
            'output': logits,
            'initialize_state': initialize_state,
        }
    }
```

The graph created by the code above has two inputs and two outputs. The inputs are the sequences and their lengths. The outputs are the logits and a special "initialize_state" node that needs to be run at the beginning of a new sequence. When freezing the graph, make sure you don't freeze the state variables previous_state_h and previous_state_c.

Here's code for freezing the graph:

```python
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph_with_def_protos(
    input_graph_def=session.graph_def,
    input_saver_def=saver.as_saver_def(),
    input_checkpoint=checkpoint_path,
    output_node_names='logits,initialize_state',
    restore_op_name=None,
    filename_tensor_name=None,
    output_graph=output_graph_path,
    initializer_nodes='',
    variable_names_blacklist='previous_state_c,previous_state_h')
```

With these changes to the model, we can use the following approach on the client side:

1. Run the "initialize_state" node.
2. Accumulate audio samples until there's enough data to feed to the model (16 time steps in our case, or 320ms).
3. Feed through the model, accumulate outputs somewhere.
4. Repeat steps 2 and 3 until the data is over.

It wouldn't make sense to drown readers with hundreds of lines of client-side code here, but if you're interested, it's all MPL 2.0 licensed and available on GitHub. We actually have two different implementations, one in Python that we use for generating test reports, and one in C++ which is behind our official client API.

Performance improvements

What does this all mean for our STT engine?
Well, here are some numbers, compared with our current stable release:

- Model size down from 468MB to 180MB
- Time to transcribe a 3s file on a laptop CPU down from 9s to 1.5s
- Peak heap usage down from 4GB to 20MB (the model is now memory-mapped)
- Total heap allocations down from 12GB to 264MB

Of particular importance to me is that we're now faster than real time without using a GPU, which, together with streaming inference, opens up lots of new usage possibilities like live captioning of radio programs, Twitch streams, and keynote presentations; home automation; voice-based UIs; and so on. If you're looking to integrate speech recognition in your next project, consider using our engine!

Here's a small Python program that demonstrates how to use libSoX to record from the microphone and feed it into the engine as the audio is being recorded:

```python
import argparse
import shlex
import subprocess
import sys

import deepspeech as ds
import numpy as np

parser = argparse.ArgumentParser(description='DeepSpeech speech-to-text from microphone')
parser.add_argument('--model', required=True,
                    help='Path to the model (protocol buffer binary file)')
parser.add_argument('--alphabet', required=True,
                    help='Path to the configuration file specifying the alphabet used by the network')
parser.add_argument('--lm', nargs='?',
                    help='Path to the language model binary file')
parser.add_argument('--trie', nargs='?',
                    help='Path to the language model trie file created with native_client/generate_trie')
args = parser.parse_args()

LM_WEIGHT = 1.50
VALID_WORD_COUNT_WEIGHT = 2.25
N_FEATURES = 26
N_CONTEXT = 9
BEAM_WIDTH = 512

print('Initializing model...')

model = ds.Model(args.model, N_FEATURES, N_CONTEXT, args.alphabet, BEAM_WIDTH)
if args.lm and args.trie:
    model.enableDecoderWithLM(args.alphabet, args.lm, args.trie,
                              LM_WEIGHT, VALID_WORD_COUNT_WEIGHT)

sctx = model.setupStream()

subproc = subprocess.Popen(shlex.split('rec -q -V0 -e signed -L -c 1 -b 16 -r 16k -t raw - gain -2'),
                           stdout=subprocess.PIPE,
                           bufsize=0)
print('You can start speaking now. Press Control-C to stop recording.')

try:
    while True:
        data = subproc.stdout.read(512)
        model.feedAudioContent(sctx, np.frombuffer(data, np.int16))
except KeyboardInterrupt:
    print('Transcription:', model.finishStream(sctx))
    subproc.terminate()
    subproc.wait()
```

Finally, if you're looking to contribute to Project DeepSpeech itself, we have plenty of opportunities. The codebase is written in Python and C++, and we would love to add iOS and Windows support, for example. Reach out to us via our IRC channel or our Discourse forum.

The post Streaming RNNs in TensorFlow appeared first on Mozilla Hacks - the Web developer blog.
Posted 3 days ago by Sean White
Earlier this year, we shared that we are building a completely new browser called Firefox Reality. The mixed reality team at Mozilla set out to build a web browser that has been designed from the ground up to work on stand-alone virtual and augmented reality (or mixed reality) headsets. Today, we are pleased to announce that the first release of Firefox Reality is available in the Viveport, Oculus, and Daydream app stores.

At a time when people are questioning the impact of technology on their lives and looking for leadership from independent organizations like Mozilla, Firefox Reality brings to the 3D web and immersive content experiences the level of ease of use, choice, control, and privacy they’ve come to expect from Firefox. But for us, the ability to enjoy the 2D web is just table stakes for a VR browser. We built Firefox Reality to move seamlessly between the 2D web and the immersive web.

Designed from the virtual ground up

The Mixed Reality team here at Mozilla has invested a significant amount of time, effort, and research into figuring out how we can design a browser for virtual reality:

“We had to rethink everything, including navigation, text input, environments, search, and more. This required years of research, and countless conversations with users, content creators, and hardware partners. The result is a browser that is built for the medium it serves. It makes a big difference, and we think you will love all of the features and details that we’ve created specifically for a MR browser.” – Andre Vrignaud, Head of Mixed Reality Platform Strategy at Mozilla

Among these features is the ability to search the web using your voice. Text input is still a chore for virtual reality, and this is a great first step towards solving that. With Firefox Reality you can choose to search using the microphone in your headset.

Content served fresh

We spent a lot of time talking to early VR headset owners.
We asked questions like: “What is missing?” “Do you love your device?” And “If not, why?”

“The feedback we heard the most was that users were having a hard time finding new games and experiences. This is why we built a feed of amazing content into the home screen of Firefox Reality.” – Andre Vrignaud, Head of Mixed Reality Platform Strategy at Mozilla

From the moment you open the browser, you will be presented with immersive experiences that can be enjoyed on a VR headset directly from the Firefox Reality browser. We are working with creators around the world to bring an amazing collection of games, videos, environments, and experiences that can be accessed directly from the home screen.

A new dimension of Firefox

We know a thing or two about making an amazing web browser. Firefox Reality is using our new Quantum engine for mobile browsers. The result is the smooth and fast performance that is crucial for a VR browser. We also take things like privacy and transparency very seriously. As a company, we are dedicated to fighting for your right to privacy on the web. Our values have guided us through this creation process, just as they do with every product we build.

We are just getting started

We are in this for the long haul. This is version 1.0 of Firefox Reality, and version 1.1 is right around the corner. We have an always-growing list of ideas and features that we are working to add to make this the best browser for mixed reality. We will also be listening, and we will react quickly when we need to provide bug fixes and other minor updates. If you notice a few things are missing (“Hey! Where are the bookmarks?”), just know that we will be adding features at a steady pace. In the coming months, we will be adding support for bookmarks, 360 videos, accounts, and more. We intend to quickly prove our commitment to this product and our users.
Built in the open

Here at Mozilla, we make it a habit to work in the open because we believe in the power of transparency, community, and collaboration. If you have an idea, or a bug report, or even if you just want to geek out, we would love to hear from you. You can follow @mozillareality on Twitter, file an issue on GitHub, or visit our support site.

Calling all creators

Are you creating immersive content for the web? Have you built something using WebVR? We would love to connect with you about featuring those experiences in Firefox Reality. Are you building a mixed reality headset that needs a best-in-class browser? Let’s chat.

Firefox Reality is available right now.

- Download for Oculus (supports Oculus Go)
- Download for Daydream (supports all-in-one devices)
- Download for Viveport (search for “Firefox Reality” in the Viveport store; supports all-in-one devices running Vive Wave)

The post Explore the immersive web with Firefox Reality. Now available for Viveport, Oculus, and Daydream appeared first on The Mozilla Blog.
Posted 3 days ago by TWiR Contributors
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

- 🎈🎉 Announcing Rust 1.29. 🎉🎈
- Ripgrep is available as a package in Ubuntu 18.10.
- WebRender is now enabled by default in Firefox Nightly on Windows 10 with Nvidia GPUs.
- RustConf 2018 closing keynote (blog post).
- Rising Tide: building a modular web framework in the open.
- You can’t “turn off the borrow checker” in Rust.
- Measuring SmallVec footprint with Smallvectune.
- How we organize a complex Rust codebase.
- Desktop apps with Rust (Electron + WebAssembly).
- Postgres over TLS with the postgres crate, r2d2_postgres and openssl.
- The Networking WG newsletter 1.

Crate of the Week

This week's crate is mtpng, a parallelized PNG encoder. Thanks to Willi Kappler for the suggestion! Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started! Some of these tasks may also have mentors available; visit the task page for more information.

- Rust office hours with Niko Matsakis.
- rust: Panic in Receiver::recv().

If you are a Rust project owner and are looking for contributors, please submit tasks here.
Updates from Rust Core

131 pull requests were merged in the last week:

- temporarily prohibit proc macro attributes placed after derives
- add target thumbv7a-pc-windows-msvc
- PowerPC: fix the calling convention for i1 arguments on PPC32
- allow for opting out of ThinLTO and clean up LTO related cli flag handling
- resolve: allow only core, std, meta and --extern in Rust 2018 paths
- resolve: do not error on access to proc macros imported with #[macro_use]
- add inspection and setter methods to proc_macro::Diagnostic
- support ascription for patterns in NLL
- allow named lifetimes in async functions
- suggest && and || instead of 'and' and 'or'
- use structured suggestion for "missing mut" label
- de-overlap the lifetimes of flow_inits and flow_{un,ever_}inits
- don't compute padding of braces unless they are unmatched
- don't suggest extra clone when converting cloned slice to Vec
- reexport CheckLintNameResult
- miri: keep around some information for dead allocations
- miri loop detector hashing
- fix some uses of pointer intrinsics with invalid pointers
- first step towards u128 instead of Const in PatternKind::Range
- stabilize outlives requirements
- stabilize #[used]
- stabilize slice_align_to
- implement tuple_struct_self_ctor (RFC #2302)
- implement map_or_else for Result
- add an implementation of From for converting &'a Option into Option
- cargo: add empty ctrlc handler on Windows

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

- RFC 2361: Simpler alternative dbg!() macro.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

- [disposition: merge] Deny the overflowing_literals lint for the 2018 edition.

Tracking Issues & PRs

- [disposition: merge] Fix camel case type warning for types with trailing underscores.
- [disposition: merge] Support an explicit annotation for marker traits.
New RFCs

- Elide array size.
- Make the turbofish syntax redundant.
- Use T: ToString for thread::Builder::name.

Upcoming Events

Online

- Sep 25. Rust Community Content Subteam Meeting in Discord.
- Sep 26. Rust Events Team Meeting in Telegram.
- Sep 26. Rust Community Team Meeting in Discord.
- Oct 3. Rust Community Team Meeting in Discord.

Africa

- Oct 2. Johannesburg, SA - Monthly Meetup of the Johannesburg Rustaceans.

Asia

- Oct 3. Kuala Lumpur, MY - Rust Lang Meetup - Project X.

Europe

- Sep 27. Helsinki, FI - Rust is back with Embedded topics.
- Oct 1. Barcelona, ES - BcnRust Meetup.
- Oct 3. Vilnius, LT - Vilnius Rust Meetup #3 - Network Simulation and WebAssembly.
- Oct 3. Berlin, DE - Berlin Rust Hack and Learn.

North America

- Sep 23. Mountain View, US - Rust Dev in Mountain View!
- Sep 24. Durham, US - Triangle Rustaceans.
- Sep 25. Dallas, US - Dallas Rust - Last Tuesday.
- Sep 30. Mountain View, US - Rust Dev in Mountain View!
- Oct 3. Indianapolis, US - Indy.rs.
- Oct 3. Atlanta, US - Grab a beer with fellow Rustaceans.
- Oct 3. Vancouver, CA - Vancouver Rust meetup.
- Oct 19 & 20. Ann Arbor, US - Rust Belt Rust 2018.

If you are running a Rust event, please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

- RustBelt is looking for postdocs and PhD students.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

“Sometimes bad designs will fail faster in Rust.” – Catherine West @ RustConf

Thanks to kornel for the suggestion! Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.
Posted 3 days ago by Wayne Thayer
Mozilla has sent a CA Communication to inform Certification Authorities (CAs) who have root certificates included in Mozilla’s program about current events relevant to their membership in our program and to remind them of upcoming deadlines. This CA Communication has been emailed to the Primary Point of Contact (POC) and an email alias for each CA in Mozilla’s program, and they have been asked to respond to the following 7 action items:

1. Mozilla recently published version 2.6.1 of our Root Store Policy. The first action confirms that CAs have read the new version of the policy.
2. The second action asks CAs to ensure that their CP/CPS complies with the changes that were made to domain validation requirements in version 2.6.1 of Mozilla’s Root Store Policy.
3. CAs must confirm that they will comply with the new requirement for intermediate certificates issued after 1 January 2019 to be constrained to prevent use of the same intermediate certificate to issue both SSL and S/MIME certificates.
4. CAs are reminded in action 4 that Mozilla is now rejecting audit reports that do not comply with section 3.1.4 of Mozilla’s Root Store Policy.
5. CAs must confirm that they have complied with the 1 August 2018 deadline to discontinue use of BR domain validation methods 1 “Validating the Applicant as a Domain Contact” and 5 “Domain Authorization Document”.
6. CAs are reminded of their obligation to add new intermediate CA certificates to the CCADB within one week of certificate creation, and before any such subordinate CA is allowed to issue certificates. Later this year, Mozilla plans to begin preloading the certificate database shipped with Firefox with intermediate certificates disclosed in the CCADB, as an alternative to “AIA chasing”. This is intended to reduce the incidence of “unknown issuer” errors caused by server operators neglecting to include intermediate certificates in their configurations.
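To make the “unknown issuer” point concrete, here is a toy sketch of chain building with a preloaded intermediate store. All names, and the reduction of certificates to subject/issuer strings, are illustrative assumptions for this sketch; Firefox's actual implementation operates on full X.509 certificates.

```python
# Toy illustration (NOT Firefox's actual implementation) of why preloading
# intermediates avoids "unknown issuer" errors: certificates are modelled
# simply as subject -> issuer name mappings.

ROOTS = {"Example Root CA"}  # subjects of trusted roots (hypothetical name)
PRELOADED = {"Example Intermediate CA": "Example Root CA"}  # from the CCADB

def chain_is_trusted(leaf_issuer, served_intermediates):
    """Walk the issuer chain using certificates the server sent,
    falling back to the preloaded intermediate store."""
    issuer = leaf_issuer
    seen = set()
    while issuer not in ROOTS:
        if issuer in seen:  # guard against issuer loops
            return False
        seen.add(issuer)
        if issuer in served_intermediates:
            issuer = served_intermediates[issuer]
        elif issuer in PRELOADED:  # stands in for "AIA chasing"
            issuer = PRELOADED[issuer]
        else:
            return False  # unknown issuer: chain cannot be completed
    return True

# A server that forgets to send its intermediate still validates,
# because the intermediate was preloaded:
print(chain_is_trusted("Example Intermediate CA", {}))  # True
```

Without the `PRELOADED` fallback, the same lookup would fail with an unknown issuer whenever the server omits its intermediate, which is exactly the misconfiguration the preloading plan targets.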
In action 7 we are gathering information about the Certificate Transparency (CT) logging practices of CAs. Later this year, Mozilla is planning to use CT logging data to begin testing a new certificate validation mechanism called CRLite, which may reduce bandwidth requirements for CAs and increase performance of websites. Note that CRLite does not replace OneCRL, which is a revocation list controlled by Mozilla.

The full action items can be read here. Responses to the survey will be automatically and immediately published by the CCADB.

With this CA Communication, we reiterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

The post September 2018 CA Communication appeared first on Mozilla Security Blog.