
News

Posted 6 days ago by yoric
You can find my new blog on GitHub. It’s still rough around the edges, but I’m planning to improve it as I go.
Posted 6 days ago by Wil Clouser
I installed Ubuntu 16.04.1 this week and decided to try out Unity, the default window manager. After I installed Nightly I assumed it would be simple to get the icon to stay in the dock, but Unity seemed confused about Nightly vs. the built-in Firefox (I assume because the executables have the same name). It took some doing to get Nightly to stick to the Dock with its own icon. I retraced my steps and wrote them down below.

My goal was to be able to run a couple of versions of Firefox with several profiles. I thought the easiest way to accomplish that would be to add a new icon for each version+profile combination, so a single left click on an icon would run the profile I want. After some research, I think the Unity way is to have a single icon for each version of Firefox, and then add Actions to it so you can right-click on the icon and launch a specific profile from there.

Installing Nightly

If you don’t have Nightly yet, download Nightly (these steps should work fine with Aurora or Beta also). Open a terminal:

$ mkdir /opt/firefox
$ tar -xvjf ~/Downloads/firefox-51.0a1.en-US.linux-x86_64.tar.bz2 -C /opt

You may need to chown some directories to get that into /opt, which is fine. At the end of the day, make sure your regular user can write to the directory, or else you won’t be able to install Nightly’s updates.

Adding the icon to the dock

Create a file in your home directory named nightly.desktop and paste this into it:

[Desktop Entry]
Version=1.0
Name=Nightly
Comment=Browse the World Wide Web
Icon=/opt/firefox/browser/icons/mozicon128.png
Exec=/opt/firefox/firefox %u
Terminal=false
Type=Application
Categories=Network;WebBrowser;
Actions=Default;Mozilla;ProfileManager;

[Desktop Action Default]
Name=Default Profile
Exec=/opt/firefox/firefox --no-remote -P minefield-default

[Desktop Action Mozilla]
Name=Mozilla Profile
Exec=/opt/firefox/firefox --no-remote -P minefield-mozilla

[Desktop Action ProfileManager]
Name=Profile Manager
Exec=/opt/firefox/firefox --no-remote --profile-manager

Adjust anything that looks like it should change. The main callout is that the Exec lines should name the profiles you want to use (in the file above, mine are called minefield-default and minefield-mozilla). If you have more profiles, just make more copies of that section and name them appropriately.

When you think you’ve got it, run this command:

$ desktop-file-validate nightly.desktop

No output? Great, it passed the validator. Now install it:

$ desktop-file-install --dir=.local/share/applications nightly.desktop

Two notes on this command. First, if you leave off --dir, it will write to /usr/share/applications/ and affect all users of the computer; you’ll probably need to sudo the command if you want that. Second, something is weird with the parsing: originally I passed in --dir=~/.local/… and it literally made a directory named ~ in my home directory, so if the menu isn’t updating, double-check that the file is getting copied to the right spot.

Some people report having to run unity again to get the change to appear, but it showed up for me. Now left-clicking runs Nightly and right-clicking opens a menu asking me which profile I want to use.

Right-click menu for Nightly in Unity’s Dock.

Modifying the Firefox Launcher

I also wanted to launch profiles off the regular Firefox icon in the same way. The easiest way to do that is to copy the built-in one from /usr/share/applications/firefox.desktop and modify it to suit you.
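For example, the modified copy might gain an Actions section like the one below. This is a hypothetical fragment, not from the original post: the work and personal profile names are placeholders for whatever profiles you actually have, and the Actions= line replaces (or extends) whatever Actions line the copied file already contains.

    # Hypothetical fragment; substitute your own profile names.
    Actions=Work;Personal;

    [Desktop Action Work]
    Name=Work Profile
    Exec=firefox --no-remote -P work

    [Desktop Action Personal]
    Name=Personal Profile
    Exec=firefox --no-remote -P personal

Note the plain firefox here: the stock launcher runs the firefox binary on $PATH rather than anything under /opt.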
Conveniently, Unity will override a system-wide .desktop file if you have one with the same name in your local directory, so installing it with the same commands as you did for Nightly will work fine.

Postscript

I should probably add a disclaimer that I’ve used Unity for all of two days and there may be a smoother way to do this. I saw a couple of third-party programs that will generate .desktop files, but I didn’t want to install more things I’d rarely use. Please leave a comment if I’m way off on these instructions!

This post originally appeared on Wil’s blog.
Posted 6 days ago by Henrik Skupin
Over the last weekend I was reinstalling my older MacBook Pro (late 2011 model) after replacing its hard drive with a fresh, modern 512 GB SSD from Crucial. That change was really necessary given that simple file operations took about a minute, even though every system tool claimed the HDD was fine. After installing Mavericks I moved my home folder to another partition, to make it easier to reinstall OS X later. But as it turned out, that is not so easy, especially since OS X doesn’t yet support mounting encrypted partitions other than the system partition during start-up. If you have only a single user account, you will be locked out after the home dir move and a reboot. That’s what I experienced. The fix in that situation is to put OS X back into the “post install” state and create a new administrator account via single-user mode. With this account you can at least sign in again, and after unlocking the other encrypted partition you will have access to your original account again.

Having to first log in via an account whose data is still hosted on the system partition was not a workable solution for me, so I kept looking for a way to unlock the second encrypted partition during startup. After some searching I finally found a tool that lets me do exactly that. It’s called Unlock and can be found on GitHub. It installs a LaunchDaemon which retrieves the encryption password from the System keychain and unlocks the partition during start-up. To be on the safe side I compiled the code myself with Xcode and installed it with some small modifications to the install script (I may want to contribute those modifications back to the repository, for sure :). In case you have similar needs, I hope this post helps you avoid the hassles I experienced.
Posted 6 days ago by Denelle Dixon-Thayer
Last week, we wrote about the shared responsibility of protecting Internet security. Today, we want to dive deeper into this issue and focus on one very important obligation governments have: proper disclosure of security vulnerabilities.

Software vulnerabilities are at the root of so much of today’s cyber insecurity. The revelations of recent attacks on the DNC, the state electoral systems, the iPhone, and more, have all stemmed from software vulnerabilities. Security vulnerabilities can be created inadvertently by the original developers, or they can be developed or discovered by third parties. Sometimes governments acquire, develop, or discover vulnerabilities and use them in hacking operations (“lawful hacking”). Either way, once governments become aware of a security vulnerability, they have a responsibility to consider how and when (not whether) to disclose the vulnerability to the affected company, so that the developer can fix the problem and protect their users. We need to work with governments on how they handle vulnerabilities to ensure they are responsible partners in making this a reality today.

In the U.S., the government’s process for reviewing and coordinating the disclosure of vulnerabilities that it learns about or creates is called the Vulnerabilities Equities Process (VEP). The VEP was established in 2010, but not operationalized until the Heartbleed vulnerability in 2014, which reportedly affected two thirds of the Internet. At that time, White House Cybersecurity Coordinator Michael Daniel wrote in a blog post that the Obama Administration has a presumption in favor of disclosing vulnerabilities. But policy by blog post is not particularly binding on the government, and as Daniel himself admits, “there are no hard and fast rules” to govern the VEP.

It has now been two years since Heartbleed and the U.S. government’s blog post, but we haven’t seen improvement in the way that vulnerability disclosure is being handled. Just one example is the alleged hack of the NSA by the Shadow Brokers, which resulted in the public release of NSA “cyberweapons”, including “zero day” vulnerabilities that the government knew about and apparently had been exploiting for years. Companies like Cisco and Fortinet, whose products were affected by these zero day vulnerabilities, had just that: zero days to develop fixes to protect users before the vulnerabilities could be exploited by hackers.

The government may have legitimate intelligence or law enforcement reasons for delaying disclosure of vulnerabilities (for example, to enable lawful hacking), but these same vulnerabilities can endanger the security of billions of people. These two interests must be balanced, and recent incidents demonstrate just how easily stockpiling vulnerabilities can go awry without proper policies and procedures in place.

Cybersecurity is a shared responsibility, and that means we all must do our part – technology companies, users, and governments. The U.S. government could go a long way in doing its part by putting transparent and accountable policies in place to ensure it is handling vulnerabilities appropriately and disclosing them to affected companies. We aren’t seeing this happen today. Still, with some reforms, the VEP can be a strong mechanism for ensuring the government is striking the right balance.
More specifically, we recommend five important reforms to the VEP:

1. All security vulnerabilities should go through the VEP, and there should be public timelines for reviewing decisions to delay disclosure.
2. All relevant federal agencies involved in the VEP must work together to evaluate a standard set of criteria to ensure all relevant risks and interests are considered.
3. Independent oversight and transparency into the processes and procedures of the VEP must be created.
4. The VEP Executive Secretariat should live within the Department of Homeland Security, which has built up significant expertise, infrastructure, and trust through existing coordinated vulnerability disclosure programs (for example, US-CERT).
5. The VEP should be codified in law to ensure compliance and permanence.

These changes would improve the state of cybersecurity today. We’ll dig into the details of each of these recommendations in a blog post series from the Mozilla Policy team over the coming weeks – stay tuned for that. Today, you can watch Heather West, Mozilla Senior Policy Manager, discuss this issue at the New America Open Technology Institute event on the topic of “How Should We Govern Government Hacking?” The event can be viewed here.
Posted 7 days ago by Wraithan McCarroll
A couple of weeks ago I started writing a game, and i-can-management is the directory I made for the project, so that’ll be the codename for now. I’m going to write these updates to journal the process of making this game. As I’m going through this process alone, you’ll see all aspects of game development as I go through them. That means some weeks may be art heavy, while others cover game rules, or maybe engine refactoring. I also want to give a glance at how I’m feeling about the project and the rules I make for myself.

Speaking of rules, those are going to be a central theme in how I keep this project moving forward:

Optimize only when necessary. This seems obvious, but folks define necessary differently. 60 frames per second with 750×750 tiles on the screen is my current benchmark for whether I need to optimize. I’ll be adding numbers for load times and other aspects once they grow beyond a size that feels comfortable.

Abstractions are expensive, use them sparingly. This is something I learned from a Jonathan Blow talk I mention in my previous post. Abstractions can increase or remove flexibility. On one hand, reusing components may allow more rapid iteration. On the other hand, it may take considerable effort to make systems communicate that weren’t designed to pass messages. I’m making it clear in each effort whether I’m in exploration mode, where I work mostly with just one function, or in architect mode, where I’m trying to make the next feature a little easier to implement. This may mean 1000-line functions and lots of global-like use for a while until I understand how the data will be used. Or it may mean abstracting a concept like the camera into a struct, because the data is always used together.

Try the easier-to-implement answer before trying the better answer. I have two goals with this. First, it means I get to start trying stuff faster, so I know if I want to pursue it or if I’m kinda off on the idea. Maybe this first implementation will show that some other subsystem needs features first, so I decide to delay the more correct answer. In short: quicker to test, and it exposes unexpected requirements. The other goal is to explore building games in a more holistic way. Knowing a quick and dirty way to implement something may help when trying to throw an idea together really quickly. Then knowing how to evolve that code into a better long-term solution means the next games, or ideas that cross-pollinate, are faster to compose because the underlying concepts are better known.

The last couple of weeks have been an exploration of OpenGL via glium, the library I’m using to access OpenGL from Rust, which also abstracts away the window creation. I’d only ever run the example before this dive into building a game. From what I remember of doing this in C++, the abstraction it provides for window creation and interaction, using the glutin library, is pretty great. I was able to create a window of whatever size, hook up keyboard and mouse events, and render to the screen pretty fast after going through the tutorial in the glium book.

This brings me to one of the first frustrating points in this project. So many things are focused on 3d these days that finding resources for 2d rendering is harder. If you find them, they are for old versions of OpenGL or use libraries to handle much of the tile rendering. I was hoping to find an article like “I built a 2d tile engine that is pretty fast and these are the techniques I used!” but no such luck.
OpenGL guides go immediately into 3d space after getting past basic polygons. But that just means I get to explore more, which is probably a good thing. I already had a deterministic map generator built to use as the source of the tiles on the screen. So I copy and pasted some of the matrices from the glium book, then tweaked the numbers I was using for my tiles until they showed up on the screen and looked ok. From there I was pretty stoked. I mean, if I have 25×40 tiles on the screen, what more could someone ask for? I didn’t know how to make triangle strips work well enough for the tiles to be drawn all at once, so I drew each tile to the screen separately, calculating everything on every frame.

I started to add numbers here and there to see how to adjust the camera in different directions. I didn’t understand the math I was working with yet, so I was mostly treating it like a black box: I would add or multiply numbers and recompile to see if it did anything. I quickly realized I needed it to be more dynamic, so I added detection for mouse scrolling. Since I’m on my MacBook most of the time I’m doing development, I can scroll vertically as well as horizontally, making for a natural panning feeling.

I noticed that my rendering had a few quirks, and I didn’t understand any of the math being used, so I went seeking more sources of information on how these transforms work. At first I was directed to the OpenGL transformations page, which set me on the right path, including a primer on the linear algebra I needed. Unfortunately, it quickly turned toward 3d graphics, and I didn’t quite understand how to apply it to my use case. In looking for more resources I found Solarium Programmers’ OpenGL 101 page, which took some more time with orthographic projections, which is what I wanted for my 2d game. Over a few sessions I rewrote all the math to use a coordinate system I understood (a sketch of what that projection boils down to appears at the end of this post). This was greatly satisfying, but if I hadn’t started out ignoring the math, I wouldn’t have had a testbed to see whether I actually understood it. A good lesson to remember: if you can ignore a detail for a bit and keep going, prioritize getting something working, then transform it into something you understand more thoroughly.

I have more I learned in the last week, but this post is getting quite long. I hope to write a post this week about changing from drawing individual tiles to using a single triangle strip for the whole map. In the coming week my goal is to have mouse clicks interacting with the map. This involves figuring out which tile the mouse has clicked, which I’ve learned isn’t trivial. In parallel I’ll be developing the first set of tiles using Pyxel Edit and hopefully integrating them into the game. Then my map will become richer than just some flat colored tiles. Here is a screenshot of the game so far, for posterity’s sake. It is showing 750×750 tiles with a deterministic weighted distribution between grass, water, and dirt.
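As for the projection sketch mentioned above: an orthographic projection for a 2d view boils down to a single 4×4 matrix that maps world coordinates onto OpenGL’s [-1, 1] clip space. This is a minimal sketch of the idea in the [[f32; 4]; 4] column-major form that glium uniforms accept; the function name and parameters are mine, not from the game’s actual code:

    // Map x in [left, right] and y in [bottom, top] onto [-1, 1] clip
    // space. z is flattened to a thin slab since a 2d tile map has no depth.
    // Hypothetical helper for illustration, not the game's camera code.
    fn ortho(left: f32, right: f32, bottom: f32, top: f32) -> [[f32; 4]; 4] {
        [
            [2.0 / (right - left), 0.0, 0.0, 0.0],
            [0.0, 2.0 / (top - bottom), 0.0, 0.0],
            [0.0, 0.0, -1.0, 0.0],
            [
                -(right + left) / (right - left),
                -(top + bottom) / (top - bottom),
                0.0,
                1.0,
            ],
        ]
    }

With a matrix like this, panning the camera is just shifting left/right/bottom/top by the scroll deltas and rebuilding the matrix each frame.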
Posted 7 days ago
In the last week, we landed 68 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online and now includes the initial Q3 plans. From now on, we plan to include the quarterly plan with a high-level breakdown in the roadmap page. This week’s status updates are here.

Special thanks to canaltinova for their work on implementing the matrix transition algorithms for CSS3 transform animation. This allows (both 2D and 3D) rotate(), perspective() and matrix() functions to be interpolated, as well as interpolations between arbitrary transformations, though the last bit is yet to be implemented. In the process of implementation, we had to deal with many spec bugs, as well as implementation bugs in other browsers, which complicated things immensely – it’s very hard to tell if your code has a mistake or if the spec itself is wrong in complicated algorithms like these. Great work, canaltinova! (A toy sketch of the decomposition idea behind this work appears at the end of this post.)

Notable Additions

- glennw added support for scrollbars
- canaltinova implemented the matrix decomposition/interpolation algorithm
- nox landed a rustup to the 9/14 rustc nightly
- ejpbruel added a websocket server for use in the remote debugging protocol
- creativcoder implemented the postMessage() API for ServiceWorkers
- ConnorGBrewster made Servo recycle session entries when reloading
- mrobinson added support for transforming rounded rectangles
- glennw improved webrender startup times by making shaders compile lazily
- canaltinova fixed a bug where we didn’t normalize the axis of rotate() CSS transforms
- peterjoel added the DOMMatrix and DOMMatrixReadOnly interfaces
- Ms2ger corrected an unsound optimization in event dispatch
- tizianasellitto made DOMTokenList iterable
- aneeshusa excised SubpageId from the codebase, using PipelineId instead
- gilbertw1 made the HTTP authentication cache use origins instead of full URLs
- jmr0 fixed the event suppression logic for pages that have navigated
- zakorgy updated some WebBluetooth APIs to match new specification changes

New Contributors

- Arthur Marble
- Bryan Gilbert
- Julien Enselme
- Leonardo Santagada
- Taryn Hill
- Tiziana Sellitto

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Some screencasts of matrix interpolation at work. This one shows all the basic transformations together (running a tweaked version of this page); the 3d rotate, perspective, and matrix transformations were enabled by the recent change.

Servo’s new scrollbars!
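As for that decomposition sketch: the CSS algorithm does not interpolate raw matrix entries. Each matrix is first decomposed into components such as translation, scale, and rotation, and those components are interpolated instead, which keeps a rotating element rotating where a naive entry-wise lerp would squash it. A toy sketch of the 2D case follows; the names and types here are illustrative, not Servo’s actual code:

    // Components recovered from a 2d transform matrix.
    // Hypothetical struct for illustration, not Servo's types.
    struct Decomposed2d {
        translate: (f32, f32),
        scale: (f32, f32),
        rotation: f32, // radians
    }

    fn lerp(a: f32, b: f32, t: f32) -> f32 {
        a + (b - a) * t
    }

    // Interpolate each decomposed component, then (elsewhere)
    // recompose the result back into a matrix.
    fn interpolate(a: &Decomposed2d, b: &Decomposed2d, t: f32) -> Decomposed2d {
        Decomposed2d {
            translate: (
                lerp(a.translate.0, b.translate.0, t),
                lerp(a.translate.1, b.translate.1, t),
            ),
            scale: (lerp(a.scale.0, b.scale.0, t), lerp(a.scale.1, b.scale.1, t)),
            rotation: lerp(a.rotation, b.rotation, t),
        }
    }

The 3D case adds quaternion rotations, skew, and perspective components, which is where much of the complexity (and the spec ambiguity mentioned above) lives.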
Posted 7 days ago by Karl Dubost
Busy week, without many things done on bugs. W3C is heading to Lisbon for TPAC, so tune of the week: Amalia Rodrigues. I’ll be there in spirit.

Webcompat Life

Progress this week:

326 open issues
----------------------
needsinfo        12
needsdiagnosis  106
needscontact      8
contactready     28
sitewait        158
----------------------

You are welcome to participate.

21 lines for the new devtools debugger.

A lot of administrative tasks this last week: job interviews for the Web Compatibility Engineer position, a meeting with Yahoo! Japan about a Yahoo! homepage Web compatibility issue, account transition for work, and catching up with internal video announcements. It doesn’t feel productive, but they are necessary and sometimes very useful.

Webcompat issues

(A selection of some of the bugs worked on this week.)

- Yet another appearance: none implemented in Blink. This time for meter.

Webcompat.com development

- Pull Request Review
- Detailed review for issues.db

Reading List

- Git cheatsheet

Follow Your Nose

- Previous Worklog
- Bugzilla Activity
- Github Activity

TODO

- Document how to write tests on webcompat.com using test fixtures.
- To write: Amazon prefetching resources for Firefox only.

Otsukare!
Posted 8 days ago
The TFSA is a savings account for Canadians that was introduced in 2009. As a quick check I wanted to see how much, or how little, my TFSA had changed against what it could be. That meant double-checking how much room I had in the TFSA each year. So this is a quick calculation of the theoretical case: that you are able to invest the maximum amount each year, at the beginning of the year, and get a 5% return (after fees) on it.

Year   Maximum      Total invested   Compounded
2009   $5,000.00    $5,000.00        $5,250.00
2010   $5,000.00    $10,000.00       $10,762.50
2011   $5,000.00    $15,000.00       $16,550.63
2012   $5,000.00    $20,000.00       $22,628.16
2013   $5,500.00    $25,500.00       $29,534.56
2014   $5,500.00    $31,000.00       $36,786.29
2015   $10,000.00   $41,000.00       $49,125.61
2016   $5,500.00    $46,500.00       $57,356.89

Which always raises the question for me of what a reasonable rate to calculate with is these days. It always used to be 10%, but that’s very hard to get these days. Since 2006 the annualized return on the S&P 500 is 5.158%, for example. Perhaps 5% represents too conservative a number.
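For reference, the Compounded column above is just: add the year’s contribution at the start of the year, then grow the entire balance by 5%. A quick standalone sketch that reproduces the table (the numbers are from the post; the code itself is mine):

    fn main() {
        // (year, TFSA contribution limit) pairs from the table above.
        let contributions = [
            (2009, 5_000.0_f64),
            (2010, 5_000.0),
            (2011, 5_000.0),
            (2012, 5_000.0),
            (2013, 5_500.0),
            (2014, 5_500.0),
            (2015, 10_000.0),
            (2016, 5_500.0),
        ];
        let mut balance = 0.0;
        for &(year, amount) in contributions.iter() {
            // Contribute at the start of the year, then apply the 5% return.
            balance = (balance + amount) * 1.05;
            println!("{}: ${:.2}", year, balance);
        }
    }

Running it prints 5250.00 for 2009 through 57356.89 for 2016, matching the table (minus the thousands separators).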
Posted 8 days ago by Matěj Cepl
I don’t understand. CyanogenMod 13 introduced a new Weather widget and lock screen support. Great! Unfortunately, the widget requires specific providers for weather services, and CM does not ship any in the default installation. There is a Weather Underground provider, which works, but the only other provider I found (the Yahoo! Weather provider) does not work on my CM without Google Play. I would much prefer an OpenWeatherMap provider, and although CyanogenMod has a GitHub repository for one, there is no APK anywhere (and certainly not one on F-Droid). Fortunately, I found a blog post which describes how simple it is to build the APK from the given code. Unfortunately, the author did not provide an APK on his site. I am not sure whether there is some catch, but here is mine.
Posted 8 days ago by Wraithan McCarroll
Alternate title: How I changed my mental model to be a more effective game developer and human.

Back in February 2016, I started my journey as a professional game developer: I joined Sparkypants to work on the backend for Dropzone. That was about 7 months ago at the time of writing. I didn’t enter the game development world in the standard ways. I wasn’t at one of the various schools with game dev programs, I didn’t intern at a studio, and I haven’t spent much of my personal development time building my own indie games. I had, on the other hand, spent years building backend services, writing dev tools, competing in AI competitions, and building a slew of half-finished open source projects. In short, I was not a game developer when I started.

My stark contrast in background works to my advantage in many parts of my job. Most of our engineers haven’t worked on backend services and haven’t needed to scale that sort of infrastructure. My lead and friend Johannes has been instrumental in many of my successes so far at the company. He has a background in backend development as well as game development and has often been a translator and guide as I learn what being a game developer means.

At first, I assumed my contrast would work itself out naturally and I’d just become a game developer by osmosis: if I am surrounded by folks doing this and I’m actively developing a game, I will become a game developer. But that presupposes success, which was only coming to me in limited amounts. The other conclusion would be leaving game development because I wasn’t compatible with it, something I’m unwilling to accept at this time. I shared my concerns about not fitting the culture at Sparkypants with Johannes, as well as some productivity worries. I’ve learned over the years that if I’m feeling problems like this, my boss may be as well. Johannes, with his typical wonderful encouraging personality, reminded me that there are large aspects of my personality that fit the culture, and that maybe just my development style and conflict resolution needed work. He recommended this talk by Jonathan Blow to show me the mental model that is closer to how many of the other developers operate, among some other advice.

That talk by Jonathan Blow spends a fair amount of its time on the topic of optimization. Whether it is using data-oriented techniques to make data series processing faster, or drawing in a specific way to make the graphics card use less memory, optimization comes up in nearly every game development talk or article at some point. His point, though, was that we often spend too much time optimizing the wrong things. If you’ve been in computer science for a bit you’ve inevitably heard at least a fragment of the following quote from Donald Knuth; if not, you’re in for a treat, this is a good one:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

The bolded text (“premature optimization is the root of all evil”) is the part most folks quote, implying the rest. I had heard this, quoted it, and used it as justification for doing or not doing things many times in the past.
But I’d also forgotten it; I’d apply it when it was convenient for me, but not generally to my software development. Blow starts with the traditional point about overthinking algorithms and code in general, the thing most people mean when they speak of premature optimization. Then he follows on with the idea that selecting data structures is also a form of optimization. That follow-on was a segue to the point that any time you are thinking about a problem, you should ask whether it is the most important or urgent problem for you to think about. At the end of the day, your job as a game developer is not to optimize for speed or correctness, but to optimize for fun. This means trying a lot of ideas and throwing many of them out. If you spend a lot of time optimizing a feature for a million users, and only some folks in the company use it before you decide to remove it, you’ve wasted a lot of effort. Maybe not completely, since you’ve probably learned during the process, but that effort could have been put into other features or parts of the system that actually need attention. This shift in thinking has me letting go of details in more cases, spending less time on projects, and focusing on “functional” over “correct and scalable.”

The day after watching that talk and discussing it with Johannes, I attended RustConf and saw a series of amazing talks on Rust and programming in general. Of particular note for changing my mental model was Julia Evans’ closing keynote about learning systems programming with Rust. There were so many things that struck me during that talk, but I’ll focus on the couple that were most relevant.

First and foremost was the humility in the talk. Julia’s self-described experience level was “intermediate developer”, while having about as many years of experience as I have, and I considered myself a more “senior developer.” At many points over the last couple of years I’ve wrestled with this, considering myself senior and then seeing evidence that I’m not. As a more confident person, it is an easy trap for me to fall into. I’m in my first year as a game developer; regardless of other experience, I’m a junior game developer at best. Starting to internalize this humility has resulted in fighting my coworkers less when they bring up topics that I think I have enough knowledge to weigh in on. The more experienced folks at work have decades of building games behind them. I’m not saying my input to these discussions is worthless, I still have a lot to contribute, but I’ve been able to check my ego at the door more easily and collaborate through topics instead of being contrary.

The humility in the talk makes another major concept from it, lifelong learning, take on a new light. I’ve always been striving for more knowledge in the computer science space, so lifelong learning isn’t new to me, but like the optimization discussion above, there is more nuance to be discovered. Having humility when trying to learn makes the experience so much richer for all parties. Teachers who are humble will not over-explain a topic, and will recognize that their way is not the only way. Learners who are humble will be more receptive to ideas that don’t fit their current mental model, and will seek more information about them.

This post has become quite long, so I’ll wrap things up and use further blog posts to explore these ideas with more concrete examples. Writing this has been a mechanism for me to understand some of this change in myself, as well as to help others who may end up in similar shoes.
If this blog post were a tweet, I think it’d be summarized as “Pay attention to the important things, check your ego at the door, and keep learning,” which I’m sure would get me some retweets and stars or hearts or whatever. And if someone else said it, I’d go “of course, yeah, folks mess this up all the time!” But there is so much more nuance in those ideas. I now realize I’m just a very junior game developer with some other, sometimes relevant, experience, and I have so much to learn from my peers and am extremely excited to do so. If you have additional resources that you think I or others who read this would find valuable, please comment below or send me a tweet.