Posted about 15 hours ago
It’s Episode Thirty-Five of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Paul Tansom are connected and speaking to your brain. We are four, made whole by a new guest presenter.

In this week’s show we discuss the news:
- Security firm Bishop Fox has claimed that it successfully hijacked an implantable medical device
- Mercedes-Benz has said that its driverless cars will prioritise the safety of the occupants in an accident
- The first commercial shipment has been made by a self-driving lorry
- Microsoft has announced that it will raise business prices by up to 22% in the UK
- Sweden has banned general use of cameras on drones
- The iPod is 15 years old

We discuss the community news:
- Ubuntu 16.10 has been released, along with all the flavours
- Ubuntu Budgie Remix 16.10 has been released
- Ubuntu 17.04 is named “Zesty Zapus”
- A security vulnerability involving an unkempt bovine
- Canonical has announced live kernel patching for Ubuntu
- Happy 12th birthday to Ubuntu!
- Happy 20th birthday to KDE!
- Artist Mohamed A. Latheef has made an Ubuntu Timeline Wallpaper, showing the progression of Ubuntu’s default wallpaper throughout the years
- The Prime Indicator Plus app indicator makes it easier to switch between Intel and Nvidia graphics, and so will GNOME 3.24

We mention some events:
- Ubuntu Online Summit – 15-16 November 2016
- Paris Open Source Summit 2016 – 16-17 November 2016 – Paris, France
- UbuCon Europe – 18-20 November 2016 – Unperfekthaus, Essen, Germany
- FOSDEM 2017 – 4-5 February 2017 – Brussels, Belgium

We also discussed getting an Arduboy and getting a Nextcloud box. This week’s cover image is taken from Flickr.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org, Tweet us, or comment on our Facebook page, our Google+ page or our sub-Reddit. Join us in the Ubuntu Podcast Chatter group on Telegram.
Posted about 17 hours ago
This is a guest post by Ryan Sipes, community manager at System76. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com.

We would like to introduce you to the newest version of the extremely portable Lemur laptop. Like all System76 laptops, the Lemur ships with Ubuntu, and you can choose between 16.04 LTS or the newest 16.10 release.

About System76
System76 is based in Denver, Colorado and has been making Ubuntu computers for ten years. Creating great machines born to run Linux is our sole purpose. Members of our team contribute to many different open source projects, and we send the work we do enabling hardware on our computers upstream, to the benefit of everyone running our favorite operating system. Our products have been praised as the best machines born to run Linux by fans including Chris Fisher of The Linux Action Show and Leo Laporte of This Week in Tech. We pride ourselves on offering fantastic products and providing first-class support to our users. Our support staff are themselves Linux/Ubuntu users and open source contributors, like Emma Marshall, who is a host on the Ubuntu Podcast.

About the Lemur
This is our 7th-generation release of the Lemur, and it’s now 10% faster with the 7th-gen Intel processor (Kaby Lake). Loaded with the newest Intel graphics, up to 32GB of DDR4 memory, and a USB Type-C port, this Lemur enables more powerful multitasking on the go. Weighing in at 3.6 lbs, it is light enough to carry from meeting to meeting, or across campus. The Lemur’s design is thin, with a handle grip at the back of the laptop, allowing you to easily grasp your Lemur and rush off to your next location. The Lemur retains its reputation as the perfect option for those who want a high-quality portable Linux laptop at an affordable price (starting at only $699 USD). You can see the full tech specs and other details about the Lemur here.
About the author
Ryan Sipes is the Community Manager at System76. He is a regular guest on podcasts at Jupiter Broadcasting, like The Linux Action Show and Linux Unplugged. He helped organize the first Kansas Linux Fest and the Lawrence Linux User Group. Ryan is also a longtime Ubuntu user (since Warty Warthog) and an enthusiastic open source evangelist.
Posted about 18 hours ago
This is a guest post by James Tait, Software Engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com.

I’m a father of two pre-teens, and like many kids their age (and many adults, for that matter) they got caught up in the craze that is Minecraft. In our house we adopted Minetest as a Free alternative to begin with, and had lots of fun and lots of arguments! Somewhere along the way, they decided they’d like to run their own server and share it with their friends. But most of those friends were using Windows, and there was no Windows client for Minetest at the time. And so it came to pass that I would trawl the internet looking for Free Minecraft server software, and eventually stumble upon Cuberite (formerly MCServer), “a lightweight, fast and extensible game server for Minecraft”.

Cuberite is an actively developed project. At the time of writing, there are 16 open pull requests against the server itself, of which five are from the last week. Support for protocol version 1.10 has recently been added, along with spectator view and a steady stream of bug fixes. It is automatically built by Jenkins on each commit to master, and the resulting artefacts are made available on the website as .tar.gz and .zip files. The server itself runs in place; that is to say, you just unpack the archive and run the Cuberite binary, and the data files are created alongside it, so everything is self-contained. This has the nice side effect that you can download the server once, copy or symlink a few files into a new directory, and run a separate instance of Cuberite on a different port, say for testing.

All of this sounds great, and mostly it is. But there are a few wrinkles that just made it a bit of a chore:
- No formal releases. While there are official build artefacts, there are no milestones and no version numbers.
- No package management. No version numbers means no managed package.
  We just get an archive with a self-contained build directory.
- No init scripts. When I restart my server, I want the Minecraft server to be ready to play, so I need init scripts.

Now none of these problems is insurmountable. We can put in the work to build distro packages for each distribution from git HEAD. We can contribute upstart and systemd and sysvinit scripts. We can run a cron job to poll for new releases. But, frankly, it just seems like a lot of work. In truth I’d already done a lot of manual work to build Cuberite from source, create a couple of independent instances, and write init scripts. I’d become somewhat familiar with the build process, which basically amounted to something like:

$ cd src/cuberite
$ git pull
$ git submodule update --init
$ cd Release
$ cmake -DCMAKE_BUILD_TYPE=RELEASE -DNO_NATIVE_OPTIMIZATION=ON ..
$ make

This builds the release binaries and copies the plugins and base data files into the Server subdirectory, which is what the Jenkins builds then compress and make available as artefacts. I’d then do a bit of extra work: I’ve been running this in a dedicated LXC container, and keeping a production and a test instance running so we could experiment with custom plugins, so I would:

$ cd ../Server
$ sudo cp Cuberite /var/lib/lxc/miners/rootfs/usr/games/Cuberite
$ sudo cp brewing.txt crafting.txt furnace.txt items.ini monsters.ini /var/lib/lxc/miners/rootfs/etc/cuberite/production
$ sudo cp brewing.txt crafting.txt furnace.txt items.ini monsters.ini /var/lib/lxc/miners/rootfs/etc/cuberite/testing
$ sudo cp -r favicon.png lang Plugins Prefabs webadmin /var/lib/lxc/miners/rootfs/usr/share/games/cuberite

Then, in the container, /srv/cuberite/production and /srv/cuberite/testing contain symlinks to everything we just copied, plus some runtime data files under /var/lib/cuberite/production and /var/lib/cuberite/testing, and we have init scripts to chdir to each of those directories and run Cuberite.
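Because the build is run-in-place, a second, independent instance is cheap to set up with symlinks. A minimal sketch of that idea (the paths here are hypothetical, and to keep the sketch self-contained it fabricates a stand-in build tree under mktemp; with a real build you would point SERVER_DIR at the unpacked Server directory):

```shell
#!/bin/sh
# Sketch only: share one run-in-place Cuberite build between instances.
# SERVER_DIR and INSTANCE are hypothetical paths; the build tree below
# is a fabricated stand-in so the sketch runs anywhere.
set -e
BASE=$(mktemp -d)
SERVER_DIR="$BASE/Server"      # stand-in for the unpacked build
INSTANCE="$BASE/testing"       # a second, independent instance

# Fabricate the stand-in build artefacts:
mkdir -p "$SERVER_DIR/lang" "$SERVER_DIR/Plugins" "$SERVER_DIR/Prefabs" "$SERVER_DIR/webadmin"
touch "$SERVER_DIR/Cuberite" "$SERVER_DIR/items.ini" "$SERVER_DIR/monsters.ini"

mkdir -p "$INSTANCE"
# The binary and static data are shared via symlinks...
for item in Cuberite lang Plugins Prefabs webadmin; do
    ln -s "$SERVER_DIR/$item" "$INSTANCE/$item"
done
# ...while config files are copied, so each instance can diverge
# (e.g. to run on a different port).
for cfg in items.ini monsters.ini; do
    cp "$SERVER_DIR/$cfg" "$INSTANCE/$cfg"
done
echo "instance ready: $INSTANCE"
```

The split between linked and copied files mirrors the manual setup described above: shared read-only assets, per-instance configuration and runtime state.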
All this is fine, and could no doubt be moulded into packages for the various distros with a bit of effort. But wouldn’t it be nice if we could do all of that for all the most popular distros in one fell swoop? Enter snaps and snapcraft. Cuberite is statically linked and already distributed as a run-in-place archive, so it’s inherently relocatable, which means it lends itself perfectly to distribution as a snap. This is the part where I confess to working on the Ubuntu Store and being more than a little curious as to what things looked like coming from the opposite direction. So in the interests of eating my own dogfood, I jumped right in. Now snapcraft makes getting started pretty easy:

$ mkdir cuberite
$ cd cuberite
$ snapcraft init

And you have a template snapcraft.yaml with comments to instruct you. Most of this is straightforward, but for the version here I just used the current date. With the basic metadata filled in, I moved on to the snapcraft “parts”. Parts in snapcraft are the basic building blocks for your package. They might be libraries or apps or glue, and they can come from a variety of sources. The obvious starting point for Cuberite was the git source, and as you may have noticed above, it uses CMake as its build system. The snapcraft part is pretty straightforward:

parts:
  cuberite:
    plugin: cmake
    source: https://github.com/cuberite/cuberite.git
    configflags:
      - -DCMAKE_BUILD_TYPE=RELEASE
      - -DNO_NATIVE_OPTIMIZATION=ON
    build-packages:
      - gcc
      - g++
    snap:
      - -include
      - -lib

That last section warrants some explanation. When I first built Cuberite, it included some library files and header files from some of the bundled libraries that are statically linked. Since we’re not interested in shipping these files, they just add bloat to the final package, so we specify that they are excluded. That gives us our distributable Server directory, but it’s tucked away under the snapcraft parts hierarchy.
So I added a release part to copy the full contents of that directory and locate them at the root of the snap:

  release:
    after: [cuberite]
    plugin: dump
    source: parts/cuberite/src/Server
    filesets:
      "*": "."

Some projects let you specify the output directory with a --prefix flag to a configure script or similar methods, and won’t need this little packaging hack, but it seems to be necessary here. At this stage I thought I was done with the parts and could just define the Cuberite app – the executable that gets run as a daemon. So I went ahead and did the simplest thing that could work:

apps:
  cuberite:
    command: Cuberite
    daemon: forking
    plugs:
      - network
      - network-bind

But I hit a snag. Although this would work with a traditional package, the snap is mounted read-only, and Cuberite writes its data files to the current directory. So instead I needed to write a wrapper script to switch to a writable directory, copy the base data files there, and then run the server:

#!/bin/bash
for file in brewing.txt crafting.txt favicon.png furnace.txt items.ini monsters.ini README.txt; do
    if [ ! -f "$SNAP_USER_DATA/$file" ]; then
        cp --preserve=mode "$SNAP/$file" "$SNAP_USER_DATA"
    fi
done

for dir in lang Plugins Prefabs webadmin; do
    if [ ! -d "$SNAP_USER_DATA/$dir" ]; then
        cp -r --preserve=mode "$SNAP/$dir" "$SNAP_USER_DATA"
    fi
done

cd "$SNAP_USER_DATA"
exec "$SNAP"/Cuberite -d

Then add the wrapper as a part:

  wrapper:
    plugin: dump
    source: .
    organize:
      Cuberite.wrapper: bin/Cuberite.wrapper

And update the snapcraft app:

apps:
  cuberite:
    command: bin/Cuberite.wrapper
    daemon: forking
    plugs:
      - network
      - network-bind

And with that we’re done! Right? Well, not quite… While this works in snap’s devmode, in strict mode it results in the server being killed. A little digging in the output from snappy-debug.security scanlog showed that seccomp was taking exception to Cuberite using the fchown system call.
Applying some Google-fu turned up a bug with a suggested workaround, which was applied to the two places (both in sqlite submodules) that used the offending system call, and the snap rebuilt – et voilà! Our Cuberite server now happily runs in strict mode, and can be released in the stable channel. My build process now looks like this:

$ vim snapcraft.yaml
$ # Update version
$ snapcraft pull cuberite
$ # Patch two fchown calls
$ snapcraft

I can then push it to the edge channel:

$ snapcraft push cuberite_20161023_amd64.snap --release edge
Revision 1 of cuberite created.

And when people have had a chance to test and verify, promote it to stable:

$ snapcraft release cuberite 1 stable

There are a couple of things I’d like to see improved in the process:
- It would be nice not to have to edit the snapcraft.yaml on each build to change the version. Some kind of template might work for this.
- It would be nice to be able to apply patches as part of the pull phase of a part.

With those two wishlist items fixed, I could fully automate the Cuberite builds and have a fresh snap released to the edge channel on each commit to git master! I’d also like to make the wrapper a little more advanced and add another command so that I can easily manage multiple instances of Cuberite. But for now, this works – my boys have never had it so good!

Download the Cuberite Snap
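The first wishlist item, editing the version on each build, can already be approximated with a little shell. This is only a sketch under the assumption that snapcraft.yaml carries a top-level version: line set to the current date, as in the post; the file below is a fabricated stand-in so the sketch runs anywhere:

```shell
#!/bin/sh
# Sketch: set the snapcraft.yaml "version:" field to today's date so a
# build script never needs hand-editing.  The snapcraft.yaml here is a
# fabricated stand-in; in a real tree you would run only the sed line,
# from the project root, against the existing file.
set -e
cd "$(mktemp -d)"
cat > snapcraft.yaml <<'EOF'
name: cuberite
version: 20161023
summary: A lightweight, fast and extensible game server for Minecraft
EOF

# Rewrite the top-level version to the current date (YYYYMMDD):
sed -i "s/^version: .*/version: $(date +%Y%m%d)/" snapcraft.yaml
grep '^version:' snapcraft.yaml
```

Dropping this in front of `snapcraft pull` and `snapcraft` would give a dated version on every build without the vim step.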
Posted about 23 hours ago
Since my last article, lots of things have happened in the container world! Instead of using LXC, I find myself using the next great thing much, much more now, namely LXC's big brother, LXD. As some people asked me, here's my trick to make containers use my host as an apt proxy, significantly speeding up deployment times for both manual and Juju-based workloads.

Setting up a cache on the host
First off, we'll want to set up an apt cache on the host. As is usually the case in the Ubuntu world, it all starts with an apt-get:

sudo apt-get install squid-deb-proxy

This sets up a Squid caching proxy on your host, with a specific apt configuration, listening on port 8000. Since it is tuned for larger machines by default, I wanted to make it use a slightly smaller disk cache; using 2GB instead of the default 40GB is way more reasonable on my laptop. Simply editing the config file takes care of that:

$EDITOR /etc/squid-deb-proxy
# Look for the "cache_dir aufs" line and replace it with:
cache_dir aufs /var/cache/squid-deb-proxy 2000 16 256 # 2 GB

Of course you'll need to restart the service after that:

sudo service squid-deb-proxy restart

Setting up LXD
Compared to the similar procedure on LXC, setting up LXD is a breeze! LXD comes with configuration templates, so we can conveniently either create a new template if we want to use the proxy selectively, or simply add the configuration to the "default" template, in which case all our containers will use the proxy, always!
In the default template
Since I never turn the proxy off on my laptop, I saw no reason to apply the proxy selectively, and simply added it to the default profile:

export LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" | lxc profile set default user.user-data -

The first part of the first command line automates the discovery of your IP address, which works as long as your LXD bridge is called "lxdbr0". Once this is set in the default template, every LXD container you start has an apt proxy pointing at your host!

In a new template
Should you not want to alter the default template, you can easily create a new one:

export PROFILE_NAME=proxy
lxc profile create $PROFILE_NAME

Then substitute the newly created profile in the previous command line. It becomes:

export LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" | lxc profile set $PROFILE_NAME user.user-data -

When launching a new container, add this configuration template so that the container benefits from the proxy configuration:

lxc launch ubuntu:xenial -p $PROFILE_NAME -p default

Reverting
If for some reason you don't want to use your host as a proxy anymore, it is quite easy to revert the change to the template:

lxc profile set user.user-data

That's it! As you can see, it is trivial to set an apt proxy on LXD, and using squid-deb-proxy on the host makes that configuration trivial. Hope this helps!
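The user-data payload in the commands above is easy to get subtly wrong, since cloud-config is whitespace-sensitive, so it can help to build the string in one place and reuse it for every profile. A sketch of that, with proxy_user_data as a purely hypothetical helper name:

```shell
#!/bin/sh
# Sketch: build the #cloud-config apt proxy payload in one helper, so
# the same string is used for the default profile and any custom one.
# proxy_user_data is a hypothetical name; the address would normally
# be discovered from the lxdbr0 bridge as shown in the post.
proxy_user_data() {
    host="$1"; port="$2"
    printf '#cloud-config\napt:\n  proxy: http://%s:%s\n' "$host" "$port"
}

# With a live LXD daemon you would pipe it straight in, e.g.:
#   proxy_user_data "$LXD_ADDRESS" 8000 | lxc profile set default user.user-data -
proxy_user_data 10.0.4.1 8000
```

The example address 10.0.4.1 is only illustrative; substitute whatever your lxdbr0 bridge reports.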
Posted 1 day ago
This is the eleventh blog post in this series about LXD 2.0.

Introduction
First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts used DevStack, which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn’t able to get networking going properly. I finally gave up on DevStack and tried “conjure-up” to deploy a full Ubuntu OpenStack using Juju in a pretty user-friendly way. And it finally worked! So below is how to run a full OpenStack, using LXD containers instead of VMs, and running all of this inside an LXD container (nesting!).

Requirements
This post assumes you’ve got a working LXD setup providing containers with network access, and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM. Remember, we’re running a full OpenStack here; this thing isn’t exactly light!

Setting up the container
OpenStack is made of a lot of different components doing a lot of different things. Some require additional privileges, so to make our lives easier we’ll use a privileged container. We’ll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed). Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However, the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.

lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem

There is a small bug in LXD where it will attempt to load kernel modules that have already been loaded on the host.
This has been fixed in LXD 2.5 and will be fixed in LXD 2.0.6, but until then it can be worked around with:

lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get OpenStack going:

lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y

The last setup step is to configure LXD networking inside the container. Answer with the default for all questions, except:
- Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
- Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

lxc exec openstack -- lxd init

And that’s it for the container configuration itself; now we can deploy OpenStack!

Deploying OpenStack with conjure-up
As mentioned earlier, we’ll be using conjure-up to deploy OpenStack. This is a nice, user-friendly tool that interfaces with Juju to deploy complex services. Start it with:

lxc exec openstack -- sudo -u ubuntu -i conjure-up

- Select “OpenStack with NovaLXD”
- Then select “localhost” as the deployment target (uses LXD)
- And hit “Deploy all remaining applications”

This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected. Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Access the dashboard and spawn a container
The dashboard runs inside a container, so you can’t just hit it from your web browser.
The easiest way around this is to set up a NAT rule with:

lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>

where <IP> is the dashboard IP address conjure-up gave you at the end of the installation. You can now grab the IP address of the “openstack” container (from “lxc info openstack”) and point your web browser to:

http://<container IP>/horizon

This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you’ll be greeted by the OpenStack dashboard!

You can now head to the “Project” tab on the left and the “Instances” page. To start a new instance using nova-lxd, click on “Launch instance”, select what image you want, network, … and your instance will get spawned. Once it’s running, you can assign it a floating IP, which will let you reach your instance from within your “openstack” container.

Conclusion
OpenStack is a pretty complex piece of software, and it’s also not something you really want to run at home or on a single server. But it’s certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine. conjure-up is a great tool to deploy such complex software, using Juju behind the scenes to drive the deployment, with LXD containers for every individual service and finally for the instances themselves. It’s also one of the very few cases where multiple levels of container nesting actually make sense!

Extra information
- The conjure-up website can be found at: http://conjure-up.io
- The Juju website can be found at: http://www.ubuntu.com/cloud/juju
- The main LXD website is at: https://linuxcontainers.org/lxd
- Development happens on GitHub at: https://github.com/lxc/lxd
- Mailing-list support happens on: https://lists.linuxcontainers.org
- IRC support happens in: #lxcontainers on irc.freenode.net
- Try LXD online: https://linuxcontainers.org/lxd/try-it
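Returning to the NAT step above: if you rebuild this setup often, the rule can be wrapped in a small helper that sanity-checks the dashboard address first. This is purely a hypothetical convenience sketch; valid_ip is only a crude IPv4 shape check, and the lxc command is echoed rather than executed so the sketch runs without a live LXD daemon:

```shell
#!/bin/sh
# Sketch: wrap the dashboard NAT rule in a helper that sanity-checks
# the address first.  valid_ip and forward_dashboard are hypothetical
# names; the iptables rule itself is the one from the post.
valid_ip() {
    echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

forward_dashboard() {
    if ! valid_ip "$1"; then
        echo "usage: forward_dashboard <dashboard-ip>" >&2
        return 1
    fi
    # Echoed, not executed, so the sketch is safe to run anywhere;
    # drop the leading "echo" on a real host.
    echo lxc exec openstack -- iptables -t nat -A PREROUTING \
        -p tcp --dport 80 -j DNAT --to "$1"
}

forward_dashboard 10.0.8.2
```

The example address 10.0.8.2 is illustrative; substitute the address conjure-up printed at the end of your deployment.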
Posted 1 day ago
I was the sole editor and contributor of new content for Ubuntu Unleashed 2017 Edition. This book is intended for intermediate to advanced users.
Posted 2 days ago
FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.

This email contains information about:
- the Real-Time Communications dev-room and lounge
- speaking opportunities
- volunteering in the dev-room and lounge
- related events around FOSDEM, including the XMPP summit
- social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities)
- the Planet aggregation sites for RTC blogs

Call for participation - Real-Time Communications (RTC)
The Real-Time dev-room and Real-Time lounge are about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room, and volunteers and participants for the tables in the Real-Time lounge. The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list. To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities
Note: if you used FOSDEM Pentabarf before, please use the same account/username.

Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, look for the "Track" option and choose "Real-Time devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.
You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC on 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?
FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines
The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:
- the purpose of your talk
- any other talk applications (dev-rooms, lightning talks, main track)
- availability constraints and special needs

You can use HTML and links in your bio, abstract and description. If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes: presentations aimed at developers of free and open source software about RTC-related topics. Please feel free to suggest a duration between 20 minutes and 55 minutes, but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.
Volunteers needed
To make the dev-room and lounge run successfully, we are looking for volunteers for:
- assisting with the video recording and live streaming (FOSDEM provides the equipment)
- organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 4 February
- participation in the Real-Time lounge
- helping attract sponsorship funds for the dev-room, to pay for the Saturday night dinner and any other expenses
- circulating this Call for Participation (text version) to other mailing lists

See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits
The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017. See the XMPP Summit web site, and please join the mailing list for details. We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners
The traditional FOSDEM beer night occurs on Friday, 3 February. On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not very large, so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss
If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.
If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

- All projects: Free-RTC Planet (http://planet.freertc.org), contact planet@freertc.org
- XMPP: Planet Jabber (http://planet.jabber.org), contact ralphm@ik.nu
- SIP: Planet SIP (http://planet.sip5060.net), contact planet@sip5060.net
- SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), contact planet@sip5060.net

Please also link to the Planet sites from your own blog or web site, as this helps everybody in the free real-time communications community.

Contact
For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org, and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:
- Saúl Ibarra Corretgé (email)
- Iain R. Learmonth (email)
- Ralph Meijer (email)
- Daniel-Constantin Mierla (email)
- Daniel Pocock (email)
Posted 3 days ago
I’m proud (yes, really) to announce DNS66, my host/ad blocker for Android 5.0 and newer. It’s been around since last Thursday on F-Droid, but it never really got a formal announcement.

DNS66 creates a local VPN service on your Android device and diverts all DNS traffic to it, optionally adding new DNS servers you can configure in its UI. It can use hosts files for blocking whole sets of hosts, or you can just give it a domain name to block (or multiple hosts files/hosts). You can also whitelist individual hosts or entire files by adding them to the end of the list. When a host name is looked up, the query goes to the VPN, which looks at the packet and responds with NXDOMAIN (non-existing domain) for hosts that are blocked.

You can find DNS66 here:
- on GitHub: https://github.com/julian-klode/dns66
- on F-Droid: https://f-droid.org/app/org.jak_linux.dns66

F-Droid is the recommended source to install from. DNS66 is licensed under the GNU GPL 3, or (mostly) any later version.

Implementation Notes
DNS66’s core logic is based on another project, dbrodie/AdBuster, which arguably has the cooler name. I translated that from Kotlin to Java and cleaned up the implementation a bit: all work is done in a single thread, using poll() to detect when to read/write stuff. Each DNS request is sent via a new UDP socket, and poll() polls over all UDP sockets, a device socket (for the VPN’s tun device) and a pipe (so we can interrupt the poll at any time by closing the pipe).

We literally redirect your DNS servers, meaning all traffic to your configured DNS server is routed to the VPN. The VPN only understands DNS traffic, though, so you might have trouble if your DNS server also happens to serve something else. I plan to change that at some point to emulate multiple DNS servers with fake IPs, but this was a first step to get it working with fallback: Android can now transparently fall back to other DNS servers without having to be aware that they are routed via the VPN.
We also need to deal with timing out queries that we received no answer for: DNS66 stores each query in a LinkedHashMap and overrides the removeEldestEntry() method to remove the eldest entry if it is older than 10 seconds or there are more than 1024 pending queries. This means that it only times out up to one request per new request, but it eventually cleans up fine.
Posted 3 days ago
Ubuntu Advantage is the commercial support package from Canonical. It includes Landscape, the Ubuntu systems management tool, and the Canonical Livepatch Service, which enables you to apply kernel fixes without restarting your Ubuntu 16.04 LTS systems. Ubuntu Advantage gives the world’s largest enterprises the assurance they need to run mission-critical workloads such as enterprise databases, virtual/cloud hosts or infrastructural services on Ubuntu.

The infographic below gives an overview of Ubuntu Advantage: it explains the business benefits, why Ubuntu is #1 in the cloud for many organisations, and includes a selection of Ubuntu Advantage customers.

Download the infographic, or find out more about Ubuntu Advantage.
Posted 4 days ago
Canonical will be taking part in Microsoft and IDC’s Enterprise Open Source Roadshow this autumn and winter. The roadshow will pass through many western European countries and showcase a number of open source technologies that are driving change in the software-defined datacentre.

IDC predicts that by 2017, over 70% of enterprise companies will embrace open source and open APIs as the underpinnings of their cloud integration strategies. This is already visible as developers search for flexible and agnostic platforms that enable them to work quickly and easily, even as the scale and complexity of software increases. Canonical’s Linux-based operating system, Ubuntu, delivers the platform of choice for many of these software developers. And Canonical’s Juju enables them to model and deploy open source technologies on endpoints such as Microsoft Azure with just a few clicks.

Canonical will be demonstrating how to apply model-driven operations to address the current phase change in software operations at the event. In addition to live demonstrations of open source technologies, attendees of the 2016 Enterprise Open Source Roadshow will learn how the IT industry is:
- adopting open source technologies with a focus on cloud-first datacentre modernization initiatives, Big Data projects, and DevOps-oriented methodologies
- focusing on governance, security, licensing and hybrid environment management for enterprise-ready technologies
- formulating a reassessment of IT skills and key competencies to develop talent for a new era

Join Canonical at an upcoming event to learn how open source tooling such as Juju can help developers build the next generation of Big Software. To register, or for more information, please visit the event website.