News
Posted 2 days ago
Kate Drane is a bit of an enigma. She helped launch hundreds of crowdfunding projects at Indiegogo (in fact, I worked with her on the Ubuntu Edge and Global Learning XPRIZE campaigns). She has helped connect hundreds of startups to expertise, capital, and customers at Techstars, and is a beer fan who co-founded a canning business called The Can Van. There is one clear thread through her career: providing more efficient and better access for innovators, no matter what background they come from or what they want to create. Oh, and drinking great beer. She is fantastic and does great work. In this episode of Conversations With Bacon we unpack her experiences of getting started in this work, her work facilitating broader access to information, funding, and people, what it was like to be at Indiegogo through the teenage years of crowdfunding, how she works to support startups, the experience of entrepreneurship from different backgrounds, and more. Listen | Watch | Click here to subscribe to the show on YouTube. The post Conversations With Bacon: Kate Drane, Techstars appeared first on Jono Bacon.
Posted 3 days ago
The KDE Applications website was the minimal possible change needed to move it from an unmaintained and incomplete site to a self-maintaining and complete one. It’s been fun to see it get picked up in places like Ubuntu Weekly News and Late Night Linux, and when chatting to people in real life they have seen it get an update. So clearly it’s important to keep our websites maintained. Alas, the social and technical barriers are too high in KDE. My current hope is that the Promo team will take over the kde-www stuff, giving it communication channels and transparency that don’t currently exist. There is plenty more work to be done on the kde.org/applications website to make it useful, so do give me a ping if you want to help out. In the meantime I’ve updated the kde.org front page text box where there is a brief description of KDE. I remember a keynote from Aaron around 2010 at Akademy where he slagged off the description that was used on kde.org. Since then we have had Visions and Missions and Goals and whatnot defined, but nobody has thought to put them on the website. So here’s the new way of presenting KDE to the world. Thanks to Carl and others for review.
Posted 3 days ago
Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 214 work hours were dispatched among 14 paid contributors. Their reports are available:

- Abhijith PA did 17 hours (out of 14 hours allocated plus 10 extra hours from April, thus carrying over 7h to June).
- Adrian Bunk did 0 hours (out of 8 hours allocated, thus carrying over 8h to June).
- Ben Hutchings did 18 hours (out of 18 hours allocated).
- Brian May did 10 hours (out of 10 hours allocated).
- Chris Lamb did 18 hours (out of 18 hours allocated plus 0.25 extra hours from April, thus carrying over 0.25h to June).
- Emilio Pozuelo Monfort did 33 hours (out of 18 hours allocated plus 15.25 extra hours from April, thus carrying over 0.25h to June).
- Hugo Lefeuvre did 18 hours (out of 18 hours allocated).
- Jonas Meurer did 15.25 hours (out of 17 hours allocated, thus carrying over 1.75h to June).
- Markus Koschany did 18 hours (out of 18 hours allocated).
- Mike Gabriel did 23.75 hours (out of 18 hours allocated plus 5.75 extra hours from April).
- Ola Lundqvist did 6 hours (out of 8 hours allocated plus 4 extra hours from April, thus carrying over 6h to June).
- Roberto C. Sanchez did 22.25 hours (out of 12 hours allocated plus 10.25 extra hours from April).
- Sylvain Beucler did 18 hours (out of 18 hours allocated).
- Thorsten Alteholz did 18 hours (out of 18 hours allocated).

Evolution of the situation

May was a calm month; nothing really changed compared to April, and we are still at 214 hours funded per month. We continue to look for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 34 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.

Platinum sponsors: TOSHIBA (for 44 months), GitHub (for 35 months), Civil Infrastructure Platform (CIP) (for 12 months).

Gold sponsors: The Positive Internet (for 60 months), Blablacar (for 59 months), Linode (for 49 months), Babiel GmbH (for 38 months), Plat’Home (for 38 months).

Silver sponsors: Domeneshop AS (for 60 months), Nantes Métropole (for 54 months), Dalenys (for 50 months), Univention GmbH (for 45 months), Université Jean Monnet de St Etienne (for 45 months), Ribbon Communications, Inc. (for 39 months), maxcluster GmbH (for 33 months), Exonet B.V. (for 29 months), Leibniz Rechenzentrum (for 23 months), Vente-privee.com (for 20 months), CINECA (for 12 months), Ministère de l’Europe et des Affaires Étrangères (for 7 months).

Bronze sponsors: Evolix (for 60 months), Seznam.cz, a.s. (for 60 months), MyTux (for 59 months), Intevation GmbH (for 57 months), Linuxhotel GmbH (for 57 months), Daevel SARL (for 56 months), Bitfolk LTD (for 54 months), Megaspace Internet Services GmbH (for 54 months), NUMLOG (for 54 months), Greenbone Networks GmbH (for 53 months), WinGo AG (for 53 months), Ecole Centrale de Nantes – LHEEA (for 49 months), Sig-I/O (for 47 months), Entr’ouvert (for 44 months), Adfinis SyGroup AG (for 42 months), GNI MEDIA (for 36 months), Laboratoire LEGI – UMR 5519 / CNRS (for 36 months), Quarantainenet BV (for 36 months), Bearstech (for 28 months), LiHAS (for 28 months), People Doc (for 24 months), Catalyst IT Ltd (for 22 months), Supagro (for 18 months), Demarcq SAS (for 16 months), TrapX Security (for 13 months), NCC Group (for 10 months), Université Grenoble Alpes.
Posted 3 days ago
Most of my development is done in LXD containers. I love this for a few reasons. It takes all of my development dependencies and makes it so that they're not installed on my host system, reducing the attack surface there. It means that I can do development on any Linux that I want (or several). But it also means that I can migrate my development environment from my laptop to my desktop depending on whether I need more CPU or whether I want it to be closer to where I'm working (usually when travelling). When I'm travelling I use my Pagekite SSH setup on a Raspberry Pi as the SSH gateway. So when I'm at home I want to connect to the desktop directly, but when away, connect through the gateway. To handle this I set up SSH to connect into the container no matter where it is.

For each container I have an entry in my .ssh/config like this:

```
Host container-name
    User user
    IdentityFile ~/.ssh/id_container-name
    CheckHostIP no
    ProxyCommand ~/.ssh/if-home.sh desktop-local desktop.pagekite.me %h
```

You'll notice that I use a different SSH key for each container. They're easy to generate (there's a short sketch of that at the end of this post), and not reusing them is good practice. Then for the ProxyCommand I have a shell script that'll set up a connection depending on where the container is running and what network my laptop is on.

```bash
#!/bin/bash

set -e

# Arguments passed in from the ProxyCommand line in .ssh/config
CONTAINER_NAME=$3
SSH_HOME_HOST=$1
SSH_OUT_HOST=$2

# Identify the default router and its MAC address
ROUTER_IP=$( ip route get to 8.8.8.8 | sed -n -e "s/.*via \(.*\) dev.*/\\1/p" )
ROUTER_MAC=$( arp -n ${ROUTER_IP} | tail -1 | awk '{print $3}' )
HOME_ROUTER_MAC="▒▒:▒▒:▒▒:▒▒:▒▒:▒▒"

IP_COMMAND="lxc list --format csv --columns 6 ^${CONTAINER_NAME}\$ | head --lines=1 | cut -d ' ' -f 1"
NC_COMMAND="nc -6 -q0"

# Is the container running locally?
IP=$( bash -c "${IP_COMMAND}" )

if [ "${IP}" != "" ] ; then
    # Local
    exec ${NC_COMMAND} ${IP} 22
fi

# Not local: pick the gateway based on which network we're on
SSH_HOST=${SSH_OUT_HOST}
if [ "${HOME_ROUTER_MAC}" == "${ROUTER_MAC}" ] ; then
    SSH_HOST=${SSH_HOME_HOST}
fi

# Look up the container's IP on the remote host, then proxy to its SSH port
IP=$( echo ${IP_COMMAND} | ssh ${SSH_HOST} bash -l -s )

exec ssh ${SSH_HOST} -- bash -l -c "\"${NC_COMMAND} ${IP} 22\""
```

What this script does is first try to see whether the container is running locally by looking up its IP:

```bash
IP_COMMAND="lxc list --format csv --columns 6 ^${CONTAINER_NAME}\$ | head --lines=1 | cut -d ' ' -f 1"
```

If it can find that IP, it just sets up an nc command to connect to the SSH port on that IP directly. If not, we need to see whether we're on my home network or out and about. To do that I check whether the MAC address of the default router matches the one of my home router. This is a good way to check because it doesn't require sending additional packets onto the network or otherwise connecting to other services. To get the router's IP we look at which router is used to reach an address on the Internet:

```bash
ROUTER_IP=$( ip route get to 8.8.8.8 | sed -n -e "s/.*via \(.*\) dev.*/\\1/p" )
```

We can then find the MAC address for that router in the ARP table:

```bash
ROUTER_MAC=$( arp -n ${ROUTER_IP} | tail -1 | awk '{print $3}' )
```

If that MAC address matches a predefined value (redacted in this post), I know it's my home router; otherwise I'm out on the Internet somewhere. That tells me whether I need to go through the proxy or can connect directly. Once we can connect to the desktop machine, we look up the IP address of the container from there, using the same IP command running on the desktop. Lastly, we set up an nc to connect to the SSH daemon, using the desktop as a proxy.
```bash
exec ssh ${SSH_HOST} -- bash -l -c "\"${NC_COMMAND} ${IP} 22\""
```

What all this means is that I can just type `ssh container-name` anywhere and it just works. I can move my containers wherever, my laptop wherever, and connect to my development containers as needed.
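As a footnote to the per-container keys mentioned above: a minimal sketch of creating one, assuming password authentication is still possible for the first connection ("container-name" is illustrative and simply mirrors the IdentityFile convention in the config above):

```bash
# Create a dedicated key pair for one container; the path matches the
# IdentityFile line in ~/.ssh/config ("container-name" is illustrative).
ssh-keygen -t ed25519 -f ~/.ssh/id_container-name -C "container-name"

# Push the public key into the container's authorized_keys over the
# same ProxyCommand path configured above.
ssh-copy-id -i ~/.ssh/id_container-name.pub container-name
```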
Posted 3 days ago
In the previous post about setting up an email server, I explained how I set up a forwarder using Postfix. This post will look at setting up Dovecot to store emails (and provide IMAP and authentication) on the server, using GPG encryption to make sure intruders can’t read our precious data!

Architecture

The basic architecture chosen for encrypted storage is that every incoming email is handed from Postfix to Dovecot via LMTP, and Dovecot then runs a sieve script that invokes a filter that encrypts the email with PGP/MIME using a user-specific key, before processing it further. Or, in short:

postfix --lmtp--> dovecot --sieve--> filter --> gpg --> inbox

Security analysis: this means that the message will be on the system unencrypted as long as it is in a Postfix queue. This further means that the message plain text should be recoverable for quite some time after Postfix deleted it, by investigating the file system. However, given enough time, the probability of being able to recover the messages should reduce substantially. Not sure how to improve this much. And yes, if the email is already encrypted we’re going to encrypt it a second time, because we can nest encryption and signatures as much as we want! It makes the code easier.

Encrypting an email with PGP/MIME

PGP/MIME is a trivial way to encrypt an email. Basically, we take the entire email message, armor-encrypt it with GPG, and stuff it into a multipart MIME message with the same headers, as the second attachment; the first attachment is control information. Technically, this means that we keep headers twice, once encrypted and once decrypted. But the advantage compared to doing it more like most normal clients is clear: the code is a lot easier, and we can reverse the encryption and get back the original! And when I say easy, I mean easy – the function to encrypt the email is just a few lines long:

```python
def encrypt(message: email.message.Message, recipients: typing.List[str]) -> str:
    """Encrypt given message"""
    encrypted_content = gnupg.GPG().encrypt(message.as_string(), recipients)
    if not encrypted_content:
        raise ValueError(encrypted_content.status)

    # Build the parts
    enc = email.mime.application.MIMEApplication(
        _data=str(encrypted_content).encode(),
        _subtype='octet-stream',
        _encoder=email.encoders.encode_7or8bit)
    control = email.mime.application.MIMEApplication(
        _data=b'Version: 1\n',
        _subtype='pgp-encrypted; name="msg.asc"',
        _encoder=email.encoders.encode_7or8bit)
    control['Content-Disposition'] = 'inline; filename="msg.asc"'

    # Put the parts together
    encmsg = email.mime.multipart.MIMEMultipart(
        'encrypted',
        protocol='application/pgp-encrypted')
    encmsg.attach(control)
    encmsg.attach(enc)

    # Copy headers
    headers_not_to_override = {key.lower() for key in encmsg.keys()}
    for key, value in message.items():
        if key.lower() not in headers_not_to_override:
            encmsg[key] = value

    return encmsg.as_string()
```

Decrypting the email is even easier: just pass the entire thing to GPG, and it will decrypt the encrypted part, which, as mentioned, contains the entire original email with all headers :)

```python
def decrypt(message: email.message.Message) -> str:
    """Decrypt the given message"""
    return str(gnupg.GPG().decrypt(message.as_string()))
```

(Now, I'm not sure whether it’s a feature that GPG.decrypt ignores any unencrypted data in the input, but well, that’s GPG for you.)

Of course, if you don’t actually need IMAP access, you could drop PGP/MIME and just pipe emails through gpg --encrypt --armor before dropping them somewhere on the filesystem, and then sync them via ssh somehow (e.g. patching maildirsync to encrypt emails it uploads to the server, and decrypt emails it downloads).

Pretty Easy privacy (p≡p)

Now, we almost have a file conforming to draft-marques-pep-email-02, the Pretty Easy privacy (p≡p) format, version 2. That format allows us to encrypt headers, thus preventing people from snooping on our metadata! Basically, it relies on the fact that we have all the headers in the inner (encrypted) message. To mark an email as conforming to that format we just have to set the subject to p≡p and add a header describing the format version:

```
Subject: =?utf-8?Q?p=E2=89=A1p?=
X-Pep-Version: 2.0
```

A client conforming to p≡p will, when seeing this email, read any headers from the inner (encrypted) message. We also might want to change the code to copy only a limited set of headers, instead of basically every header, but I’m going to leave that as an exercise for the reader.

Putting it together

Assume we have Postfix and Dovecot configured, and a script gpgmymail written using the functions above, like this:

```python
def main() -> None:
    """Program entry"""
    parser = argparse.ArgumentParser(
        description="Encrypt/Decrypt mail using GPG/MIME")
    parser.add_argument('-d', '--decrypt', action="store_true",
                        help="Decrypt rather than encrypt")
    parser.add_argument('recipient', nargs='*',
                        help="key id or email of keys to encrypt for")
    args = parser.parse_args()

    msg = email.message_from_file(sys.stdin)

    if args.decrypt:
        sys.stdout.write(decrypt(msg))
    else:
        sys.stdout.write(encrypt(msg, args.recipient))


if __name__ == '__main__':
    main()
```

(Don’t forget to add the missing imports, or see the end of the blog post for links to the full source code.)

Then, all we have to do is edit our .dovecot.sieve to add

filter "gpgmymail" "myemail@myserver.example";

and all incoming emails are automatically encrypted.

Outgoing emails

To handle outgoing emails, do not store them via IMAP; instead, configure your client to add a Bcc to yourself, and then filter that somehow in sieve. You probably want to set the Bcc to something like myemail+sent@myserver.example, and then filter on the detail (the sent).

Encrypt or not encrypt?

Now, do you actually want to encrypt? The disadvantages are clear:

- Server-side search becomes useless, especially if you use p≡p with encrypted Subject. Such a shame, you could have built your own GMail by writing a notmuch FTS plugin for dovecot!
- You can’t train your spam filter via IMAP, because the spam trainer won’t be able to decrypt the email it is supposed to learn from.

There are probably other things I have not thought about, so let me know on mastodon, email, or IRC!

More source code

You can find the source code of the script, and the setup for dovecot, in my git repository.
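As a quick sanity check before wiring the script into sieve, it can be exercised directly from a shell. A minimal sketch, assuming gpgmymail is executable and on PATH, the key for alice@example.com is already in the GPG keyring, and message.eml is any RFC 822 message (both names are illustrative):

```bash
# Encrypt for one recipient, then decrypt again; since the encrypted
# part contains the entire original message, the round trip should
# reproduce it (header ordering may differ).
gpgmymail alice@example.com < message.eml > encrypted.eml
gpgmymail --decrypt < encrypted.eml > roundtrip.eml
diff message.eml roundtrip.eml || true
```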
Posted 4 days ago
This week we’ve been playing with tiling window managers, we “meet the forkers”, bring you some command line love and go over all your feedback. It’s Season 12 Episode 10 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

- We discuss what we’ve been up to recently: Alan has been playing with i3wm.
- We “meet the forkers”; when projects end, forks are soon to follow.
- We share a command line lurve: firejail – Firejail Security Sandbox.
- And we go over all your amazing feedback – thanks for sending it – please keep sending it!

“Steambox” – image taken from the Salamander arcade machine manufactured in 1986 by Konami.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or comment on our Facebook page or comment on our sub-Reddit. Join us in the Ubuntu Podcast Telegram group.
Posted 4 days ago
Over the past year, we’ve been working hard to bring you the next release of Vanilla framework: version 2.0, our most stable release to date. Since our last significant release, v1.8.0 back in July last year, we’ve been working hard to bring you new features, improve the framework and make it the most stable version we’ve released. You can see the full list of new and updated changes in the full release notes.

New to the framework

Features

The release has too many changes to list them all here, but we’ve outlined the high-level changes below.

The first major change was removing the Shelves grid, which had been in the framework since the beginning, and reimplementing the functionality with CSS grid. A native CSS solution has given us more flexibility with layouts (a small illustrative sketch follows at the end of this post). While working on the grid, we also upped the grid max-width base value from 990px to 1200px, following trends in screen sizes and resolutions.

We revisited vertical spacing with a complete overhaul of what we implemented in our previous release. Now, most element combinations correctly fit the baseline vertical grid without the need to write custom styles.

To further enforce code quality and control we added a prettier dependency with a pre-commit hook, which led to extensive code quality updates after running it for the first time. And in regards to dependencies, we’ve added renovate to the project to help keep dependencies up to date.

If you would like to see the full list of features you can look at our release notes, but below we’ve captured quick wins and big changes to Vanilla:

- Added a script for developers to analyse individual patterns with Parker
- Updated the max-width of typographic elements
- Broke up the large _typography.scss file into smaller files
- Standardised the naming of spacing variables to use intuitive (small/medium/large) naming where possible
- Increased the allowed number of media queries in the project to 50 in the Parker configuration
- Adjusted the base font size so that it respects browser accessibility settings
- Refactored all *.scss files to remove Sass nesting when it was just being used to build class names – files are now flatter and have full class names in more places, making searching the code more intuitive

Components and utilities

Two new components have been added to Vanilla in this release: `p-subnav` and `p-pagination`. We’ve also added a new `u-no-print` utility to exclude web-only elements from printed pages.

New components to the framework: Sub navigation and Pagination.

Removed deprecated components

As we extend the framework, we find that some of our older patterns are no longer needed or are used very infrequently. In order to keep the framework simple and to reduce the file size of the generated CSS, we try to remove unneeded components when we can. As core patterns improve, it’s often the case that overly-specific components can be built using more flexible base components.

- p-link--strong: this was a mostly-unused link variant which added significant maintenance overhead for little gain
- p-footer: this component wasn’t flexible enough for all needs, and its layout is achievable with the much more flexible Vanilla grid
- p-navigation--sidebar: this was not widely used and can be easily replicated with other components

Documentation updates

Content

During this cycle we improved the content structure per component: each page now has a template with hierarchy and grouping of component styles, do’s and don’ts of usage, and accessibility rules. In doing so, we also updated the examples to showcase real use cases from our marketing sites and web applications.

Updated Colour page on our documentation site.

As well as updating the content structure across all component pages, we also made other minor changes to the site:

- Added new documentation for the updated typographic spacing
- Documented pull-quote variants
- Merged all “code” component documentation to allow easier comparison
- Changed the layout of the icons page

Website

In addition to framework and documentation content, we still managed to make time for some updates on vanillaframework.io. Below is a list of high-level items we completed to help users navigate when visiting our site:

- Updated the navigation to match the rest of the website
- Added the Usabilla user feedback widget
- Updated the “Report a bug” link
- Updated the mobile nav to use two dropdown menus grouped by “About” and “Patterns” rather than having two nav menus stacked
- Restyled the sidebar and the background to light grey

Bug fixes

As well as bringing lots of new features and enhancements, we continue to fix bugs to keep the framework up to date. Going forward we plan to improve our release process by pushing out more frequent patch releases, to help the team with bugs that may be blocking feature deliverables.

Getting Vanilla framework

To get your hands on the latest release, follow the getting started instructions, which include all options for using Vanilla.

The post New release: Vanilla framework 2.0 appeared first on Ubuntu Blog.
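To give a feel for the reimplemented grid mentioned above, here is a minimal sketch of a page section laid out on it. The row/col-* class names follow Vanilla's documented grid pattern, but treat them as an assumption and check the v2.0 docs for the exact markup:

```html
<!-- A two-column layout on Vanilla's CSS-grid based grid (max width
     1200px, as noted above). Class names are assumed, not verified. -->
<div class="row">
  <main class="col-8">
    <h1>Main content</h1>
  </main>
  <aside class="col-4">
    <p>Secondary content</p>
  </aside>
</div>
```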
Posted 4 days ago
Drones, and their wide-ranging uses, have been a constant topic of conversation for some years now, but we’re only just beginning to move away from the hypothetical and into reality. The FAA estimates that there will be 2 million drones in the United States alone in 2019, as adoption within the likes of distribution, construction, healthcare and other industries accelerates.

Driven by this demand, Ubuntu – the most popular Linux operating system for the Internet of Things (IoT) – is now available on the Manifold 2, a high-performance embedded computer offered by leading drone manufacturer, DJI. The Manifold 2 is designed to fit seamlessly onto DJI’s drone platforms via the onboard SDK and enables developers to transform aerial platforms into truly smarter drones, performing complex computing tasks and advanced image processing, which in turn creates rapid flexibility for enterprise usage.

As part of the offering, the Manifold 2 is planning to feature snaps. Snaps are containerised software packages, designed to work perfectly across cloud, desktop, and IoT devices – with this the first instance of the technology’s availability on drones. The ability to add multiple snaps means a drone’s functionality can be altered, updated, and expanded over time. Depending on the desired use case, enterprises can ensure the form a drone is shipped in does not represent its final iteration or future worth.

Snaps also feature enhanced security and greater flexibility for developers. Drones can receive automatic updates in the field, which will become vital as enterprises begin to deploy large-scale fleets. Snaps also support rollback functionality in the event of failure, meaning developers can innovate with more confidence across this growing field. (A hypothetical sketch of a snap definition follows at the end of this post.)

Designed for developers, having the Manifold 2 pre-installed with Ubuntu means support for Linux, CUDA, OpenCV, and ROS. It is ideal for the research and development of professional applications, and can access flight data and perform intelligent control and data analysis. It can be easily mounted to the expansion bay of DJI’s Matrice 100, Matrice 200 Series V2 and Matrice 600, and is also compatible with the A3 and N3 flight controllers.

DJI has counted at least 230 people rescued with the help of a drone since 2013. As well as being used by emergency services, drones are helping to protect lives by eradicating the dangerous elements of certain occupations. Apellix is one such example, supplying drones which run on Ubuntu to alleviate the need for humans to be at the forefront of work in elevated, hazardous environments, such as aircraft carriers and oil rigs.

Utilising the freedom brought by snaps, it is exciting to see how developers drive the drone industry forward. Software is allowing the industrial world to move from analog to digital, and mission-critical industries will continue to evolve based on its capabilities.

The post Customisable for the enterprise: the next-generation of drones appeared first on Ubuntu Blog.
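To make the snap packaging angle concrete, below is a minimal, hypothetical snapcraft.yaml for an onboard image-processing service. Every name here is invented for illustration; it is not DJI's or Apellix's actual packaging:

```yaml
# Hypothetical snap for a drone companion-computer service.
name: drone-vision
base: core18
version: '0.1'
summary: Onboard aerial image processing
description: |
  Processes camera frames on the drone's companion computer. Shipped as
  a snap, it can be updated automatically in the field and rolled back
  if an update fails.
grade: stable
confinement: strict

parts:
  vision:
    plugin: python
    source: .

apps:
  vision:
    command: bin/vision-daemon
    daemon: simple
    restart-condition: always
```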
Posted 4 days ago
Over the past couple of months I have been trying to participate in the Monday morning net run by the SDF Amateur Radio Club from SDF.org. It has been pretty hard for me to catch up with any of the local amateur radio clubs. There is no local club associated with the American Radio Relay League in Ashtabula County, but it must be remembered that Ashtabula County is fairly large in terms of land area. For reference, the state of Rhode Island and Providence Plantations has a dry land area of 1,033.81 square miles, while Ashtabula County has a dry land area of 702 square miles. Ashtabula County is thus 68% the size of Rhode Island in land area, even though its population is only 9.23% of Rhode Island's. Did I hear mooing off in the distance somewhere? For British readers, it is safe to say not just that I'm in a fairly isolated area, but that it may resemble Ambridge a bit too much.

Now, the beautiful part about the SDF Amateur Radio Club net is that it takes place via the venerable EchoLink system. The package known as qtel allows for access to the repeater-linking network from your Ubuntu desktop. Unlike normal times, the Wikipedia page about EchoLink actually provides a fairly nice write-up for the non-specialist.

There is a relatively old article on the American Radio Relay League's website about Ubuntu. If you look at the Ubuntu Wiki, there is talk about Ubuntu Hams having their own net, but the last time that page was edited was 2012. While there is talk of an IRC channel, a quick look at irclogs.ubuntu.com shows that it does not look like the log bot has been in the channel this month. E-mail to the team's mailing list hosted on Launchpad is a bit sporadic. I have been a bit MIA myself due to work pressures. That does not mean I am unwilling to act as the Net Control Station if there is a group willing to hold a net on EchoLink, perhaps. It would be a good way to get hams from across the Ubuntu realms to have some fellowship with each other.

For now, I am going to make a modest proposal. If anybody is interested in such an Ubuntu net, could you please check in on the SDF ARC net on June 17 at 0000 UTC? To hear what the most recent net sounded like, you can listen to the recorded archive of that net's audio in MP3 format. Just check in on June 17th at 0000 UTC and please stick around until after the net ends; we can talk about possibilities then. All you need to do is be registered to use EchoLink and have appropriate software to connect to the appropriate conference. I will cause notice of this blog post to be made to the team's mailing list on Launchpad.

A Modest Ham-Related Proposal by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Posted 5 days ago
Hello Ubuntu Server

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

Spotlight: Bryce Harrington

Keeping with the theme of “bringing them back into the fold”, we are proud to announce that Bryce Harrington has rejoined Canonical on the Ubuntu Server team. In his former tenure at Canonical, he maintained the X.org stack for Ubuntu and helped bridge us from the old ‘edit your own xorg.conf’ days, swatted GPU hang bugs on Intel, and contributed to Launchpad development. Home-based in Oregon, Bryce has around 20 years of open source development experience: he created the Inkscape project, and he is currently a board member of the X.org Foundation. He joins us most recently from Samsung Research America, where he was a Senior Open Source Developer and the release manager for the Cairo and Wayland projects.

Bryce will be helping us tackle the development and maintenance of Ubuntu Server packages. We are thrilled to have his additional expertise to help spread the wealth of software and packaging improvements that help make Ubuntu great. When he’s not building software, he is building things in his woodworking shop. Welcome (back) Bryce (bryce on Freenode)!

cloud-init

- Allow identification of OpenStack by Asset Tag [Mark T. Voelker] (LP: #1669875)
- Fix spelling error making ‘an Ubuntu’ consistent. [Brian Murray]
- run-container: centos: comment out the repo mirrorlist [Paride Legovini]
- netplan: update netplan key mappings for gratuitous-arp [Ryan Harper] (LP: #1827238)

curtin

- vmtest: dont raise SkipTest in class definition [Ryan Harper]
- vmtests: determine block name via dname when verifying volume groups [Ryan Harper]
- vmtest: add Centos66/Centos70 FromBionic release and re-add tests [Ryan Harper]
- block-discover: add cli/API for exporting existing storage to config [Ryan Harper]
- vmtest: refactor test_network code for Eoan [Ryan Harper]
- curthooks: disable daemons while reconfiguring mdadm [Michael Hudson-Doyle] (LP: #1829325)
- mdadm: fix install to existing raid [Michael Hudson-Doyle] (LP: #1830157)

Contact the Ubuntu Server team

- Chat on #ubuntu-server on Freenode
- Email the ubuntu-server mailing list
- Find us on the Ubuntu Community Hub – server channel

Bug Work and Triage

- 278 bugs in the backlog
- Notes on daily bug triage

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed (sketched below), checking packages for update regressions, and making sure to mark affected bugs verified as fixed.
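For anyone new to SRU verification, a minimal sketch of enabling -proposed for one release looks like this (bionic and the qemu upload are just examples; adjust for the release and package under test):

```bash
# Add the -proposed pocket for bionic (adjust the release as needed).
echo "deb http://archive.ubuntu.com/ubuntu bionic-proposed main universe" | \
    sudo tee /etc/apt/sources.list.d/bionic-proposed.list
sudo apt update

# Pull only the package under test from -proposed, leaving the rest of
# the system on the release and updates pockets.
sudo apt install -t bionic-proposed qemu
```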
Total: 10

- exim4, disco, 4.92-4ubuntu1.1, bryce
- libvirt, cosmic, 4.6.0-2ubuntu3.6, paelzer
- open-vm-tools, bionic, 2:10.3.10-1~ubuntu0.18.04.1, paelzer
- open-vm-tools, cosmic, 2:10.3.10-1~ubuntu0.18.10.1, paelzer
- openvpn, bionic, 2.4.4-2ubuntu1.3, paelzer
- openvpn, cosmic, 2.4.6-1ubuntu2.2, paelzer
- python-tornado, bionic, 4.5.3-1ubuntu0.1, xnox
- qemu, bionic, 1:2.11+dfsg-1ubuntu7.15, paelzer
- qemu, cosmic, 1:2.12+dfsg-3ubuntu8.9, paelzer
- qemu, disco, 1:3.1+dfsg-2ubuntu3.2, paelzer

Uploads Released to the Supported Releases

Total: 26

- containerd, xenial, 1.2.6-0ubuntu1~16.04.2, mwhudson
- corosync, bionic, 2.4.3-0ubuntu1.1, leosilvab
- corosync, xenial, 2.3.5-3ubuntu2.3, leosilvab
- exim4, cosmic, 4.91-6ubuntu1.1, mdeslaur
- exim4, bionic, 4.90.1-1ubuntu1.2, mdeslaur
- ipvsadm, xenial, 1:1.28-3ubuntu0.16.04.1, paelzer
- ipvsadm, bionic, 1:1.28-3ubuntu0.18.04.1, paelzer
- keepalived, cosmic, 1:1.3.9-1ubuntu1.1, mdeslaur
- keepalived, bionic, 1:1.3.9-1ubuntu0.18.04.2, mdeslaur
- keepalived, xenial, 1:1.2.24-1ubuntu0.16.04.2, mdeslaur
- libseccomp, disco, 2.4.1-0ubuntu0.19.04.3, jdstrand
- libseccomp, cosmic, 2.4.1-0ubuntu0.18.10.3, jdstrand
- libseccomp, bionic, 2.4.1-0ubuntu0.18.04.2, jdstrand
- libseccomp, xenial, 2.4.1-0ubuntu0.16.04.2, jdstrand
- libvirt, disco, 5.0.0-1ubuntu2.2, paelzer
- libvirt, disco, 5.0.0-1ubuntu2.2, paelzer
- openvpn, xenial, 2.3.10-1ubuntu2.2, j-latten
- openvpn, bionic, 2.4.4-2ubuntu1.2, j-latten
- openvpn, cosmic, 2.4.6-1ubuntu2.1, j-latten
- php7.0, xenial, 7.0.33-0ubuntu0.16.04.5, mdeslaur
- php7.2, disco, 7.2.19-0ubuntu0.19.04.1, mdeslaur
- php7.2, cosmic, 7.2.19-0ubuntu0.18.10.1, mdeslaur
- php7.2, bionic, 7.2.19-0ubuntu0.18.04.1, mdeslaur
- python-cryptography, bionic, 2.1.4-1ubuntu1.3, xnox
- ruby2.5, bionic, 2.5.1-1ubuntu1.4, xnox
- runc, xenial, 1.0.0~rc7+git20190403.029124da-0ubuntu1~16.04.3, mwhudson

Uploads to the Development Release

Total: 9

- apache2, 2.4.38-3ubuntu1, xnox
- backuppc, 3.3.2-2, team+pkg-backuppc
- byobu, 5.128-0ubuntu1, kirkland
- curtin, 19.1-7-g37a7a0f4-0ubuntu1, chad.smith
- php7.2, 7.2.19-0ubuntu1, mdeslaur
- python-markdown, 3.1.1-1, None
- qemu, 1:3.1+dfsg-2ubuntu5, paelzer
- ruby2.5, 2.5.5-3ubuntu1, costamagnagianfranco
- walinuxagent, 2.2.40-0ubuntu1, cyphermox

The post Ubuntu Server development summary – 11 June 2019 appeared first on Ubuntu Blog.