
News

Posted about 9 years ago
Portability has always been very important to Samba. Nowadays Samba is mostly used on top of Linux, but Tridge developed the early versions of his SMB implementation on a Sun workstation. A few years later, as the project picked up, it was ported to Linux and eventually to a large number of other free and non-free Unix-like operating systems.

Initially, regression testing on different platforms was done manually and ad hoc. Once Samba had support for a larger number of platforms, including numerous variations and optional dependencies, making sure that it would still build and run on all of these became a non-trivial process. To make it easier to find platform-specific regressions in the Samba codebase, Tridge put together a system to automatically build Samba regularly on as many platforms as possible. So, in spring 2001, the build farm was born - a couple of years before other tools like buildbot came around.

The Build Farm

The build farm is a collection of machines around the world that are connected to the internet, with as wide a variety of platforms as possible. In 2001 it wasn't feasible to just have a single beefy machine or a cloud account on which we could run virtual machines with AIX, HP-UX, Tru64, Solaris and Linux, so we needed access to physical hardware.

The build farm runs as a single non-privileged user, which has a cron job set up that runs the build farm worker script regularly. Originally the frequency was every couple of hours, but soon we asked machine owners to run it as often as possible. The worker script is as short as it is simple: it retrieves a shell script from the main build farm repository with instructions to run and, after it has done so, uploads a log file of the terminal output to samba.org using rsync and a secret per-machine password.
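The post only describes the worker at a high level. As a rough illustration, here is a minimal Python sketch of the fetch-run-upload loop such a worker might perform; the URL, file paths and password handling are hypothetical assumptions, not the actual Samba build farm code (which is a shell script).

#!/usr/bin/env python3
"""Hypothetical build farm worker: fetch instructions, run them, upload the log."""
import os
import subprocess
import urllib.request

CONTROL_URL = "https://build.samba.org/instructions.sh"  # hypothetical URL
LOG_PATH = "/tmp/buildfarm.log"                          # hypothetical path
UPLOAD_TARGET = "rsync://build.samba.org/logs/"          # hypothetical rsync module

def run_once() -> None:
    # Fetch the instructions published by the build farm master.
    script = urllib.request.urlopen(CONTROL_URL).read()

    # Run the instructions and capture everything they print to the terminal.
    with open(LOG_PATH, "wb") as log:
        subprocess.run(["sh", "-s"], input=script, stdout=log,
                       stderr=subprocess.STDOUT)

    # Upload the log with rsync, authenticated by the per-machine secret.
    with open(os.path.expanduser("~/.buildfarm-password")) as f:
        env = dict(os.environ, RSYNC_PASSWORD=f.read().strip())
    subprocess.run(["rsync", LOG_PATH, UPLOAD_TARGET], env=env, check=True)

if __name__ == "__main__":
    run_once()  # in practice, invoked regularly from the build farm user's crontab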
Some build farm machines are dedicated, but over the years a large number have simply run as a separate user account on a machine that was tasked with something else. Most build farm machines are hosted by Samba developers (or their employers), but we've also had a number of community volunteers over the years who were happy to add an extra user with an extra cron job on their machine, and for a while companies like SourceForge and HP provided dedicated porter boxes that ran the build farm.

Of course, there are some security issues with this way of running things. Arbitrary shell code is downloaded from a host claiming to be samba.org and run. If the machine is shared with other (sensitive) processes, some of the information about those processes might leak into logs. Our web page has a section about adding machines for new volunteers, with a long list of warnings.

Since then, various other people have been involved in the build farm. Andrew Bartlett started contributing to the build farm in July 2001, working on adding tests. He gradually took over as the maintainer in 2002, and various others (Vance, Martin, Mathieu) have contributed patches and helped out with general admin. In 2005, Tridge added a script to automatically send out an e-mail to the committer of the last revision before a failed build. This meant it was no longer necessary to bisect through build farm logs on the web to find out who had broken a specific platform and when; you'd just be notified as soon as it happened.

The web site

Once the logs are generated and uploaded to samba.org using rsync, the web site at http://build.samba.org/ is responsible for making them accessible to the world. Initially there was a single Perl file that would take care of listing and displaying log files, but over the years the functionality has been extended to do much more than that. Initial extensions to the build farm added support for viewing per-compiler and per-host builds, to allow spotting trends. Another addition was searching logs for common indicators of running out of disk space. Over time, we also added more samba.org projects to the build farm; at the moment there are about a dozen projects.

In a sprint in 2009, Andrew Bartlett and I changed the build farm to store machine and build metadata in a SQLite database rather than parsing all recent build log files every time their results were needed. In a follow-up sprint a year later, we converted most of the code to Python. We also added a number of extensions; most notably, linking the build result information with version control information so we could automatically email the exact people that had caused the build breakage, and automatically notifying build farm owners when their machines were not functioning.

autobuild

Sometime in 2011 all committers started using the autobuild script to push changes to the master Samba branch. This script enforces a full build and testsuite run for each commit that is pushed. If the build or any part of the testsuite fails, the push is aborted. This alone massively reduced the number of problematic changes that were pushed, making it less necessary for us to be made aware of issues by the build farm.

The Python rewrite also introduced some time bombs into the code. The way we called out to our ORM caused the code to fetch all build summary data from the database every time the summary page was generated. Initially this was not a problem, but as the table grew to 100,000 rows, the build farm became so slow that it was frustrating to use.
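As an aside, the performance problem described above is the classic fetch-everything-then-summarise trap. Here is a minimal Python sketch of the difference, against a hypothetical schema; the real build farm database layout is not shown in the post.

import sqlite3

# Hypothetical schema, loosely modelled on what a build farm might store.
con = sqlite3.connect("buildfarm.db")
con.execute("""CREATE TABLE IF NOT EXISTS builds (
                   id INTEGER PRIMARY KEY,
                   host TEXT, tree TEXT, compiler TEXT,
                   revision TEXT, status TEXT, finished INTEGER)""")

# The slow pattern: materialise every row ever recorded and summarise in
# Python. Fine with a few hundred builds, painful at 100,000 rows.
all_builds = con.execute("SELECT * FROM builds").fetchall()

# The cheap pattern: let SQLite return only the most recent build per
# host/tree/compiler combination, which is all a summary page needs.
latest = con.execute("""
    SELECT host, tree, compiler, revision, status, MAX(finished)
    FROM builds
    GROUP BY host, tree, compiler
""").fetchall()

(SQLite guarantees that the bare columns in such a MAX() query are taken from the row that holds the maximum value, which is what makes the second query a one-liner.)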
Analysis tools

Over the years, various special build farm machines have also been used to run extra code analysis tools, like static code analysis, lcov, valgrind or various code quality scanners.

Summer of Code

Over the last couple of years the build farm has been running happily and hasn't changed much. This summer one of our Summer of Code students, Krishna Teja Perannagari, worked on improving the look of the build farm - updating it to the current Samba house style - as well as on various performance improvements in the Python code.

Jenkins?

The build farm still works reasonably well, though it is clear that various other tools that have had more developer attention have caught up with it. If we had to reinvent the build farm today, we would probably end up using an off-the-shelf tool like Jenkins, which wasn't around 14 years ago. We would also be able to get away with using virtual machines for most of our workers. Non-Linux platforms have become less relevant in the last couple of years, though we still care about them. The build farm in its current form works well enough for us, and I think porting to Jenkins - with the same level of platform coverage - would take quite a lot of work and have only limited benefits.

(Thanks to Andrew Bartlett for proofreading the draft of this post.)

Posted over 9 years ago
“Smart, connected things” are redefining our home, work and play, with brilliant innovation built on standard processors that have shrunk in power and price to the point where it makes sense to turn almost every “thing” into a smart thing. I’m inspired by the inventors and innovators who are creating incredible machines – from robots that might clean or move things around the house, to drones that follow us at play, to smarter homes which use energy more efficiently or more insightful security systems.

Proving the power of open source to unleash innovation, most of this stuff runs on Linux – but it’s a hugely fragmented and insecure kind of Linux. Every device has custom “firmware” that lumps together the OS, drivers and device-specific software, and that firmware is almost never updated. So let’s fix that! Ubuntu is right at the heart of the “internet thing” revolution, and so we are in a good position to raise the bar for security and consistency across the whole ecosystem. Ubuntu is already pervasive on devices – you’ve probably seen lots of “Ubuntu in the wild” stories, from self-driving cars to space programs and robots and the occasional airport display. I’m excited that we can help underpin the next wave of innovation while also being thoughtful about the responsibility that entails.

So today we’re launching snappy Ubuntu Core on a wide range of boards, chips and chipsets, because the snappy system and Ubuntu Core are perfect for distributed, connected devices that need security updates for the OS and applications but also need to be completely reliable and self-healing. Snappy is much better than package dependencies for robust, distributed devices. Transactional updates. App store. A huge range of hardware. Branding for device manufacturers.

In this release of Ubuntu Core we’ve added a hardware abstraction layer where platform-specific kernels live. We’re working commercially with the major silicon providers to guarantee free updates to every device built on their chips and boards. We’ve added a web device manager (“webdm”) that handles first-boot and app store access through the web consistently on every device. And we’ve preserved perfect compatibility with the snappy images of Ubuntu Core available on every major cloud today. So you can start your Kickstarter project with a VM on your favourite cloud and pick your processor when you’re ready to finalise the device.

If you are an inventor or a developer of apps that might run on devices, then Ubuntu Core is for you. We’re launching it with a wide range of partners on a huge range of devices. From the pervasive BeagleBone Black to the $35 Odroid-C1 (1GHz processor, 1GB RAM), all the way up to the biggest Xeon servers, snappy Ubuntu Core gives you a crisp, ultra-reliable base platform, with all the goodness of Ubuntu at your fingertips and total control over the way you deliver your app to your users and devices. With an app store (well, a “snapp” store) built in and access to the amazing work of thousands of communities collaborating on GitHub and other forums, with code for robotics and autopilots and a million other things instantly accessible, I can’t wait to see what people build.

I for one welcome the ability to install AI on my next camera-toting drone, and am glad to be able to do it in a way that will get patched automatically with fixes for future heartbleeds!
Posted over 9 years ago
What if your cloud instances could be updated with the same certainty and precision as your mobile phone – with carrier-grade assurance that an update applies perfectly or is not applied at all? What if your apps could be isolated from one another completely, so there’s no possibility that installing one app could break another, and stronger assurance that a compromise of one app won’t compromise the data from another?

When we set out to build the Ubuntu Phone we took on the challenge of raising the bar for reliability and security in the mobile market. And today that same technology is coming to the cloud, in the form of a new “snappy” image called Ubuntu Core, which is in beta today on Azure and as a KVM image you can run on any Linux machine.

This is in a sense the biggest break with tradition in 10 years of Ubuntu, because snappy Ubuntu Core doesn’t use debs or apt-get. We call it “snappy” because that’s the new bullet-proof mechanism for app delivery and system updates; it’s completely different to the traditional package-based Ubuntu server and desktop. The snappy system keeps each part of Ubuntu in a separate, read-only file, and does the same for each application. That way, developers can deliver everything they need to be confident their app will work exactly as they intend, and we can take steps to keep the various apps isolated from one another, and ensure that updates are always perfect. Of course, that means that apt-get won’t work, but that’s OK since developers can reuse debs to make their snappy apps, and the core system is exactly the same as any other Ubuntu system – server or desktop. Whenever we make a fix to packages in Ubuntu, we’ll publish the same fix to Ubuntu Core, and systems can get that fix transactionally. In fact, updates to Ubuntu Core are even smaller than package updates because we only need to send the precise difference between the old and new versions, not the whole package.

Of course, Ubuntu Core is in addition to all the current members of the Ubuntu family – desktop, server, and cloud images that use apt-get and debs, and all the many *buntu remixes which bring their particular shine to our community. You still get all the Ubuntu you like, and there’s a new snappy Core image on all the clouds for the sort of deployment where precision, specialism and security are the top priority.

This is the biggest new thing in Ubuntu since we committed to deliver a mobile phone platform, and it’s very delicious that it’s born of exactly the same amazing technology that we’ve been perfecting for these last three years. I love it when two completely different efforts find underlying commonalities, and it’s wonderful to me that the work we’ve done for the phone, where carriers and consumers are the audience, might turn out to be so useful in the cloud, which is all about back-end infrastructure.

Why is this so interesting? Transactional updates have lots of useful properties: if they are done well, you can know EXACTLY what’s running on a particular system, and you can coordinate updates with very high precision across thousands of instances in the cloud. You can run systems as canaries, getting updates ahead of other identical systems to see if they cause unexpected problems. You can roll updates back, because each version is a complete, independent image. That’s very nice indeed. There have been interesting developments in the transactional systems field over the past few years.
ChromeOS is updated transactionally: when you turn it on, it makes sure it’s running the latest version of the OS. CoreOS brought aspects of Chrome OS and Gentoo to the cloud, Red Hat has a beta of Atomic as a transactional version of RHEL, and of course Docker is a way of delivering apps transactionally too (it combines app and system files very neatly). Ubuntu Core raises the bar for certainty, extensibility and security in the transactional systems game.

What I love about Ubuntu Core is the way it embraces transactional updates not just for the base system but for applications on top of the system as well. The system is just one layer that can be updated transactionally, and so is each of the apps on the system. You get an extensible platform that retains the lovely properties of transactionality but lets you choose exactly the capabilities you want for yourself, rather than having someone else force you to use a particular tool. For example, in CoreOS, things like Fleet are built in; you can’t opt out. In Ubuntu Core, we aim for a much smaller Core, and then enable you to install Docker or any other container system as a framework, with snappy. We’re working with all the different container vendors, and app systems, and container coordination systems, to help them make snappy versions of their tools. That way, you get the transactional semantics you want with the freedom to use whichever tools suit you. And the whole thing is smaller and more secure because we baked fewer assumptions into the core.

The snappy system is also designed to provide security guarantees across diverse environments. Because there is a single repository of frameworks and packages, and each of them has a digital fingerprint that cannot be faked, two people on opposite ends of the world can compare their systems and know that they are running exactly the same versions of the system and apps. Atomic might allow you to roll back, but it’s virtually impossible to customise the system for your own preferences rather than Red Hat’s, and still know you are running the same secure bits as anybody else.

Developers of snappy apps get much more freedom to bundle the exact versions of libraries that they want to use with their apps. It’s much easier to make a snappy package than a traditional Ubuntu package – just bundle up everything you want in one place, and ship it. We use strong application isolation to keep data confidential between apps. If you install a bad app, it only has access to the data you create with that app, not to data from other applications. This is a key piece of security that comes from our efforts to bring Ubuntu to the mobile market, where malware is a real problem today. And as a result, we can enable developers to go much faster – they can publish their app on whatever schedule suits them, regardless of the Ubuntu release cadence. Want the very latest app? Snappy makes that easy.

This is also why I think snappy will result in much simpler systems management. Instead of having literally thousands of packages on your Ubuntu server, with tons of dependencies, a snappy system just has a single package for each actual app or framework that’s installed. I bet the average system on the cloud ends up with about three packages installed, total! Try this sort of output:

$ snappy info
release: ubuntu-core/devel
frameworks: docker, panamax
apps: owncloud

That’s much easier to manage and reason about at scale.
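To make the "easier to reason about at scale" point concrete, here is a small, purely illustrative Python sketch that parses output in the format quoted above and compares two machines. The parsing helper and the sample values are my own assumptions for illustration, not a snappy tool.

# Toy illustration: with one entry per installed framework or app, comparing
# two machines reduces to comparing short listings, with no dependency graphs
# to diff.
SAMPLE = """\
release: ubuntu-core/devel
frameworks: docker, panamax
apps: owncloud
"""

def parse_info(text: str) -> dict[str, list[str]]:
    """Parse 'key: a, b' lines (as in the sample output above) into a dict."""
    info = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        info[key.strip()] = [item.strip() for item in value.split(",")]
    return info

machine_a = parse_info(SAMPLE)
machine_b = parse_info(SAMPLE)
assert machine_a == machine_b  # identical listings, so identical systems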
We recently saw how complicated things can get in the old packaging system, when Owncloud upstream wanted to remove the original packages of Owncloud from an old Ubuntu release. With snappy Ubuntu, Owncloud can publish exactly what they want you to use as a snappy package, and can update that for you directly, in a safe transactional manner with full support for rolling back. I think upstream developers are going to love being in complete control of their app on snappy Ubuntu Core.

$ sudo snappy install hello-world

Welcome to a snappy new world! Things here are really nice and simple:

$ snappy info
$ snappy build .
$ snappy install foo
$ snappy update foo
$ snappy rollback foo
$ snappy remove foo
$ snappy update-versions
$ snappy versions

Just for fun, download the image and have a play. I’m delighted that Ubuntu Core is today’s Qemu Advent Calendar image too! Or launch it on Azure, coming soon to all the clouds.

It’s important for Ubuntu to continue to find new ways to bring free software to a wider audience. The way people think about software is changing, and I think Ubuntu Core becomes a very useful tool for people doing stuff at huge scale in the cloud. If you want crisp, purposeful, tightly locked down systems that are secure by design, Ubuntu Core and snappy packages are the right tool for the job. Running Docker farms? Running transcode farms? I think you’ll like this very much!

We have the world’s biggest free software community because we find ways to recognise all kinds of contributions and to support people helping one another to bring their ideas to fruition. One of the goals of snappy was to reduce the overhead and bureaucracy of packaging software, to make it incredibly easy for anybody to publish code they care about to other Ubuntu users. We have built a great community of developers using this toolchain for the phone, and I think it’s going to be even better on the cloud where Ubuntu is already so popular. There is a lot to do in making the most of existing debs in the snappy environment, and I’m excited that there is a load of amazing software on GitHub that can now flow more easily to Ubuntu users on any cloud.

Welcome to the family, Ubuntu Core!
Posted over 9 years ago
Subhu writes that OpenStack’s blossoming project list comes at a cost to quality. I’d like to follow up with an even leaner approach based on an outline drafted during the OpenStack Core discussions after ODS Hong Kong, a year ago. The key ideas in that draft are:

Only call services “core” if the user can detect them. How the cloud is deployed or operated makes no difference to a user. We want app developers to

Define both “core” and “common” services, but require only “core” services for a cloud that calls itself OpenStack compatible. Separation of core and common lets us recognise common practice today, while also acknowledging that many ideas we’ve had in the past year or three are just 1.0 iterations; we don’t know which of them will stick any more than one could predict which services on any major public cloud will thrive and which will vanish over time. Signalling that something is “core” means it is something we commit to keeping around a long time. Signalling something is “common” means it’s widespread practice for it to be available in an OpenStack environment, but not a requirement.

Require that “common” services can be self-deployed. Just as you can install a library or a binary in your home directory, you can run services for yourself in a cloud. Services do not have to be provided by the cloud infrastructure provider; they can usually be run by a user themselves, under their own account, as a series of VMs providing network services. Making it a requirement that users can self-provide a service before designating it common means that users can build on it; if a particular cloud doesn’t offer it, their users can self-provide it. All this means is that the common service itself builds on core services, though it might also depend on other common services which could be self-deployed in advance of it.

Require that “common” services have a public integration test suite that can be run by any user of a cloud to evaluate conformance of a particular implementation of the service. For example, a user might point the test suite at HP Cloud to verify that the common service there actually conforms to the service test standard. Alternatively, the user who self-provides a common service in a cloud which does not provide it can verify that their self-deployed common service is functioning correctly. This also serves to expand the test suite for the core: we can self-deploy common services and run their test suites to exercise the core more thoroughly than Tempest could.

Keep the whole set as small as possible. We know that small is beautiful; small is cleaner, leaner, more comprehensible, more secure, easier to test, likely to be more efficiently implemented, easier to attract developer participation. In general, if something can be cut from the core specification it should. “Common” should reflect common practice and can be arbitrarily large, and also arbitrarily changed.
In the light of those ideas, I would designate the following items from Subhu’s list as core OpenStack services:

Keystone (without identity, nothing)
Nova (the basis for any other service is the ability to run processes somewhere)
Glance (hard to use Nova without it)
Neutron (where those services run)
Designate (DNS is a core aspect of the network)
Cinder (where they persist data)

I would consider these to be common OpenStack services:

SWIFT (widely deployed, can be self-provisioned with Cinder block backends)
Ceph RADOS-GW object storage (widely deployed as an implementation choice, common because it could be self-provided on Cinder block)
Horizon (widely deployed, but we want to encourage innovation in the dashboard)

And these I would consider neither core nor common, though some of them are clearly on track there:

Barbican (not widely implemented)
Ceilometer (internal implementation detail, can’t be common because it requires access to other parts)
Juju (not widely implemented)
Kite (not widely implemented)
HEAT (on track to become common if it can be self-deployed, besides, I eat controversy for breakfast)
MAAS (who cares how the cloud was built?)
Manila (not widely implemented, possibly core once solid, otherwise common once, err, common)
Sahara (not widely implemented, weird that we would want to hardcode one way of doing this in the project)
Triple-O (user doesn’t care how the cloud was deployed)
Trove (not widely implemented, might make it to “common” if widely deployed)
Tuskar (see Ironic)
Zaqar (not widely implemented)

In the current DefCore discussions, the “layer” idea has been introduced. My concern is simple: how many layers make sense? End users don’t want to have to figure out what lots of layers mean. If we had “OpenStack HPC” and “OpenStack Scientific” and “OpenStack Genomics” layers, that would just be confusing. Let’s keep it simple – use “common” as a layer, but be explicit that it will change to reflect common practice (of course, anything in common is self-reinforcing in that new players will defer to norms and implement common services, thereby entrenching common unless new ideas make services obsolete).
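To illustrate the "integration test suite runnable by any user" requirement above, here is a minimal, hypothetical conformance check in Python. The endpoint and expected response are invented for the sketch; a real suite would be the service's own published integration tests run against the cloud in question.

import json
import sys
import urllib.request

# Hypothetical health endpoint of a self-deployed or provider-run "common"
# service; a real conformance suite would exercise the service's public API.
ENDPOINT = "https://object-store.example.com/healthcheck"

def check_service(endpoint: str) -> bool:
    # The check only looks at observable behaviour; it makes no assumptions
    # about how the service was deployed or who operates it.
    try:
        with urllib.request.urlopen(endpoint, timeout=10) as resp:
            payload = json.load(resp)
    except OSError as err:
        print(f"unreachable: {err}")
        return False
    return payload.get("status") == "ok"

if __name__ == "__main__":
    sys.exit(0 if check_service(ENDPOINT) else 1)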
Posted over 9 years ago
Release week! Already! I wouldn’t call Trusty ‘vintage’ just yet, but Utopic is poised to leap into the torrent stream. We’ve all managed to land our final touches to *buntu and are excited to bring the next wave of newness to users around the world. Glad to see the unicorn theme went down well, judging from the various desktops I see on G+. And so it’s time to open the vatic floodgates and invite your thoughts and contributions to our soon-to-be-opened iteration next.

Our ventrous quest to put GNU as you love it on phones is bearing fruit, with final touches to the first image in a new era of convergence in computing. From tiny devices to personal computers of all shapes and sizes to the ventose vistas of cloud computing, our goal is to make a platform that is useful, versal and widely used. Who would have thought – a phone!

Each year in Ubuntu brings something new. It is a privilege to celebrate our tenth anniversary milestone with such vernal efforts. New ecosystems are born all the time, and it’s vital that we refresh and renew our thinking and our product in vibrant ways. That we have the chance to do so is testament to the role Linux at large is playing in modern computing, and the breadth of vision in our virtual team.

To our fledgling phone developer community, for all your votive contributions and vocal participation, thank you! Let’s not be vaunty: we have a lot to do yet, but my oh my what we’ve made together feels fantastic. You are the vigorous vanguard, the verecund visionaries and our venerable mates in this adventure. Thank you again.

This verbose tract is a venial vanity, a chance to vector verbal vibes, a map of verdant hills to be climbed in months ahead. Amongst those peaks I expect we’ll find new ways to bring secure, free and fabulous opportunities for both developers and users. This is a time when every electronic thing can be an Internet thing, and that’s a chance for us to bring our platform, with its security and its long term support, to a vast and important field. In a world where almost any device can be smart, and also subverted, our shared efforts to make trusted and trustworthy systems might find fertile ground. So our goal this next cycle is to show the way past a simple Internet of things, to a world of Internet things-you-can-trust.

In my favourite places, the smartest thing around is a particular kind of monkey. Vexatious at times, volant and vogie at others, a vervet gets in anywhere and delights in teasing cats and dogs alike. As the upstart monkey in this business I can think of no better mascot. And so let’s launch our vicenary cycle, our verist varlet, the Vivid Vervet!
Posted over 9 years ago
The South African Supreme Court of Appeal today found in my favour in a case about exchange controls. I will put the returned funds of R250m plus interest into a trust, to underwrite constitutional court cases on behalf of those whose circumstances deny them the ability to be heard where the counterparty is the State. Here is a statement in full:

Exchange controls may appear to be targeted at a very small number of South Africans but their consequences are significant for all of us: especially those who are building relationships across Southern Africa such as migrant workers and small businesses seeking to participate in the growth of our continent. It is more expensive to work across South African borders than almost anywhere else on Earth, purely because the framework of exchange controls creates a cartel of banks authorized to act as the agents of the Reserve Bank in currency matters. We all pay a very high price for that cartel, and derive no real benefit in currency stability or security for that cost. Banks profit from exchange controls, but our economy is stifled, and the most vulnerable suffer most of all. Everything you buy is more expensive, South Africans are less globally competitive, and cross-border labourers, already vulnerable, pay the highest price of all – a shame we should work to address. The IMF found that “A study in South Africa found that the comparative cost of an international transfer of 250 rand was the lowest when it went through a friend or a taxi driver and the highest when it went through a bank.” The World Bank found that “remittance fees punish poor Africans”. South Africa scores worst of all, and according to the Payments Association of South Africa and the Reserve Bank, this is “..mostly related to the regulations that South African financial institutions needed to comply with, such as the Financial Intelligence Centre Act (Fica) and exchange-control regulations.”

Today’s ruling by the Supreme Court of Appeal found administrative and procedural fault with the Reserve Bank’s actions in regard to me, and returned the fees levied, for which I am grateful. This case, however, was not filed solely in pursuit of relief for me personally. We are now considering the continuation of the case in the Constitutional Court, to challenge exchange control on constitutional grounds and ensure that the benefits of today’s ruling accrue to all South Africans.

This is a time in our history when it will be increasingly important to defend constitutional rights. Historically, these are largely questions related to the balance of power between the state and the individual. For all the eloquence of our Constitution, it will be of little benefit to us all if it cannot be made binding on our government. It is expensive to litigate at the constitutional level, which means that such cases are imbalanced – the State has the resources to make its argument, but the individual often does not. For that reason, I will commit the funds returned to me today by the SCA to a trust run by veteran and retired constitutional scholars, judges and lawyers, that will selectively fund cases on behalf of those unable to do so themselves, where the counterparty is the state. The mandate of this trust will extend beyond South African borders, to address constitutional rights for African citizens at large, on the grounds that our future in South Africa is in every way part of that great continent.

This case is largely thanks to the team of constitutional lawyers who framed their arguments long before meeting me; I have been happy to play the role of model plaintiff and to underwrite the work, but it is their determination to correct this glaring flaw in South African government policy which inspired me to support them. For that reason I will ask them to lead the establishment of this new trust and would like to thank them for their commitment to the principles on which our democracy is founded.

This case also has a very strong personal element for me, because it is exchange controls which make it impossible for me to pursue the work I am most interested in from within South Africa and which thus forced me to emigrate years ago. I pursue this case in the hope that the next generation of South Africans who want to build small but global operations will be able to do so without leaving the country. In our modern, connected world, and our modern connected country, that is the right outcome for all South Africans.

Mark