Posted 4 months ago
What if your cloud instances could be updated with the same certainty and precision as your mobile phone – with carrier-grade assurance that an update applies perfectly or is not applied at all? What if your apps could be isolated from one another completely, so there’s no possibility that installing one app could break another, and stronger assurance that a compromise of one app won’t compromise the data from another? When we set out to build the Ubuntu Phone we took on the challenge of raising the bar for reliability and security in the mobile market. And today that same technology is coming to the cloud, in the form of a new “snappy” image called Ubuntu Core, which is in beta today on Azure and as a KVM image you can run on any Linux machine.
This is in a sense the biggest break with tradition in 10 years of Ubuntu, because Ubuntu Core doesn’t use debs or apt-get. We call it “snappy” because that’s the new bullet-proof mechanism for app delivery and system updates; it’s completely different to the traditional package-based Ubuntu server and desktop. The snappy system keeps each part of Ubuntu in a separate, read-only file, and does the same for each application. That way, developers can deliver everything they need to be confident their app will work exactly as they intend, and we can take steps to keep the various apps isolated from one another, and ensure that updates are always perfect. Of course, that means that apt-get won’t work, but that’s OK since developers can reuse debs to make their snappy apps, and the core system is exactly the same as any other Ubuntu system – server or desktop.
Whenever we make a fix to packages in Ubuntu, we’ll publish the same fix to Ubuntu Core, and systems can get that fix transactionally. In fact, updates to Ubuntu Core are even smaller than package updates because we only need to send the precise difference between the old and new versions, not the whole package. Of course, Ubuntu Core is in addition to all the current members of the Ubuntu family – desktop, server, and cloud images that use apt-get and debs, and all the many *buntu remixes which bring their particular shine to our community. You still get all the Ubuntu you like, and there’s a new snappy Core image on all the clouds for the sort of deployment where precision, specialism and security are the top priority.
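To see why deltas are so much smaller than whole packages, here is a toy block-level diff in Python. This is purely an illustration of the concept – the function names and the naive block scheme are my own invention, not the actual snappy delta mechanism, which is considerably smarter:

```python
# Toy sketch of delta updates: ship only the blocks that changed,
# then reconstruct the new image from the old one plus the delta.
# Illustrative only -- not the real snappy delta format.

BLOCK = 4096

def make_delta(old: bytes, new: bytes) -> dict:
    """Record only the blocks that differ between two images."""
    changed = {}
    for offset in range(0, len(new), BLOCK):
        block = new[offset:offset + BLOCK]
        if old[offset:offset + BLOCK] != block:
            changed[offset] = block
    return {"size": len(new), "blocks": changed}

def apply_delta(old: bytes, delta: dict) -> bytes:
    """Rebuild the new image from the old image plus the delta."""
    out = bytearray(old[:delta["size"]].ljust(delta["size"], b"\0"))
    for offset, block in delta["blocks"].items():
        out[offset:offset + len(block)] = block
    return bytes(out)

old = b"A" * 20000
new = old[:8192] + b"B" * 4096 + old[12288:]   # one 4 KiB region changed
delta = make_delta(old, new)
assert apply_delta(old, delta) == new
# Only the changed blocks need to travel over the wire:
transferred = sum(len(b) for b in delta["blocks"].values())
assert transferred < len(new)
```

The point of the sketch is just the arithmetic: a small change to a large image produces a proportionally small delta, while a package update would ship the whole thing again.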
This is the biggest new thing in Ubuntu since we committed to deliver a mobile phone platform, and it’s very delicious that it’s born of exactly the same amazing technology that we’ve been perfecting for these last three years. I love it when two completely different efforts find underlying commonalities, and it’s wonderful to me that the work we’ve done for the phone, where carriers and consumers are the audience, might turn out to be so useful in the cloud, which is all about back-end infrastructure.
Why is this so interesting?
Transactional updates have lots of useful properties: if they are done well, you can know EXACTLY what’s running on a particular system, and you can coordinate updates with very high precision across thousands of instances in the cloud. You can run systems as canaries, getting updates ahead of other identical systems to see if they cause unexpected problems. You can roll updates back, because each version is a complete, independent image. That’s very nice indeed.
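Those canary and rollback semantics can be sketched in a few lines of Python. This is a toy model for illustration only – the names are invented, and the real machinery operates on complete system images rather than version strings:

```python
# Toy model of canary-first, rollback-capable fleet updates.
# Illustrative sketch, not Ubuntu Core's actual update machinery.

class Instance:
    def __init__(self, version, canary=False):
        self.versions = [version]  # each entry is a complete, independent image
        self.canary = canary

    @property
    def current(self):
        return self.versions[-1]

    def update(self, version):
        self.versions.append(version)  # old image is kept, so rollback is cheap

    def rollback(self):
        if len(self.versions) > 1:
            self.versions.pop()

def staged_update(fleet, version, healthy):
    """Update canaries first; touch the rest only if the canaries stay healthy."""
    canaries = [i for i in fleet if i.canary]
    for i in canaries:
        i.update(version)
    if all(healthy(i) for i in canaries):
        for i in fleet:
            if not i.canary:
                i.update(version)
        return True
    for i in canaries:  # a canary misbehaved: roll it back to the known-good image
        i.rollback()
    return False

fleet = [Instance("1.0", canary=True), Instance("1.0"), Instance("1.0")]
ok = staged_update(fleet, "1.1", healthy=lambda i: True)
assert ok and all(i.current == "1.1" for i in fleet)
```

Because every version is a complete image, rollback is just a pointer swap back to the previous entry, which is what makes coordinating thousands of instances tractable.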
There have been interesting developments in the transactional systems field over the past few years. Chrome OS is updated transactionally: when you turn it on, it makes sure it’s running the latest version of the OS. CoreOS brought aspects of Chrome OS and Gentoo to the cloud, Red Hat has a beta of Atomic as a transactional version of RHEL, and of course Docker is a way of delivering apps transactionally too (it combines app and system files very neatly). Ubuntu Core raises the bar for certainty, extensibility and security in the transactional systems game. What I love about Ubuntu Core is the way it embraces transactional updates not just for the base system but for the applications on top of it as well. The system is just one layer that can be updated transactionally, and so is each of the apps on the system. You get an extensible platform that retains the lovely properties of transactionality but lets you choose exactly the capabilities you want for yourself, rather than having someone else force you to use a particular tool.
For example, in CoreOS, things like Fleet are built-in; you can’t opt out. In Ubuntu Core, we aim for a much smaller Core, and then enable you to install Docker or any other container system as a framework, with snappy. We’re working with all the different container vendors, and app systems, and container coordination systems, to help them make snappy versions of their tools. That way, you get the transactional semantics you want with the freedom to use whichever tools suit you. And the whole thing is smaller and more secure because we baked fewer assumptions into the core.
The snappy system is also designed to provide security guarantees across diverse environments. Because there is a single repository of frameworks and packages, and each of them has a digital fingerprint that cannot be faked, two people on opposite ends of the world can compare their systems and know that they are running exactly the same versions of the system and apps. Atomic might allow you to roll back, but it’s virtually impossible to customise the system for your own preferences rather than Red Hat’s, and still know you are running the same secure bits as anybody else.
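The comparison trick is straightforward to sketch: if each part is identified by a content hash, identical bits yield identical fingerprints wherever in the world they sit. The snippet below uses SHA-256 as an illustrative stand-in for the store’s actual fingerprinting and signing scheme:

```python
# Sketch of fingerprint comparison: two parties holding the same bits
# compute the same digest, and any tampering changes it.
# SHA-256 here is an illustrative choice, not the store's defined scheme.
import hashlib

def fingerprint(image: bytes) -> str:
    """Content hash: the digest depends only on the bits themselves."""
    return hashlib.sha256(image).hexdigest()

image_in_london = b"ubuntu-core part contents"
image_in_sydney = b"ubuntu-core part contents"
assert fingerprint(image_in_london) == fingerprint(image_in_sydney)

tampered = image_in_sydney + b"!"
assert fingerprint(tampered) != fingerprint(image_in_london)
```

That is the whole basis of the guarantee: you don’t need to trust the other party’s word, only the digest of what they are actually running.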
Developers of snappy apps get much more freedom to bundle the exact versions of libraries that they want to use with their apps. It’s much easier to make a snappy package than a traditional Ubuntu package – just bundle up everything you want in one place, and ship it. We use strong application isolation to keep data confidential between apps. If you install a bad app, it only has access to the data you create with that app, not to data from other applications. This is a key piece of security that comes from our efforts to bring Ubuntu to the mobile market, where malware is a real problem today. And as a result, we can enable developers to go much faster – they can publish their app on whatever schedule suits them, regardless of the Ubuntu release cadence. Want the very latest app? Snappy makes that easy.
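Roughly, a snappy package is just your bundled files plus a small metadata file. The sketch below is illustrative only – the file name, field names and layout are my assumptions for the purposes of the example, so check the snappy packaging documentation for the real schema:

```yaml
# meta/package.yaml -- hypothetical sketch of snappy package metadata.
# Field names here are illustrative assumptions, not the definitive schema.
name: hello-world
version: 1.0.3
vendor: Jane Developer <jane@example.com>
services:
  - name: web
    start: bin/hello-server   # bundled binary, shipped with its own libraries
binaries:
  - name: hello               # command exposed to the user
```

The key design point is that everything the app needs travels inside the package, so the developer controls the exact library versions and the system never has to resolve a dependency graph at install time.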
This is also why I think snappy will result in much simpler systems management. Instead of having literally thousands of packages on your Ubuntu server, with tons of dependencies, a snappy system just has a single package for each actual app or framework that’s installed. I bet the average system on the cloud ends up with about three packages installed, total! Try this sort of output:
$ snappy info
frameworks: docker, panamax
That’s much easier to manage and reason about at scale. We recently saw how complicated things can get in the old packaging system, when ownCloud upstream wanted to remove the original packages of ownCloud from an old Ubuntu release. With snappy Ubuntu, ownCloud can publish exactly what they want you to use as a snappy package, and can update that for you directly, in a safe transactional manner with full support for rolling back. I think upstream developers are going to love being in complete control of their app on snappy Ubuntu Core.
$ sudo snappy install hello-world
Welcome to a snappy new world!
Things here are really nice and simple:
$ snappy info              # show what’s installed on this system
$ snappy build .           # build a snappy package from the current directory
$ snappy install foo       # install the foo app
$ snappy update foo        # transactionally update foo to the latest version
$ snappy rollback foo      # revert foo to the previous version
$ snappy remove foo        # remove foo
$ snappy update-versions   # update the system and all installed apps
$ snappy versions          # list installed parts and their versions
Just for fun, download the image and have a play. I’m delighted that Ubuntu Core is today’s QEMU Advent Calendar image too! Or launch it on Azure, coming soon to all the clouds.
It’s important for Ubuntu to continue to find new ways to bring free software to a wider audience. The way people think about software is changing, and I think Ubuntu Core becomes a very useful tool for people doing stuff at huge scale in the cloud. If you want crisp, purposeful, tightly locked down systems that are secure by design, Ubuntu Core and snappy packages are the right tool for the job. Running Docker farms? Running transcode farms? I think you’ll like this very much!
We have the world’s biggest free software community because we find ways to recognise all kinds of contributions and to support people helping one another to bring their ideas to fruition. One of the goals of snappy was to reduce the overhead and bureaucracy of packaging software to make it incredibly easy for anybody to publish code they care about to other Ubuntu users. We have built a great community of developers using this toolchain for the phone, and I think it’s going to be even better on the cloud, where Ubuntu is already so popular. There is a lot to do in making the most of existing debs in the snappy environment, and I’m excited that there is a load of amazing software on GitHub that can now flow more easily to Ubuntu users on any cloud.
Welcome to the family, Ubuntu Core!
Posted 5 months ago
Subhu writes that OpenStack’s blossoming project list comes at a cost to quality. I’d like to follow up with an even leaner approach based on an outline drafted during the OpenStack Core discussions after ODS Hong Kong, a year ago.
The key ideas in that draft are:
Only call services “core” if the user can detect them.
How the cloud is deployed or operated makes no difference to a user. We want app developers to be able to rely on the services they can actually see and use from inside the cloud, not on how it was built or run.
Define both “core” and “common” services, but require only “core” services for a cloud that calls itself OpenStack compatible.
Separation of core and common lets us recognise common practice today, while also acknowledging that many ideas we’ve had in the past year or three are just 1.0 iterations; we don’t know which of them will stick, any more than one could predict which services on any major public cloud will thrive and which will vanish over time. Signalling that something is “core” means it is something we commit to keeping around a long time. Signalling something is “common” means it’s widespread practice for it to be available in an OpenStack environment, but not a requirement.
Require that “common” services can be self-deployed.
Just as you can install a library or a binary in your home directory, you can run services for yourself in a cloud. Services do not have to be provided by the cloud infrastructure provider, they can usually be run by a user themselves, under their own account, as a series of VMs providing network services. Making it a requirement that users can self-provide a service before designating it common means that users can build on it; if a particular cloud doesn’t offer it, their users can self-provide it. All this means is that the common service itself builds on core services, though it might also depend on other common services which could be self-deployed in advance of it.
Require that “common” services have a public integration test suite that can be run by any user of a cloud to evaluate conformance of a particular implementation of the service.
For example, a user might point the test suite at HP Cloud to verify that the common service there actually conforms to the service test standard. Alternatively, the user who self-provides a common service in a cloud which does not provide it can verify that their self-deployed common service is functioning correctly. This also serves to expand the test suite for the core: we can self-deploy common services and run their test suites to exercise the core more thoroughly than Tempest could.
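A user-runnable conformance suite of the kind described might look roughly like this sketch in Python. Everything here is hypothetical – the operations and the `InMemoryStore` stand-in are invented for illustration; a real suite would exercise the published API contract of the common service over the network, against whichever implementation the user points it at:

```python
# Sketch of a black-box conformance check for a "common" service.
# The same checks run against a provider-hosted implementation or a
# self-deployed one: the suite only sees the service's public behaviour.
# The operations below are hypothetical, for illustration.

def check_object_store(client):
    """Run implementation-agnostic checks and report pass/fail per test."""
    results = {}
    client.put("conformance-test", b"hello")
    results["roundtrip"] = client.get("conformance-test") == b"hello"
    client.delete("conformance-test")
    results["delete"] = client.get("conformance-test") is None
    return results

class InMemoryStore:
    """Stand-in client so the suite itself can be exercised without a cloud."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)
    def delete(self, key):
        self._data.pop(key, None)

report = check_object_store(InMemoryStore())
assert all(report.values())
```

Because the suite depends only on observable behaviour, the same run that certifies a provider’s service also certifies a user’s self-deployed copy – and, as noted above, exercises the core services underneath it.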
Keep the whole set as small as possible.
We know that small is beautiful; small is cleaner, leaner, more comprehensible, more secure, easier to test, likely to be more efficiently implemented, and more likely to attract developer participation. In general, if something can be cut from the core specification it should be. “Common” should reflect common practice and can be arbitrarily large, and also arbitrarily changed.
In the light of those ideas, I would designate the following items from Subhu’s list as core OpenStack services:
Keystone (without identity, nothing)
Nova (the basis for any other service is the ability to run processes somewhere)
Glance (hard to use Nova without it)
Neutron (where those services run)
Designate (DNS is a core aspect of the network)
Cinder (where they persist data)
I would consider these to be common OpenStack services:
Swift (widely deployed, can be self-provisioned with Cinder block backends)
Ceph RADOS-GW object storage (widely deployed as an implementation choice, common because it could be self-provided on Cinder block)
Horizon (widely deployed, but we want to encourage innovation in the dashboard)
And these I would consider neither core nor common, though some of them are clearly on track there:
Barbican (not widely implemented)
Ceilometer (internal implementation detail, can’t be common because it requires access to other parts)
Juju (not widely implemented)
Kite (not widely implemented)
Heat (on track to become common if it can be self-deployed; besides, I eat controversy for breakfast)
MAAS (who cares how the cloud was built?)
Manila (not widely implemented, possibly core once solid, otherwise common once, err, common)
Sahara (not widely implemented, weird that we would want to hardcode one way of doing this in the project)
Triple-O (user doesn’t care how the cloud was deployed)
Trove (not widely implemented, might make it to “common” if widely deployed)
Tuskar (see Ironic)
Zaqar (not widely implemented)
In the current DefCore discussions, the “layer” idea has been introduced. My concern is simple: how many layers make sense? End users don’t want to have to figure out what lots of layers mean. If we had “OpenStack HPC” and “OpenStack Scientific” and “OpenStack Genomics” layers, that would just be confusing. Let’s keep it simple – use “common” as a layer, but be explicit that it will change to reflect common practice (of course, anything in common is self-reinforcing in that new players will defer to norms and implement common services, thereby entrenching common unless new ideas make services obsolete).
Posted 5 months ago
Release week! Already! I wouldn’t call Trusty ‘vintage’ just yet, but Utopic is poised to leap into the torrent stream. We’ve all managed to land our final touches to *buntu and are excited to bring the next wave of newness to users around the world. Glad to see the unicorn theme went down well, judging from the various desktops I see on G+.
And so it’s time to open the vatic floodgates and invite your thoughts and contributions to our soon-to-be-opened iteration next. Our ventrous quest to put GNU as you love it on phones is bearing fruit, with final touches to the first image in a new era of convergence in computing. From tiny devices to personal computers of all shapes and sizes to the ventose vistas of cloud computing, our goal is to make a platform that is useful, versal and widely used.
Who would have thought – a phone! Each year in Ubuntu brings something new. It is a privilege to celebrate our tenth anniversary milestone with such vernal efforts. New ecosystems are born all the time, and it’s vital that we refresh and renew our thinking and our product in vibrant ways. That we have the chance to do so is testament to the role Linux at large is playing in modern computing, and the breadth of vision in our virtual team.
To our fledgling phone developer community, for all your votive contributions and vocal participation, thank you! Let’s not be vaunty: we have a lot to do yet, but my oh my what we’ve made together feels fantastic. You are the vigorous vanguard, the verecund visionaries and our venerable mates in this adventure. Thank you again.
This verbose tract is a venial vanity, a chance to vector verbal vibes, a map of verdant hills to be climbed in months ahead. Amongst those peaks I expect we’ll find new ways to bring secure, free and fabulous opportunities for both developers and users. This is a time when every electronic thing can be an Internet thing, and that’s a chance for us to bring our platform, with its security and its long term support, to a vast and important field. In a world where almost any device can be smart, and also subverted, our shared efforts to make trusted and trustworthy systems might find fertile ground. So our goal this next cycle is to show the way past a simple Internet of things, to a world of Internet things-you-can-trust.
In my favourite places, the smartest thing around is a particular kind of monkey. Vexatious at times, volant and vogie at others, a vervet gets in anywhere and delights in teasing cats and dogs alike. As the upstart monkey in this business I can think of no better mascot. And so let’s launch our vicenary cycle, our verist varlet, the Vivid Vervet!
Mark Shuttleworth: Exchange controls in SA provide no economic guarantees of stability, but drive up the cost of cross-border relationships for everyone
Posted 6 months ago
The South African Supreme Court of Appeal today found in my favour in a case about exchange controls. I will put the returned funds of R250m plus interest into a trust, to underwrite constitutional court cases on behalf of those whose circumstances deny them the ability to be heard where the counterparty is the State. Here is a statement in full:
Exchange controls may appear to be targeted at a very small number of South Africans but their consequences are significant for all of us: especially those who are building relationships across Southern Africa such as migrant workers and small businesses seeking to participate in the growth of our continent. It is more expensive to work across South African borders than almost anywhere else on Earth, purely because the framework of exchange controls creates a cartel of banks authorized to act as the agents of the Reserve Bank in currency matters.
We all pay a very high price for that cartel, and derive no real benefit in currency stability or security for that cost.
Banks profit from exchange controls, but our economy is stifled, and the most vulnerable suffer most of all. Everything you buy is more expensive, South Africans are less globally competitive, and cross-border labourers, already vulnerable, pay the highest price of all – a shame we should work to address. The IMF found that “A study in South Africa found that the comparative cost of an international transfer of 250 rand was the lowest when it went through a friend or a taxi driver and the highest when it went through a bank.” The World Bank found that “remittance fees punish poor Africans”. South Africa scores worst of all, and according to the Payments Association of South Africa and the Reserve Bank, this is “…mostly related to the regulations that South African financial institutions needed to comply with, such as the Financial Intelligence Centre Act (Fica) and exchange-control regulations.”
Today’s ruling by the Supreme Court of Appeal found administrative and procedural fault with the Reserve Bank’s actions in regards to me, and returned the fees levied, for which I am grateful. This case, however, was not filed solely in pursuit of relief for me personally. We are now considering the continuation of the case in the Constitutional Court, to challenge exchange control on constitutional grounds and ensure that the benefits of today’s ruling accrue to all South Africans.
This is a time in our history when it will be increasingly important to defend constitutional rights. Historically, these are largely questions related to the balance of power between the state and the individual. For all the eloquence of our Constitution, it will be of little benefit to us all if it cannot be made binding on our government. It is expensive to litigate at the constitutional level, which means that such cases are imbalanced – the State has the resources to make its argument, but the individual often does not.
For that reason, I will commit the funds returned to me to today by the SCA to a trust run by veteran and retired constitutional scholars, judges and lawyers, that will selectively fund cases on behalf of those unable to do so themselves, where the counterparty is the state. The mandate of this trust will extend beyond South African borders, to address constitutional rights for African citizens at large, on the grounds that our future in South Africa is in every way part of that great continent.
This case is largely thanks to the team of constitutional lawyers who framed their arguments long before meeting me; I have been happy to play the role of model plaintiff and to underwrite the work, but it is their determination to correct this glaring flaw in South African government policy which inspired me to support them.
For that reason I will ask them to lead the establishment of this new trust and would like to thank them for their commitment to the principles on which our democracy is founded.
This case also has a very strong personal element for me, because it is exchange controls which make it impossible for me to pursue the work I am most interested in from within South Africa and which thus forced me to emigrate years ago. I pursue this case in the hope that the next generation of South Africans who want to build small but global operations will be able to do so without leaving the country. In our modern, connected world, and our modern connected country, that is the right outcome for all South Africans.
Posted 6 months ago
“The Internet sees censorship as damage and routes around it” was a very motivating tagline during my early forays into the internet. Having grown up in Apartheid-era South Africa, where government control suppressed the free flow of ideas and information, I was inspired by the idea of connecting with people all over the world to explore the cutting edge of science and technology. Today, people connect with peers and fellow explorers all over the world not just for science but also for arts, culture, friendship, relationships and more. The Internet is the glue that is turning us into a super-organism, for better or worse. And yes, there are dark sides to that easy exchange – internet comments alone will make you cry. But we should remember that the brain is smart even if individual brain cells are dumb, and negative, nasty elements on the Internet are just part of a healthy whole. There’s no Department of Morals I would trust to weed ‘em out or protect me or mine from them.
Today, the pendulum is swinging back to government control of speech, most notably on the net. First, it became clear that total surveillance is the norm even amongst Western democratic governments (the “total information act” reborn). Now we hear the UK government wants to be able to ban organisations without any evidence of involvement in illegal activities because they might “poison young minds”. Well, nonsense. Frustrated young minds will go off to Syria precisely BECAUSE they feel their avenues for discourse and debate are being shut down by an unfair and unrepresentative government – you couldn’t ask for a more compelling motivation for the next generation of home-grown anti-Western jihadists than to clamp down on discussion without recourse to due process. And yet, at the same time this is happening in the UK, protesters in Hong Kong are moving to peer-to-peer mechanisms to organise their protests precisely because of central control of the flow of information.
One of the reasons I picked the certificate and security business back in the 1990s was that I wanted to be part of letting people communicate privately and securely, for business and pleasure. I’m saddened now at the extent to which the promise of that security has been undermined by state pressure and bad actors in the business of trust.
So I think it’s time that those of us who invest time, effort and money in the underpinnings of technology focus attention on the defensibility of the core freedoms at the heart of the internet.
There are many efforts to fix this under way. The IETF is slowly becoming more conscious of the ways in which ideals can be undermined and the central role it can play in setting standards which are robust in the face of such inevitable pressure. But we can do more, and I’m writing now to invite applications for Fellowships at the Shuttleworth Foundation by leaders that are focused on these problems. TSF already has Fellows working on privacy in personal communications; we are interested in generalising that to the foundations of all communications. We already have a range of applications in this regard; I would welcome more. And I’d like to call attention to the Edgenet effort (distributing network capabilities, based on zero-mq) which is holding a sprint in Brussels October 30-31.
20 years ago, “Clipper” (a proposed mandatory US government back door, supported by the NSA) died on the vine thanks to a concerted effort by industry to show the risks inherent to such schemes. For two decades we’ve had the tide on the side of those who believe it’s more important for individuals and companies to be able to protect information than it is for security agencies to be able to monitor it. I’m glad that today, you are more likely to get into trouble if you don’t encrypt sensitive information in transit on your laptop than if you do. I believe that’s the right side to fight for and the right side for all of our security in the long term, too. But with mandatory back doors back on the table we can take nothing for granted – regulatory regimes can and do change, as often for the worse as for the better. If you care about these issues, please take action of one form or another.
Law enforcement is important. There are huge dividends to a society in which people can make long-term plans, which depends on their confidence in security and safety as much as their confidence in economic fairness and opportunity. But the agencies in whom we place this authority are human and tend over time, like any institution, to be more forceful in defending their own existence and privileges than they are in providing for the needs of others. There has never been an institution in history which has managed to avoid this cycle. For that reason, it’s important to ensure that law enforcement is done by due process; there are no short cuts which will not be abused sooner rather than later. Checks and balances are more important than knee-jerk responses to the last attack. Every society, even today’s modern Western society, is prone to abusive governance. We should fear our own darknesses more than we fear others.
A fair society is one where laws are clear and crimes are punished in a way that is deemed fair. It is not one where thinking about crime is criminal, or one where talking about things that are unpalatable is criminal, or one where protection from the arbitrary and the capricious is merely notional. Over the past 20 years life has become safer, not more risky, for people living in an Internet-connected West. That’s no thanks to the listeners; it’s thanks to living in a period when the youth (the source of most trouble in the world) feel they have access to opportunity and ideas on a world-wide basis. We are pretty much certain to have hard challenges ahead in that regard. So for all the scaremongering about Chinese cyber-espionage and Russian cyber-warfare and criminal activity in darknets, we are better off keeping the Internet as a free-flowing and confidential medium than we are entrusting an agency with the job of monitoring us for inappropriate and dangerous ideas. And that’s something we’ll have to work for.
Posted 6 months ago
Quick question – we have Cloud Foundry in private beta now, is there anyone in the Ubuntu community who would like to use a Cloud Foundry instance if we were to operate that for Ubuntu members?
Posted 6 months ago
Be careful of headlines, they appeal to our sense of the obvious and the familiar, they entrench rather than challenge established stereotypes and memes. What one doesn’t read about every day is usually more interesting than what’s in the headlines. And in the current round of global unease, what’s not being said – what we’ve failed to admit about our Western selves and our local allies – is central to the problems at hand.
Both Iraq and Ukraine, under Western tutelage, failed to create states which welcome diversity. Both Iraq and the Ukraine aggressively marginalised significant communities, with the full knowledge and in some cases support of their Western benefactors. And in both cases, those disenfranchised communities have rallied their cause into wars of aggression.
Reading the Western media one would think it’s clear who the aggressors are in both cases: Islamic State and Russia are “obvious bad actors” whose behaviour needs to be met with stern action. Russia clearly has no business arming rebels with guns they use irresponsibly to tragic effect, and the Islamic State are clearly “a barbaric, evil force”. If those gross simplifications, reinforced in the Western media, define our debate and discussion on the subject then we are destined to pursue some painful paths with little but frustration to show for the effort, and nasty thorns that fester indefinitely. If that sounds familiar it’s because yes, this is the same thing happening all over again. In a prior generation, only a decade ago, anger and frustration at 9/11 crowded out calm deliberation and a focus on the crimes in favour of shock and awe. Today, out of a lack of insight into the root cause of Ukrainian separatism and Islamic State’s attractiveness to a growing number across the Middle East and North Africa, we are about to compound our problems by slugging our way into a fight we should understand before we join.
This is in no way to say that the behaviour of Islamic State or Russia are acceptable in modern society. They are not. But we must take responsibility for our own behaviour first and foremost; time and history are the best judges of the behaviour of others.
In the case of the Ukraine, it’s important to know how miserable it has become for native Russian speakers born and raised in the Ukraine. People who have spent their entire lives as citizens of the Ukraine who happen to speak in Russian at home, at work, in church and at social events have found themselves discriminated against by official decree from Kiev. Friends of mine with family in Odessa tell me that there have been systematic attempts to undermine and disenfranchise Russian speaking in the Ukraine. “You may not speak in your home language in this school”. “This market can only be conducted in Ukrainian, not Russian”. It’s important to appreciate that being a Russian speaker in Ukraine doesn’t necessarily mean one is not perfectly happy to be a Ukrainian. It just means that the Ukraine is a diverse cultural nation and has been throughout our lifetimes. This is a classic story of discrimination. Friends of mine who grew up in parts of Greece tell a similar story about the Macedonian culture being suppressed – schools being forced to punish Macedonian language spoken on the playground.
What we need to recognise is that countries – nations – political structures – which adopt ethnic and cultural purity as a central idea, are dangerous breeding grounds for dissent, revolt and violence. It matters not if the government in question is an ally or a foe. Those lines get drawn and redrawn all the time (witness the dance currently under way to recruit Kurdish and Iranian assistance in dealing with IS, who would have thought!) based on marriages of convenience and hot button issues of the day. Turning a blind eye to thuggery and stupidity on the part of your allies is just as bad as making sure you’re hanging with the cool kids on the playground even if it happens that they are thugs and bullies – stupid and shameful short-sightedness.
In Iraq, the government installed and propped up with US money and materials (and the occasional slap on the back from Britain) took a pointedly sectarian approach to governance. People of particular religious communities were removed from positions of authority, disqualified from leadership, hunted and imprisoned and tortured. The US knew that leading figures in their Iraqi government were behaving in this way, but chose to continue supporting the government which protected these thugs because they were “our people”. That was a terrible mistake, because it is those very communities which have morphed into Islamic State.
The modern nation states we call Iraq and the Ukraine – both with borders drawn in our modern lifetimes – are intrinsically diverse, intrinsically complex, intrinsically multi-cultural parts of the world. We should know that a failure to create governments of that diversity, for that diversity, will result in murderous resentment. And yet, now that the lines for that resentment are drawn, we are quick to choose sides, precisely the wrong position to take.
What makes this so sad is that we know better and demand better for ourselves. The UK and the US are both countries who have diversity as a central tenet of their existence. Freedom of religion, freedom of expression, the right to a career and to leadership on the basis of competence rather than race or creed are major parts of our own identity. And yet we prop up states who take precisely the opposite approach, and wonder why they fail, again and again. We came to these values through blood and pain, we hold on to these values because we know first hand how miserable and how wasteful life becomes if we let human tribalism tear our communities apart. There are doors to universities in the UK on which have hung the bodies of religious dissidents, and we will never allow that to happen again at home, yet we prop up governments for whom that is the norm.
The Irish Troubles were a war nobody could win. They were resolved through dialogue. South African terrorism in the 80s was a war nobody could win. It was resolved through dialogue and the establishment of a state for everybody. Time and time again, “terrorism” and “barbarism” are words used to describe fractious movements by secure, distant seats of power, and in most of those cases, allowing that language to dominate our thinking leads to wars that nobody can win.
Russia made a very grave error in arming Russian-speaking Ukrainian separatists. But unless the West holds Kiev to account for its governance, unless it demands an open society free of discrimination, the misery there will continue. IS will gain nothing but contempt from its demonstrations of murder – there is no glory in violence on the defenceless and the innocent – but unless the West bends its might to the establishment of societies in Syria and Iraq in which these religious groups are welcome and free to pursue their ambitions, murder will be the only outlet for their frustration. Politicians think they have a new “clean” way to exert force – drones and airstrikes without “boots on the ground”. Believe me, that’s false. Remote control warfare will come home to fester on our streets.
Posted 6 months ago
So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.
Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.
And I very much like getting away from the TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.
However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.
That’s the only explanation I can come up with for needing a Ring 0 – because it’s still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).
One might say that we must pick winners for the very core just by its nature, but I don’t think that’s true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we’ll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.
Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.
To pull from Monty’s post:
“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)
What does Nova need to count on existing so that it can provide that?”
He then goes on to list a bunch of things, but most of them are not needed for that:
We need Nova (it’s the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.
Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.
So there’s a choice between a relatively large set of APIs that make the guaranteed-available APIs comprehensive, or a small set that will give users what they need just at the beginning but might not be broadly available, leaving us depending on some unspecified process for the deployers to agree on and consolidate around which ones they make available consistently.
In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.
How might we do this?
One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition able to replace something entirely if they implement the existing API [and win hearts and minds of deployers]. That would open the door to everything being flexible – and it’s happened before with Keystone.
Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out, as a single cohesive entity, the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered, etc. This team wouldn’t be the TC: they too would be subject to the TC’s rulings.
We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that, I’d then suggest:
IaaS product: selects components from the tent to make OpenStack/IaaS
PaaS product: selects components from the tent to make OpenStack/PaaS
CaaS product (containers)
SaaS product (storage)
NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.
So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.
Lastly, Sean has also pointed out that with large N we have N² communication issues – I think I’m proposing to drive the scope of any one project down to a minimum, which gives us a larger N, but shrinks the size within any project, so folk don’t burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.
Posted 7 months ago
Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.
On the up side, you have a one way pipeline – there’s no interactivity needed, which makes it very very easy to write a subunit backend that e.g. testr can use.
On the downside, there’s no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that’s sometimes quite expensive, particularly in test suites with tens of thousands of tests. Now, for use in the development edit-execute loop this is arguably OK, because one needs to load the new tests into memory anyway; but wouldn’t it be nice if tools like testr that run tests for you didn’t have to decide upfront exactly how they were going to run? If instead they could get things running straight away and then give progressively larger and larger units of work to be run, without forcing a new process (and thus new discovery, directory walking and importing)? Secondly, testr has an inconsistent interface – if a user is debugging through testr to child workers in a chain, the intermediate layers need to use something structured (e.g. subunit) and route stdin to the actual worker, while the final testr needs to unwrap everything – this is needlessly complex. Lastly, for some languages at least, it’s possible to dynamically pick up new code at runtime – so with a simple inotify loop we could avoid a new process (and, more importantly, complete enumeration) *entirely*, leading to very fast edit-test cycles.
So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.
Taking subunit.run as an example process to do this to:
There should be an option to change from one-shot to server mode
In server mode, it will listen for commands somewhere (let’s say stdin)
On startup it might eagerly load the available tests
One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so let’s stay with that for now)
Another would be run-tests, which would take a set of test ids, and then filter-and-run just those ids from the available tests, output, as it does today, going to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers) and test-at-a-time is pretty much worst case for that sort of environment.
Another would be std-in, a command providing a packet of stdin – used for interacting with debuggers
So that seems pretty approachable to me – we don’t even need an async loop in there, as long as we’re willing to patch select etc (for the stdin handling in some environments like Twisted). If we don’t want to monkey patch like that, we’ll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.
What about that nirvana above? If we assume inotify support, then list_tests (and run_tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.
Would such a test server make sense in other languages? What about e.g. testtools.run vs subunit.run – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…
Posted 7 months ago
Distributed databases are hard. Distributed databases where you don't have
full control over what shards run which version of your software are even
harder, because it becomes near impossible to deal with fallout when things go
wrong. For lack of a better term (is there one?), I'll refer to these databases
as Autonomous Shard Distributed Databases.
Distributed version control systems are an excellent example of such databases.
They store file revisions and commit metadata in shards ("repositories")
controlled by different people.
Because of the nature of these systems, it is hard to weed out corrupt data
if all shards ignorantly propagate broken data. There will be different people
on different platforms running the database software that manages the shards.
This makes it hard - if not impossible - to deploy software updates to all
shards of a database in a reasonable amount of time (though a Chrome-like
update mechanism might help here, if that was acceptable). This has
consequences for the way in which you have to deal with every change to the
database format and model.
(e.g. imagine introducing a modification to the Linux kernel Git repository that
required everybody to install a new version of Git).
Defensive programming and a good format design from the start are essential.
Git and its database format do really well in all of these regards. As I wrote in my
retrospective, Bazaar has made a number of mistakes in this area, and that
was a major source of user frustration.
I propose that every autonomous shard distributed database should aim for the
following design principles:
For the "base" format, keep it as simple as you possibly can. (KISS)
The simpler the format, the smaller the chance of mistakes in the design
that have to be corrected later. Similarly, it reduces the chances
of mistakes in any implementation(s).
In particular, there is no need for every piece of metadata to be
a part of the core database format.
(in the case of Git, I would argue that e.g. "author" might as well be a "meta-header")
Corruption should be detected early and not propagated. This means
there should be good tools to sanity check a database, and ideally
some of these checks should be run automatically during everyday operations
- e.g. when pushing changes to others or receiving them.
If corruption does occur, there should be a way for as much of the database
as possible to be recovered.
A couple of corrupted objects should not render the entire database unusable.
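Git achieves early detection through content addressing: an object's name is a hash of its contents, so corruption anywhere changes the name. A minimal sketch of that idea (hypothetical helper names, SHA-256 rather than Git's actual object format) might look like this:

```python
import hashlib


def object_id(payload: bytes) -> str:
    # Content-addressed naming: the identifier is the SHA-256 of the
    # payload, so any corruption of the payload changes the name.
    return hashlib.sha256(payload).hexdigest()


def receive_object(store: dict, claimed_id: str, payload: bytes) -> None:
    # Verify on receipt rather than trusting the sender: a corrupt object
    # is rejected at the shard boundary instead of being propagated onward.
    actual = object_id(payload)
    if actual != claimed_id:
        raise ValueError(
            "corrupt object: claimed %s, got %s" % (claimed_id, actual))
    store[claimed_id] = payload
```

Because the check runs during everyday push/fetch-style operations, a bad object never silently spreads to other shards, and only the corrupted objects (not the whole database) are affected.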
There should be tools for low-level access of the database, but also that
the format and structure should be documented well enough for power users to
understand it, examine files and extract data.
No "hard" format changes (where clients /have/ to upgrade to access the new
Not all users will instantly update to the latest and greatest version of the
software. The lifecycle of enterprise Linux distributions is long enough
that it might take three or four years for the majority of users to upgrade.
Keep performance data like indexes in separate files. This makes it possible for
older software to still read the data, albeit at a slower pace, and/or to
generate index files in the older format.
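The separate-index principle can be sketched as follows. The append-only JSON-lines log and JSON index file here are hypothetical stand-ins, not any real VCS format; the point is that the index is fully derivable from the base data, so older clients can ignore it and newer clients can regenerate it in whatever format they prefer.

```python
import json


def rebuild_index(log_path: str, index_path: str) -> None:
    # The index maps record key -> byte offset into the base log. Because
    # it lives in a separate file and is derived data, deleting or ignoring
    # it never loses information.
    index = {}
    with open(log_path, "rb") as log:
        while True:
            offset = log.tell()
            line = log.readline()
            if not line:
                break
            index[json.loads(line)["key"]] = offset
    with open(index_path, "w") as f:
        json.dump(index, f)


def lookup(log_path: str, index_path: str, key: str):
    # Fast path: seek straight to the record via the index. A client that
    # cannot read the index format can fall back to scanning the log.
    with open(index_path) as f:
        index = json.load(f)
    with open(log_path, "rb") as log:
        log.seek(index[key])
        return json.loads(log.readline())
```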
New shards of the database should replicate the entire database if at all possible;
having more copies of the data can't hurt if other shards go away or become
corrupted. Having the data locally available also means users get quicker access to it.
Extensions to the database format that require hard format changes (think
e.g. submodules) should only impact databases that actually use those extensions.
Leave some room for structured arbitrary metadata, which gets propagated but
which not all clients need to understand and can safely ignore
(think fields like "Signed-Off-By", "Reviewed-By", "Fixes-Bug", etc. in
commit metadata headers).