
News

Posted over 5 years ago by Antoine Béral
OpenStack Days Nordic returns to Stockholm on October 9th and 10th, 2018, and Objectif Libre will be a Bronze Sponsor! Bringing together the major actors of the Nordic OpenStack community, OpenStack Days Nordic is the perfect place for Gauvain Pocentek, OpenStack expert, to lead a workshop on how to run an OpenStack platform. … Continue reading Objectif Libre will be at the OpenStack Days Nordic in Stockholm! The post "Objectif Libre will be at the OpenStack Days Nordic in Stockholm!" first appeared on Objectif Libre.
Posted over 5 years ago by Juan Antonio Osorio Robles
For folks integrating with TripleO, it has been quite painful to always need to modify puppet in order to integrate with the engine. This has typically been the case for things like adding an HAProxy endpoint and adding a database and a database user (and grants). As mentioned in a previous post, this is no longer the case for HAProxy endpoints, and that ability has been in TripleO for a couple of releases now. With the same logic in mind, I added the same functionality for MySQL databases and database users, and this recently landed in Stein. So, all you need to do is add something like this to your service template:

    service_config_settings:
      mysql:
        ...
        tripleo::my_service_name::mysql_user:
          password: 'myPassword'
          dbname: 'mydatabase'
          user: 'myuser'
          host: {get_param: [EndpointMap, MysqlInternal, host_nobrackets]}
          allowed_hosts:
            - '%'
            - "%{hiera('mysql_bind_host')}"

This will create:

- A database called mydatabase
- A user called myuser that can access that database
- The password myPassword for the user myuser
- Grants so that the user can connect from the hosts specified in the host and allowed_hosts parameters

Now you don't need to modify puppet to add a new service to TripleO!
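As a quick illustration (not from the original post), once such a service has been deployed you could confirm from the controller node that the database, user, and grants described above actually exist. This is only a sketch: how you authenticate to MySQL/MariaDB (root credentials, a defaults file, etc.) depends on your deployment.

    # Hedged sketch: verify the objects described above on the controller.
    # Adjust authentication to match your deployment.
    mysql -e "SHOW DATABASES LIKE 'mydatabase';"
    mysql -e "SHOW GRANTS FOR 'myuser'@'%';"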
Posted over 5 years ago by Guest Post
In this article we'll look at how to configure Spinnaker for a Prometheus-based canary deployment using Kayenta.
Posted over 5 years ago by Lance Bragstad
One year after meeting in Denver for the Queens PTG, we returned to Stapleton, Colorado to plan the Stein release. Although the trains sound the same, much has changed in keystone since then.

A year ago, we focused on rebuilding a roadmap to deliver more APIs to end-users and make deployments more secure for operators with granular role-based access control (RBAC). We had a plan to better support application developers consuming OpenStack. We also worked on the foundation of a consistent hierarchical quota experience.

Today, we have tools that help services protect APIs with more granularity and provide default roles out-of-the-box, allowing OpenStack developers to expose more functionality to end-users. We have application credentials that developers can use to give authorization to software consuming OpenStack in a more user-friendly and secure way. We also have a unified limits API that we've incrementally improved over the year while we prepare for services to start using it.

It was exciting to watch all those initiatives take shape over the last 12 months. They still popped up throughout the week, but from the perspective of services looking to consume them, as opposed to the design discussions we were having precisely one year ago.

In the weeks leading up to the Stein PTG, contributors took time to think about the next set of challenges that face keystone, and how to address them. Their thoughts helped generate some new, refreshing discussions about what we expect to do over the next year or two.

The following report is dense, but I've structured it into sections so that it's easier to pick out the parts you care about most.

Federation

The most recent results from the OpenStack user survey list federated identity as the top contender for keystone-specific improvements. This feedback isn't news, but it's taken a back seat to other initiatives. With a few of our other items well underway, like granular policy and unified limits, this is an excellent time to revisit that work.

We want federation to be a first-class citizen. To make that a reality, we were able to identify a set of bugs that improve user and operator experience:

- We found a way to relay group information in SAML assertions
- We're going to improve validation of groups during WebSSO flows
- We're going to make the remote ID attribute configurable via the API, as opposed to using configuration files
- We're going to improve delegation usability for federated users with refreshable application credentials

A couple of people who recently parsed the federated identity documentation gave us some pointers on how we can make using federation easier to understand, everything from explaining why someone might want to use federated identity to debugging issues in service providers.

Discussions for future enhancements revolved around the concept of keystone as an identity provider proxy. Ultimately, this means keystone would talk to multiple identity providers that might be using different protocols, like OpenID Connect or SAML, and requires implementing better native support for pluggable protocols. Benefits include the possibility of simplifying the attribute mappings used today to map from SAML attributes directly to OpenStack entities, as opposed to mapping from SAML to environment variables and finally to OpenStack entities. The mapping should be generalized for other protocols, too.
With keystone's adoption of Flask in Rocky, it'll be easier to improve the single sign-on experience by implementing additional content-type headers. In the case of OIDC, we can use the OAuth 2.0 protocol to experiment with scopes to create tokens for specific operations.

If we make federation a first-class citizen, we also need to think about how to handle multiple user accounts. For example, a user authenticating via LDAP and using SAML assertions from ADFS results in two separate accounts. Both users may have the same role assignments on various projects, but support for linking those accounts doesn't exist. The concept of shadow users started decoupling the way in which a user authenticates from the user reference itself. The next step in that work is to allow users to link accounts or associate multiple ways of authentication to a single user.

We started the work for shadow users in Mitaka and continued to work on it through Newton and Ocata. There hasn't been any significant movement since then. We are going to refamiliarize ourselves with the current implementation and see if we can pick up the pieces for account linking.

We made a note to create four specifications detailing the work summarized here.

Operator Feedback

We planned an impromptu feedback session with operators on Wednesday since the Foundation collocated the Ops Midcycle with the PTG. We took a pulse on system-scope and unified limits. I was reassured to hear responses along the line of "are they ready, yet?" concerning those initiatives. It means we're still on the right path, and we were able to give them a better idea of the expected timeline for consumption. We also shared the two different enforcement models supported by unified limits. Operators in the room weren't opposed to the strict two-level project hierarchy, primarily because most of their deployments still rely on a flat project structure. Once services start adopting unified limits, operators should be able to utilize them without having to worry about making disruptive changes to project structure, which is encouraging.

Edge Architecture

We spent Tuesday with the Edge working group discussing architectural issues across tens or hundreds of regions. There were two solutions up for debate.

The first was to write a layer in between the application and the database that attempts to be smarter about data replication in edge-specific deployments. Several people in the room were hesitant to pursue this approach, especially since it requires domain-specific knowledge about SQL and low-latency replication in general.

The second approach was to use federation as a way to provide the appearance that each independent region is part of the same deployment without replicating data. James Penick drew out the specific approach they use at Oath. Using federation also fits nicely with the work we'll be doing in that area moving forward. Federated improvements for edge deployments are going to be reusable for public, private, and hybrid deployments, allowing more bang for our buck across use-cases.

JSON Web Tokens

We revisited a specification detailing this work, which we've carried forward the last couple of cycles. In Dublin, we needed more clarity on why exactly we want to use JWTs, especially since we already support a non-persistent token format. There are two specific use-cases for implementing JWT support. First, it offers a backup solution to Fernet.
Second, it allows for better support for validation-only keystone servers due to asymmetric encryption or signing. Since public key encryption and signing keeps the private key on the initial host, operators can sync public keys to keystone servers used only to validate tokens. Compromised read-only keystone servers wouldn't result in bad actors crafting tokens to use in other regions. We walked through the key setup and rotation strategy, mainly noting the differences between asymmetric and symmetric keys. The other significant detail from the discussion is that the libraries that support JWE (the JWT equivalent to Fernet) tokens are not compatible with OpenStack licensing. Libraries compatible with OpenStack licensing only support JWS tokens, which didn't seem to be a concern since keystone supports Fernet. Deployments concerned about sensitive data in token payloads should continue to use Fernet tokens. We also plan on using big red stickers to warn users that payloads are an internal implementation detail. Anyone relying on that information accepts the risk of being broken if payloads change for any reason.

I've updated the specification with details from the session.

Unified Limits

I noted earlier that nearly all the discussion here was specific to services adopting the new model. We went through things that changed since the Vancouver summit and walked through usage examples. I took five main things away from these sessions throughout the week.

First, we found a way to make it easier for services to consume oslo.limit by reducing the number of callbacks required by the library. Initially, we were expecting services to provide a single callback responsible for returning the usage of a particular resource. The problem arises in the event of a race condition where two clients claim a resource at the same time, exceeding the limit of the project. You would expect the service to clean up the resources created last, which would require an additional callback for oslo.limit to reap the exact resources specific to the failed request. This approach starts to muddy the water and requires specific hand-offs between the service and oslo.limit. Instead, Jay Pipes proposed doing all of this in a single callback and said he was going to work with Melanie on an example using nova. We plan to address any gaps flushed out by the example before releasing a version nova can use.

Second, we discussed support for domain-level limits. Today, unified limit support is specific to projects, but it's not difficult to come up with cases where you'd like to have limits for a domain. As a group, we didn't have strong opposition to the idea, and there is a specification up for review.

Third, we worked with the nova team to discuss the existing limits API in their service and what to do with it. The API in nova returns limits and usage in the same response, so how do we aggregate that information together now that it's coming from two different services? We iterated over three proposals:

- Proxy usage from services into keystone
- Aggregate usage and limits in clients
- Query keystone directly from the service

The issue with the first proposal is that it requires another API in keystone and requires keystone to iterate services. The second proposal relies heavily on clients to aggregate things correctly, and on multiple implementations across SDKs. The third can use oslo.limit and is technically supported by keystone today. John Garbutt is working on a specification for integrating unified limits into nova.
He plans on including this as a detail of that specification.

Fourth, we realized that user limits aren't a valid use-case for unified limits, which are specific to projects. Most user-specific limits are not limiting resource consumption; they limit database bloat, but not resources. Everyone in the room agreed that limiting these types of things might be important, but it falls outside the scope of using unified limits to rate-limit users.

Finally, we walked through a migration path for both developers and operators. Developers must support pre-existing quota systems in their services while providing a way to consume unified limits. Despite reluctance to add yet another configuration option, it might make sense to add one for this migration. Deployers can opt into using unified limits with the configuration option for a cycle or two. We plan to deprecate the toggle in favor of defaulting to limits defined in keystone.

System Scope & Granular RBAC

Similar to the previous section, much of the discussion here was specific to service adoption. We took the time to meet with teams to answer questions about system scope and how to use it. Keystone itself is going to be making changes to consume some of this work, too. A couple of people asked if this was a sign of an impending community goal to use system scope. That's the intent, but before we do, we want to make sure we've built out examples for other projects to use, and we can do that while making keystone's API more self-serviceable. If everything goes as scheduled, we should be able to consider this for a community goal proposal in the Train* cycle.

I dropped into a cinder session where they were investigating ways to prevent regressions when changing policy values, which happened late last release. Based on my interpretation, it sounded like the root of the problem is that many services override policy evaluation in unit tests. This practice is common since it's easier to bypass authorization altogether than to have each test model the authorization relayed in tokens before invoking the API through the oslo libraries. Ideally, we'd like to unit test this across projects, or even using tempest and patrole, but that's going to be a significant amount of non-trivial code. Instead, I took a look at some of cinder's API tests and attempted to propose a patch that would exercise the policy engine and the default cinder policies. I hope the review process shares knowledge and promotes a convention that makes it easy for other developers to improve policy test coverage.

Consistent Policy Names

On Monday we discussed standardizing on a consistent set of policy names since there are several different formats in use. Operators also expressed interest in this initiative on Wednesday when we asked them for feedback.

I spent some time last week going through most projects that use oslo.policy and attempting to distill a general convention for each service. I sent an email to the development and operator mailing lists afterward, ultimately kick-starting a discussion to reach a uniform convention. I'm going to propose a couple of formats based on that feedback by the end of the week. Oslo.policy supports renaming and deprecating policies, which should make implementing the new convention easier than in the past.
Note that only deployments that override policies with custom values are affected by this change.

Horizon Feature Gaps

We sat down to discuss missing identity features in horizon, and ended up building a laundry list of things that would be useful to add:

- Pagination of shadow users
- Management of unified limits
- System role assignments
- Implied role assignment
- Setting user options
- Tagging projects

If you're interested in working on any of these items, please don't hesitate to reach out to e0ne or me directly.

Photo: I snapped the photo for this blog post nearly a year ago, to the day, as we worked out system-scope details.

* Train is obviously the unofficial name for the next release, but it does seem fitting given the relationship we've built with the A-line trains in Stapleton.
Posted over 5 years ago by Panos Vlachos
How important is governance for cloud applications? The short and simple answer, in my humble opinion, is a lot. In fact, it can’t be emphasized enough. Governance is the act of policy establishment, continuous monitoring, and the separation of authorities and duties within a system. So, as a concept, it’s realized by identity and access […] The post The importance of governance for cloud applications appeared first on Stackmasters.
Posted over 5 years ago by Chris Dent
Rather than writing a TC Report this week, I've written a report on the OpenStack Stein PTG.
Posted over 5 years ago by Chris Dent
For the TL;DR see the end.

The OpenStack PTG finished yesterday. For me it is six days of continuous meetings and discussions and one of the busiest and most stressful events in my calendar. I always get ill. There are too few opportunities to refresh my introversion. The negotiating process is filled with hurdles. I often have a sense of not being fully authentic. I have a lot of sympathy for people who come away from the event making tweets like:

"People use laughing interrupting my talk. People explans to me they aren’t attacking my personal, it is for my company. But It’s rude. Another sucks #PTG, i don’t want to back again. But I have a job, and fuck, they move future summit to Denver again." — Alex Xu (@alex_xuhj) September 15, 2018

I wasn't in the nova room when that happened, so I don't know the full context, but whatever it was, it sounds wrong. For some people the PTG is a grand time, for some it is challenging and difficult. For most it is a mix of both. Telling it how it is can help to make it better, even if it is uncomfortable.

There was a great deal of discussion about placement being extracted from nova. In the weeks leading up to the PTG there was quite a lot of traffic, some of it summarized in a recent TC Report. Because I've been involved in a lot of that discussion I got to hear a lot of jokes about placement this past week. I'm sure most of them are meant well, but when the process of extraction has been so long, and sometimes so frustrating, the jokes are tiresome and another load on my already taxed brain. Much of the time they just made me want to punch someone or something.

I'd like to thank Eric Fried, Balazs Gibizer, Ed Leafe, Tetsuro Nakamura, and Matt Riedemann for doing a huge amount of work the past few weeks to get the extracted placement to a state where it has a working collection of tests and creates an operating service. As a team, we've made progress on a thing people have been saying they want for years: making nova smaller and decomposing it into smaller parts. Let's make it a trend.

The PTG was a long week, and I want to remember what happened, so I'm going to write down my experience of the event. This will not be the same as the experiences other people have had, but I hope it is useful. On a long list of things I take for granted but forget that other people do not:

"If some piece of info doesn’t have an accessible, discoverable and eventually well-known URL it is none of True, Useful, Actionable, or Real." — scandent (@anticdent) September 14, 2018

This was written partially on Saturday while I was still in Denver, and the rest on Tuesday after I returned. On Saturday I was already starting to forget details and now on Tuesday it's all fading away.

Sunday

Sunday afternoon we held the first of two Technical Committee sessions (the other was Friday). The agenda had a few different topics. The big one was reviewing the existing commitments that TC members have. No surprise: most people are way over-extended and many tasks, both personal and organisational, fall on the floor. Based on that information we were able to remove several tasks from the TC Tracker: items that will never get done or should not be the TC's responsibility. We also talked about needing to manage applications to be official projects with a little more care and attention so that the process is less open-ended than it often is.
To help with this there will be some up-front time limits on applications, and we'll ensure that each application has a shepherd from the TC from earlier in the process.

Alan Clark, from the Foundation board, joined in on the conversation for a while. We discussed how to make the joint leadership meetings more effective and what the board needs from the TC: information about the exciting and interesting things that are in progress in the technical community. To some extent this is needed to help the board members understand why they are bothering to participate, and while there is always plenty of cool and interesting stuff going on, it is not always obvious. This is useful advice as it helps to focus the purpose of the meetings, which sometimes have a sense of "why are we here?"

Doug produced a summary of his notes from both days. Lance Bragstad also made a report.

Monday

api-sig

The API-SIG had a room for all of Monday. At the first PTG in Atlanta we had two days, and used them both. Now we struggled to use all of Monday. In part this is because the great microversion battles have lost their energy, but also the SIG is currently going through a bit of a lull while other topics have more relevance. We talked about most of the issues on the etherpad and kept notes on the discussion.

One interesting question was whether the SIG was interested in being a home for people interested in distributed APIs that are not based on HTTP. The answer is: "Sure, of course, but those people need to show up." (People needing to show up was a theme throughout the week.)

Prior to the PTG we tried to kill off the common healthcheck middleware due to lack of attention. This threat drew out some interested parties and brought it back to life.

cyborg

Right after lunch Ed Leafe (the other API-SIG "leader" who was able to attend the PTG) and I were pulled away to attend a discussion about how cyborg interacts with nova and placement.

Tuesday

blazar

Tuesday morning there was a gathering of blazar, nova and placement people to figure out the best ways for blazar to interact. There are some notes on the related etherpad. The two main outcomes were that it ought to be possible to satisfy many of the desired features by implementing a "not member of" functionality in placement, which allows a caller to say "I'll accept resources that are not in this aggregate". A spec for that has been started. That discussion made it clear that the existing member_of functionality is not entirely correct for nested resource providers. The current functionality requires all the participants in a nested tree to be in an aggregate to show up in results. We decided this is not what we want. A bug was created.

placement governance

Right before lunch there was an impromptu gathering of the various people involved in placement to create a list of technical milestones that need to be reached to allow placement to be an independent project. A good summary of that was posted to the mailing list. It was a useful thing to do, and the plan is solid, but nobody seemed to be in the right frame of mind to get into any of the personal, social, and political issues that have caused so much tension, either locally in the past few weeks, or in the last two years.

cinder

Later in the afternoon there was a meeting with cinder to see if there was a way that placement could be useful to cinder. It turns out there is a bit of a conceptual mismatch between placement and cinder.
Placement wants to represent a hard measurement of resources as they actually are, while cinder, especially when "thin provisioning" is being used, needs to be more flexible. Representing that flexibility to placement in a way that is "safe" is difficult. Dynamic inventory management is considered either too costly or too racy. I'm not certain this has to be the case. Architecturally, the system ought to be able to cope. There are some risks, but if we wanted to accommodate the risk it might be manageable and would make placement useful to more types of resources.

Wednesday

nova retrospective

Wednesday morning started with a nova cycle retrospective. There was limited attention to that etherpad before the event, but once we got rolling in person it turned out to be a pretty important topic. The main takeaway, for me, was that when we have to change priorities because of unforeseen events, we must trim the list of existing priorities to remove something. It was surprisingly difficult to get people to agree that this was necessary. Time and resources are finite. What other conclusion can we make?

placement topics

Then began a multi-day effort to cover all of the placement topics on the nova etherpad. A lot of this was done Wednesday, but in gaps on Thursday and Friday people returned to placement. Rather than trying to cover each day's topics on the day it happened, all the discussion is aggregated here in this section.

Interestingly (at least to me), during these discussions I had a very clear moment explaining why I often feel culturally alienated while working in the OpenStack community. While trying to argue that we should wait to do something, I used the term YAGNI. Few people in the room were familiar with it, and once it was explained, few people appeared to be sympathetic to the concept. In my experience this is a fundamental concept and driver of good software development. This was then followed by a lack of sympathy for wanting or needing to define when a project can be considered "done". This too is something I find critical to software development: What are we striving for? How will we know when we get there? When do we get to stop? The reaction in the room seemed to be something along the lines of "never" and "why would we want to?". These two experiences combined may explain why my experience of OpenStack development, especially in nova, feels so unconstrained and cancerous: there's a desire to satisfy everything, in advance, and to never be done. This is exactly the opposite of what I want: narrow what we satisfy, do only what is required, now, and figure out a way to reach done. I suspect the reality of things is much less dramatic, but in the moment it felt that way and helped me understand things more.

Once through that, I felt like we managed to figure out some things that we need to do:

- An idempotent upgrade script that makes it easy for a deployment to move placement data to a new home. Dan has started something.
- Long-term goals include managing affinity in placement, and enabling a form of service sharding so that one placement can manage multiple openstacks.
- GET /allocation_candidates needs, eventually, an in_tree query parameter to allow the caller to say "give me candidates from these potential trees".
- Highest priority at this time is getting nested resource providers working on the nova side, in nova-scheduler, in the resource tracker and in the virt drivers.
- As other services increase their use of placement, and we have more diverse types of hardware being represented as resource providers, we badly need documentation that explains best practices for using resource classes and traits to model that hardware.
- We need to create an os-resource-classes library, akin to os-traits, to contain the standard resource classes and manage the existing static identifiers associated with those classes. Since naming things is the hardest problem, we spent a long time trying to figure out how to name such a thing. There are issues to be resolved with not causing pain for packagers and deployers. While we figure that out I went ahead and created a cookiecutter-based os-resource-classes.
- Getting shared providers working on the nova side is not an immediate concern in the face of the attention required to finish placement extraction and get nested providers working. However Tushar and his colleagues may devote some time to it.

Thursday

There was continued discussion of placement on Thursday, mostly noted above. Towards the end of this day I was running out of attention and working more on making minor changes to the placement repo. The energy required to give real attention to the room is so high, especially when it is couched in making sure I don't say something that's going to be taken the wrong way. After a while it is easier and a more productive use of time to give attention to something else. The people who are able to stick through a solid three days in the nova room are made of sterner stuff than me.

Friday

On Friday it was back to TC-related discussions, following the agenda on the etherpad. As stated above, Doug made a good summary email.

We started off by reviewing team health. There are lots of different issues, but a common thread is that many teams are suffering from a lack of contributors. Some teams report burnout in their core reviewers. In the room we discussed why we sometimes only find out about issues in a team late: why aren't project team members seeking out the assistance of the TC sooner? I suggested that perhaps there's insufficient evidence that the TC is empowered to resolve things. Even if that's the case (we did not resolve that question), reaching out to the TC sooner rather than later is going to be beneficial for all involved as it increases awareness and can help direct people to the right resources.

There was a great deal of discussion in the room about making OpenStack (including the TC) more accessible to contributors from China. This resulted in a proposed resolution for a tc role in global reachout. There was also a lot of discussion about strategies for increasing traction for SIGs, such as the public cloud sig. Some of this reflected the orchestration thread that Matt Riedemann started. During the discussion another resolution was proposed to Add long term goal proposal to community wide goals.

Discussion of the pending tech vision was around clarifying what the vision is for and making sure we publicize it well enough to get real feedback. Two main reasons to have the vision are to help drive the decision-making process when evaluating projects that wish to be "official" and when selecting community-wide goals. These are both important things, but I think the main thing a vision we've all agreed to can provide is a guide for any decisions in OpenStack.
If we are able to point at a thing as the overarching goal of all of OpenStack, it becomes easier to say "no" to things that are clearly out of scope and thus have more energy for the things to which we clearly say "yes".

Throughout the discussion of project health and gaps in contribution I kept thinking it's important that we make gaps more visible, not come up with ways to do more with the resources we have. Many, many people are expressing that they are overextended. We cannot take on more and remain healthy. If something is important enough people will come. If they don't come, the needs are either not important or not visible enough. The role of the TC should be to make things visible. Feature-wise we need to be more reactive and enabling, more "we will make space for you to do this thing" and less "we're listening and will do this thing for you". This includes allowing things to be less than perfect so their brokenness operates as an attracting influence. As a community we've been predisposed to thinking that if we don't make things proper from the start people will ignore us. I think we need to have some confidence that we are making useful stuff and make room for people to come and help us, for the sake of helping themselves.

What Now?

Based on what I've been able to read from various members of the community in blog posts, tweets, and posts to the os-dev mailing list, it sounds like it was a pretty good week: we made some plans and figured out solutions to some difficult problems. The trick now is to follow through and focus on those things while avoiding adding yet more to the list.

For me, however, it is hard to say that it is worth it. I do not come away from these things motivated and focused. I'm overwhelmed by the vast array of things we seem to promise and concerned by the unaddressed disconnects and differences in our perceptions and actions. I'm sure once I've recovered I'll be back to making steady progress, but for now, if I'm "telling it how it is", I have to wonder if the situation would be any different if I hadn't gone, or if none of us had gone.
Posted over 5 years ago by Nicole Martinelli
IBM's Olaph Wagoner on how the open-source project meshes with Kubernetes and OpenStack. The post How to manage micro-services with Istio appeared first on Superuser.
Posted over 5 years ago by Juan Antonio Osorio Robles
I’ve gotten requests for help deploying TLS everywhere with TripleO several times. Even though there’s documentation, deploying from scratch can be quite daunting, especially if all you want to do is test it out, or merely integrate your service with it. However, for development purposes, there is tripleo-quickstart, which makes deploying such a scenario way simpler. Here’s the magical incantation to deploy TripleO with TLS everywhere enabled:

    ./quickstart.sh --no-clone --teardown all --clean -p quickstart-extras.yml \
        -N config/nodes/1ctlr_1comp_1supp.yml \
        -c config/general_config/ipa.yml \
        -R master-tripleo-ci \
        --tags all \
        $VIRTHOST

Note that this assumes that you’re in the tripleo-quickstart repository. Assuming $VIRTHOST is the host where you’ll do the deployment, this will leave you with a very minimal deployment: an undercloud, one controller, one compute, and a supplemental node where we deploy FreeIPA. Because we’re using master-tripleo-ci, this setup also deploys the latest promoted images. If you want to use the latest “stable” master deployment, you can use master instead. If you want to deploy Queens, you’ll merely use queens instead. So, for reference, here’s how to deploy a Queens environment:

    ./quickstart.sh --no-clone --teardown all --clean -p quickstart-extras.yml \
        -N config/nodes/1ctlr_1comp_1supp.yml \
        -c config/general_config/ipa.yml \
        -R queens \
        --tags all \
        $VIRTHOST

Let’s also note that --tags all deploys the “whole thing”; meaning, it’ll also do the overcloud deployment. If you remove this, the quickstart will leave you with a deployed undercloud, and you can do the overcloud deployment yourself.
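To make that last point concrete, here is an untested sketch (not from the original post) of the same invocation with --tags all removed, which, per the note above, should stop after the undercloud is deployed and leave the overcloud deployment to you:

    # Hedged sketch: same master deployment, but without --tags all, so the
    # quickstart stops after setting up the undercloud.
    ./quickstart.sh --no-clone --teardown all --clean -p quickstart-extras.yml \
        -N config/nodes/1ctlr_1comp_1supp.yml \
        -c config/general_config/ipa.yml \
        -R master-tripleo-ci \
        $VIRTHOST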
Posted over 5 years ago by Lance Bragstad
I spent all day Friday, except for one nova session, in the TC room. I’ll admit I wasn’t able to completely absorb all discussions and outcomes, but these are the discussions I was able to summarize.

Note, if the picture in this post looks familiar, it’s because you saw it while ordering a beer at Station 26, located right behind the conference hotel. The brewery operates out of an old fire station, hence the name. They show support for first responders, firefighters, law enforcement, military, and emergency medical services by displaying department patches throughout their establishment. Besides that, their beer is delicious.

Project Health

The morning started off discussing project health. This initiative is relatively new and came out of the Vancouver summit. The purpose is to open lines of communication between project leads and members of the TC. It also helps the TC keep a pulse on overall health across OpenStack project teams. The discussion focused on feedback, determining how useful it was, and the ways it could be improved.

Several TC members reported varying levels of investment in the initiative, ranging from an hour to several hours. Responses from PTLs varied from community goal status to contributor burnout. The TC decided to refine the phrasing used when reaching out to projects, hoping that it clarifies the purpose, reduces time spent collecting feedback, and makes it easier for PTLs to formulate accurate responses. Action items included amending the PTL guide to include a statement about open communication with the TC and sending welcome emails to new PTLs with a similar message.

The usefulness of Help Wanted lists surfaced a few times during this discussion. Several people in the room voiced concerns that the lists were not driving contributions as effectively as we'd initially hoped. No direct action items came from this as far as I could tell, but this is a topic for another day.

Global Outreach

We spent the remainder of the morning discussing ways we can include contributors in other regions, specifically the Asia-Pacific region. Not only do different time zones and language barriers present obstacles in communication, but finding common tooling is tough. Most APAC developers struggle with connecting to IRC, which can have legal ramifications depending on location and jurisdiction. The ask was to see if participants would be receptive to a non-IRC-based application to facilitate better communication, specifically WeChat, which is a standard method of communication in that part of the world. Several people in the room made it clear that officially titling a chat room as "OpenStack Technical Committee" would be a non-starter if there wasn't unanimous support for the idea. Another concern was that having a TC-official room might eventually be empty as TC members rotate, thus resulting in a negative experience for the audience we're trying to reach.

The OpenStack Foundation does have formal WeChat groups for OpenStack discussions, and a few people were open to joining as a way to bridge the gap. It helped to have a couple of APAC contributors participating in the discussion, too. They were able to share a perspective that only a few other people in the room have experienced first-hand.

Ultimately, I think everyone agreed that fragmenting communication would be a negative side-effect of doing something like this.
Conversely, using WeChat as a way to direct APAC contributors to formal mailing list communication could be very useful in building our contributor base and improving project health.

Howard sent a note to the mailing list after the session, continuing the discussion with a specific focus on asking TC candidates for their opinions.

Evolving Service Architecture & Dependency Management

After lunch, I stepped out to attend a nova session about unified limits. When I returned to the TC room, they were in the middle of discussing service dependencies and architectures.

OpenStack has a rich environment full of projects and services, some of which aren't under OpenStack governance but provide excellent value for developers and operators. At the same time, there is much duplication across OpenStack services, symptomatic of a hesitation to add dependencies, in particular service dependencies that raise the bar for operators. A great example of this duplication is the amount of user secret or security-specific code for storing sensitive data across services, even though Barbican was developed to solve that issue. Another good example is the usage of etcd, which was formally accepted as a base service shortly after the Boston summit in 2017. How do we allow developers the flexibility to solve problems using base services without continually frustrating operators because of changing architectural dependencies?

Luckily, there were some operators in the room who were happy to share their perspective. More often than not, the initial reaction operators have when told they need to deploy yet another service is "no". Developers either continue to push the discussion or decide to fix the problem another way. The operators in the room made it clear that justification was the next logical step in that conversation. It's not that operators oppose architectural decisions made by developers, but the reason behind them needs to be explicit. Informing operators that a dependency is needed for secure user secret storage probably isn't going to result in as much yelling and screaming as you might think. Ultimately, developers need to build services in ways that make sense with the tools available to them, and they need to justify why specific dependencies are required. This concise clarification is imperative for operators, deployers, and packagers.

In my opinion, explanations like this are a natural fit for the constellation work in OpenStack, especially since deployers and operators would consume constellations to deploy OpenStack for a particular use-case. I didn't raise this during the meeting, and I'm unsure if others feel the same way. I might try and bring this up in a future office hours session.

Long-Term Community Goals

Community goals fall within the domain of the TC. Naturally, so do long-running community goals. Some points raised in this discussion weren't specific to long-running goals, but to community goals in general.

As a community, we started deciding on community-wide initiatives during the Ocata development cycle. Community goals are useful, but they are contentious for multiple reasons. Since they usually affect many projects, resources are always a bottleneck. They are also subject to the priorities of a particular project. Long-running goals are difficult to track, especially if they involve a considerable, non-trivial amount of work across 30+ projects.

While those things affect the success rate of community-wide goals, we made some progress on making it easier to wrangle long-running initiatives.
First and foremost, breaking complicated goals into more digestible sub-goals was a requirement. Some previous goals that were relatively trivial are good examples that even straightforward code changes can take an entire cycle to propagate across the OpenStack ecosystem. That said, breaking a goal into smaller pieces makes pushing change through our community easier, especially significant change. However, this introduces another problem, which is making the vision for multiple goals clear. Often there are only a few people who understand the end game. We need to leverage the domain knowledge of those experts to document how all the pieces fit together. A document like this disseminates the knowledge, making it easier for people to chip in effectively and understand the approach. At the very least, it helps projects get ahead of changes and incorporate them into their roadmap early.

There is a patch up for review to clarify what this means for goal definitions. I'd like to try this process with the granular RBAC work that we've been developing over the last year. We already have a project-specific document describing the overall direction in our specification repository. At the very least, going through this process might help other people understand how we can make OpenStack services more consumable for end-users and less painful for deployers to maintain.