Posted about 6 years ago by Jessica Field
The post New Solutionaut Troops: Welcome Prashant, Shane and Adam to the Aptira Army. appeared first on Aptira.
|
Posted about 6 years ago by Graham Hayes
Not getting on a plane was a nice change for an OpenStack event :) - especially
as it looks like I would not have made it home for a few days.
Cross Project Days (Monday / Tuesday)
These days are where I think the major value of the PTG is. The cross project
days feel like the Summit of old, with more time for topics, and less running
around a conference centre trying to cram 2 weeks' worth of talks / developer
sessions, and other meetings into a few days.
Unified Limits / oslo.limits / keystone stored limit data
First up for me was the keystone-based limits API for unifying quota data in
keystone. It was decided to create oslo.limits (oslo.limits repo,
oslo.limits spec & oslo.limits etherpad). The keystone team already
created a keystone-limits-api that is currently experimental, and the
feeling in the room was that we should try and implement it using a new oslo
library to find where changes need to be made.
The migration procedure was discussed, and how we (the services) would need to
run multiple quota systems for quite a few cycles, due to partial upgrades
that happen in OpenStack. [1] Possible implementations were discussed, and
the oslo.limits etherpad has a good overview of them.
oslo.healthcheck
This is an idea that I have been very interested in since it was discussed in
Sydney. We actually had 3 sessions on this in Dublin, across 3 different cross
project rooms - API-SIG, Oslo and Self Healing SIG.
Overall, most people were receptive - the commentary is that the spec is too
wordy, and contains my favorite description:
It feels like OpenStack, but not really in a good way.
After listening to feedback, and talking offline to a few people, I think I have
a handle on where the issues are, and I have a rough spec I can flesh
out over the next few days. I think I will just start writing code at that
point as well - a more concrete example could help clear up
issues for people.
Edge Computing
I stopped by the Edge room on the Tuesday to listen in on the discussion.
Personally, I really think this group needs to stop bikeshedding on, well,
everything, and actually go and implement a POC and see what breaks.
The push still seems to be "make OpenStack work on the edge" instead of (what
I think is the quickest / most productive way forward) "write extra tooling
to orchestrate OpenStack on the edge."
There were some interesting items brought up, like Glance, and image / data
residency. I think that actually engaging with the Glance team might have been
helpful, as they were completely unaware that the discussion was being held,
but the concepts raised sounded interesting.
I lasted about an hour or so, before I gave up. From my limited exposure, it
sounded exactly like the discussions I have heard on the Edge Calls, which
were the same as the ones I heard in Sydney.
Designate Sessions
The Designate room was a quiet enough affair, but it marks the first time since
the PTGs started that getting a dedicated room was justified. We did some
onboarding with new folks, and laid out a plan for the cycle.
The plan so far looks like this:
DNSSEC
A starting point, using signing keys in the Designate database, which we
can use as a jumping-off point to storing keys in an HSM / Barbican.
People are currently looking at PowerDNS's inline signing as a short term
solution.
Docs (it will never not be a priority :) )
Shared Zones
Improve the UI
This really relies on us either running rm -rf openstack/designate-dashboard
or finding people who understand Angular.
TC / QA / Tempest / Trademark Programs
If you follow the mailing list, or openstack/governance reviews, you may
have seen a long-running discussion over where the Tempest tests used for Trademark
Programs should go. I seem to remember this being raised in Boston,
but it could have been Barcelona. There was tension between QA, me, the InterOp
Work Group, and others about the location. Chris Dent covered this pretty well
in his TC updates over the last while, so I am not going to rehash it, but
it does look like we finally have some sort of agreement on the location,
after two other times when I thought we had agreement :).
Board of Directors Meeting
The OpenStack Board of Directors met on the first day of the PTG. This was an
issue in its own right, which was highlighted in a thread on the
foundation list. Thankfully, I have been told that this situation will not
happen again (I can't seem to find any record of the decision, so it may have
been an informal board discussion, but if anyone from the board is reading,
replying to the foundation list would be great).
As it met on day one, I didn't get to see much - I arrived to wait for the
Add-On Trademark program approvals, and happened to catch a very interesting
presentation by Dims, Chris and Melvin. I then got to see the board approve
DNS as a Trademark add-on, which is great for the project, and for people who want
a consistent DNSaaS experience across multiple clouds.
Johnathon Price's Board Recap is also a good overview, with links to things
that were presented at the meeting.
The Board, Community, and how we interact
One topic that was highlighted by the TC / QA / Tempest / Trademark Programs
discussion was that the QA team is very under-resourced. This, combined with
the board discussing the future of the PTGs due to cost, makes me very worried.
The foundation has (in my mind) two main focuses.
Promote the use of the software or IP produced by people working on projects
under the foundation, and protect its reputation.
Empower the developers working on said software or IP to do so.
In my eyes, Trademark programs are very much part of #1, and the board should
either:
Fund / find resources for the QA team, to ensure they have enough bandwidth
to maintain all trademark programs, the associated tests, and tooling.
Fund / find a team that does it separately, but removes the entire burden
from the QA team.
The PTG falls firmly under #2. I was initially a PTG skeptic, but I really
think it works as an event, and adds much more value than the old
mid-cycles did. I understand it has problems, but without it, teams will go
back to the mid-cycles, which may have looked cheaper at first glance, but
for some people either meant multiple trips, or missing discussions.
One very disappointing thing to see was the list of Travel Support Program
donors - there were some very generous individuals in the community who stood
up and donated, but none of the corporate foundation members contributed. This,
with members being added to the foundation that seem to stop at paying the
membership fee (see Tencent, who were added at the Sydney board meeting),
makes me wonder about the value placed on the community by the board.
I know the OpenStack Foundation is diversifying its portfolio of projects
beyond just the OpenStack Project (this is going to get confusing :/), but
we should still be supporting the community that currently exists.
Other great write-ups
This turned into a bit of a "PTG + 2 weeks after" update, so here are some other
write-ups I have read over the last week or so, which prompted me to remember
things that I would have otherwise forgotten.
Chris Dent
Colleen Murphy
Adam Spiers
Mark Voelker
The Hotel
And, I saved the best until last. The Croke Park Hotel was absolutely
amazing during the conference. When we needed to leave the main venue on
Thursday, they managed the transition of a few hundred developers into all the
public spaces we could find extremely well. They kept us fed, watered and happy
the entire time we were in the hotel. They managed to do all of this while barely
leaving the hotel to go home and sleep themselves! I cannot say enough good
things about them, and encourage anyone who is looking for a hotel in Dublin to
stay there, and anyone running an event to use Croke Park and the hotel.
[1]I have heard of companies running Newton / Ocata Designate (and other projects) on clouds as old as Liberty.
|
Posted about 6 years ago by Superuser
As the OSF focuses on open infrastructure, get ready to experience a new Summit.
The post What’s new at the Vancouver Summit appeared first on Superuser.
|
Posted about 6 years ago by Mary Thengvall
Hardware burn-in in the CERN datacenter, by Tim Bell.
During the Ironic sessions at the recent OpenStack Dublin PTG in Spring 2018, there were some discussions on adding a further burn-in step to the OpenStack Bare Metal project (Ironic) state machine.
The notes summarising the sessions were reported to the openstack-dev list. This blog covers the CERN burn-in... Read more →
|
Posted about 6 years ago by Lauren Sell
"OCI is an important place for the container ecosystem to come together and drive common formats across tools and deployments," says James Kulina, member of the Kata Containers Working Committee.
The post Kata Containers and OpenStack support open container standards appeared first on Superuser.
|
Posted about 6 years ago by ed
We recently held the OpenStack PTG for the Rocky cycle. The PTG ran from Monday to Friday, February 26 – March 2, in Dublin, Ireland. So of course the big stuff to write about would be the interesting meetings between the teams, and the discussions
about future development, right? Wrong! The big news from the … Continue reading "Dublin PTG Recap"
|
Posted about 6 years ago by Superuser
Project team leads and core contributors talk about what's new in this release and what to expect from the next one.
The post Check out these OpenStack project updates appeared first on Superuser.
|
Posted about 6 years ago by Chris Dent
Much of the activity of the TC in the past week has been devoted to
discussing and sometimes arguing about two pending resolutions:
Location of Interop Tests and Extended Maintenance (links below).
While there has been a lot of IRC chat, and back and
forth on the
gerrit reviews, it has resulted in things moving forward.
Since I like to do this: a theme I would identify from this week's
discussions is continued exploration of what the TC feels it can and
should assert when making resolutions. This is especially apparent
in the discussions surrounding Interop Tests. The various options
run the gamut from describing and enforcing many details, through
providing a limited (but relatively clear) set of options, to
letting someone else decide.
I've always wanted the TC to provide enabling but not overly
limiting guidance that actively acknowledges concerns.
Location of Interop Tests
There are three reviews related to the location of the Interop
Tests (aka Trademark Tests):
A detailed one, based on PTG discussion
A middle of the road one, simplifying the first
A (too) simple one
It's looking like the middle one has the most support now, but that
is after a lot of discussion. On Wednesday
I introduced the middle-of-the-road version to make sure the
previous discussion was represented in a relatively clear way. Then
on Thursday, a version which effectively moves responsibility to the
InteropWG was proposed.
Throughout this process there have been hidden
goals
whereby this minor(?) crisis in policy is being used to attempt to
address shortcomings in the bigger picture. It's great to be working
on the bigger picture, but hidden doesn't sound like the right
approach.
Tracking TC Goals
One of the outcomes from the PTG was an awareness that some
granular and/or middle-distance TC goals tend to get lost. The TC is
going to try to use
StoryBoard
to track these sorts of things. The hope is that this will result in
more active and visible progress.
Extended Maintenance
A proposal to leave branches open for
patches for longer has
received at least as much attention as the Interop discussions.
Some talk starts on Thursday
afternoon
and then carries on intermittently for the rest of time^w^w^w^w^wthrough
today. The review has a great deal of interaction as well.
There's general agreement on the principle ("let's not limit people
from being able to patch branches for longer") but reaching
consensus on the details has been more challenging. Different people
have different goals.
What's a SIG for?
Discussion
about renaming the
recently named PowerStackers
group eventually
migrated into talking about what SIGs are for or
mean.
There seem to be a few different interpretations, with some
overlap:
Upstream and downstream concern, or "breadth of potential participants".
Not focused on producing code.
Different people in the same "room".
In problem space rather than solution space.
None of these are really complete. I think of SIGs as a way to break
down boundaries, provide forums for discussion and make progress
without worrying too much about bureaucracy. We probably don't need
to define them to death.
|
Posted about 6 years ago by Horia Merchi
Dear Users,
The Orchestration service is now running the Newton version: let's take a deep dive into the new features and Heat resource improvements.
New Features
The Convergence feature improves the performance of your create, update and delete actions, and is most effective when stacks are large. It is enabled by default once the service is upgraded, so you benefit from it without any action on your side.
The Observe and Update feature is also enabled: it checks the actual state of resources before doing any update.
For example, take a stack with a volume and an instance that you want to update by adding a volume attachment to this instance. If the instance and/or volume no longer exist, the update will fail without Observe and Update; with it, the update ends in success.
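As an illustration, here is a rough sketch of that scenario (resource names and the image/flavor values are placeholders, not taken from an actual template):
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: cirros        # placeholder image name
      flavor: m1.small     # placeholder flavor

  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: 1

  # Added during the stack update; with Observe and Update enabled, Heat first
  # checks that my_instance and my_volume still exist before applying the change.
  my_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: {get_resource: my_instance}
      volume_id: {get_resource: my_volume}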
Heat resource improvements
Volume resources
If you plan to update one of your stacks, we strongly recommend explicitly setting the metadata property to "{}" in its template.
New resources available
OS::Neutron::SecurityGroupRule
A resource for managing Neutron security group rules; rules created this way are attached to an existing security group resource.
Example:
type: OS::Neutron::SecurityGroupRule
properties:
  description: String
  direction: String
  ethertype: String
  port_range_max: Integer
  port_range_min: Integer
  protocol: String
  remote_group: String
  remote_ip_prefix: String
  security_group: String
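As a concrete illustration, here is a hedged sketch of a rule allowing inbound SSH on a security group defined in the same template (resource names are placeholders):
resources:
  my_security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      description: Security group whose rules are managed separately

  allow_ssh:
    type: OS::Neutron::SecurityGroupRule
    properties:
      security_group: {get_resource: my_security_group}
      direction: ingress
      ethertype: IPv4
      protocol: tcp
      port_range_min: 22
      port_range_max: 22
      remote_ip_prefix: 0.0.0.0/0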
OS::Heat::Value
A resource which exposes its value property as an attribute.
This is useful for exposing a value that is a simple manipulation of other template parameters and/or other resources.
Example:
type: OS::Heat::Value
properties:
  type: String
  value: Any
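For example, a hedged sketch showing a value computed once from two parameters and exposed as a stack output (parameter and resource names are placeholders):
parameters:
  first_name:
    type: string
  last_name:
    type: string

resources:
  full_name:
    type: OS::Heat::Value
    properties:
      type: string
      value:
        list_join: [' ', [{get_param: first_name}, {get_param: last_name}]]

outputs:
  # The computed value is read back through the resource's "value" attribute.
  display_name:
    value: {get_attr: [full_name, value]}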
OS::Heat::ResourceChain
This resource creates multiple resources that share the same properties.
Example:
resource:
  type: OS::Heat::ResourceChain
  properties:
    concurrent: True
    resource_properties: {size: 5, volume_type: "performant"}
    resources: ["OS::Cinder::Volume", "OS::Cinder::Volume", "OS::Cinder::Volume"]
In the example above, three 5 GB volumes of type "performant" are created.
The "concurrent" property defines whether resource creation can be done in parallel.
The "resource_properties" property defines the properties to apply to each resource.
The "resources" property defines the list of resources to create.
This resource is mainly useful for factoring out repetition in YAML templates.
OS::Heat::None
It makes it easy to disable some resources through the "resource_registry" section of an environment file.
This resource accepts any properties and returns any attribute.
Example:
resource_registry:
  OS::Cinder::Volume: OS::Heat::None
OS::Heat::TestResource
This is a resource which stores the value it is given and exposes it through its output attribute.
Some control properties are available, such as 'action_wait_secs', 'wait_secs' and 'attr_wait_secs'. These allow adjusting the time taken by stack actions or the waiting period before a resource output is returned.
Example:
alarm_cache_wait:
  type: OS::Heat::TestResource
  properties:
    action_wait_secs:
      create: 60
      update: 60
    value: "test"
outputs:
  res_value:
    value: {get_attr: [alarm_cache_wait, output]}
OS::Heat::SoftwareDeploymentGroup
This resource associates a group of servers with some configuration.
Replaces the resource OS::Heat::SoftwareDeployments
Example:
type: OS::Heat::SoftwareDeploymentGroup
properties:
  actions: [Value, Value, ...]
  config: String
  input_values: {...}
  name: String
  servers: {...}
  signal_transport: String
OS::Heat::StructuredDeploymentGroup
This resource associates a group of servers with some configuration.
This resource works similarly to OS::Heat::SoftwareDeploymentGroup, but for structured configurations.
Replaces the resource OS::Heat::StructuredDeployments
Example:
type: OS::Heat::StructuredDeploymentGroup
properties:
  actions: [Value, Value, ...]
  config: String
  input_key: String
  input_values: {...}
  input_values_validate: String
  name: String
  servers: {...}
  signal_transport: String
New attributes
For resource OS::Heat::AutoScalingGroup
refs: A list of resource IDs for the resources in the group
refs_map: A map of resource names to IDs for the resources in the group
For resource OS::Heat::ResourceGroup
refs_map: A map of resource names to IDs for the resources in the group
removed_rsrc_list: A list of removed resource names
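A hedged sketch of how these attributes might be consumed in the outputs section (the group definition below is a placeholder):
resources:
  volume_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: OS::Cinder::Volume
        properties:
          size: 5

outputs:
  volume_ids_by_name:
    description: Map of member resource names to Cinder volume IDs
    value: {get_attr: [volume_group, refs_map]}
  removed_members:
    description: Names of members removed during the last scale-down
    value: {get_attr: [volume_group, removed_rsrc_list]}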
New properties
For resource OS::Nova::Server
For block_device_mapping_v2:
image: The ID or name of the image to create a volume from (this key replaces the deprecated image_id).
For OS::Glance::Image
architecture
os_distro (to make images more easily searchable across different OpenStack installations)
kernel_id (the ID of an image stored in Glance that should be used as the kernel when booting an AMI-style image)
ramdisk_id (the ID of an image stored in Glance that should be used as the ramdisk when booting an AMI-style image)
extra_properties
tags
owner
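A hedged example of an image resource using some of these properties (the location URL and property values are placeholders):
resources:
  test_image:
    type: OS::Glance::Image
    properties:
      name: cirros-test
      container_format: bare
      disk_format: qcow2
      location: http://example.com/images/cirros.qcow2   # placeholder URL
      architecture: x86_64
      os_distro: cirros
      tags: [test, demo]
      extra_properties: {hw_disk_bus: virtio}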
New update policies
For the resources OS::Heat::SoftwareDeploymentGroup and OS::Heat::StructuredDeploymentGroup (added in Mitaka)
batch_create
Map value expected:
max_batch_size: The maximum number of resources to create at once
pause_time
rolling_update
Map value expected:
max_batch_size: The maximum number of deployments to replace at once
pause_time
Example:
type: OS::Heat::SoftwareDeploymentGroup
update_policy:
  rolling_update:
    max_batch_size: 2
    pause_time: 1
properties:
  config: {get_resource: config}
  input_values:
    foo: {get_param: input}
  servers:
    '0': dummy0
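For comparison, here is a hedged sketch of the batch_create policy, which is not shown in the example above (server names and timing values are placeholders):
type: OS::Heat::SoftwareDeploymentGroup
update_policy:
  batch_create:
    max_batch_size: 2
    pause_time: 10
properties:
  config: {get_resource: config}
  servers:
    '0': dummy0
    '1': dummy1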
Deprecated properties
For the resource OS::Nova::Server, in block_device_mapping_v2:
The image_id key has to be replaced by image.
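A hedged before/after sketch of that change (the server name and values are placeholders):
# Before (deprecated key):
my_server:
  type: OS::Nova::Server
  properties:
    flavor: m1.small
    block_device_mapping_v2:
      - image_id: cirros
        boot_index: 0
        volume_size: 10
        delete_on_termination: true

# After:
my_server:
  type: OS::Nova::Server
  properties:
    flavor: m1.small
    block_device_mapping_v2:
      - image: cirros
        boot_index: 0
        volume_size: 10
        delete_on_termination: true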
End of support for the following resources
OS::Neutron::PortPair
You can contact our support service at [email protected]
|
Posted about 6 years ago by Nicole Martinelli
Think before you multiply your cloud: a few key considerations.
The post Getting started with multi-cloud appeared first on Superuser.
|