Posted over 5 years ago by Chris Dent
Here's this week's placement update. We remain focused on
specs and pressing issues with extraction, mostly because, until the
extraction is "done" in some form, doing much other work is a bit
premature.
Most Important
There have been several
discussions recently about what to do with
options that impact both scheduling and configuration. Some of this
was in the thread about intended purposes of
traits,
but more recently there was discussion on how to support guests
that want an HPET. Chris Friesen summarized a
hangout
that happened yesterday that will presumably be reflected in an
in-progress spec.
The work to get grenade upgrading to
placement is very close.
After several iterations of tweaking, the grenade jobs are now
passing. There are still some adjustments to get devstack jobs
working, but the way is relatively clear. More on this in
"extraction" below, but the reason this is most important is that
this work allows us to do proper integration and upgrade testing,
without which it is hard to have confidence.
What's Changed
In both placement and nova, placement is no longer using
get_legacy_facade(). This will remove some annoying deprecation
warnings.
The nova->placement database migration script for MySQL has merged.
The postgresql version is still up for
review.
Consumer generations are now being used in some allocation handling
in nova.
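For background on what consumer generations mean here: since placement API microversion 1.28, allocation writes carry a consumer_generation field so that concurrent updates to the same consumer conflict (HTTP 409) rather than silently overwrite each other. A sketch of a PUT /allocations/{consumer_uuid} request body (the UUIDs and resource values are illustrative, not taken from the changes above):

```json
{
  "allocations": {
    "30a45cb4-ee38-4904-a712-1a1f8ba32bb2": {
      "resources": {"VCPU": 2, "MEMORY_MB": 1024}
    }
  },
  "project_id": "c8a58a6a-1b44-4a7c-9111-f1e0b4d3a001",
  "user_id": "9d9a72a5-20c8-49f0-95c1-fc9ee0f8c002",
  "consumer_generation": 5
}
```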
Questions
What should we do about nova calling the placement db, as in
nova-manage
and
nova-status?
Should we consider starting a new extraction etherpad? The old
one
has become a bit noisy and out of date.
Bugs
Placement related bugs not yet in progress: 17 (-1).
In progress placement bugs: 8 (-1).
Specs
Many of these specs don't seem to be getting much attention. Can the
dead ones be abandoned?
https://review.openstack.org/#/c/544683/
Account for host agg allocation ratio in placement
(Still in rocky/)
https://review.openstack.org/#/c/595236/
Add subtree filter for GET /resource_providers
https://review.openstack.org/#/c/597601/
Resource provider - request group mapping in allocation candidate
https://review.openstack.org/#/c/549067/
VMware: place instances on resource pool
(still in rocky/)
https://review.openstack.org/#/c/555081/
Standardize CPU resource tracking
https://review.openstack.org/#/c/599957/
Allow overcommit of dedicated CPU
(Has an alternative which changes allocations to a float)
https://review.openstack.org/#/c/600016/
List resource providers having inventory
https://review.openstack.org/#/c/593475/
Bi-directional enforcement of traits
https://review.openstack.org/#/c/599598/
allow transferring ownership of instance
https://review.openstack.org/#/c/591037/
Modelling passthrough devices for report to placement
https://review.openstack.org/#/c/509042/
Propose counting quota usage from placement and API database
(A bit out of date but may be worth resurrecting)
https://review.openstack.org/#/c/603585/
Spec: allocation candidates in tree
https://review.openstack.org/#/c/603805/
[WIP] generic device discovery policy
https://review.openstack.org/#/c/603955/
Nova Cyborg interaction specification.
https://review.openstack.org/#/c/601596/
supporting virtual NVDIMM devices
https://review.openstack.org/#/c/603352/
Spec: Support filtering by forbidden aggregate
https://review.openstack.org/#/c/552924/
Proposes NUMA topology with RPs
https://review.openstack.org/#/c/552105/
Support initial allocation ratios
https://review.openstack.org/#/c/569011/
Count quota based on resource class
https://review.openstack.org/#/c/607989/
WIP: High Precision Event Timer (HPET) on x86 guests
https://review.openstack.org/#/c/571111/
Add support for emulated virtual TPM
https://review.openstack.org/#/c/510235/
Limit instance create max_count (spec) (has some concurrency
issues related to placement)
https://review.openstack.org/#/c/141219/
Adds spec for instance live resize
So many specs.
Main Themes
Making Nested Useful
Work on getting nova's use of nested resource providers happy and
fixing bugs discovered in placement in the process. This is creeping
ahead. There is plenty of discussion going on nearby with regard
to the various ways they are being used, notably GPUs.
https://review.openstack.org/#/q/topic:bp/use-nested-allocation-candidates
I feel like I'm missing some things in this area. Please let me know
if there are others. This is related:
https://review.openstack.org/#/c/589085/
Pass allocations to virt drivers when resizing
Extraction
There continue to be three main tasks in regard to placement
extraction:
1. upgrade and integration testing
2. database schema migration and management
3. documentation publishing
The upgrade aspect of (1) is in progress with a patch to
grenade and a patch to
devstack. This is very
close to working. The remaining failures are with jobs that do not
have openstack/placement in $PROJECTS.
Once devstack is happy then we can start thinking about integration
testing using tempest. I've started some experiments with using
gabbi for that. I've
explained my reasoning in a blog
post.
Successful devstack is dependent on us having a reasonable solution
to (2). For the moment a hacked up
script is being used to
create tables. This works, but is not sufficient for deployers nor
for any migrations we might need to do.
Moving to alembic seems a reasonable thing to do, as a part of that.
We have work in progress to tune up the documentation but we are not
yet publishing documentation (3). We need to work out a plan for
this. Presumably we don't want to be publishing docs until we are
publishing code, but the interdependencies need to be teased out.
Other
I'm going to start highlighting some specific changes across several
projects. If you're aware of something I'm missing, please let me
know.
https://review.openstack.org/#/c/601866/
Generate sample policy in placement directory
(This is a bit stuck on not being sure what the right thing to do
is.)
https://review.openstack.org/#/q/topic:reduce-complexity+status:open
Some efforts by Eric to reduce code complexity
https://review.openstack.org/#/q/topic:bp/initial-allocation-ratios
Improve handling of default allocation ratios
https://review.openstack.org/#/q/topic:minimum-bandwidth-allocation-placement-api
Neutron minimum bandwidth implementation
https://review.openstack.org/#/c/607953/
TripleO: Use valid_interfaces instead of os_interface for placement
https://review.openstack.org/#/c/605507/
Puppet: Separate placement database is not deprecated
https://review.openstack.org/#/c/602160/
Add OWNERSHIP $SERVICE traits
https://review.openstack.org/#/c/604182/
Puppet: Initial cookiecutter and import from nova::placement
https://review.openstack.org/#/c/601407/
WIP: Add placement to devstack-gate PROJECTS
https://review.openstack.org/#/c/586960/
zun: Use placement for unified resource management
End
I'm going to be away next week, so if any of my pending code needs
fixes and is blocking other stuff, please fix it. Also, there will
be no pupdate next week (unless someone else does one).
|
Posted over 5 years ago by dulek
Kuryr-Kubernetes provides networking for Kubernetes pods by using OpenStack Neutron and Octavia.
|
Posted over 5 years ago by Emilien
In the first post, we demonstrated that we can now use Podman to
deploy a containerized OpenStack TripleO Undercloud. Let's see how
we can operate the containers with SystemD.
Podman, by design, doesn't run any daemon to manage the container
lifecycle, while Docker runs dockerd-current and
docker-containerd-current, which take care of a bunch of things, such
as restarting containers when they fail (when configured to do so,
with restart policies). In OpenStack TripleO, we still want our
containers to restart when they are configured to, so we thought
about managing the containers with SystemD. I recently wrote a blog
post about how Podman can be controlled by SystemD, and we finally
implemented it in TripleO. The way it works, as of today, is that any
container managed by Podman with a restart policy in its Paunch
container configuration will be managed by SystemD.
Let's take the example of Glance API. This snippet is the
configuration of the container at step 4:

    step_4:
      map_merge:
        - glance_api:
            start_order: 2
            image: *glance_api_image
            net: host
            privileged: {if: [cinder_backend_enabled, true, false]}
            restart: always
            healthcheck:
              test: /openstack/healthcheck
            volumes: *glance_volumes
            environment:
              - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS

As you can see, the Glance API container was configured to always
try to restart (so Docker would do so). With Podman, we re-use this
flag and we create (and enable) a SystemD unit file:

    [Unit]
    Description=glance_api container
    After=paunch-container-shutdown.service

    [Service]
    Restart=always
    ExecStart=/usr/bin/podman start -a glance_api
    ExecStop=/usr/bin/podman stop -t 10 glance_api
    KillMode=process

    [Install]
    WantedBy=multi-user.target

How it works underneath:
- Paunch will run podman run --conmon-pidfile=/var/run/glance_api.pid
  (…) to start the container during the deployment steps.
- If there is a restart policy, Paunch will create a SystemD unit
  file. The SystemD service is named after the container, so if you
  were used to the old service names before the containerization,
  you'll have to refresh your mind. By choice, we decided to go with
  the container name to avoid confusion with the podman ps output.
- Once the containers are deployed, they need to be stopped / started
  / restarted by SystemD. If you run the Podman CLI to do it, SystemD
  will take over (see the demo).
Note about PIDs: if you configure the service to start the container
with "podman start -a", then SystemD will monitor that process for
the service. The problem is that this leaves podman start processes
around, which have a bunch of threads and are attached to
STDOUT/STDIN. Rather than leaving this start process around, we use a
forking type in SystemD and specify a conmon pidfile for monitoring
the container. This removes 500+ threads from the system at the scale
of TripleO containers. (Credits to Alex Schultz for the finding.)
Stay in touch for the next post in the series of deploying TripleO
and Podman!
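Applied to the Glance API example, the forking approach from the PID note might look roughly like this unit file (a sketch inferred from the text; the exact options and paths in the TripleO-generated file may differ):

```ini
# Hypothetical forking-type unit: systemd tracks conmon via the
# pidfile instead of keeping a "podman start -a" process attached.
[Unit]
Description=glance_api container
After=paunch-container-shutdown.service

[Service]
Type=forking
PIDFile=/var/run/glance_api.pid
Restart=always
ExecStart=/usr/bin/podman start glance_api
ExecStop=/usr/bin/podman stop -t 10 glance_api

[Install]
WantedBy=multi-user.target
```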
|
Posted over 5 years ago by Emilien
In this series of blog posts, we'll demonstrate how we can replace
Docker with Podman when deploying OpenStack containers with TripleO.
[Image: a group of seals, also known as a pod]
This first post will focus on the Undercloud (the deployment cloud),
which contains the necessary components to deploy and manage an
"Overcloud" (a workload cloud). During the Rocky release, we switched
the Undercloud to be containerized by default, using the same
mechanism as we did for the Overcloud. If you need to be convinced by
Podman, I strongly suggest watching this talk, but in short, Podman
brings more security and makes systems more lightweight. It also
brings containers into a Kubernetes-friendly environment. Note:
deploying OpenStack on top of Kubernetes isn't in our short-term
roadmap and won't be discussed in these blog posts for now. To
reproduce this demo, you'll need to follow the official
documentation, which explains how to deploy an Undercloud, but change
undercloud.conf to have container_cli = podman (instead of the
current default, docker). In the next post, we'll talk about
operational changes when containers are managed with Podman versus
Docker.
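The undercloud.conf tweak mentioned above is a single option; a sketch (placing it in the [DEFAULT] section is my assumption):

```ini
# undercloud.conf: tell TripleO to drive containers with Podman
[DEFAULT]
container_cli = podman
```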
|
Posted over 5 years ago by Doug Smith
So you need a Kubernetes Operator tutorial, right? I sure did when I
started. So guess what? I got that b-roll! In this tutorial, we're
going to use the Operator SDK, and I definitely got myself
up-and-running by following the Operator Framework User Guide. Once
we have all that set up, oh yeah! We're going to run a custom
Operator, one designed for Asterisk: it can spin up Asterisk
instances, discover them as services, and dynamically create SIP
trunks between any number of Asterisk instances so they can all
reach one another to make calls. Fire up your terminals, it's time
to get moving with Operators.
|
Posted over 5 years ago by Nicole Martinelli
Cutting edge open source projects are driving this architectural shift even further, says AT&T's Gnanavelkandan Kathirvel.
The post How open source projects are pushing the shift to edge computing appeared first on Superuser.
|
Posted over 5 years ago by Chris Dent
Update: gabbi-tempest is
now documented and fully merged up to make the ideas below work. I
wrote an email to the openstack-dev list
explaining the current state of affairs. I also found a post from
January describing an earlier
stage in the process.
Imagine being able to add integration API tests to an OpenStack
project by creating a directory and adding a YAML file in that
directory that is those tests. That's the end game of what I'm
trying to do with
gabbi-tempest and some
experiments with zuul
jobs.
Gabbi is a testing tool for HTTP
APIs that models the requests and responses of a series of HTTP
requests in a YAML file. Gabbi-tempest integrates gabbi with
tempest, an
integration test suite for OpenStack, to provide some basic handling
for access to the service catalog and straightforward authentication
handling within the gabbi files. Tempest sets up the live services,
and provides some images and flavors to get you started.
Here's a simple example that sets some defaults for all requests and
then verifies the version discovery doc for the placement service
using
JSONPath:
defaults:
  request_headers:
    x-auth-token: $ENVIRON['SERVICE_TOKEN']
    content-type: application/json
    accept: application/json
    openstack-api-version: 'compute latest, placement latest'
  verbose: True

tests:
- name: get placement version
  GET: $ENVIRON['PLACEMENT_SERVICE']
  response_json_paths:
    $.versions[0].id: v1.0
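To make the $ENVIRON tokens in the example less mysterious, here is a rough sketch of the substitution gabbi performs on those values (my own illustration, not gabbi's actual implementation; the PLACEMENT_SERVICE value is a made-up URL):

```python
import os
import re

# Loosely mimic gabbi's $ENVIRON['NAME'] substitution: each token is
# replaced with the corresponding environment variable (or '' if unset).
_ENVIRON_RE = re.compile(r"\$ENVIRON\['([^']+)'\]")

def substitute_environ(value):
    """Replace $ENVIRON['NAME'] tokens with values from os.environ."""
    return _ENVIRON_RE.sub(lambda m: os.environ.get(m.group(1), ""), value)

# Illustrative value; in a real job this comes from the test environment.
os.environ["PLACEMENT_SERVICE"] = "http://placement.example/placement"
print(substitute_environ("GET $ENVIRON['PLACEMENT_SERVICE']"))
# → GET http://placement.example/placement
```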
This is used in a work-in-progress change to
placement. It is a work
in progress because there are several pieces which need to come
together to make the process as clean and delightful as possible.
The desired endpoint is that for a project to turn this kind of
testing on they would:
1. Add an entry to templates in their local .zuul.yaml,
   something like openstack-tempest-gabbi. This would cause a few
   different things:
   - openstack-tempest-gabbi jobs added to both gate and check.
   - That job would set or extend the GABBI_TEMPEST_PATH environment
     variable to include a gabbits directory from the code checkout
     of the current service. That environment variable defines where
     the gabbi-tempest plugin looks for YAML files.
   - And then run tempest:
     - with the gabbi-tempest plugin
     - with tempest_test_regex: 'gabbi' to limit the tests to just
       gabbit tests (not necessary if other tempest tests are
       desired)
     - with tox_envlist: all, which is the tox environment that is
       the current correct choice when wanting to use a test regex.
2. Create a directory in their repo, perhaps gabbits, and put one or
   more gabbi YAML files like the example above in there.
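A hypothetical sketch of what the project-side stanza and the job it expands to might look like; the template name, parent job, and variable wiring are all assumptions based on the steps above, not a merged implementation:

```yaml
# In the consuming project's .zuul.yaml (hypothetical):
- project:
    templates:
      - openstack-tempest-gabbi

# Which would, roughly, provide a job like (also hypothetical):
- job:
    name: openstack-tempest-gabbi
    parent: devstack-tempest
    vars:
      tox_envlist: all
      tempest_test_regex: gabbi
      devstack_localrc:
        GABBI_TEMPEST_PATH: /opt/stack/placement/gabbits
```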
We're some distance from that but there are pieces in progress that
move things in that direction. I'm hoping that the above gives a
better description of what I'm hoping to achieve and encourages
people to help, because I need some.
Some of the in progress pieces:
WIP: Create a gabbi-tempest zuul job
WIP: Use gabbi-tempest job from tempest
gabbi-tempest, which
needs to become an openstack thing
|