News

Posted about 6 years ago by David Blackwell
Welcome to part one of a series on reducing risk in OpenStack with NetApp. Whether you are already running OpenStack, or are considering running it, the security of your data should always be at the forefront of your thoughts. NetApp has many features that work in concert with our OpenStack drivers to give you peace ... The post Reducing Risk in OpenStack with NetApp Part 1: Encryption appeared first on thePub.
Posted about 6 years ago by Doug Smith
So you want to install Kubernetes on CentOS? Awesome, I’ve got a little choose-your-own-adventure here for you. If you choose to continue installing Kubernetes, keep reading. If you choose to not install Kubernetes, skip to the very bottom of the article. I’ve got just the recipe for you to brew it up. It’s been a year since my last article on installing Kubernetes on CentOS, and while it’s still probably useful, some of the Ansible playbooks we were using have changed significantly. Today we’ll use kube-ansible, a playbook developed by my team and me to spin up Kubernetes clusters for development purposes. Our goal will be to get Kubernetes up (and we’ll use Flannel as the CNI plugin), and then spin up a test pod to make sure everything’s working swimmingly. Continue reading: Spin up a Kubernetes cluster on CentOS, a choose-your-own-adventure
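For a rough idea of the workflow the article describes, a minimal sketch might look like the following. The repository URL is where kube-ansible lives as far as I know, but the inventory layout, group names, and playbook name shown here are illustrative assumptions rather than the article's exact steps:

    # Grab the kube-ansible playbooks (repo location assumed).
    git clone https://github.com/redhat-nfvpe/kube-ansible.git
    cd kube-ansible

    # Describe the CentOS hosts to install onto; hostnames, addresses, and
    # group names below are placeholders, not the project's documented layout.
    cat > ./my-inventory <<'EOF'
    [master]
    kube-master ansible_host=192.168.122.10

    [nodes]
    kube-node-1 ansible_host=192.168.122.11
    kube-node-2 ansible_host=192.168.122.12
    EOF

    # Run the cluster install playbook (playbook name assumed for illustration);
    # as in the article, the cluster would come up with Flannel as the CNI plugin.
    ansible-playbook -i ./my-inventory kube-install.yml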
Posted about 6 years ago by Nicole Martinelli
“Very few markets of any type can absorb as much product as you can throw at it -- HPC is one of them,” says researcher Christopher Willard. The post Trends to watch in high performance computing appeared first on Superuser.
Posted about 6 years ago by Chris Dent
This week's TC Report goes off in the weeds a bit with the editorial commentary from yours truly. I had trouble getting started, so had to push myself through some thinking by writing stuff that, at least for the last few weeks, I wouldn't normally be including in the summaries. After getting through it, I realized that the reason I was struggling is because I haven't been including these sorts of things. Including them results in a longer and more meandering report, but it is more authentically my experience, which was my original intention.

Zuul Extraction and the Difficult Nature of Communication

Last Tuesday morning we had some initial discussion about Zuul being extracted from OpenStack governance as a precursor to becoming part of the CI/CD strategic area being born elsewhere in the OpenStack Foundation. Then on Thursday we revisited the topic, especially as it related to how we communicate change in the community and how we invite participation in making decisions about change. In this case by "community" we're talking about anything under the giant umbrella of "stuff associated with the OpenStack Foundation". Plenty of people expressed that though they were not surprised by the change, it was because they are insiders, and could understand how some, who are not, might be surprised by what seemed like a big change. This led to addressing the immediate shortcomings and clarifying the history of the event. There was also concern that some of the reluctance to talk openly about the change appeared to stem from needing to preserve the potency of a Foundation marketing release. I expressed some frustration: "...as usual, we're getting caught up in details of a particular event (one that in the end we're all happy to see happen), rather than the general problem we saw with it (early transparency etc). Solving the immediate problem is easy, but since we keep doing it, we've got a general issue to resolve." We went round and round about the various ways in which we have tried and failed to do good communication in the past, and while we make some progress, we fail to establish a pattern. As Doug pointed out, no method can be 100% successful, but if we pick a method and stick to it, people can learn that method. We have a cycle where we not only sometimes communicate poorly but we also communicate poorly about that poor communication. So when I come round to another week of writing this report, and am reminded that these issues persist and I am once again communicating about them, it's frustrating. Communicating, a lot, is generally a good thing, but if things don't change as a result, that can be a strain. If I'm still writing these things in a year's time, and we haven't managed to achieve at least a bit more grace, consistency, and transparency in the ways that we share information within and between groups (including, and maybe especially, the Foundation executive wing) in the wider community, it will be a shame and I will have a sad. In a somewhat related and good sign, there is a great thread on the operators list that raises the potential of merging the Ops Meeting and the PTG into some kind of "OpenStack Community Working Gathering".

Encouraging Upstream Contribution

On Friday, tbarron raised some interesting questions about how the summit talk selection process might relate to the four opens. The talk eventually led to a positive plan to try to bring some potential contributors upstream in advance of summit, as well as to work to create clearer guidelines for track chairs.
Executive Power

I had a question at this morning's office hour, related to some work in the API-SIG that hasn't had a lot of traction, about how best to explain how executive power is gained and spent in a community where we intentionally spread power around a lot. As with communication above, this is a topic that comes up a fair amount, and investigating the underlying patterns can be instructive. My initial reaction on the topic was the fairly standard (but in different words): If this is important to you, step up and make it happen. I think, however, that when we discuss these things we fail to take enough account of the nature of OpenStack as a professional open source environment. Usually, nonhierarchical, consensual collaborations are found in environments where members represent their own interests. In OpenStack our interactions are sometimes made more complex (and alienating) by virtue of needing to represent the interests of a company or other financial interest (including the interest of keeping our nice job) while at the same time not having the recourse of being able to complain to someone's boss when they are difficult (because that boss is part of a different hierarchy than the one you operate in). We love (rightfully so) the grand project which is OpenStack, and want to preserve and extend as much as possible the beliefs in things that make it feel unique, like "influence tokens". But we must respect that these things are collectively agreed hallucinations that require regular care and feeding, and balance them against the surrounding context which is not operating with those agreements. Further, those of us who have leeway to spend time building influence tokens are operating from a position of privilege. One of the ways we sustain that position is by behaving as if those tokens are more readily available to more people than they really are. /me wipes brow

TC Elections Coming

The next round of TC elections will be coming up in late April. If you're thinking about it, but feel like you need more information about what it might entail, please feel free to contact me. I'm sure most of the other TC members would be happy to share their thoughts as well.
Posted about 6 years ago by Jessica Field
The post SUSE Expert Days appeared first on Aptira.
Posted about 6 years ago by Josh Berkus and Stephen Gordon
Josh Berkus and Stephen Gordon take a look at KubeVirt for the traditional VM use case and Kata Containers for the isolation use case. The post Explore KubeVirt and Kata Containers appeared first on Superuser.
Posted about 6 years ago by Cameron Seader
Posted about 6 years ago by Chris Dent
This is the fifth in a series of posts about experimenting with OpenStack's placement service in a container. In the previous episode, I made an isolated container that persists data to itself work in kubernetes. In there I noted that persisting data to itself rather takes the joy and functionality out of using kubernetes: you can't have replicas, you can't autoscale, and you lose all your data. I spent some of yesterday and today resolving those issues and report on the results here: an autoscaling placement service that persists data to a postgresql server running $elsewhere. The code for this extends the same branch of placedock as playground 4 and continues to use minikube. I gave up trying to get things to work on linux with the kvm or kvm2 drivers. I should probably try the none driver at some point, but for now this work has been happening on a mac. Update the next day: Tried the none driver, worked fine, but you need to be aware of docker permissions.

There are two main chunks to make this work: adapting the creation of the container and the creation and syncing of the database so that the database can be outside the container, and tweaking the kubernetes bits to get a horizontal pod autoscaler working.

Note: This isn't a tutorial on using kubernetes or placement, it's more of a trip report about the fun stuff I did over the weekend. If you try to follow this exactly for managing a placement service, it's not going to work very well. Think of this as a conversation starter. If you're interested in this stuff, let's talk. I recognize that my writing on this topic has become increasingly incoherent as I've gone off into the weeds of discovery. I will write a summary of all the playgrounds once they have reached a natural conclusion. For now, even in the weeds, they've taught me a lot.

Database Tweaking

In playground 4, the database is established when building the container: every container gets its own sqlite db sitting there ready and waiting for run time. This does not work if we want to use a remote db and we want multiple containers talking to the same db. Therefore the sync.py script, which creates the database tables, is copied into the container at build time but not actually run until run time. At run time, it gets the database connection URL from an environment variable, DB_STRING. If it's not set, a default is used. We can define the value in the kubernetes deployment.yaml.

But wait, the container had only been running the uwsgi process. How do we get it to use the environment variable, run sync.py, and only once that's done, start up the uwsgi server? Turns out we can replace the existing docker CMD with a script that does all that stuff. In the Dockerfile the end is adjusted to:

    ADD startup.sh /
    CMD ["sh", "-c", "/startup.sh"]

and startup.sh is:

    DB_STRING=${DB_STRING:-sqlite:////cats.db}

    # Do substitutions in the template.
    sed -e "s,{DB_CONNECTION},$DB_STRING," < /etc/nova/nova.conf.tmp > /etc/nova/nova.conf

    # establish the database
    python3 /sync.py --config-file /etc/nova/nova.conf

    # run the web server
    /usr/sbin/uwsgi --ini /placement-uwsgi.ini

It would surprise me not one iota if there are cleaner ways than that, but that way worked. The result of this is that each time a container starts, it connects to the database described in $DB_STRING and tries to create and update the tables. If they are already created, it's happy. If something else is in the midst of versioning the database, an exception is caught and ignored.
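For a quick local check of this startup path before involving kubernetes, something like the following should work. This is a minimal sketch: the image tag placedock:1.0 and the x-auth-token header come from the post, but the published port and the example DB_STRING value are illustrative assumptions:

    # Build the image (tag taken from the post's deployment.yaml).
    docker build -t placedock:1.0 .

    # Start a container pointed at an external postgresql server; the connection
    # string is an illustrative placeholder, not from the post. Omitting
    # DB_STRING would fall back to the sqlite default in startup.sh.
    docker run -d --rm --name placement-test -p 8080:80 \
        -e DB_STRING='postgresql+psycopg2://placement@db.example.org/placement?client_encoding=utf8' \
        placedock:1.0

    # The placement API should now answer on the published port.
    curl -H 'x-auth-token: admin' http://localhost:8080/resource_providers

    docker stop placement-test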
I had a postgresql server running on a nearby VM, so I used that. For the time being I simply added the necessary connection drivers to the container at build time, but if it was required to be super dynamic, then the python driver code could be installed at runtime. Being super dynamic is not really in scope for my experiments. After doing all that, I adjusted my deployment to have 4 replicas and made sure things worked. And it did. Onward.

Kubernetes Tweaking

Having 4 containers, either doing nothing or being overloaded, is not really taking advantage of some of the best stuff about kubernetes. What we really want, to be both more useful and more cool, is to create and destroy the placement pods as needed. This is done with a Horizontal Pod Autoscaler. You tell it the minimum and maximum number of pods you're willing to accept and a metric for determining the percent of resource consumption that is the boundary between OK and overloaded. Here's the autoscaler.yaml that works for me:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: placement-deployment
      namespace: default
    spec:
      maxReplicas: 10
      minReplicas: 1
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: placement-deployment
      targetCPUUtilizationPercentage: 50

While this is a relatively simple concept and the tooling is straightforward, it took me quite some time to get this to work. I've been using minikube 0.25.0 (the latest release as of this writing) but running it at the maximum version of kubernetes that it supports (v1.9.0). This leads to some conflicts. In older versions, the expected way to manage autoscaling and metrics is to use heapster. Minikube includes an addon for this, but as installed it does not present a "rest" API for the information. That's okay for some unclear number of kubernetes versions back, but is not with v1.9.0. Modern kubernetes has the Resource Metrics API. heapster can support that, but only if it is started with a particular flag. The other option is to start the kubernetes-controller with a particular flag so that autoscaling doesn't use the metrics API. I preferred to stay modern. Unreleased minikube adds the metrics server as an addon. I was able to copy that code into my minikube setup and establish the service.

Next I discovered that my placement deployment needed to describe a resource limit in order for the autoscaling to work. In hindsight this is obvious. The autoscaling is done based on a percentage of a limit the deployment sets for itself. For instance, if you say that things should be scaled up when resource usage hits 50%, kubernetes says "50% of what?". In my case that meant adjusting the containers section of deployment.yaml:

    containers:
    - name: placement
      image: placedock:1.0
      env:
      - name: DB_STRING
        value: postgresql+psycopg2://[email protected]/placement?client_encoding=utf8
      ports:
      - containerPort: 80
      # We must set resources for scaling to work.
      resources:
        requests:
          cpu: 250m

That last stanza is saying "we request 1/4 core worth of cpu". So now the autoscaler is saying: when cpu utilization hits 50% (of 1/4 core), scale.

Note: If you're following along and using minikube with docker-machine, keep in mind that the default "machine" is pretty small, so you need to keep the resource request for each individual container pretty small or you will soon overwhelm the machine. I had cpu above set to 1000m initially. Starting new pods was slow enough that they never became ready.
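For what it's worth, the same autoscaler can usually be created without a YAML file at all. Here is a sketch of the equivalent imperative commands; the kubectl subcommands themselves exist, but treat the exact sequence as an assumption rather than what the post actually ran:

    # Apply the deployment and autoscaler from the files described above.
    kubectl apply -f deployment.yaml
    kubectl apply -f autoscaler.yaml

    # Or create an equivalent HPA directly against the deployment:
    kubectl autoscale deployment placement-deployment \
        --cpu-percent=50 --min=1 --max=10

    # Check that the autoscaler can see metrics (TARGETS should show a
    # percentage rather than <unknown> once the metrics server is up).
    kubectl get hpa placement-deployment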
When I finally got this working it was fun to watch (literally, with watch kubectl get hpa). If you're starting from scratch it can take a while for everything to warm up and be running, but eventually you'll see low usage and low replicas (this output is wide, you may need to scroll):

    NAME                   REFERENCE                          TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
    placement-deployment   Deployment/placement-deployment    0% / 50%   1         10        1          1m

To load up the deployment and make it scale (assuming there's a bit of data in the database, like the one resource provider that the gabbi-run in bootstrap.sh will create) I did this:

    export PLACEMENT=$(minikube service placement-deployment --url)
    ab -n 100000 -c 100 -H 'x-auth-token: admin' $PLACEMENT/resource_providers

After a while the usage rises above the 50% target and more replicas are created:

    NAME                   REFERENCE                          TARGETS      MINPODS   MAXPODS   REPLICAS   AGE
    placement-deployment   Deployment/placement-deployment    109% / 50%   1         10        3          8m

When the ab was done and resource usage settled, all but one of the containers were terminated. That one was working as expected:

    curl -H 'x-auth-token: admin' $PLACEMENT/resource_providers | json_pp

Even though I know that's exactly how it is supposed to work, it's still pretty cool.

What next?

I need to add forbidden traits support to the placement service, but after that I will likely revisit this stuff, either for more scale fun or the cat management mentioned in playground 4. As always, please leave a comment or otherwise contact me if you have questions, there's something I've done weirdly, or you're doing similar stuff and there's an opportunity for us to collaborate.