Posted about 1 hour ago by Jean Philippe Braun
Introduction

SDN products are evolving fast. Release cycles can be short, and more and more features are added in each cycle. This is clearly a change that network administrators weren’t used to with hardware solutions. In this context, the operational team in charge of the SDN functionality of the platform must be confident when deploying new releases. For that matter, the team must be able to test new builds for ISO functionality with the previous build, and detect possible regressions.

Of course, SDN vendors already run end-to-end tests on their releases, but do they test your use cases? And what if you are building the SDN software yourself? You cannot be sure that your build passes the vendor tests. A good idea would be to integrate the vendor functional tests into your CI platform, but that is not always possible. The tests may not be distributed, or not even runnable outside the vendor infrastructure.

At Cloudwatt we build our own version of OpenContrail, meaning the OpenContrail upstream branch with backports and sometimes non-upstreamed patches that are specific to our platform. As for OpenContrail functional tests, there is a repository available at https://github.com/Juniper/contrail-test-ci. We tried to run these tests in our CI, but it quickly became a nightmare. The tests are open, but clearly not suitable for running on a generic CI platform. In the end, we decided to write our own functional tests.

Objectives

The tests we want to run can be summarized in three steps:

1. Deploy an infrastructure with multiple VMs, VNs, SGs, etc.
2. Generate some traffic with classic network debugging tools (ping, netcat, netperf, scapy…)
3. Validate that the traffic is going to the right place and has the right shape

As for the global objectives, we want the tests to be SDN agnostic. We also want to avoid any heavy customization of the VM images, such as setting up agents inside them and having to control those agents.
Ideally we should be able to integrate a complex customer stack and test it with minor modifications. Finally, the orchestration must be as simple as possible.

Our solution

Instead of reinventing the wheel, and to keep the tests as KISS as possible, we use two powerful tools:

- Terraform [1], which is used to deploy the infrastructure for the test and can also modify it during the test
- Skydive [2], which is used to validate the traffic

In our tests we are not checking the internals of the SDN solution, OpenContrail in our case. We’d like to keep the tests backend agnostic: in the end, if the test passes, we can assume that the backend is behaving correctly. Because of that we don’t need a complex setup to run the tests, so they can be run simply from a laptop. Basically you need Terraform, and Skydive deployed on the platform. Both tools are easy to deploy or install.

Terraform is quite well known. It provides a DSL to describe the infrastructure you wish to deploy on a cloud provider. In our case we use the OpenStack provider, but Terraform can handle other providers as well (AWS, Azure…). The tool is comparable to the Heat component in the OpenStack world; the advantage of Terraform over Heat is that you can make incremental updates to your infrastructure.

Skydive, on the other hand, is quite new and not widely used yet. The project aims to provide a tool to debug and troubleshoot network infrastructures, and especially SDN platforms. It provides a representation of the network topology (interfaces and the links between them) and traffic capture on demand via REST APIs. In our tests we use the on-demand capture feature to validate the traffic in the infrastructure.

The “hello world” test

So, what would a test look like with this solution? As an example, let’s look at a simple security group test.
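To give a taste of the Terraform side, a security-group stack of this kind looks roughly like the following. This is an illustrative sketch, not the post’s actual code: image, flavor, and CIDR values are placeholders, and the cloud-init script is an assumption about how the ping could be wired up.

```hcl
# Illustrative sketch of the security-group test stack.

resource "openstack_networking_network_v2" "sg_net" {
  name = "sg_net"
}

resource "openstack_networking_subnet_v2" "sg_subnet" {
  name       = "sg_subnet"
  network_id = "${openstack_networking_network_v2.sg_net.id}"
  cidr       = "10.0.0.0/24"
}

resource "openstack_compute_secgroup_v2" "sg_secgroup" {
  name        = "sg_secgroup"
  description = "allow ICMP and SSH"

  rule {
    ip_protocol = "icmp"
    from_port   = -1
    to_port     = -1
    cidr        = "0.0.0.0/0"
  }

  rule {
    ip_protocol = "tcp"
    from_port   = 22
    to_port     = 22
    cidr        = "0.0.0.0/0"
  }
}

resource "openstack_compute_instance_v2" "sg_vm2" {
  name            = "sg_vm2"
  image_name      = "cirros"
  flavor_name     = "m1.tiny"
  security_groups = ["${openstack_compute_secgroup_v2.sg_secgroup.name}"]

  network {
    uuid = "${openstack_networking_network_v2.sg_net.id}"
  }
}

resource "openstack_compute_instance_v2" "sg_vm1" {
  name            = "sg_vm1"
  image_name      = "cirros"
  flavor_name     = "m1.tiny"
  security_groups = ["${openstack_compute_secgroup_v2.sg_secgroup.name}"]

  network {
    uuid = "${openstack_networking_network_v2.sg_net.id}"
  }

  # cloud-init script delivered via the Nova metadata service:
  # start pinging sg_vm2 as soon as the VM has booted.
  user_data = <<EOF
#!/bin/sh
ping ${openstack_compute_instance_v2.sg_vm2.network.0.fixed_ip_v4}
EOF
}
```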
The goal of the test is to validate that two VMs can talk to each other because the security group allows it; then, after removing a rule from the SG, we validate that the traffic is dropped.

Terraform stack

First we describe the infrastructure to set up with Terraform. We boot two VMs (sg_vm1, sg_vm2). They are spawned in the same VN (sg_net) and both use the same security group (sg_secgroup), which allows ICMP and SSH traffic. Using the Nova cloud-init API, a script is run on sg_vm1 that pings sg_vm2 as soon as the VM is booted.

The test itself

Next we write a small shell script to run a sequence of tasks. If one task fails, the whole test should fail. The tasks to run, in order, are:

1. apply the Terraform stack on the target environment
2. start a traffic capture on the sg_vm1 port
3. poll Skydive until we see some ICMP traffic going out and coming back on the interface
4. remove the ICMP rule of the security group using Terraform
5. check with Skydive that the ICMP traffic is going out of the interface but that nothing is coming back
6. destroy the infrastructure and the Skydive capture

This is the full script with comments:

Bash probably isn’t the best option for this, but it shows that with only a few lines we can have an end-to-end test. There is also no need for synchronization, and no need to contact the VMs directly, which makes things simpler. The VM gets its configuration and the commands to run from the Nova metadata service; after that, only requests to Skydive are made to ensure the traffic behaves as it should. Result of the script:

Conclusion

Relying on powerful tools makes our lives easier, and so our tests. Instead of developing a complete test framework in-house, we rely on tools that have good community support. The glue between these tools is so simple that you could rewrite the last test with some test framework in a day.
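As a postscript on the "poll Skydive" step of the script: it boils down to a generic retry helper that re-runs a query until it succeeds or times out. A minimal sketch in the same bash style; `check_icmp` is a hypothetical stand-in for the actual Skydive flow query, not a real command from the post.

```shell
#!/bin/bash
# wait_for TIMEOUT CMD...: re-run CMD every second until it succeeds
# (exit code 0) or TIMEOUT seconds have elapsed. Returns 1 on timeout.
wait_for() {
    local timeout=$1; shift
    local elapsed=0
    until "$@"; do
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            return 1
        fi
        sleep 1
    done
    return 0
}

# Hypothetical usage: check_icmp would wrap a Skydive flow query on the
# sg_vm1 capture and exit 0 once ICMP is seen in both directions.
# wait_for 60 check_icmp || exit 1
```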
Finally, investing time in these tools is interesting because they are useful not just for tests but in a lot of other use cases, such as debugging production environments when some bug has slipped through the test CI!

[1] https://www.terraform.io/
[2] http://skydive-project.github.io/skydive/
Posted about 1 hour ago by Rob H
RackN revisits OpenStack deployments with an eye on ongoing operations. I’ve been an outspoken skeptic of a Joint OpenStack Kubernetes Environment (my OpenStack BCN preso, Super User follow-up and BOS Proposal) because I felt that the technical hurdles of cloud native architecture … Continue reading →
Posted about 1 hour ago by Stig Telfer
PG Day covered all things Postgres at FOSDEM 2017, and Steve Simpson, one of StackHPC's senior technical leads, presented at PG Day on his thoughts for how some of the advanced features of Postgres could really shine as a backing store for telemetry, logging and monitoring. As Steve describes in his interview for FOSDEM PG Day, he understands Postgres from the intimate vantage point of having worked with the code base, and has gained respect for its implementation under the hood in addition to its capabilities as an RDBMS. By exploiting the unique strengths of Postgres, Steve sees an opportunity to both simplify and enhance OpenStack monitoring in one move. He'll be elaborating on his proposed designs and the progress of this project in a StackHPC blog post in due course. Steve's talk was recorded, and the slides are available on SlideShare.
Posted about 1 hour ago by Ramon Acedo
Enable Ironic in the Overcloud in a multi-controller deployment with TripleO or the director, a new feature introduced in Red Hat OpenStack Platform 10. The post Deploying Ironic in OpenStack Newton with TripleO appeared first on OpenStack Superuser.
Posted about 1 hour ago by Walter Bentley
If you’re confused about the differences between OpenStack and virtualization, you’re not alone. They are indeed different, and this post will describe how, review some practical ‘good fit’ use cases for OpenStack, and finally dispel a few myths about this growing open source cloud platform. To get started, a few basics: virtualization has its roots in partitioning, which divides

The post OpenStack and Virtualization: What’s the Difference? appeared first on The Official Rackspace Blog.
Posted about 1 hour ago by Nicole Martinelli
Superuser talks to ONF’s Bill Snow about how the telcos play together and the upshot of the new Innovation Pipeline for XOS — a service management toolkit built on top of OpenStack. The post How the Open Networking Foundation pioneers innovation appeared first on OpenStack Superuser.
Posted about 1 hour ago by Brad Topol
The OpenStack community has just made available the fifteenth release of OpenStack, codenamed Ocata. With Ocata, OpenStack delivers a release with increased focus on stability and maturity at a point in time when companies are getting very comfortable with placement of workloads across public and private clouds, and also realizing significant cost savings from […] The post A Guide to the OpenStack Ocata Release appeared first on IBM OpenTech.
Posted about 1 hour ago by OSIC Team
Working towards a more resilient OpenStack with Live Migration
OSIC Team, February 22, 2017

Over the last few years, many enterprise customers have moved application workloads into public and private clouds, such as those powered by OpenStack. This trend is projected to grow significantly until 2020. Moving to the cloud offers customers lower costs and a consolidation of virtual estates, and they can benefit from OpenStack’s increased manageability.

Host maintenance is a common task in running a cloud: rebooting to install a security fix, patching the host operating system, or replacing hardware because of an imminent failure. In these cases, live migration enables the administrator to move a virtual machine (VM) to an unaffected host before such impactful maintenance is performed on the affected host, which ensures almost no instance downtime during the normal operations of the cloud.

During the Ocata release, the OpenStack Innovation Center (OSIC) benchmark tested live migration to discover the best way to move forward with non-impacting cloud maintenance. The team deployed two 22-node OpenStack clouds using OpenStack-Ansible to test two types of live migration: one with local storage, where the team could test block migration, and one with a remote storage back end based on Ceph, to test non-block migration.

The team used OpenStack's Rally project to build a test suite to serially live-migrate several VMs from one host (host A) to another (host B), and then live-migrate the VMs back to host A. This test was repeated several times to reduce the level of uncertainty in the results. Another part of the test ensured that the VMs had a suitable workload running inside them to exercise live migration: for the duration of the test, Spark Streaming ran constantly inside the VMs, doing stream processing.
The team needed some disk usage to exercise moving the disk between two hosts, and a level of memory dirtying to exercise the copying of memory during live migration. Before the live migration iterations were started, all the benchmarking tests were launched by sending packets to the VM. To rate the performance of the live migration operations, a benchmarker tool configured by the OSIC DevOps team measured the following KPIs: per-VM live migration timing, per-VM downtime, per-VM TCP stream continuity, and per-VM metrics (CPU, network bandwidth, disk I/O, and RAM).

To have reliable results, 240 VM live migrations were performed for four cases:

1. Block storage live migration with tunneling disabled.
2. Non-block storage live migration with tunneling disabled.
3. Block storage live migration with tunneling enabled.
4. Non-block storage live migration with tunneling enabled.

The average time and standard deviation of VM live migration were recorded for each case. These results gave the team a sense of the variation in the time needed for live migration. The team plotted the following example graph to show these results:

The tests revealed two bugs, which the team addressed. In the first bug, the team discovered that the live migrations were being incorrectly tracked; the team submitted a fix upstream, which was merged. In the second bug, the team encountered a race condition that caused failures during the cleanup after a live migration had completed; the team also submitted a fix upstream. After the team applied these fixes, no live migration failures occurred in any of the test runs. The workload was then tuned to help get results that mirror what is observed in production. The OSIC team continues to experiment with live migration and is preparing a white paper for the Boston summit.
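As a side note on the statistics above: the per-case average and sample standard deviation are straightforward to derive from the raw per-iteration timings. A minimal sketch with invented placeholder durations, not the OSIC numbers:

```shell
#!/bin/bash
# Mean and sample standard deviation of live-migration durations (seconds).
# The values below are invented placeholders, not OSIC results.
durations="31.2 29.8 35.1 30.4 33.7 28.9"

stats=$(echo "$durations" | tr ' ' '\n' | awk '
    { sum += $1; sumsq += $1 * $1; n++ }
    END {
        mean  = sum / n
        stdev = sqrt((sumsq - n * mean * mean) / (n - 1))
        printf "average: %.2fs  stddev: %.2fs", mean, stdev
    }')

echo "$stats"
```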
Posted about 1 hour ago by Doug Smith
Sometimes, one isn’t enough. Especially when you’ve got network requirements that aren’t just “your plain old HTTP API”. By default in Kubernetes, a pod is exposed only to a loopback and a single interface as assigned by your pod networking. In the telephony world, something we love to do is isolate our signalling, media, and management networks. If you’ve got those in separate NICs on your container host, how do you expose them to a Kubernetes pod? Let’s plug in the CNI (container network interface) plugin called multus-cni into our Kubernetes cluster and we’ll expose multiple network interfaces to a (very simple) pod.
Posted 2 days ago by Thierry Carrez
It is now pretty well accepted that open source is a superior way of producing software. Almost everyone is doing open source these days. In particular, the ability for users to look under the hood and make changes results in tools that are better adapted to their workflows. It reduces the cost and risk of finding yourself locked in with a vendor in an unbalanced relationship. It contributes to a virtuous circle of continuous improvement, blurring the lines between consumers and producers. It enables everyone to remix and invent new things. It adds to the common human knowledge.

And yet

And yet, a lot of open source software is developed on (and with the help of) proprietary services running closed-source code. Countless open source projects are developed on GitHub, or with the help of Jira for bug tracking, Slack for communications, Google Docs for document authoring and sharing, Trello for status boards. That sounds a bit paradoxical and hypocritical -- a bit too much "do what I say, not what I do". Why is that? If we agree that open source has so many tangible benefits, why are we so willing to forfeit them with the very tooling we use to produce it?

But it's free!

The argument usually goes like this: those platforms may be proprietary, but they offer great features, and they are provided free of charge to my open source project. Why on Earth would I go through the hassle of setting up, maintaining, and paying for infrastructure to run less featureful solutions? Or why would I pay for someone to host it for me? The trick is, as the saying goes, when the product is free, you are the product. In this case, your open source community is the product. In the worst case scenario, the personal data and activity patterns of your community members will be sold to third parties.
In the best case scenario, your open source community is forcibly recruited into an army that furthers the network effect, making it even more difficult for the next open source project not to use that proprietary service. In all cases, you, as a project, decide not to bear the direct cost, but ask each and every one of your contributors to pay for it indirectly instead. You force all of your contributors to accept the ever-changing terms of use of the proprietary service in order to participate in your "open" community.

Recognizing the trade-off

It is important to recognize the situation for what it is: a trade-off. On one side, shiny features and convenience. On the other, a lock-in of your community through specific features, data formats, proprietary protocols, or just plain old network effect and habit. Each situation is different. In some cases the gap between the proprietary service and the open platform is so large that it makes sense to bear the cost. Google Docs is pretty good at what it does, and I find myself using it when collaborating on something more complex than etherpads or ethercalcs. At the opposite end of the spectrum, there is really no reason to use Doodle when you can use Framadate. In the same vein, Wekan is close enough to Trello that you should really consider it as well. For Slack vs. Mattermost vs. IRC, the trade-off is more subtle.

As a side note, the cost of lock-in is much reduced when the proprietary service is built on standard protocols. For example, GMail is not that much of a problem, because it is easy enough to use IMAP to integrate it (and possibly move away from it in the future). If Slack were just a stellar, opinionated client using IRC protocols and servers, it would also not be that much of a problem.

Part of the solution

Any simple answer to this trade-off would be dogmatic. You are not impure if you use proprietary services, and you are not wearing blinders if you use open source software for your project infrastructure.
Each community will answer that trade-off differently, based on its roots and history. The important part is to acknowledge that nothing is free. When the choice is made, we all need to be mindful of what we gain and what we lose. To conclude, I think we can all agree that, all other things being equal, when there is an open source solution with all the features of the proprietary offering, we all prefer to use it. The corollary is that we all benefit when those open source solutions get better. So to be part of the solution, consider helping those open source projects build something as good as the proprietary alternative, especially when they are already close to it feature-wise. That will make solving the trade-off a lot easier.