Posted about 11 hours ago by kjohnston
Getting Started with OpenStack, The OSIC Way
October 11, 2016, by Alexandra Settle (OSIC Docs Queen) and Ala Raddaoui (OSIC Ops Team)

OpenStack, the leading open source cloud platform, boasts 50 teams managing separate projects, 14 successful release cycles, and thousands of diverse contributors from all over the world.

Those contributors are eager to develop new tools that make it simpler to deploy and manage OpenStack. Projects like OpenStack-Ansible, Fuel, Juju, and Kolla are just a few of the available deployment projects, each offering a different solution, for a different use case, for deploying your OpenStack cloud.

However, organizations wanting to provision bare metal for OpenStack deployments were asking: "what's out there for us?"

Our end goal was to prove that you should not run away from OpenStack because the first deployment step is hard. Identifying a pain point is just the first step; there is plenty of room for innovation within OpenStack.

The OpenStack Innovation Center (OSIC), a collaboration between Intel and Rackspace to help speed enterprise deployment of OpenStack, focused on open sourcing a way to provision bare metal servers for OpenStack deployments.

Once an investigation of available open source tools began, it became clear there was a lack of readily available information offering a clear and easy way to configure bare metal servers on which to install OpenStack. The team decided to start with the tools most used in the community that offered a high degree of stability and maturity.

Enterprises at any scale more often than not already have systems in place to take care of bare metal provisioning. When they don't, Cobbler can be used.

As a result, the team selected PXE booting with Cobbler to deploy Ubuntu servers on bare metal. Cobbler is a simple, open source, scalable, and stable solution that many users already trust.
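The Cobbler-based PXE flow described above can be sketched with Cobbler's CLI. This is a minimal configuration sketch, not the OSIC guide's actual commands: the ISO mount path, profile name, preseed path, and MAC address are all hypothetical placeholders, and option spellings follow Cobbler 2.x.

```shell
# Import an Ubuntu ISO so Cobbler can serve its kernel/initrd over PXE
# (/mnt/ubuntu-iso is a placeholder mount point)
cobbler import --name=ubuntu-14.04 --path=/mnt/ubuntu-iso

# Create a profile pointing at an automated-install (preseed) file
cobbler profile add --name=osic-node \
    --distro=ubuntu-14.04-x86_64 \
    --kickstart=/var/lib/cobbler/kickstarts/osic.seed

# Register a bare metal server by the MAC address of its PXE NIC
cobbler system add --name=node01 --profile=osic-node \
    --mac=aa:bb:cc:dd:ee:01 --netboot-enabled=true

# Regenerate DHCP/TFTP configuration, then power-cycle the server to PXE boot
cobbler sync
```

Once the server PXE boots, the preseed file drives an unattended Ubuntu install, after which OpenStack-Ansible can take over.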
Innovation Center novices were each given a 22-node bare metal environment in weekly cycles to PXE boot using Cobbler. The successes of these tests were measured and graphed, and it soon became obvious that we needed to document the steps to PXE boot the servers. At the starting point, with no documentation, it took five working days to install OpenStack: 12 hours to deploy bare metal and 28 hours to install OpenStack-Ansible. The graph below indicates the time it took each individual to deploy. Deploying bare metal with limited documentation proved difficult, so we asked the novices to document pain points, record how long each step took, and file any bugs they found. Through these sessions, over several months, the process was continuously iterated until the time-to-deploy was dramatically shortened: from five working days (12 hours to deploy bare metal, 28 hours to install OpenStack-Ansible) down to 1.65 hours for bare metal provisioning and 4.86 hours to deploy OpenStack-Ansible.

It was important to the testing team that the bare metal deployment process not become another pain point for operators deploying OpenStack, so the team started documenting the steps into a full deployment guide. Documentation is imperative to any new product; it maximizes the product's potential and ensures that end consumers receive the best information when they need it. Starting with a rough outline, the team built the installation document into a fully fleshed out, step-by-step tutorial. Once the document was refined, we brought in the OSIC documentation team to ensure the installation guide was up to OpenStack enterprise standards.

The documentation team converted the OSIC deployment process to reStructuredText (RST), to be compatible with OpenStack upstream documentation.
Alongside the operations team, the documentation team developed the best information architecture possible for the guide, and continued iterating on the documentation structure to ensure it provided the most effective and efficient route to success.

Asking the right question, "what's out there for us?", enabled our team to begin work on a new strategy for the growing enterprise market, collaborating with different teams and diverse contributors to ensure the best solution.

If you are interested in bare metal deployment, or want to learn more about OSIC, visit OSIC.org.
Posted about 11 hours ago by Alin Serdean
OVS GRE setup on Hyper-V without OpenStack. In this post we explain how to manually configure an Open vSwitch GRE tunnel between VMs running on Hyper-V and KVM hosts. KVM OVS configuration: in this example, KVM1 provides a GRE tunnel with local endpoint gre-1, connected to Hyper-V through br-eth3. Please note the MTU… The post Open vSwitch 2.5 on Hyper-V (GRE) – Part 3 appeared first on Cloudbase Solutions.
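On the KVM side, a GRE port like the gre-1 endpoint mentioned above is typically created with ovs-vsctl. This is a configuration sketch: the bridge and port names match the post, but the remote IP is a hypothetical placeholder for the Hyper-V host's tunnel address.

```shell
# Create the OVS bridge carrying tunnel traffic (no-op if br-eth3 exists)
ovs-vsctl --may-exist add-br br-eth3

# Add a GRE tunnel port; remote_ip is the Hyper-V endpoint
# (placeholder value; substitute your own address)
ovs-vsctl add-port br-eth3 gre-1 -- \
    set interface gre-1 type=gre options:remote_ip=192.168.100.2

# Print the resulting bridge/port layout for verification
ovs-vsctl show
```

Note that GRE encapsulation adds header overhead, so the VM-facing MTU must be lowered accordingly, which is likely the MTU caveat the post alludes to.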
Posted about 11 hours ago by Flavio Percoco
Communities, regardless of their size, rely mainly on the communication between their members to operate. The existing processes, the current discussions, and the community's future growth depend heavily on how well communication throughout the community has been established. The channels used for these conversations play a critical role in the health of the communication (and the community) as well.

The things that are communicated are, of course, important. They are the objects being sent among the peers in the community. These things are the messages traveling throughout the system, and they must respect a protocol, like every message in every other protocol. Failing to respect this protocol results in ineffective communication, and failed communications have side effects on the system.

A community is a live ecosystem, and as such it relies on communications to inform other peers of the system about the current status, evolution, changes, and so on. These communications (and the channels that carry them) cannot guarantee awareness. Let us leave delivery guarantees aside for the sake of the argument being made. Awareness comes after delivery, and delivery does not guarantee awareness. A message may have been delivered to other members of the ecosystem, but that does not mean the message was processed; the peer may be aware of neither the message nor its content even after delivery. Think of emails, blogs, or any other asynchronous means of communication. None of these channels can guarantee that the peers who received a message have actually read it. This is not under the sender's control.

There is a large number of elements that may affect the communication. Take mailing lists, for example: it may very well be that the receiver of the message is getting too many emails and is therefore likely to miss some of them. This is just one realistic example of what could happen.

The number of cases that can cause a lack of awareness is bigger than what I've mentioned so far, but it's not worth exploring them any further. The way some systems cope with the lack of the above guarantees is by propagating the same message several times, perhaps through different channels, with the same expectations (or lack thereof). Over-communicating won't solve the issue of peers not being aware of the message, and it won't get rid of surprises. It does, however, increase the probability of the message being processed, and the use of multiple channels provides different ways for consumers of the message to process it.

Communities, specifically, are built by individual peers from different environments and cultures. These peers have different preferences, and they may consume messages from different sources. It is indeed impossible to cover all the options and satisfy every preference. Selecting the right set of channels for these communications, and propagating the messages through several of them when necessary, is the key to increasing the probability that the messages will be consumed.

Over-communicating does not imply spamming consumers, nor does it imply sending the same message multiple times through the same channel. Over-communicating, in the context of communities, means using different channels to reach different sets of peers. These sets may overlap, nonetheless.

Surprise (sometimes) doesn't mean there's a lack of communication or transparency. It's important, however, to reflect on whether the communication channels and methodologies being used are the right ones, or simply enough, for reducing the lack of awareness.

If you liked this post, you may be interested in the keynote I gave at PyCon South Africa: Keeping up with the pace of a fast growing community without dying.
Posted about 11 hours ago by Alin Serdean
OVS STT setup on Hyper-V without OpenStack. In this post we explain how to manually configure an Open vSwitch STT tunnel between VMs running on Hyper-V and KVM hosts. KVM OVS configuration: in this example, KVM1 provides an STT tunnel with local endpoint stt-1, connected to Hyper-V through br-eth3. Please note the… The post Open vSwitch 2.5 on Hyper-V (STT) – Part 4 appeared first on Cloudbase Solutions.
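The stt-1 port described above can be created the same way as a GRE port, changing only the tunnel type and, because STT runs over TCP, optionally a destination port. This is a configuration sketch; the remote IP is a hypothetical placeholder, and 7471 is STT's conventional default port.

```shell
# Add an STT tunnel port on the existing br-eth3 bridge;
# remote_ip is a placeholder for the Hyper-V endpoint
ovs-vsctl add-port br-eth3 stt-1 -- \
    set interface stt-1 type=stt \
    options:remote_ip=192.168.100.2 options:dst_port=7471
```

STT support requires an OVS build with the STT tunnel type enabled, which the Cloudbase OVS 2.5 port for Hyper-V provides.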
Posted about 11 hours ago by dfineberg
OpenStack Interop Challenge
October 17, 2016, by dfineberg

The goal is to prove that different OpenStack clouds can work seamlessly with each other. From a report by Luz Cazares, Intel Open Source Technology Center.

The OpenStack Innovation Center (OSIC) proudly participates in the interoperability challenge, originally proposed by IBM via the Interop Working Group (aka DefCore). The goal is to prove OpenStack interoperability; that is, that different OpenStack clouds can work seamlessly with each other.

"OpenStack has evolved at a rapid pace, emerging as a competitive open cloud OS, yet barriers to its broader adoption still exist. Together with the OpenStack community, Intel is actively working to address enterprise needs within OpenStack, bringing feature-rich, interoperable, easy-to-deploy cloud solutions to market," said David Brown, Director, OpenStack Core Engineering, Intel Open Source Technology Center. "As a result, data centers of all sizes can run diverse workloads, and take best advantage of the benefits of OpenStack on Intel® architecture."

Interop Challenge participants first run the DefCore guidelines via the RefStack tools. Second, they run workloads on top of OpenStack production clouds, without any modification or special tweaks. Two common workloads (lampstack and dockerswarm) run on participants' clouds; both use Ansible as the underlying IT automation engine.

To take part in the challenge, OSIC made a project available on Cloud1, which includes all needed resources and where we executed all the test cases. The quality engineering team successfully tested DefCore guidelines 2016.01 and 2016.08. The lampstack and dockerswarm workloads also ran successfully, with no major incidents.

However, the team faced challenges when setting up the deployer VM. As a result, we provided shell scripts to handle the installation of the deployer VM prerequisites. To get the shell scripts and more, visit our repo on github.
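Running a DefCore guideline via RefStack, as described above, typically looks like the following configuration sketch. The tempest.conf path is a hypothetical placeholder for a config pointing at the cloud under test, and the test-list URL is illustrative of how a guideline's test list is fetched.

```shell
# Install the RefStack client using its upstream setup script
git clone https://github.com/openstack/refstack-client
cd refstack-client && ./setup_env

# Run the Tempest tests listed in a DefCore guideline against your cloud;
# ~/tempest.conf is a placeholder path to your cloud's Tempest config
./refstack-client test -c ~/tempest.conf -v \
    --test-list "https://refstack.openstack.org/api/v1/guidelines/2016.08/tests"
```

The resulting subunit results can then be uploaded to the RefStack server for comparison with other participants.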
Going to the OpenStack Summit in Barcelona, 25-28 Oct. 2016? Stop by the OSIC booth for a demo, and look for the Interop (DefCore) Working Group Work Session on Tue 25 Oct., 2:15pm-2:55pm. Also check out the presentation, Beyond RefStack: The Interop Challenge, Thu 27 Oct., 11:00am-11:40am.
Posted about 11 hours ago by dfineberg
VMware Integrated OpenStack
October 17, 2016, by dfineberg

The team tested the full-stack environment, from hardware configuration to VMware and OpenStack software installation. Adapted from a report in development by Intel OTC's Dr. Yih Leong Sun, and VMware's Binbin Zhao, Fred Vong, and Arvind Soni.

In July 2016, VMware collaborated with the Intel Open Source Technology Center (OTC) to analyze the performance of VMware Integrated OpenStack (VIO). The OpenStack Innovation Center (OSIC) provided a 132-node cluster as the testing environment for a three-week period. The team tested the performance and scalability of provisioning the full-stack environment, from hardware configuration to VMware and OpenStack software installation.

The scale test environment included 120 compute nodes and took a week to set up. This included a complete installation of virtualization products, hypervisor, SDN, and converged storage, as well as setup of OpenStack services (all deployed as virtual machines).

Each server included HP Integrated Lights-Out (iLO) remote server administration, used for retrieving MAC addresses, mounting ISO images, setting boot devices, resetting power, and other functions.

The team used two different performance tools to simulate the workload and measure the results: a fog.io-based orchestration tool and OpenStack Rally.

Key findings:
- The OpenStack control plane successfully handled the creation of 10,000 VMs across 120 servers.
- For all OpenStack services, vital statistics of CPU and memory remained stable and healthy as the workloads increased. However, we didn't get a chance to measure the API response time for the operations involved.
- As the number of total objects increased, we saw degradation in the response time for completing OpenStack operations, such as VM creation.
- Depending on the infrastructure orchestration tool, we saw a jump in failure rates when concurrency was increased beyond a certain threshold. Keystone and Neutron interaction was identified as the cause of the failures, although the exact defect is still under investigation.
- The default Nova scheduler resulted in a uniform distribution of workloads across all the nodes and uniform utilization of the underlying resources.

The team plans to expand the experiment to a larger compute cluster with 256 nodes, and will add measurements of the overall API response time for OpenStack services, in addition to the performance of message queuing and databases.

"It all went well, for the first phase of testing," said Intel's Yih Leong Sun. "The results are helping us fine-tune things for the next 256-node phase, and ultimately toward our goal of more than 500 nodes."

For more information, read the full report, and visit this link for more information on VMware Integrated OpenStack (VIO).
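Rally runs of the kind used in this testing are driven by declarative task files. A minimal boot-and-delete scenario is sketched below; the scenario name is a real Rally plugin, but the flavor and image names, counts, and concurrency are hypothetical placeholders rather than the values used in the VIO tests.

```shell
# Write a minimal Rally task file (flavor/image names are placeholders
# for whatever exists in the cloud under test)
cat > boot-and-delete.yaml <<'EOF'
NovaServers.boot_and_delete_server:
  - args:
      flavor:
        name: "m1.small"
      image:
        name: "ubuntu-14.04"
    runner:
      type: "constant"
      times: 100
      concurrency: 10
EOF

# Execute against the deployment registered with Rally
# (skipped gracefully if rally is not installed on this machine)
command -v rally >/dev/null && rally task start boot-and-delete.yaml || true
```

Raising the runner's `concurrency` is how the failure-rate threshold described above would be probed.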
Posted about 11 hours ago by Jen Wike Huger
Becoming a QA Engineer for OpenStack was a career shift for Emily Wilson, who has a background in research microbiology. But there's an odd similarity between the two careers: they both involve figuring out what makes complicated systems work and where the weak points are. Paradoxically, this requires both a big-picture perspective of a system and an in-depth understanding of how the individual components function.
Posted about 11 hours ago by Lana Brindley
Newcomers now have a clearer path to getting started; docs project team lead (PTL) Lana Brindley explains how it happened. The post Turning the OpenStack Install Guide on its head appeared first on OpenStack Superuser.
Posted about 11 hours ago by dfineberg
Midokura MidoNet Scalability
October 20, 2016, by dfineberg

Midokura tested MidoNet 5.0 in an OSIC environment to validate its availability, scalability, and agility as a network virtualization overlay solution. Adapted from a Midokura white paper.

Transformative application and data analytic workloads place incredible demands on the data center network. They pose challenges to using traditional networking devices and tools designed for physical, relatively static network infrastructures. An open source network virtualization overlay, Midokura MidoNet software enables operators to build, operate, and manage virtual networks at scale. Operators can overlay MidoNet on top of their existing hardware and hypervisor software, deploying a single network for any platform.

Midokura tested MidoNet 5.0 in an OpenStack Innovation Center (OSIC) environment to validate the availability, scalability, and agility of MidoNet as a network virtualization overlay solution. The validation was performed at OSIC in San Antonio on 132 HP DL380 servers:

- Model: HP DL380 Gen9
- Processor: 2x 12-core Intel E5-2680 v3 @ 2.50GHz
- RAM: 256GB
- Disk: 12x 600GB 15K SAS, RAID10
- NICs: 2x Intel X710 dual-port 10 GbE

The team used OpenStack Liberty (the Neutron and Keystone projects) and open source MidoNet release 5.0.2. The server OS was Ubuntu 14.04.4 with Linux kernel version 4.2.

The cloud setup used two networks, one for data plane traffic and the other for the management plane. Six gateways for north-south traffic provided a theoretical maximum throughput of 6x 10GB.

The team performed tests for scalability and for performance.
Scalability testing demonstrated that MidoNet can satisfy the following conditions:

- MidoNet can support 1,000 hosts and 10,000 VMs.
- An individual MidoNet agent can support 100 VMs.
- MidoNet can support six gateways with 10G uplinks.

Performance tests show how MidoNet handles the simulations for large-scale deployments without performance overhead. Specifically, the tests demonstrate:

- MidoNet delivers the same data transfer performance when deployed on bare metal versus virtual machines.
- MidoNet delivers the same or acceptable latency on bare metal servers when compared to virtual machines.
- MidoNet delivers the same or equivalent transaction (request/response) rate on bare metal servers versus virtual machines.

The testing at OSIC demonstrates how well MidoNet handles production cloud use cases, with performance in virtual environments essentially equivalent to bare metal.

For more information, read MidoNet Scalability Report: Virtual Performance Equivalent to Bare Metal.
Posted about 11 hours ago by dfineberg
Live Upgrade for OpenStack
October 20, 2016, by dfineberg

The OSIC Quality Engineering team uses continuous delivery to test OpenStack upgradability on a daily basis. Adapted from a paper by Luz Cazares, Intel Open Source Technology Center.

The OpenStack Innovation Center (OSIC) Quality Engineering (QE) team uses continuous delivery to test upgradability on a daily basis, going from the current N release to the master branch. The QE and Operations teams worked together to automate a multi-node deployment using a rolling upgrade approach for minimal control plane downtime.

The solution uses one physical server with 15 VMs. A full OpenStack is deployed on top of them, with three nodes each for controllers, compute, cinder, swift, and logging. OpenStack-Ansible (OSA) is the deployment technology.

The physical host came from the Rackspace OnMetal IO v2 bare-metal service and requires Rackspace credentials. You can use your own physical host by modifying the pipeline workflow (removing the physical provisioning function).

The Jenkins engine drives the automation. Pipelines are built from Groovy-like functions and can be easily modified as needed. The team added Elasticsearch and Kibana servers to analyze and visualize the data.

The goal of the effort is to validate the stability of the upgrade over time, find upgrade mechanism issues almost as soon as they are introduced, and measure API downtime during the rolling upgrade.

Get the playbooks, scripts, configurations, pipelines, and functions at our repo on github, and click here to read the full "Jumping on a Live Upgrade" paper.
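An OpenStack-Ansible release-to-release upgrade of the kind this pipeline exercises is typically kicked off from the OSA checkout on the deployment host. This is a configuration sketch based on OSA's documented upgrade flow, not the OSIC pipeline itself; the checkout path and target branch are illustrative.

```shell
# From the openstack-ansible checkout on the deployment host
# (/opt/openstack-ansible is OSA's conventional location)
cd /opt/openstack-ansible

# Move the checkout to the target release; the post upgrades to master
git fetch origin && git checkout master

# Acknowledge that the scripted upgrade modifies a live deployment,
# then re-run the playbooks against it in a rolling fashion
export I_REALLY_KNOW_WHAT_I_AM_DOING=true
./scripts/run-upgrade.sh
```

Measuring API availability while this script runs is how the rolling upgrade's control plane downtime would be quantified.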