News

Posted about 21 hours ago by Jason Baker
To help you find the best of these guides and tutorials, every month Opensource.com goes on the hunt for the best community-created OpenStack how-tos published in the previous month.
Posted about 21 hours ago by Angela McCallister
We know OpenStack is hard. But why? The post Six reasons OpenStack fails (Hint: it’s not the technology) appeared first on Mirantis | The Pure Play OpenStack Company.
Posted about 21 hours ago by Angela McCallister
At OpenStack Days Silicon Valley, Puppet Founder and CEO Luke Kanies dispelled the six most common misconceptions he’s encountered that prevent organizations from adopting and benefiting from DevOps. The post Six DevOps myths and the realities behind them appeared first on Mirantis | The Pure Play OpenStack Company.
Posted about 21 hours ago by Jason Baker
Keep up with the latest news about OpenStack, the open source cloud project, in this week's edition of Opensource.com's OpenStack news roundup.
Posted about 21 hours ago by adrian
The public cloud hosting market is large and growing, with 50,000 companies worldwide competing for customers. With 80 percent of the market occupied by small and medium providers, it’s clear that businesses today need to stand out and provide innovative service to their clients. OpenStack, a massive open-source software platform for cloud computing, has already been […]
Posted about 21 hours ago by hugh
Introduction: Welcome to Last week on OpenStack Dev (“Lwood”) for the week just past. For more background on Lwood, please refer here. Basic stats for week 22 to 28 August 2016 for openstack-dev: ~363 messages (down about 8.5% relative to last week); ~161 unique threads (down about 5% relative to last week). Traffic down again […]
Posted about 21 hours ago by hastexo
SUSE is one of the world's most established open source companies, offering enterprise-ready products based on the Linux platform since 1992. Its flagship product, SUSE Linux Enterprise Server, is one of the world's most popular Linux distributions, and SUSE OpenStack Cloud is a leading product in the enterprise cloud market. SUSE's full commitment to open-source software and community collaboration is evident even when a band of SUSE people flocks together to shoot an amazing music video (video courtesy of SUSE).

Streamlined Training to Scale with Growing Teams

SUSE is currently experiencing a period of rapid growth, both in sales and in head count. This is certainly a good position to be in, but it comes with a challenge: how do we make sure that we can keep our teams informed and up to speed as we grow? How do we ensure that the scalability, reliability, and efficiency of our products are also reflected in our training? SUSE evaluated several options to address this challenge. Unsurprisingly, an open-source, open-community platform won the race.

hastexo Academy and Open edX: Open Source, Community-Driven Learning at Scale

In selecting the hastexo Academy platform (powered by Open edX) as the basis of its professional learning platform, SUSE benefits from three valuable assets: the Open edX platform, an open-source, scalable, extensible learning management system supporting a multitude of learning methods; hastexo's technical expertise in deploying and maintaining this platform on OpenStack; and hastexo's multi-faceted expertise in developing self-paced online training courses. The first courses on the new platform are expected to come to SUSE employees within the quarter. SUSE and hastexo will continue to collaborate on managing and maintaining the SUSE learning management platforms on a long-term basis.
Posted 1 day ago by Superuser
Open source cloud computing aficionados gathered in Beijing for the two-day event.
Posted 3 days ago by Giulio Fidente
Time to roll up some notes on the status of Ceph in TripleO. The majority of these functionalities were available in the Mitaka release too, but the examples work with code from the Newton release, so they might not apply identically to Mitaka.

The TripleO default configuration

No default is going to fit everybody, but we want to know what the default is so we can improve from there. So let's try and see:

uc$ openstack overcloud deploy --templates tripleo-heat-templates \
  -e tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e tripleo-heat-templates/environments/storage-environment.yaml \
  --ceph-storage-scale 1
Deploying templates in the directory /home/stack/example/tripleo-heat-templates
...
Overcloud Deployed

Monitors go on the controller nodes, one per node; the above command deploys a single controller, though. The first interesting thing to point out is:

oc$ ceph --version
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)

Jewel! Kudos to Emilien for bringing support for it in puppet-ceph. Continuing our investigation, we notice the OSDs go on the cephstorage nodes and are backed by the local filesystem, as we didn't tell it to do differently:

oc$ ceph osd tree
ID WEIGHT  TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03999 root default
-2 0.03999     host overcloud-cephstorage-0
 0 0.03999         osd.0                         up  1.00000          1.00000

Notice we got SELinux covered:

oc$ ls -laZ /srv/data
drwxr-xr-x. ceph ceph system_u:object_r:ceph_var_lib_t:s0 .
...

And we use CephX with autogenerated keys:

oc$ ceph auth list
installed auth entries:

client.admin
        key: AQC2Pr9XAAAAABAAOpviw6DqOMG0syeEYmX2EQ==
        caps: [mds] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.openstack
        key: AQC2Pr9XAAAAABAAA78Svmmt+LVIcRrZRQLacw==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics

But which OpenStack services are using Ceph? The storage-environment.yaml file has some information:

uc$ grep -v '#' tripleo-heat-templates/environments/storage-environment.yaml | uniq
resource_registry:
  OS::TripleO::Services::CephMon: ../puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: ../puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: ../puppet/services/ceph-client.yaml
parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  GnocchiBackend: rbd

The registry lines enable the Ceph services, while the parameters set Ceph as the backend for Cinder, Nova, Glance and Gnocchi. They can be configured to use other backends; see the comments in the environment file. Regarding the pools:

oc$ ceph osd lspools
0 rbd,1 metrics,2 images,3 backups,4 volumes,5 vms,

The replica size is set to 3 by default, but we only have a single OSD, so the cluster will never get into HEALTH_OK:

oc$ ceph osd pool get vms size
size: 3

Good to know; now for a new deployment with more interesting stuff.

A more realistic scenario

What makes it "more realistic"? We'll have enough OSDs to cover the replica size. We'll use physical disks for our OSDs (and journals) and not the local filesystem. We'll cope with a node with a different disk topology, and we'll decrease the replica size for one of the pools.
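Before redeploying, it is worth confirming what that single-OSD default means in practice. A minimal sanity check with the standard Ceph CLI (same oc$ convention as above; the expected states are an assumption based on one OSD and size-3 pools, not output captured from this deployment):

oc$ ceph -s              # cluster summary; with one OSD and size=3 pools, expect HEALTH_WARN rather than HEALTH_OK
oc$ ceph health detail   # lists the undersized/degraded placement groups behind that warning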
Define a default configuration for the storage nodes, telling TripleO to use sdb for the OSD data and sdc for the journal:

ceph_default_disks.yaml:

parameter_defaults:
  CephStorageExtraConfig:
    ceph::profile::params::osds:
      /dev/sdb:
        journal: /dev/sdc

For the node which has two (instead of a single) rotational disks, we'll need a specific map. First get its system-uuid from the Ironic introspection data:

uc$ openstack baremetal introspection data save | jq .extra.system.product.uuid
"66C033FA-BAC0-4364-9E8A-3184B5952370"

then create the node-specific map:

ceph_mynode_disks.yaml:

resource_registry:
  OS::TripleO::CephStorageExtraConfigPre: tripleo-heat-templates/puppet/extraconfig/pre_deploy/per_node.yaml
parameter_defaults:
  NodeDataLookup: >
    {"66C033FA-BAC0-4364-9E8A-3184B5952370":
      {"ceph::profile::params::osds":
        {"/dev/sdb": {"journal": "/dev/sdd"},
         "/dev/sdc": {"journal": "/dev/sdd"}
        }
      }
    }

Finally, to override the replica size (and, why not, the PG numbers) of the "vms" pool (where the Nova ephemeral disks go by default):

ceph_pools_config.yaml:

parameter_defaults:
  CephPools:
    vms:
      size: 2
      pg_num: 128
      pgp_num: 128

We also want to clear and prepare all the non-root disks with a GPT label, which will allow us, for example, to repeat the deployment multiple times reusing the same nodes. The implementation of the disk cleanup script can vary, but we can use a sample script and wire it to the overcloud nodes via NodeUserData:

uc$ curl -O https://gist.githubusercontent.com/gfidente/42d3cdfe0c67f7c95f0c/raw/1f467c6018ada194b54f22113522db61ef944e20/ceph_wipe_disk.yaml

ceph_wipe_env.yaml:

resource_registry:
  OS::TripleO::NodeUserData: ceph_wipe_disk.yaml
parameter_defaults:
  ceph_disks: "/dev/sdb /dev/sdc /dev/sdd"

All the above environment files could have been merged into a single one, but we split them out for clarity. Now the new deploy command:

uc$ openstack overcloud deploy --templates tripleo-heat-templates \
  -e tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e tripleo-heat-templates/environments/storage-environment.yaml \
  --ceph-storage-scale 3 \
  -e ceph_pools_config.yaml \
  -e ceph_mynode_disks.yaml \
  -e ceph_default_disks.yaml \
  -e ceph_wipe_env.yaml
Deploying templates in the directory /home/stack/example/tripleo-heat-templates
...
Overcloud Deployed

Here is our OSD tree, with two OSD instances running on the node with two rotational disks (sharing the same journal disk):

oc$ ceph osd tree
ID WEIGHT  TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03119 root default
-2 0.00780     host overcloud-cephstorage-1
 0 0.00780         osd.0                         up  1.00000          1.00000
-3 0.01559     host overcloud-cephstorage-2
 1 0.00780         osd.1                         up  1.00000          1.00000
 2 0.00780         osd.2                         up  1.00000          1.00000
-4 0.00780     host overcloud-cephstorage-0
 3 0.00780         osd.3                         up  1.00000          1.00000

and the custom PG/size values for the "vms" pool:

oc$ ceph osd pool get vms size
size: 2
oc$ ceph osd pool get vms pg_num
pg_num: 128

Another simple customization could have been to set the journal size. For example:

ceph_journal_size.yaml:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_journal_size: 1024

Also, we did not provide any customization for the crushmap, but a recent addition from Erno makes it possible to disable global/osd_crush_update_on_start so that any customization becomes possible after the deployment is finished. Nor did we deploy the RadosGW service, as it is still a work in progress, expected for the Newton release; submissions for its inclusion are on review.
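To double-check that the per-node disk map was honored on the node with two data disks, one option (a sketch assuming the Jewel-era ceph-disk tool is present on the storage node, hostname taken from the tree above) is:

oc$ ceph-disk list                     # on overcloud-cephstorage-2: /dev/sdb1 and /dev/sdc1 should show as ceph data, each with its journal on /dev/sdd
oc$ lsblk /dev/sdb /dev/sdc /dev/sdd   # plain block-device view of the same data/journal layout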
We're also working on automating the upgrade from the Ceph/Hammer release deployed with TripleO/Mitaka to Ceph/Jewel, installed with TripleO/Newton. The process will be integrated with the OpenStack upgrade, and again the submissions are on review in a series.

For more scenarios

The mechanism recently introduced in TripleO to make roles composable, discussed in Steven's blog post, makes it possible to test a complete Ceph deployment using a single controller node too (hosting the OSD service as well), just by adding OS::TripleO::Services::CephOSD to the list of services deployed on the controller role.

And if the above still wasn't enough, TripleO continues to support configuration of OpenStack with a pre-existing, unmanaged Ceph cluster. To do so we'll want to customize the parameters in puppet-ceph-external.yaml and deploy passing that file as an argument instead (see the example deploy command below). For example:

puppet-ceph-external.yaml:

resource_registry:
  OS::TripleO::Services::CephExternal: tripleo-heat-templates/puppet/services/ceph-external.yaml
parameter_defaults:
  # NOTE: These example parameters are required when using Ceph External
  # and must be obtained from the running cluster
  #CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  #CephClientKey: 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
  #CephExternalMonHost: '172.16.1.7, 172.16.1.8'

  # the following parameters enable Ceph backends for Cinder, Glance, Gnocchi and Nova
  NovaEnableRbdBackend: true
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  GlanceBackend: rbd
  GnocchiBackend: rbd

  # If the Ceph pools which host VMs, Volumes and Images do not match these
  # names OR the client keyring to use is not named 'openstack', edit the
  # following as needed.
  NovaRbdPoolName: vms
  CinderRbdPoolName: volumes
  GlanceRbdPoolName: images
  GnocchiRbdPoolName: metrics
  CephClientUserName: openstack

  # finally we disable the Cinder LVM backend
  CinderEnableIscsiBackend: false

Come help in #tripleo @ freenode, and don't forget to check the docs at tripleo.org! Some related topics are described there, for example how to set the root device via Ironic for the nodes with multiple disks, or how to push additional arbitrary settings into ceph.conf.
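As an illustration of that external-cluster option, the deploy command follows the same pattern as the earlier ones, with the customized puppet-ceph-external.yaml passed in place of storage-environment.yaml and no --ceph-storage-scale. A sketch only; the exact list of environment files depends on your deployment:

uc$ openstack overcloud deploy --templates tripleo-heat-templates \
  -e tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e puppet-ceph-external.yaml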
Posted 3 days ago by Jason Baker
To help you find the best of these guides and tutorials, every month Opensource.com goes on the hunt for the best community-created OpenStack how-tos published in the previous month.