News

Posted over 3 years ago by nore...@blogger.com (Glen)
I have been a bit slack with my blogging and have not posted much for a long time. This has been due to working on lots of things, buying a house, and a busy lifestyle. I do, however, have a few things to blog about. In the coming days I will blog about auto_inst OS testing, corporate patching, Android tools, aucklug, Raspberry Pi, rdiff-backup, multiseat Linux, the local Riverside community centre, getting 10 laptops (which will run Mageia), my cat Gorse, GPS tracking, house automation, Amazon AMIs, and maybe some other stuff.
Posted over 3 years ago
This week in Test Days: we’ll be testing ABRT on Tuesday 2013-05-07, and SSSD improvements and Active Directory integration on Thursday 2013-05-09!

ABRT is the Fedora tool for catching and reporting crashes. If you’ve been running Fedora 19, or you’ve updated with updates-testing in Fedora 18 in the last few days, you may have noticed some major changes to ABRT and libreport, including a completely new graphical tool for reporting crashes called gnome-abrt. We’ll be testing out these big changes at the ABRT Test Day. ABRT gets better every Fedora release, but the more broad-based testing we get, the more issues we can squish, so please come along and help us test!

The SSSD improvements and Active Directory integration Test Day will focus on Fedora 19 enhancements to our enterprise authentication tools. In particular, we’ll be testing integrating Fedora 19 systems into Active Directory domains. This probably won’t be of interest to some of you, but if you use or help to admin a FreeIPA or AD shop, you might well want to come along and help check that we have things working properly for your deployment.

As always, full instructions for taking part in each Test Day are available on the Wiki page, and we’ll be making live images available so you can do as much of the testing as possible without needing to install a pre-release Fedora. QA and development folks will be present in the #fedora-test-day channel on Freenode IRC for discussion and any help you might need in testing. If you’re not sure what IRC is or how to use it, we have instructions here, and you can also simply click here to join the chat through a Web front end. Thanks to all in advance!
Posted over 3 years ago
It’s not very often that I separate mindi from mondo in the publication of releases. But this time it was needed, as I had a customer who suffered from bugs that only required a mindi release to fix, and I thought it would help many other users, so here you are!

Mindi 2.1.5 is there, and principally solves kernel support detection for the type of initrd possible (fixing an abort of mindi on RHEL 3/4). It also reduces the number of error messages when dealing with links containing more than two references to ".."; that should help with some recent reports. I also had a report that the -H option and the RESTORE keyword were not completely free of interaction, so this is now solved as well. Finally, this version better supports HP ProLiant Gen8 and future platforms by also using the hp-rcu and hp-fm tools.

Now available on ftp://ftp.mondorescue.org for more than 120 distribution tuples! And for those who ask why I do this: first because I like it, then because I have the tools to do it, and also because I do have users who are using Fedora 7, RHEL 3 or even Red Hat 6.2.
Posted over 3 years ago
11:00am: Arrive at work, check out crack pipe from inventory
11:05am – noon: Read online forums, cackle at victims; crack pipe
Noon – 1:00pm: Read latest standards documents; write code that is in technical compliance but to any sane observer appears screamingly inept, baroque, buggy, unusable and downright dangerous
1:00pm – 2:00pm: Lunch with friend from International Tax Code Writers’ Union; compare notes
2:00pm – 3:00pm: Review usability testing results; remove all discovered usability
3:00pm – 3:30pm: Bonghits
3:30pm – 4:00pm: Reading – “Transparency, The Apple Way” (S. Jobs)
4:00pm – 4:30pm: Notice latest production firmware code does not include enough potential bricking bugs; run random bug generator
4:30pm – 5:00pm: Notice company has minor hardware revision upcoming; write entirely new firmware implementation for it for no apparent reason
5:00pm: Home, with a warm fuzzy feeling of achievement
5:30pm – 11:30pm: Tease dog by pretending to throw ball
11:35pm: Watch Leno
Posted over 3 years ago
PKI tokens were implemented in keystone by Adam Young and others, and shipped with the OpenStack Grizzly release; they are available since version 2.0 of the keystone API. PKI stands for Public-key infrastructure, which Wikipedia defines like this: "Public-key cryptography is a cryptographic technique that enables users to securely communicate on an insecure public network, and reliably verify the identity of a user via digital signatures."

As described at more length in this IBM blog post, keystone starts by generating a public and a private key and storing them locally. On the first request, the service (e.g. Swift) goes and gets the public certificate from keystone and stores it locally for later use. When a user is authenticated and a PKI token needs to be generated, keystone takes the private key and signs the token together with its metadata (i.e. roles, endpoints, services). The service, by means of the auth_token middleware, verifies the token with the public certificate and extracts the information to pass on: it sets the keystone.identity WSGI environment variable, to be used by the other middleware of the service in the paste pipeline. PKI tokens are therefore much more secure, since the service can trust where the token is coming from, and much more efficient, since the service does not have to validate the token against keystone on every request as it does for UUID tokens.

Auth token

This brings us to the auth_token middleware. The auth_token middleware is a central piece of keystone software: it provides a generic middleware for other Python WSGI services to integrate with keystone. In Grizzly, the auth_token middleware was moved to the python-keystoneclient package, which means a full keystone server package no longer has to be installed to use it (remember, it is meant to be integrated directly into services).
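To make the shape of such a middleware concrete, here is a minimal, illustrative WSGI skeleton. The class name and the validation step are assumptions for illustration only; this is not the real python-keystoneclient implementation, just a sketch of where the middleware sits and how it exposes identity data downstream.

```python
# Illustrative sketch of an auth_token-style WSGI middleware.
# KeystoneAuthSketch is a hypothetical name, not the real class.
class KeystoneAuthSketch:
    def __init__(self, app, conf):
        self.app = app      # the next middleware/app in the paste pipeline
        self.conf = conf    # options such as auth_host, admin_user, ...

    def __call__(self, environ, start_response):
        token = environ.get("HTTP_X_AUTH_TOKEN")
        if token is None:
            start_response("401 Unauthorized",
                           [("Content-Type", "text/plain")])
            return [b"Authentication required"]
        # A real implementation would validate the PKI token here, then
        # expose the decoded identity to the downstream middleware:
        environ["keystone.identity"] = {"token": token}
        return self.app(environ, start_response)
```

The point is simply that everything behind this middleware can trust the identity data placed in the WSGI environment instead of talking to keystone itself.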
You would usually add the auth_token middleware near the beginning of your paste pipeline (there may be other middleware before it, such as logging or catch_errors, so not necessarily the very first one):

    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    signing_dir = /var/cache/service
    auth_host = keystone_host
    auth_port = keystone_public_port
    auth_protocol = http
    auth_uri = http://keystone_host:keystone_admin_port/
    admin_tenant_name = service
    admin_user = service_user
    admin_password = service_password

There are many more options to the auth_token middleware; I invite you to refer to your service documentation and to read the top of the auth_token source file.

When the service gets a request whose X-Auth-Token header contains a PKI token, the auth_token middleware intercepts it and starts its work. It first computes the MD5 hex digest of the token: this digest becomes the key in memcache, because a PKI token, carrying all the metadata, can be very long and is too big to serve as a memcache key as-is. The middleware checks whether that key is in memcache, and if not, starts verifying the signed token.

Before anything else, the token is checked against the revocation list (see my previous article about revoked PKI tokens). To get the revoked tokens, the middleware first checks whether its copy of the token revocation list has expired (by default it refreshes it every second). If a refresh is needed, it makes a request, with an admin token, to the URL /v2.0/tokens/revoked on the keystone admin interface and gets the list of revoked tokens. The list is stored on disk as well, for easy retrieval.

If the token is not revoked, the middleware converts it to a proper CMS format and starts verifying it. Using the signing certificate file and the CA file, it invokes the openssl command line to run cms -verify, which decodes the CMS token and yields the decoded data.
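Two of the steps above are easy to sketch: deriving the memcache key from the token, and re-wrapping the bare token text into a PEM-style CMS structure before handing it to openssl. This is a rough sketch of the idea, not the exact python-keystoneclient code (the real middleware also reverses keystone's URL-safe character substitutions before wrapping):

```python
import hashlib
import textwrap

def memcache_key(token: str) -> str:
    # A PKI token carries all of its metadata, so it can run to several
    # kilobytes, far too long to use directly as a memcache key. The
    # middleware therefore caches validation results under the MD5 hex
    # digest of the token.
    return hashlib.md5(token.encode("utf-8")).hexdigest()

def to_cms(signed_token: str) -> str:
    # Re-wrap the bare token text into a PEM-style CMS envelope, with
    # 64-character lines, so that "openssl cms -verify" can consume it.
    body = "\n".join(textwrap.wrap(signed_token, 64))
    return "-----BEGIN CMS-----\n" + body + "\n-----END CMS-----\n"
```

The fixed-length digest also sidesteps memcache's limits on key size and characters, regardless of how large the token grows.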
If the signing certificate file or the CA file is missing, the middleware fetches it again. The signing certificate is fetched with an unauthenticated query to the keystone admin URL /v2.0/certificates/signing; the same goes for the CA, via a query to the keystone URL /v2.0/certificates/ca.

Once we have the decoded data, the middleware builds the environment for the others: the token information is stored in the WSGI environment variable keystone.token_info, which is used next by the other middleware of the service. A bunch of new headers are also added to the request, carrying for example the user, the project ID, the project name, and so on. Finally, the validated data is stored in memcache under the MD5 hex digest of the PKI token.

And that’s it. There is much more information in the IBM blog post and on Adam’s blog mentioned earlier.
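Fetching the certificates really is just an unauthenticated HTTP GET against the v2.0 paths quoted above. A small sketch; the host and port in the usage comment are examples (35357 being keystone's conventional admin port), not values from this deployment:

```python
import urllib.request

def cert_url(admin_uri: str, kind: str) -> str:
    # kind is "signing" or "ca"; both v2.0 endpoints are unauthenticated.
    return admin_uri.rstrip("/") + "/v2.0/certificates/" + kind

def fetch_cert(admin_uri: str, kind: str) -> str:
    # Plain GET, as the middleware does when the file is missing on disk.
    with urllib.request.urlopen(cert_url(admin_uri, kind)) as resp:
        return resp.read().decode("utf-8")

# e.g. fetch_cert("http://keystone_host:35357", "signing")
```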