
News

Posted 7 months ago
Since my last article, lots of things have happened in the container world! Instead of using LXC, I now find myself using the next great thing much more, namely LXC's big brother, LXD. As some people asked me, here's my trick to make containers use my host as an apt proxy, significantly speeding up deployment times for both manual and Juju-based workloads.

Setting up a cache on the host

First off, we'll want to set up an apt cache on the host. As is usually the case in the Ubuntu world, it all starts with an apt-get:

    sudo apt-get install squid-deb-proxy

This sets up a Squid caching proxy on your host, with an apt-specific configuration listening on port 8000. Since it is tuned for larger machines by default, I find myself wanting to use a slightly smaller disk cache; 2 GB instead of the default 40 GB is far more reasonable on my laptop. Simply editing the config file takes care of that:

    $EDITOR /etc/squid-deb-proxy/squid-deb-proxy.conf
    # Look for the "cache_dir aufs" line and replace it with:
    cache_dir aufs /var/cache/squid-deb-proxy 2000 16 256 # 2 GB

Of course you'll need to restart the service after that:

    sudo service squid-deb-proxy restart

Setting up LXD

Compared to the similar procedure on LXC, setting up LXD is a breeze! LXD comes with configuration profiles, so we can either create a new profile if we want to use the proxy selectively, or simply add the configuration to the "default" profile so that all our containers always use the proxy.

In the default profile

Since I never turn the proxy off on my laptop, I saw no reason to apply it selectively and simply added it to the default profile:

    export LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
    echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" | lxc profile set default user.user-data -

The first part of the first command line automates the discovery of your host's IP address, conveniently, as long as your LXD bridge is called "lxdbr0". Once set in the default profile, every LXD container you start will have an apt proxy pointing at your host.

In a new profile

Should you not want to alter the default profile, you can easily create a new one:

    export PROFILE_NAME=proxy
    lxc profile create $PROFILE_NAME

Then substitute the newly created profile in the previous command line. It becomes:

    export LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
    echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" | lxc profile set $PROFILE_NAME user.user-data -

When launching a new container, add this profile so that the container benefits from the proxy configuration:

    lxc launch ubuntu:xenial -p $PROFILE_NAME -p default

Reverting

If for some reason you no longer want to use your host as a proxy, it is easy to revert the change by clearing the user.user-data key on the profile you modified:

    lxc profile set default user.user-data

That's it! As you can see, it is trivial to set an apt proxy on LXD, and using squid-deb-proxy on the host makes that configuration trivial. Hope this helps!
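Putting the pieces together, here is a minimal sketch of the whole setup as a single script, based on the commands above. It assumes your LXD bridge is named lxdbr0 and that the default 40 GB squid-deb-proxy cache is acceptable (otherwise edit the config file as described earlier); the "test-proxy" container name is just an example:

    #!/bin/sh
    set -e

    # Install and start the apt caching proxy on the host (listens on port 8000)
    sudo apt-get install -y squid-deb-proxy

    # Discover the host address on the LXD bridge (assumes the bridge is lxdbr0)
    LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1 }')

    # Point every container at the host proxy via the default profile
    echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" \
        | lxc profile set default user.user-data -

    # Any container launched from now on will use the host as its apt proxy
    lxc launch ubuntu:xenial test-proxy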
Posted 7 months ago
This is the eleventh blog post in this series about LXD 2.0.

Introduction

First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts used devstack, which ran into a number of issues that had to be resolved, and even after all that I still wasn't able to get networking going properly. I finally gave up on devstack and tried "conjure-up" to deploy a full Ubuntu OpenStack using Juju in a pretty user-friendly way. And it finally worked!

So below is how to run a full OpenStack, using LXD containers instead of VMs, and running all of this inside a LXD container (nesting!).

Requirements

This post assumes you've got a working LXD setup providing containers with network access, and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM. Remember, we're running a full OpenStack here; this thing isn't exactly light!

Setting up the container

OpenStack is made of a lot of different components doing a lot of different things. Some require additional privileges, so to make our lives easier we'll use a privileged container. We'll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed). Please note that this means most of the security benefit of LXD containers is effectively disabled for that container. However, the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.

    lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
    lxc config device add openstack mem unix-char path=/dev/mem

There is a small bug in LXD where it attempts to load kernel modules that have already been loaded on the host. This has been fixed in LXD 2.5 and will be fixed in LXD 2.0.6, but until then it can be worked around with:

    lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we'll use to get OpenStack going:

    lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
    lxc exec openstack -- apt-add-repository ppa:juju/stable -y
    lxc exec openstack -- apt update
    lxc exec openstack -- apt dist-upgrade -y
    lxc exec openstack -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container. Answer with the default for all questions, except for:

 - Use the "dir" storage backend ("zfs" doesn't work in a nested container)
 - Do NOT configure IPv6 networking (conjure-up/juju don't play well with it)

    lxc exec openstack -- lxd init

And that's it for the container configuration itself; now we can deploy OpenStack!

Deploying OpenStack with conjure-up

As mentioned earlier, we'll be using conjure-up to deploy OpenStack. This is a nice, user-friendly tool that interfaces with Juju to deploy complex services. Start it with:

    lxc exec openstack -- sudo -u ubuntu -i conjure-up

Then:

 - Select "OpenStack with NovaLXD"
 - Select "localhost" as the deployment target (uses LXD)
 - Hit "Deploy all remaining applications"

This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you're running this on. You'll see all services getting a container allocated, then getting deployed and finally interconnected. Once the deployment is done, a few post-install steps will appear.
This will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Access the dashboard and spawn a container

The dashboard runs inside a container, so you can't just hit it from your web browser. The easiest way around this is to set up a NAT rule with (a concrete recap with example addresses appears at the end of this post):

    lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <dashboard IP>

Where <dashboard IP> is the dashboard IP address conjure-up gave you at the end of the installation. You can now grab the IP address of the "openstack" container (from "lxc info openstack") and point your web browser to:

    http://<openstack container IP>/horizon

This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you'll be greeted by the OpenStack dashboard!

You can now head to the "Project" tab on the left and the "Instances" page. To start a new instance using nova-lxd, click on "Launch instance", select what image you want, the network, etc., and your instance will get spawned. Once it's running, you can assign it a floating IP, which will let you reach your instance from within your "openstack" container.

Conclusion

OpenStack is a pretty complex piece of software, and it's also not something you really want to run at home or on a single server. But it's certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine.

Conjure-Up is a great tool to deploy such complex software, using Juju behind the scenes to drive the deployment, using LXD containers for every individual service and finally for the instances themselves. It's also one of the very few cases where multiple levels of container nesting actually make sense!

Extra information

 - The conjure-up website can be found at: http://conjure-up.io
 - The Juju website can be found at: http://www.ubuntu.com/cloud/juju
 - The main LXD website is at: https://linuxcontainers.org/lxd
 - Development happens on GitHub at: https://github.com/lxc/lxd
 - Mailing-list support happens on: https://lists.linuxcontainers.org
 - IRC support happens in: #lxcontainers on irc.freenode.net
 - Try LXD online: https://linuxcontainers.org/lxd/try-it
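As a concrete recap of the dashboard steps above, here is a short sketch. The addresses used below are hypothetical examples, not values from the original post; substitute the dashboard IP conjure-up reports and the container IP from "lxc info openstack" on your own machine:

    # Hypothetical addresses for illustration only:
    #   dashboard IP reported by conjure-up:            10.20.30.40
    #   "openstack" container IP from "lxc info":       10.50.60.70
    DASHBOARD_IP=10.20.30.40

    # Forward port 80 of the "openstack" container to the dashboard container
    lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 \
        -j DNAT --to $DASHBOARD_IP

    # Then browse to http://10.50.60.70/horizon and log in with admin/openstack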
Posted 7 months ago
I was the sole editor and contributor of new content for Ubuntu Unleashed 2017 Edition. This book is intended for intermediate to advanced users.
Posted 7 months ago
FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.

This email contains information about:

 - the Real-Time Communications dev-room and lounge
 - speaking opportunities
 - volunteering in the dev-room and lounge
 - related events around FOSDEM, including the XMPP summit
 - social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities)
 - the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge are about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room, and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list. To be kept aware of major developments in Free RTC without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you have used FOSDEM Pentabarf before, please use the same account/username.

 - Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, look for the "Track" option and choose "Real-Time devroom".
 - Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form. The full list of dev-rooms is on the FOSDEM website, and you can apply for a lightning talk at https://fosdem.org/submit
 - Main track: the deadline for main track presentations is 23:59 UTC on 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking? FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one. In the "Submission notes", please tell us about:

 - the purpose of your talk
 - any other talk applications (dev-rooms, lightning talks, main track)
 - availability constraints and special needs

You can use HTML and links in your bio, abstract and description. If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes: presentations aimed at developers of free and open source software about RTC-related topics. Please feel free to suggest a duration between 20 minutes and 55 minutes, but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers to help with:

 - video recording and live streaming (FOSDEM provides the equipment; volunteers are needed to assist)
 - organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 4 February
 - participation in the Real-Time lounge
 - helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
 - circulating this Call for Participation (text version) to other mailing lists

See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017; see the XMPP Summit web site and join its mailing list for details. We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 3 February. On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not very large, so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

 - All projects: Free-RTC Planet (http://planet.freertc.org) - contact planet@freertc.org
 - XMPP: Planet Jabber (http://planet.jabber.org) - contact ralphm@ik.nu
 - SIP: Planet SIP (http://planet.sip5060.net) - contact planet@sip5060.net
 - SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/) - contact planet@sip5060.net

Please also link to the Planet sites from your own blog or web site, as this helps everybody in the free real-time communications community.

Contact

For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org, and for any other queries please ask on the Free-RTC mailing list. The dev-room administration team:

 - Saúl Ibarra Corretgé
 - Iain R. Learmonth
 - Ralph Meijer
 - Daniel-Constantin Mierla
 - Daniel Pocock
Posted 7 months ago
I'm proud (yes, really) to announce DNS66, my host/ad blocker for Android 5.0 and newer. It's been around since last Thursday on F-Droid, but it never really got a formal announcement.

DNS66 creates a local VPN service on your Android device and diverts all DNS traffic to it, possibly adding new DNS servers you can configure in its UI. It can use hosts files for blocking whole sets of hosts, or you can just give it a domain name to block (or multiple hosts files/hosts). You can also whitelist individual hosts or entire files by adding them to the end of the list. When a host name is looked up, the query goes to the VPN, which looks at the packet and responds with NXDOMAIN (non-existing domain) for hosts that are blocked.

You can find DNS66 here:

 - on GitHub: https://github.com/julian-klode/dns66
 - on F-Droid: https://f-droid.org/app/org.jak_linux.dns66

F-Droid is the recommended source to install from. DNS66 is licensed under the GNU GPL 3, or (mostly) any later version.

Implementation Notes

DNS66's core logic is based on another project, dbrodie/AdBuster, which arguably has the cooler name. I translated that from Kotlin to Java and cleaned up the implementation a bit.

All work is done in a single thread by using poll() to detect when to read/write. Each DNS request is sent via a new UDP socket, and poll() polls over all UDP sockets, a device socket (for the VPN's tun device) and a pipe (so we can interrupt the poll at any time by closing the pipe).

We literally redirect your DNS servers: if your DNS server is 1.2.3.4, all traffic to 1.2.3.4 is routed to the VPN. The VPN only understands DNS traffic, though, so you might have trouble if your DNS server also happens to serve something else. I plan to change that at some point to emulate multiple DNS servers with fake IPs, but this was a first step to get it working with fallback: Android can now transparently fall back to other DNS servers without having to be aware that they are routed via the VPN.

We also need to deal with timing out queries that we received no answer for: DNS66 stores each query in a LinkedHashMap and overrides the removeEldestEntry() method to remove the eldest entry if it is older than 10 seconds or there are more than 1024 pending queries. This means that it only times out up to one request per new request, but it eventually cleans up fine.
Posted 7 months ago
Ubuntu Advantage is the commercial support package from Canonical. It includes Landscape, the Ubuntu systems management tool, and the Canonical Livepatch Service, which enables you to apply kernel fixes without restarting your Ubuntu 16.04 LTS systems. Ubuntu Advantage gives the world's largest enterprises the assurance they need to run mission-critical workloads such as enterprise databases, virtual/cloud hosts or infrastructural services on Ubuntu.

The infographic below gives an overview of Ubuntu Advantage: it explains the business benefits, why Ubuntu is #1 in the cloud for many organisations, and includes a selection of Ubuntu Advantage customers.

Download the infographic, or find out more about Ubuntu Advantage.
Posted 7 months ago
Canonical will be taking part in Microsoft and IDC's Enterprise Open Source Roadshow this autumn and winter. The roadshow will pass through many western European countries and showcase a number of open source technologies that are driving change in the software-defined datacentre.

IDC predicts that by 2017, over 70% of enterprise companies will embrace open source and open APIs as the underpinnings for cloud integration strategies. This is already visible as developers search for flexible and agnostic platforms that enable them to work quickly and easily, even as the scale and complexity of software increases. Canonical's Linux-based operating system, Ubuntu, delivers the platform of choice for many of these software developers, and Canonical's Juju enables them to model and deploy open source technologies on endpoints such as Microsoft Azure with just a few clicks.

At the event, Canonical will be demonstrating how to apply model-driven operations to address the current phase change in software operations. In addition to live demonstrations of open source technologies, attendees of the 2016 Enterprise Open Source Roadshow will learn how the IT industry is:

 - adopting open source technologies with a focus on cloud-first datacentre modernization initiatives, Big Data projects, and DevOps-oriented methodologies
 - focusing on governance, security, licensing and hybrid environment management for enterprise-ready technologies
 - formulating a reassessment of IT skills and key competencies to develop talent for a new era

Join Canonical at an upcoming event to learn how open source tooling such as Juju can help developers build next-generation Big Software. To register, or for more information, please visit the event website.
Posted 7 months ago
with automatic updates on changes in a CodeCommit Git repository

A number of CloudFormation templates have been published that generate AWS infrastructure to support a static website. I'll toss another one into the ring with a feature I haven't seen yet: in this stack, changes to the CodeCommit Git repository automatically trigger an update to the content served by the static website. This automatic update is performed using CodePipeline and AWS Lambda.

This stack also includes features like HTTPS (with a free certificate), www redirect, email notification of Git updates, complete DNS support, web site access logs, infinite scaling, zero maintenance, and low cost.

One of the most exciting features is the launch-time ability to specify an AWS Lambda function plugin (ZIP file) that defines a static site generator to run on the Git repository site source before deploying to the static website. A sample plugin is provided for the popular Hugo static site generator.

Here is an architecture diagram outlining the various AWS services used in this stack. The arrows indicate the major direction of data flow; the heavy arrows indicate the flow of website content.

Sure, this does look a bit complicated for something as simple as a static web site. But remember, this is all set up for you with a simple aws-cli command (or AWS Web Console button push), and there is nothing you need to maintain except the web site content in a Git repository. All of the AWS components are managed, scaled, replicated, protected, monitored, and repaired by Amazon.

The input to the CloudFormation stack includes:

 - Domain name for the static website
 - Email address to be notified of Git repository changes

The output of the CloudFormation stack includes:

 - DNS nameservers for you to set in your domain registrar
 - Git repository endpoint URL

Though I created this primarily as a proof of concept and demonstration of some nice CloudFormation and AWS service features, this stack is suitable for use in a production environment if its features match your requirements. Speaking of which, no CloudFormation template meets everybody's needs. For example, this one conveniently provides complete DNS nameservers for your domain. However, that also means it assumes you only want a static website for your domain name and nothing else. If you need email or other services associated with the domain, you will need to modify the CloudFormation template, or use another approach.

How to run

To fire up an AWS Git-backed Static Website CloudFormation stack, you can click the launch button and fill out a couple of input fields in the AWS console. I have also provided copy+paste aws-cli commands in the GitHub repository, which contains all the source for this stack including the AWS Lambda function that syncs Git repository content to the website S3 bucket: AWS Git-backed Static Website GitHub repo. If you have aws-cli set up, you might find it easier to use the provided commands than the AWS web console (a rough aws-cli sketch also appears near the end of this post).

When the stack starts up, two email messages will be sent to the address associated with your domain's registration and one will be sent to your AWS account address. Open each email and approve these:

 - ACM Certificate (2)
 - SNS topic subscription

The CloudFormation stack will be stuck until the ACM certificates are approved. The CloudFront distributions are created afterwards and can take over 30 minutes to complete. Once the stack completes, get the nameservers for the Route 53 hosted zone and set these in your domain's registrar. Get the CodeCommit endpoint URL and use this to clone the Git repository. There are convenient aws-cli commands to perform these functions in the project's GitHub repository linked to above.

AWS Services

The stack uses a number of AWS services, including:

 - CloudFormation - infrastructure management
 - CodeCommit - Git repository
 - CodePipeline - passes Git repository content to AWS Lambda when modified
 - AWS Lambda - syncs Git repository content to the S3 bucket for the website
 - S3 buckets - website content, www redirect, access logs, CodePipeline artifacts
 - CloudFront - CDN, HTTPS management
 - Certificate Manager - creation of a free certificate for HTTPS
 - CloudWatch - AWS Lambda log output, metrics
 - SNS - Git repository activity notification
 - Route 53 - DNS for the website
 - IAM - resource security and permissions

Cost

As far as I can tell, this CloudFormation stack currently costs around $0.51 per month in a new AWS account with nothing else running, a reasonable amount of storage for the web site content, and up to 5 Git users. This minimal cost is due to there being no free tier for Route 53 at the moment. If you have too many GB of content, too many tens of thousands of requests, etc., you may start to see additional pennies being added to your costs. If you stop and start the stack, it will cost an additional $1 each time because of the odd CodePipeline pricing structure. See the AWS pricing guides for complete details, and monitor your account spending closely.

Notes

 - This CloudFormation stack will only work in regions that have all of the required services and features available. The only one I'm sure about is us-east-1. Let me know if you get it to work elsewhere.
 - This CloudFormation stack uses an AWS Lambda function that is installed from the run.alestic.com S3 bucket provided by Eric Hammond. You are welcome to use the provided script to build your own AWS Lambda function ZIP file, upload it to S3, and specify the location in the launch parameters.
 - Git changes are not reflected immediately on the website. It takes a minute for CodePipeline to notice the change; a minute to get the latest Git branch content, ZIP it, and upload it to S3; and a minute for the AWS Lambda function to download, unzip, and sync the content to the S3 bucket. Then the CloudFront CDN TTL may prevent the changes from being seen for another minute. Or so.

Thanks

 - Thanks to Mitch Garnaat for pointing me in the right direction for getting the aws-cli into an AWS Lambda function. This was important because "aws s3 sync" is much smarter than the other currently available options for syncing website content with S3.
 - Thanks to AWS Community Hero Onur Salk for pointing me in the direction of CodePipeline for triggering AWS Lambda functions off of CodeCommit changes.
 - Thanks to Ryan Brown for already submitting a pull request with lots of nice cleanup of the CloudFormation template, teaching me a few things in the process.

Some other resources you might find useful:

 - Creating a Static Website Using a Custom Domain - Amazon Web Services
 - S3 Static Website with CloudFront and Route 53 - AWS Sysadmin
 - Continuous Delivery with AWS CodePipeline - Onur Salk
 - Automate CodeCommit and CodePipeline in AWS CloudFormation - Stelligent
 - Running AWS Lambda Functions in AWS CodePipeline using CloudFormation - Stelligent

You are welcome to use, copy, and fork this repository. I would recommend contacting me before spending time on pull requests, as I have specific limited goals for this stack and don't plan to extend its features much more.
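For readers who prefer the command line, here is a minimal sketch of launching the stack and cloning the repository with aws-cli and git. The stack name, template URL, parameter names (DomainName, NotificationEmail) and repository name are illustrative assumptions, not the template's confirmed interface; use the exact copy+paste commands in the GitHub repository for the real values:

    # Launch the stack (hypothetical template URL and parameter names)
    aws cloudformation create-stack \
      --stack-name my-static-site \
      --template-url https://s3.amazonaws.com/EXAMPLE-BUCKET/aws-git-backed-static-website.yml \
      --capabilities CAPABILITY_IAM \
      --parameters ParameterKey=DomainName,ParameterValue=example.com \
                   ParameterKey=NotificationEmail,ParameterValue=me@example.com

    # Let git authenticate to CodeCommit over HTTPS with your AWS credentials
    git config --global credential.helper '!aws codecommit credential-helper $@'
    git config --global credential.UseHttpPath true

    # Read the stack outputs (nameservers, Git endpoint URL), then clone
    aws cloudformation describe-stacks --stack-name my-static-site \
      --query 'Stacks[0].Outputs' --output table
    git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/example.com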
[Update 2016-10-28: Added Notes section.]
[Update 2016-11-01: Added note about static site generation and Hugo plugin.]

Original article and comments: https://alestic.com/2016/10/aws-git-backed-static-website/