Setting up a static SSL website on AWS using GitHub, CloudFront and S3

What this document covers:

  • Cost
  • Mail server hosting
  • Domain name transfer
  • SSL & CloudFront CDN
  • Using GitHub to update the site

Many articles like this go into great detail on how to set up these services, with screenshots and step-by-step instructions. This article is instead a bird’s-eye view of my recommended workflow, with links to Amazon’s own documentation, which I think is good enough. If you get stuck, just contact me and I’ll help you and update this document.

Cost

Your first year with Amazon includes Free Tier Limits that allow you to gain exposure to many of their services at almost no cost. Hosting your website should cost you about $6.00 for your first year, not including the domain registration. After your first year, your costs will jump, but should still remain relatively low compared to traditional hosting, most likely under $50.00 a year. Your typical costs will include:

Storage: Your costs will vary based on the type of storage you use, the region in which you store your assets, and most importantly the size of the assets you store. Figure on $5-10 a year.

DNS: Whether you let Amazon manage your domain name, or use another registrar, budget about $12 a year for a .com domain, or more for a different type of domain name. No matter who you register your domain with, Amazon will charge you about $6.00 a year to provide DNS routing to their services after your Free Tier Limits have expired.

Delivery: Costs will depend on the number of requests to your website, the amount of data transferred, and the number of edge locations you configure, but a typical website should only cost a few dollars a month.

To estimate your costs with more accuracy, use the AWS pricing calculator:
https://calculator.aws

Email

This document does not describe how to set up an email server, exactly. Amazon does have an email service called WorkMail, but I chose to host my mail with ProtonMail, a fantastic, secure email service based in Switzerland. I pay 5 Euro a month for a custom domain, 15 GB of storage, and 5 email addresses.

Gmail and other free mail services are notorious for scanning the contents of your private emails to build advertising profiles. Even a paid service like Amazon WorkMail is not safe from snooping: emails are routinely scanned, indexed, and stored indefinitely as they pass international borders. For this reason, I prefer to host my mail with ProtonMail, which encrypts your email at rest on their servers and has the option to encrypt your email in transit to its final destination. For example, I can give the recipient a unique password, and when I send them an email, they get a link to a webpage where they must enter the password to read the email. This does not mean that they are logging into ProtonMail to read their messages; ProtonMail never has access to the unencrypted email. Instead, the password is used to decrypt the email in the recipient’s browser. No network device between your browser and theirs, including home routers and the operating systems on or between the sender’s and recipient’s computers, has access to the unencrypted communication.

Mail between two ProtonMail accounts is automatically encrypted, making it a great product for businesses. Once you set up your DNS server in Amazon Route 53, it takes only a few hours to set up custom domain hosting with ProtonMail, including support for SPF, DKIM, and DMARC for security, something that is hard to set up on other mail servers. For more information, see their website: https://protonmail.com.
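
The ProtonMail records can be added to Route 53 in one batch from the command line. This is only a sketch: the hosted-zone ID and every record value below are placeholders, and ProtonMail’s setup wizard gives you the exact records (including the verification TXT and DKIM entries) for your domain.

```shell
# Sketch: the kinds of records ProtonMail's wizard asks for, expressed as a
# Route 53 change batch. All names, values and the zone ID are placeholders.
cat > protonmail-records.json <<'EOF'
{
  "Changes": [
    { "Action": "UPSERT", "ResourceRecordSet": {
        "Name": "yourdomain.com.", "Type": "MX", "TTL": 3600,
        "ResourceRecords": [
          { "Value": "10 mail.protonmail.ch." },
          { "Value": "20 mailsec.protonmail.ch." } ] } },
    { "Action": "UPSERT", "ResourceRecordSet": {
        "Name": "yourdomain.com.", "Type": "TXT", "TTL": 3600,
        "ResourceRecords": [
          { "Value": "\"v=spf1 include:_spf.protonmail.ch ~all\"" } ] } },
    { "Action": "UPSERT", "ResourceRecordSet": {
        "Name": "_dmarc.yourdomain.com.", "Type": "TXT", "TTL": 3600,
        "ResourceRecords": [
          { "Value": "\"v=DMARC1; p=quarantine\"" } ] } }
  ]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch file://protonmail-records.json
```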

DNS

The riskiest part of this transfer is managing the DNS records switchover. Remember that there are two entities that are responsible for DNS: the registrar and the name servers. Often these are combined into one service, but there are advantages to keeping them separate, including reducing the risk of your domain being stolen during a migration like this. Since my registrar is not changing, only my DNS server, there is no risk of theft: I remain the domain name owner throughout the process.

More on domain name ownership: My domain registrar is Gandi, based in Paris. I highly recommend them for this reason: You own the domain, not Gandi. The domain ownership contract is one of many reasons why I choose Gandi over other registrars. Budget about 15 Euro a year to use Gandi as your registrar, more expensive than transferring to AWS, but less risky in my opinion.

If your domain name was registered through your current webhosting service, you may need to transfer the domain name to a new registrar like Gandi, transfer it to AWS, or continue paying the old hosting provider for the registrar service (but not for DNS serving; that must be handled by AWS).

I recommend the following workflow to minimize risk:

  1. Create a hosted zone in Route 53 for your domain
  2. Duplicate all the records that exist in your current name servers in Route 53. Your A and CNAME records will continue to point to the IP address of your current webhosting. This is a good time to do an audit of other services that you may have to think about transferring to AWS or elsewhere. You may have database servers, mail servers, or application servers that are out of scope of this document. Today we are looking at static webservers only.
  3. Update the NS records at your registrar to point to the same servers as your Route 53 configuration. There should be four servers in the format ns-123.awsdns-12.net, ns-123.awsdns-12.org, ns-123.awsdns-12.co.uk, etc.
  4. Test heavily for 24-48 hours. This is the amount of time it can take for all the name server caches in the world to recognize the new settings. There are methods of lowering the TTL of these caches, but still budget 24-48 hours to be safe. On the Unix or Mac command line, you can use the following command to see if the change has taken effect on your network: dig +short NS yourdomain.com. Also check your analytics to make sure that traffic has not dropped. You have analytics on your website, right? If not, it takes 20 minutes to set up Google Analytics: https://analytics.google.com.
  5. You are now free to set up your new AWS hosting on a subdomain, like test.yourdomain.com using the steps below.
  6. Once everything is working on your test subdomain, you can point the A records to AWS so that www.yourdomain.com points to AWS and not your old webservers, and remove the test subdomain. Once you are satisfied that the old webservers are not getting any traffic, you can shut them down. Make sure you have transferred ALL of your content before you shut down your old hosting service!
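
The dig check from step 4 can be wrapped in a small helper that compares a resolver’s answer against the four name servers Route 53 assigned you, ignoring record order. The server names below are the placeholder names from step 3, not real ones.

```shell
# The four name servers Route 53 assigned to your hosted zone (placeholders).
expected_ns="ns-123.awsdns-12.net.
ns-123.awsdns-12.org.
ns-123.awsdns-12.co.uk.
ns-123.awsdns-12.com."

# Succeeds when the resolver's answer matches the expected set. Both sides are
# sorted first, because dig does not guarantee the order of NS records.
ns_matches() {
  [ "$(printf '%s\n' "$1" | sort)" = "$(printf '%s\n' "$expected_ns" | sort)" ]
}

# Typical use, once in a while during the 24-48 hour window, against a public
# resolver rather than your own network's cache:
#   ns_matches "$(dig +short NS yourdomain.com @8.8.8.8)" && echo "propagated"
```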

Here is the AWS documentation for this process.

Making Route 53 the DNS service for a domain that’s in use
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-in-use.html

Setting up S3 for Webhosting

This part is relatively simple.

You have to create two S3 buckets, one for yourdomain.com and one for www.yourdomain.com. One bucket forwards to the other. This is inelegant, but that is how Amazon recommends doing it. Bucket names need to be globally unique, but so do domain names, so there is no problem here. Name your two buckets after your domain name, one preceded with www and one without. Upload a dummy index.html file to test that everything works; we’ll transfer the real content via a more efficient means later. If you are impatient, you can transfer your content over manually. There is no need to fiddle with DNS yet; that will follow when we set up CloudFront. This guide is pretty easy to follow:

Configuring a static website using a custom Domain registered with Route 53
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
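
The two buckets can also be created from the command line. A minimal sketch, assuming the placeholder domain yourdomain.com: here the bare-domain bucket serves the content and the www bucket redirects to it, the same pattern the walkthrough uses. The walkthrough’s steps for making the content bucket publicly readable still apply.

```shell
# Create the two buckets named after the domain (placeholder names).
aws s3 mb s3://yourdomain.com
aws s3 mb s3://www.yourdomain.com

# Turn the bare-domain bucket into a static website endpoint...
aws s3 website s3://yourdomain.com/ \
  --index-document index.html --error-document error.html

# ...and make the www bucket redirect every request to it.
aws s3api put-bucket-website --bucket www.yourdomain.com \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"yourdomain.com"}}'

# Upload the dummy page to test.
echo '<html><body>It works</body></html>' > index.html
aws s3 cp index.html s3://yourdomain.com/
```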

Check your site into GitHub

I told you this document won’t go into great detail, and I deliver. The workflow to set up a GitHub repository depends heavily on your operating system and preferred development environment, so follow the appropriate documentation for the environment you are most comfortable working in. The good thing about using git source control is that no matter what development environment you choose to use in the future, it will most likely work seamlessly with your GitHub repo, and you will never be working on out-of-date content, no matter how many devices and development environments you use, both now and in the future.
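
If you have never used git before, the core workflow is only a few commands. A sketch using a throwaway folder standing in for your site folder; the GitHub remote URL is a placeholder for your own repository.

```shell
# Create a throwaway folder standing in for your site folder.
site=$(mktemp -d)
echo '<html><body>Hello</body></html>' > "$site/index.html"

# Turn it into a git repository and commit the content.
cd "$site"
git init -q
git add index.html
git -c user.name="You" -c user.email="you@yourdomain.com" \
    commit -q -m "Initial import of static site"

# Then connect it to GitHub (placeholder URL) and push:
#   git remote add origin git@github.com:you/your-site.git
#   git push -u origin main
```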

Setting up CodePipeline

I could not find a good document on this workflow in AWS’s documentation, so I will give some detailed steps in this section:

  1. In AWS Console, go to the CodePipeline service.
  2. Click the create pipeline button.
  3. Give your new pipeline a name.
  4. Use the default option of creating a new service role. Give it a name as well.
  5. Artifact store should be left at the default location as well.
  6. Click next.
  7. SOURCE: Source provider should be GitHub.
  8. SOURCE: Click the connect to GitHub button, and authenticate AWS CodePipeline to access your repo.
  9. SOURCE: Leave the change detection as the default “GitHub webhooks” option.
  10. Click next.
  11. BUILD: Click skip to skip the build stage. We don’t need to build a static website.
  12. DEPLOY: Deploy provider is Amazon S3.
  13. DEPLOY: choose your static website bucket from the dropdown.
  14. DEPLOY: Leave deployment path blank.
  15. DEPLOY: Make sure that “Extract the file before deploy” is checked.
  16. Click Next.
  17. Click Create Pipeline.
  18. Make a change to your site, commit and push to GitHub, and check to see that everything works. You can check for errors in the CodePipeline console.
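
Step 18’s check can also be done from the command line. A sketch; the pipeline name is whatever you chose in step 3.

```shell
# Show the latest status of each stage in the pipeline (placeholder name).
aws codepipeline get-pipeline-state --name your-pipeline-name \
  --query 'stageStates[].{stage:stageName,status:latestExecution.status}' \
  --output table
```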

SSL Certificate

There are two ways to create and install your SSL certificate.

  1. You can import your own into AWS. Do this if you already have a certificate from a Certificate Authority that you would like to keep. If you are using Gandi for domain registration, a free certificate is included for your first year. In this case, you will need to import your certificate into AWS Certificate Manager. Docs here: https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html
  2. You can generate a new certificate via AWS. You can do this in AWS console or on the command line. Docs here: https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html

I have no preference between an imported cert and an AWS-generated cert; in fact, I use both for different use cases. Either way, note that a certificate used with CloudFront must live in the us-east-1 (N. Virginia) region, because CloudFront only reads certificates from there.
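
For method 2, the request is a single CLI call. A sketch, assuming the placeholder domain and DNS validation.

```shell
# Request one certificate covering both the bare domain and the www subdomain.
# CloudFront only reads certificates from us-east-1, hence the explicit region.
aws acm request-certificate \
  --region us-east-1 \
  --domain-name yourdomain.com \
  --subject-alternative-names www.yourdomain.com \
  --validation-method DNS
```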

Set up CloudFront

Now we are going to connect your certificate and domain name to your S3 bucket using CloudFront. This is the easiest part of the migration. Create a CloudFront distribution that uses your S3 bucket as its origin and your ACM certificate for SSL. You can test with any subdomain you want, as long as you add it to your Route 53 record set. When you have transferred all of your content to your S3 bucket via GitHub or manually, and everything tests OK over SSL on your test subdomain, you can create a distribution for your root domain and www subdomain.
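
For reference, here is a trimmed sketch of what the distribution wires together: the S3 website endpoint as the origin, your test subdomain as an alias, and the ACM certificate for SSL. All IDs and ARNs are placeholders, and the real create-distribution API requires more fields than shown, so treat this as a map of the connections rather than a paste-ready command.

```shell
# Trimmed distribution config (placeholders throughout; not a complete config).
aws cloudfront create-distribution --distribution-config '{
  "CallerReference": "static-site-migration",
  "Aliases": { "Quantity": 1, "Items": ["test.yourdomain.com"] },
  "Origins": { "Quantity": 1, "Items": [{
    "Id": "s3-website",
    "DomainName": "yourdomain.com.s3-website-us-east-1.amazonaws.com",
    "CustomOriginConfig": {
      "HTTPPort": 80, "HTTPSPort": 443,
      "OriginProtocolPolicy": "http-only" } }] },
  "DefaultCacheBehavior": {
    "TargetOriginId": "s3-website",
    "ViewerProtocolPolicy": "redirect-to-https" },
  "ViewerCertificate": {
    "ACMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE",
    "SSLSupportMethod": "sni-only" },
  "Comment": "Static site",
  "Enabled": true
}'
```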

That’s it, the migration is complete.

Bonus content for the brave:

Most sites have a “contact me” form of some sort that sends an email when the user enters their info. This is easy to set up using AWS Lambda and SES; when I have time, I’ll write another blog post to cover this workflow.
