Creating an S3 website bucket exposed over HTTPS with CloudFormation
I show how to create an S3 bucket set up to serve a static website, and expose it over HTTPS via CloudFront, using the same SSL certificate we created in the previous post—all via CloudFormation.
This post builds on the template I created in my previous post, where I created an S3 redirection bucket supported by an SSL certificate. You may find it helpful to read that post first. Furthermore, this post is part of a series where I add SSL to this blog. If you like this post, you might like the others too.
Since the bucket we're creating this time is going to host our site it might be interesting to capture access logs[1]. S3 buckets are great for logging, and it's super easy to configure them for this. The snippet below does exactly that[2]. The key thing about log buckets is to remember to set the AccessControl property to LogDeliveryWrite:
```yaml
Resources:
  [...]
  LogBucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      AccessControl: LogDeliveryWrite
      BucketName: !Join
        - '.'
        - ['www', !Ref DomainName, 'logs']
```
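Once the stack is up you can sanity-check the ACL from the command line. A quick sketch, assuming the DomainName parameter was superloopy.io, so the log bucket is named www.superloopy.io.logs:

```sh
# Inspect the log bucket's ACL (bucket name assumes DomainName=superloopy.io).
aws s3api get-bucket-acl --bucket www.superloopy.io.logs
# The LogDeliveryWrite canned ACL should show up as grants to the
# http://acs.amazonaws.com/groups/s3/LogDelivery group (WRITE and READ_ACP).
```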
Next we create the bucket that will hold our website. This time we won't redirect anything; instead we specify that when someone asks for the root of the bucket they'll get the index.html document, and on error they'll get the 404.html document[3]. Finally we configure this bucket to ship logs to the LogBucket we created in the previous step.
```yaml
Resources:
  [...]
  SiteBucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Join ['.', ['www', !Ref DomainName]]
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: 404.html
      LoggingConfiguration:
        DestinationBucketName: !Ref LogBucket
```
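This too is worth a quick check after deployment. A minimal sketch, assuming the same example domain as above:

```sh
# Confirm the static-website settings on the site bucket.
aws s3api get-bucket-website --bucket www.superloopy.io
# Expect IndexDocument: index.html and ErrorDocument: 404.html.
```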
Finally we set up a CloudFront distribution. This is identical to the setup in the previous post except our Alias now has a www prefix. Nevertheless, this is a short post so I'll include it here for completeness. Note that this also uses the SSL certificate we set up in the previous post, since we created that with a SubjectAlternativeName that would work for our www domain. I also added an Output so we can get at the CloudFront domain for testing, since the DNS still points to GitHub Pages.
```yaml
Resources:
  [...]
  SiteCloudFront:
    Type: 'AWS::CloudFront::Distribution'
    Properties:
      DistributionConfig:
        Aliases:
          - !Join ['.', ['www', !Ref DomainName]]
        Enabled: True
        Origins:
          - DomainName: !Select
              - 1
              - !Split ["//", !GetAtt SiteBucket.WebsiteURL]
            Id: origin
            CustomOriginConfig:
              OriginProtocolPolicy: http-only
        DefaultCacheBehavior:
          TargetOriginId: origin
          DefaultTTL: 5
          MaxTTL: 30
          ForwardedValues:
            QueryString: false
          ViewerProtocolPolicy: redirect-to-https
        ViewerCertificate:
          AcmCertificateArn: !Ref SSL
          SslSupportMethod: sni-only
Outputs:
  [...]
  SiteCloudFrontDomain:
    Value: !GetAtt SiteCloudFront.DomainName
```
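One detail worth calling out: the !Select/!Split pair is there because SiteBucket.WebsiteURL returns a full URL, while a CloudFront custom origin wants a bare hostname. In shell terms it does roughly this (the region in the example URL is just an assumption):

```sh
# Roughly what !Select [1, !Split ["//", !GetAtt SiteBucket.WebsiteURL]] does:
# split on "//" and keep the part after the scheme.
WEBSITE_URL='http://www.superloopy.io.s3-website-eu-west-1.amazonaws.com'
echo "${WEBSITE_URL#*//}"
# -> www.superloopy.io.s3-website-eu-west-1.amazonaws.com
```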
To test this setup we have to upload some files to the S3 bucket. My first attempt looked like this:
```sh
aws s3 sync \
  --exclude '*' \
  --include '*.html' \
  --include '*.png' \
  --include '*.css' \
  ~/blog/ s3://www.superloopy.io
```
However, I kept getting permission errors. I wasted a lot of time investigating bucket permissions until I realised I needed to grant public read permissions on each object too. aws s3 sync --acl public-read will do that, so I touch-ed all the files and re-uploaded them like this:
```sh
aws s3 sync --acl public-read \
  --exclude '*' \
  --include '*.html' \
  --include '*.png' \
  --include '*.css' \
  ~/blog/ s3://www.superloopy.io
```
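An alternative I didn't take: rather than tagging every object with an ACL, you can attach a bucket policy that makes all objects publicly readable, which avoids re-uploading anything. A sketch of that approach (the policy JSON is my assumption, not part of the template above):

```sh
# Hypothetical alternative: one bucket policy instead of per-object ACLs.
aws s3api put-bucket-policy --bucket www.superloopy.io --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::www.superloopy.io/*"
  }]
}'
```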
I was then able to get the SiteCloudFrontDomain value from my CloudFormation stack and visit that domain in a browser. It redirected me to the HTTPS version of the same site, as expected, and showed the index.html document. Going to a path that doesn't exist produced the expected 404 page. Success!
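The same check works from the command line; a sketch, where blog-stack is a hypothetical stack name:

```sh
# Pull the CloudFront domain out of the stack outputs.
DOMAIN=$(aws cloudformation describe-stacks --stack-name blog-stack \
  --query "Stacks[0].Outputs[?OutputKey=='SiteCloudFrontDomain'].OutputValue" \
  --output text)

curl -sI "http://$DOMAIN/"          # expect a 301 redirect to HTTPS
curl -sI "https://$DOMAIN/no-such"  # expect the custom 404 page
```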
The thing that caused me the most grief with this setup was not CloudFormation itself but learning that each S3 object in my bucket had to have public read permissions too. A novice mistake, I'm sure! And I'm actually really happy that objects are private by default. That is a good and sensible default! (Even if it did cause me a bit of a headache today.)
[2]: You don't have to specify a BucketName, but I like to as it makes finding the right bucket in the S3 console a lot easier.