Getting a blog running with Jekyll, GitHub Actions, AWS S3, and CloudFront
I have been planning to make a technical blog for a long time. I gave it a couple of tries before but never finished. Now I have pushed through the final stretch and have one up. From the title, you might have seen the technologies used, and you might ask, “Why Jekyll?” The reason is that static websites are small, cheap, and fast. I am fascinated by them, and I wouldn’t like to have heavy components running for such a small blog.
Then, you could ask, “What are you using GitHub Actions for?” Because I keep the blog on GitHub, and each time I push, I would like it to build my blog, upload it to S3, and invalidate the CloudFront cache. It was the easiest way to accomplish this for me. I could have used AWS CodePipeline and AWS CodeBuild, but in case something went wrong, I wouldn’t like to be charged a lot (even if I set up a budget, I can’t stop the services if the budget is exceeded), and learning to use it is also a bit more high-friction than GitHub Actions.
Lastly, you might ask, “Why are you using S3 and CloudFront instead of GitHub Pages?” which would be a great question, and I don’t exactly know the answer apart from customizability. With GitHub Pages, I don’t have much say in how the site is built and deployed. I would like to keep both of my subdomains (this blog and my gallery) in one repository, and I am not sure how I would handle that with GitHub Pages. I had a couple of small issues setting up Jekyll with GitHub Pages in the past, but I know this has improved a lot. Also, it becomes just a tiny bit harder to test things locally. That’s why I decided to host the blog and the gallery in one S3 bucket and serve them with two different CloudFront distributions.
Jekyll
First of all, I installed Ruby and Jekyll.
gem install jekyll bundler
To create a new Jekyll project, run the following commands:
jekyll new myblog
cd myblog
Next, edit the _config.yml file to your needs and delete the following line:
theme: minima
Also, delete the following lines from the Gemfile:
# This is the default theme for new Jekyll sites. You may change this to anything you like.
gem "minima", "~> 2.5"
We deleted these lines because we will create a basic theme ourselves.
After deleting the lines, run the following command to update your Gemfile.lock file:
bundle install
Creating a Barebones Jekyll Theme
After deleting the line from the _config.yml file, we no longer depend on the minima theme. Therefore, we need to create new folders that Jekyll can use to generate our static website. To do this, create a new folder:
mkdir _layouts
Then, create a new file called default.html in the _layouts folder:
_layouts/default.html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" href="data:;base64,iVBORw0KGgo=" />
    <title>
      {%- if page.title -%}
        {{- page.title | append: ' | ' | append: site.title | xml_escape -}}
      {%- else -%}
        {{- site.title | xml_escape -}}
      {%- endif -%}
    </title>
    <link rel="stylesheet" href="{{- '/assets/css/main.css' | relative_url -}}" />
  </head>
  <body>
    <header>
      <nav>
        <ul>
          <li><a href="{{- '/' | relative_url -}}">Blog</a></li>
          <li><a href="{{- '/about/' | relative_url -}}">About Me</a></li>
        </ul>
      </nav>
    </header>
    <main>
      {{- content -}}
    </main>
    <footer>
      <p>© {{ site.author | xml_escape }} {{ site.time | date: "%Y" | xml_escape }}</p>
    </footer>
  </body>
</html>
We could separate the header, footer, and head into the _includes folder; however, for the sake of simplicity, I will skip that in this blog post. Now that we have a default layout, we will create our main CSS file. Let’s create a new directory:
mkdir -p assets/css
and create a main.css file inside this directory:
assets/css/main.css
body {
  margin: 40px auto;
  max-width: 650px;
  line-height: 1.6;
  font-size: 18px;
  color: #444;
  padding: 0 10px;
}

h1, h2, h3 {
  line-height: 1.2;
}

a {
  color: #0074D9;
  text-decoration: none;
}

a:hover {
  text-decoration: underline;
}

.header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  background-color: #333;
  color: #fff;
  padding: 10px;
}

.header h1 {
  margin: 0;
  font-size: 24px;
}

nav {
  display: flex;
  justify-content: space-between;
  align-items: center;
  background-color: #333;
  color: #fff;
  padding: 1rem;
}

nav a {
  color: #fff;
  text-decoration: none;
  font-weight: bold;
}

nav ul {
  display: flex;
  list-style: none;
}

nav li {
  margin-left: 1rem;
}

nav li:first-child {
  margin-left: 0;
}
We can create a post layout by creating a post.html file under the _layouts folder:
_layouts/post.html
---
layout: default
---
<article>
  <header>
    <h1>{{- page.title -}}</h1>
    <p class="post-meta">{{- page.date | date: "%b %-d, %Y" -}}</p>
  </header>
  {{- content -}}
</article>
We can also create a home layout for our main page by creating a home.html file under the _layouts folder:
_layouts/home.html
---
layout: default
---
<section class="post-list">
  <h1>Latest Posts</h1>
  {%- for post in site.posts -%}
  <div class="post-container">
    <h2><a href="{{- post.url -}}">{{- post.title -}}</a></h2>
    <p class="post-date">{{- post.date | date: "%b %d, %Y" -}}</p>
    <div class="post-excerpt">{{- post.excerpt -}}</div>
    {%- if post.content.size > post.excerpt.size -%}
    <p><a href="{{- post.url -}}">(more...)</a></p>
    {%- endif -%}
  </div>
  {%- endfor -%}
</section>
If you run bundle exec jekyll serve and go to http://127.0.0.1:4000/, you will see that we have a very basic site running locally.
In the terminal, you will notice a warning that the page layout for about.markdown doesn’t exist. We can fix that quickly by creating a page.html file in the _layouts folder:
_layouts/page.html
---
layout: default
---
<article>
  <h1>{{- page.title -}}</h1>
  {{- content -}}
</article>
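With the layouts in place, you can add a first post to see them in action. Jekyll picks up files in the _posts directory named YYYY-MM-DD-title.markdown; the date, filename, and title below are just illustrative placeholders:

```shell
mkdir -p _posts
# Create a sample post; Jekyll derives the post URL from the filename date and title.
cat > _posts/2024-01-01-hello-world.markdown <<'EOF'
---
layout: post
title: "Hello, World"
---
This is a sample post to check the post and home layouts.
EOF
head -n 4 _posts/2024-01-01-hello-world.markdown
```

The home layout's post loop will now list this post, and clicking it renders the post layout.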
We can stop thinking about Jekyll for now. Towards the end of the post, I will give a couple of tips about Jekyll.
AWS S3
To use AWS, we need to create a new S3 bucket that will host our blog later. Open the AWS S3 page and click on Create bucket.

Choose a bucket name and select the region where you want your bucket to be located.

AWS Certificate Manager
If you want to point your own domain at the CloudFront distribution we will create, you need to generate a certificate for your domain. Go to the AWS Certificate Manager page and click on Request a certificate.

Since we want to request a public certificate, click on Next:

Enter your Fully Qualified Domain Name (FQDN). For this example, I will use idle-babushka.burakcankus.com. You can add more domains to this certificate if you’d like. I will keep the DNS validation - recommended method and change the key algorithm to ECDSA P 256, mainly for its performance.

AWS Lightsail DNS (or Route 53)
Now, we need to prove ownership of the domain we entered in the last step. I have previously done this using Route 53, but if you create a Route 53 zone, you get charged 0.50 euros per month (+ 0.10 euros tax) by default. So I have now moved to Lightsail DNS. You can host three DNS zones in Lightsail for free; however, it doesn’t have the easy integration with other parts of AWS. We can’t just click on Create records in Route 53 and have the records ready; it will take a couple more steps. First, let’s create a DNS zone in Lightsail.

Click on Create DNS Zone and enter your domain name.


Update your nameservers on your registrar (I use Namecheap).

Back on Lightsail, go to the DNS records tab and click Add record. Now we need to take the CNAME record details from AWS Certificate Manager.

Click on your certificate in AWS Certificate Manager and view the Domains section. It shows the CNAME name and value, which we need to copy into a new CNAME record. A small sidetrack here: DNS record names must end with a dot; however, some providers (in this case, Lightsail DNS) strip the trailing dot. That is not an issue; I recommend adding the dot and trying it that way first.


Once we save this record, it shouldn’t take too long for our certificate to be accepted. You can go to AWS Certificate Manager and refresh to see the status change from Pending validation to Issued.

For me, it took about 15 minutes.
AWS CloudFront distribution
Next, we need to create a CloudFront distribution for this S3 bucket. Head over to the AWS CloudFront page and click Create a CloudFront distribution.

For the Origin domain, we choose our S3 bucket. If you would like to have your blog under a directory, you can specify it under Origin path, but in this post, I will keep it in the root.
Origin access will be Origin access control settings (recommended). This is because we left Block all public access selected while creating our bucket; we don’t want any access other than from our CloudFront distribution. Next, click on Create control setting under Origin access, and click Create. The default values are okay.

After creating the Origin access control, you will see the warning: “You must update the S3 bucket policy. CloudFront will provide you with the policy statement after creating the distribution.” This is not an issue, and we will fix it later. For the rest of the settings, I changed them to:
Viewer protocol policy: Redirect HTTP to HTTPS
Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
Response headers policy - optional: SecurityHeadersPolicy
Price class: Use only North America and Europe
Alternate domain name (CNAME) - optional: domain name I would like to use (blog.burakcankus.com)
Custom SSL certificate - optional: Certificate we created previously
Supported HTTP versions: HTTP/2 & HTTP/3
Default root object - optional: index.html

Now that we have created our distribution, we need to go back to Lightsail DNS and add a CNAME record that points to it. Click on your distribution name and copy the Distribution domain name.

In Lightsail DNS, create a new CNAME record and fill in the details.

Connection between S3 and CloudFront
In CloudFront, there are two important tabs. The first is Origins, which controls how the CloudFront distribution fetches files. The second is Behaviors, which controls how users request files. Now, we need to allow CloudFront to fetch files from our S3 bucket. For this, go to the Origins tab, select your origin, and click Edit.

Scroll down, and under Bucket policy, click on the Copy policy button.

You will get a policy similar to the following:
{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::idle-babushka/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::261649116962:distribution/E2YEMAQQRE4QBD"
                }
            }
        }
    ]
}
Now, go to S3, enter your bucket settings, click on the Permissions tab, and under Bucket policy, click Edit.

Paste the copied policy and Save changes.

If you open your blog URL now, you will get a 403 response. This is because we only gave our CloudFront distribution the GetObject permission. When our distribution requests something that doesn’t exist, S3 responds with 403 Access Denied because it doesn’t want to leak whether the object exists. We don’t need to grant extra permissions to fix this. Go to your CloudFront distribution’s Error pages tab and click on Create custom error response.

Select 403 and customize the response. We don’t have anything in our bucket yet, but we will store our 404 page in the root directory. For Response page path, type in /404.html, and for HTTP Response code, select 404: Not Found. This way, each time CloudFront gets 403 Access Denied, it will return 404.html to users. I will also create a custom response for 404 that returns the same file.

Now that we have fixed our error pages, the only thing left is to upload the _site directory to S3. Just for testing things out, let’s do this manually.
S3 Upload
To build the _site directory, run the following command:
bundle exec jekyll build
Next, go to your S3 bucket and click on the Upload button. Open the _site directory that was created by the previous command, select all the files, and drag and drop them onto the S3 Upload page.

After the upload is successful, you can click on Close.

Now if you go to your domain, you should see the blog. However, if you click on the About Me page, it will fail with a 404 error. This is because the link to the About page is hardcoded to /about/ in our default.html file:
_layouts/default.html
<!--...-->
<li><a href="/">Blog</a></li>
<li><a href="/about/">About Me</a></li>
<!--...-->
We could manually change this to /about/index.html, or we could write a CloudFront Function so that whenever our distribution receives a request for a folder, it actually requests the index.html file under that folder from S3.
CloudFront Functions
While creating this function, I followed this post, and you are welcome to read further details there.
Head over to CloudFront, click on Functions in the sidebar, and click on the Create function button.

Enter any name (I will enter RewriteDefaultIndexRequest) and click Create function. For the function code, I will copy the code from the blog post:
function handler(event) {
  var request = event.request;
  var uri = request.uri;

  // Check whether the URI is missing a file name.
  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  }
  // Check whether the URI is missing a file extension.
  else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }

  return request;
}
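Before publishing, you can sanity-check the rewrite logic locally. This optional step assumes you have Node.js installed; CloudFront Functions use a restricted JavaScript runtime, but this plain string logic runs the same way in Node:

```shell
# Write the handler plus a few checks on representative URIs to a scratch file.
cat > check_rewrite.js <<'EOF'
function handler(event) {
  var request = event.request;
  var uri = request.uri;
  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  } else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }
  return request;
}

console.log(handler({ request: { uri: '/about/' } }).uri);              // /about/index.html
console.log(handler({ request: { uri: '/about' } }).uri);               // /about/index.html
console.log(handler({ request: { uri: '/assets/css/main.css' } }).uri); // unchanged
EOF
node check_rewrite.js
```

Both the trailing-slash and the extensionless forms get rewritten to index.html, while real file paths pass through untouched.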
Paste this into the Development section and click on Save changes.

After saving the changes, go to the Publish tab and click on Publish function.

After publishing the function, go back to your distribution’s Behaviors tab, select the Default (*) behavior, and click Edit.

Scroll down to the bottom of the page, and under Function associations - optional, for Viewer request, change Function type to CloudFront Functions and set Function ARN / Name to the function we just created.

After this selection, click Save changes.
Now if you go over to your About Me page, it will load /about/index.html.
The most difficult part is done; now we just need to connect it to GitHub Actions so that each time we push, we can upload our site to S3.
AWS IAM
Head over to the AWS IAM page and click on Policies. We will add a user for GitHub Actions to use, and we will make it quite strict so that it can perform only a few actions; in case our secrets are leaked, the damage is minimized. First, we need to create two policies that we will attach to our user. Click on Create policy, click on the JSON tab, and paste the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::your-s3-bucket-name",
                "arn:aws:s3:::your-s3-bucket-name/*"
            ]
        }
    ]
}
Replace both occurrences of your-s3-bucket-name with your actual S3 bucket name.
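If you prefer the CLI over the console for this step, you can save the policy to a file first. The file name below is just an example; a quick json.tool pass catches paste errors before you upload it, and the commented-out aws command is the CLI equivalent of the console steps that follow:

```shell
cat > github-s3-write-delete-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::your-s3-bucket-name",
                "arn:aws:s3:::your-s3-bucket-name/*"
            ]
        }
    ]
}
EOF
# Validate the JSON locally before creating the policy.
python3 -m json.tool github-s3-write-delete-policy.json > /dev/null && echo "policy JSON is valid"
# CLI alternative to the console walkthrough below:
# aws iam create-policy --policy-name github-s3-write-delete-policy --policy-document file://github-s3-write-delete-policy.json
```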

Click on Next: Tags, then Next: Review, and give your policy a name and a description. I will name it github-s3-write-delete-policy, describe it as “It allows the policyholder to ListBucket, PutObject and DeleteObject from S3 bucket. Access is only granted for your-s3-bucket-name S3 bucket.”, and click Create policy.
After that, we need to create another policy for invalidating CloudFront caches. Invalidating the cache tells CloudFront to discard its cached copies, so after a deployment it fetches the updated files from S3 instead of serving stale ones. Again, click on Create policy; in the visual editor, select the CloudFront service, the CreateInvalidation action, and for Resources, paste your CloudFront distribution ID into the Distribution id field. In the end, the JSON policy should look similar to:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "cloudfront:CreateInvalidation",
            "Resource": "arn:aws:cloudfront::<account-id>:distribution/<distribution-id>"
        }
    ]
}

Click on Next: Tags, then Next: Review, and name your policy. I chose the name github-cloudfront-cache-invalidation-policy and the description “It allows the policyholder to create CloudFront Cache Invalidation for CloudFront distribution E2YEMAQQRE4QBD.”, and clicked Create policy.

Now that we have our two policies, we can create a user and attach them. Click on the Users tab and click on Add users.

I will name my user github-actions-user and click Next.

Select Attach policies directly and check the policies we created previously. If you don’t see them, you might need to sort by Type and look at the Customer managed policies.

After selecting the policies, click Next and Create user.

After creating the user, we need to generate secrets for it. Click on your user, go to the Security credentials tab, scroll down, and click on Create access key.

Click on Other, and then click Next. You can set a description tag, such as GitHub Actions user Access Key, and then click on Create access key.

Now that we have our Access Key, we can continue with the next step. Don’t close this page yet.

GitHub Actions
On GitHub, create a new repository and push your blog to this repository:
git init .
git add --all
git commit -m "Initial commit"
git branch -M main
git remote add origin git@github.com:<your-github-username>/<your-git-repository-name>.git
git push -u origin main
Now, on GitHub, go to Settings > Security > Secrets and variables > Actions.

Next, click on New repository secret, name the secret AWS_ACCESS_KEY_ID, and copy the Access key from the IAM page into the value field.

Then, create another secret named AWS_SECRET_ACCESS_KEY, and copy the Secret access key from the IAM page into the value field.

After adding these two secrets, you can close the IAM page by pressing Done. It will warn Continue without viewing or downloading?, but we have already copied the values to GitHub secrets, and we don’t need to access them again.
(Of course, you shouldn’t share your secret key with anyone. It is in the screenshot for me, but that Access key is already deactivated and deleted.)
Create another secret named CLOUDFRONT_DISTRIBUTION_ID and paste your CloudFront distribution ID.

And lastly, create a new secret called S3_BUCKET and paste your S3 bucket name with the s3:// prefix.

Now we can create a new file in our repository at .github/workflows/deploy_website.yml and paste the following:
.github/workflows/deploy_website.yml
name: Deploy website to S3

on:
  workflow_dispatch:
  push:
    branches:
      - main

jobs:
  deploy-blog:
    runs-on: ubuntu-latest
    steps:
      - name: Install minify
        uses: awalsh128/cache-apt-pkgs-action@1850ee53f6e706525805321a3f2f863dcf73c962
        with:
          packages: minify
          version: 1.0
      - name: Checkout repository
        uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # v3.5.2
      - name: Set up Ruby
        uses: ruby/setup-ruby@55283cc23133118229fd3f97f9336ee23a179fcf # v1.146.0
        with:
          bundler-cache: true # runs 'bundle install' and caches installed gems automatically
      - name: Build Jekyll site
        run: |
          bundle exec jekyll build --destination _site
      - name: Minify .html files
        run: |
          minify -r -o ./ --html-keep-document-tags --html-keep-end-tags --html-keep-default-attrvals --match="\.html$" _site
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@e1e17a757e536f70e52b5a12b2e8d1d1c60e04ef # v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Deploy to S3
        run: |
          aws s3 sync _site ${{ secrets.S3_BUCKET }} --delete
      - name: Invalidate CloudFront Cache
        run: |
          aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"
And create a .ruby-version file to set up Ruby:
echo "3.2.2" > .ruby-version
Then push it using:
git add --all
git commit -m "Create CI/CD"
git push
After pushing this file, the blog should automatically build, get uploaded to S3, and invalidate the CloudFront cache.
Jekyll Minifying CSS
In our workflow, I added the minify package with caching so that we can minify the .html files, but we don’t minify our .css files. Jekyll has a built-in way to minify CSS files. Open your Jekyll _config.yml file and add the following:
_config.yml
# ...
sass:
style: compressed
sourcemap: never
# ...
Now, whenever there are Sass (Syntactically Awesome Style Sheets) files, Jekyll will automatically compress them. We can turn our main.css file into Sass format quite easily: rename main.css to main.scss and prepend two lines of three dashes (an empty front matter block) to the file.
mv assets/css/main.css assets/css/main.scss
sed -i '1 i\---\n---\n' assets/css/main.scss
Now, each time Jekyll builds the site, it will minify the SCSS files.
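To confirm the front matter gets prepended correctly, you can run the same steps on a scratch copy in /tmp; note that the sed -i syntax above is GNU sed (on macOS, sed -i needs an explicit suffix argument):

```shell
mkdir -p /tmp/scss-check
printf 'body {\n  color: #444;\n}\n' > /tmp/scss-check/main.scss
# Prepend the empty front matter block, as in the command above (GNU sed).
sed -i '1 i\---\n---\n' /tmp/scss-check/main.scss
head -n 2 /tmp/scss-check/main.scss
```

The first two lines should both be `---`; without that front matter, Jekyll would copy the file verbatim instead of running it through the Sass pipeline.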
Conclusion
Now we have a simple blog with a CI/CD pipeline that builds and deploys the website automatically. While there are several potential areas for improvement, such as using CloudFormation to automate the AWS side of things, I am satisfied with the current state.