Getting a blog running with Jekyll, GitHub Actions, S3, and CloudFront

I have been planning to make a technical blog for a long time. I gave it a couple of tries before but never finished one. Now I have finally pushed through the final stretch and have one up. From the title, you can see the technologies used, and you might ask, “Why Jekyll?” The reason is that static websites are small, cheap, and fast. I am fascinated by them, and I don’t want heavy components running for such a small blog.

Then, you could ask, “What are you using GitHub Actions for?” Because I keep the blog on GitHub, and each time I push, I want it to build my blog, upload it to S3, and invalidate the CloudFront cache. This was the easiest way for me to accomplish that. I could have used AWS CodePipeline and AWS CodeBuild, but if something went wrong, I wouldn’t want to be charged a lot (even if I set up a budget, I can’t stop the services once the budget is exceeded), and learning them also has more friction than GitHub Actions.

Lastly, you might ask, “Why are you using S3 and CloudFront instead of GitHub Pages?” which is a great question, and I don’t have an exact answer beyond customizability. With GitHub Pages, I don’t have much say in how the site is built and deployed. I would also like to serve both of my subdomains (this blog and my gallery) from one repository, and I am not sure how I would handle that with GitHub Pages. I had a couple of small issues setting up Jekyll with GitHub Pages in the past, though I know this has improved a lot. It also makes testing things locally just a tiny bit more difficult. That’s why I decided to host the website and the gallery in one S3 bucket and serve them with two different CloudFront distributions.

Jekyll

First of all, I installed Ruby and Jekyll.

$ gem install jekyll bundler

To create a new Jekyll project, run the following commands:

$ jekyll new myblog
$ cd myblog

Next, edit the _config.yml file to your needs and delete the following line:

_config.yml
theme: minima

Also, delete the following lines from the Gemfile:

# This is the default theme for new Jekyll sites. You may change this to anything you like.
gem "minima", "~> 2.5"

We deleted these lines because we will create a basic theme ourselves.

After deleting the lines, run the following command to update your Gemfile.lock file:

$ bundle install

Creating a Barebones Jekyll Theme

After deleting the line from the _config.yml file, we no longer depend on the minima theme. Therefore, we need to create new folders that Jekyll can use to generate our static website. To do this, create a new folder:

$ mkdir _layouts

Then, create a new file called default.html in the _layouts folder:

_layouts/default.html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" href="data:;base64,iVBORw0KGgo=" />
    <title>
      {%- if page.title -%}
        {{- page.title | append: ' | ' | append: site.title | xml_escape -}}
      {%- else -%}
        {{- site.title | xml_escape -}}
      {%- endif -%}
    </title>
    <link rel="stylesheet" href="{{- '/assets/css/main.css' | relative_url -}}" />
  </head>
  <body>
    <header>
      <nav>
        <ul>
          <li><a href="{{- '/' | relative_url -}}">Blog</a></li>
          <li><a href="{{- '/about/' | relative_url -}}">About Me</a></li>
        </ul>
      </nav>
    </header>
    <main>
      {{- content -}}
    </main>
    <footer>
      <p>&copy; {{ site.author | xml_escape }} {{ site.time | date: "%Y" | xml_escape }}</p>
    </footer>
  </body>
</html>

We could separate the header, footer, and head into the _includes folder; however, for the sake of simplicity, I will skip that in this blog post. Now that we have a default layout, we will create our main CSS file next. Let’s create a new directory:

$ mkdir -p assets/css

and create a main.css file inside this directory:

assets/css/main.css
body {
  margin: 40px auto;
  max-width: 650px;
  line-height: 1.6;
  font-size: 18px;
  color: #444;
  padding: 0 10px;
}

h1,h2,h3 {
  line-height: 1.2;
}

a {
  color: #0074D9;
  text-decoration: none;
}

a:hover {
  text-decoration: underline;
}

.header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  background-color: #333;
  color: #fff;
  padding: 10px;
}

.header h1 {
  margin: 0;
  font-size: 24px;
}

nav {
  display: flex;
  justify-content: space-between;
  align-items: center;
  background-color: #333;
  color: #fff;
  padding: 1rem;
}

nav a {
  color: #fff;
  text-decoration: none;
  font-weight: bold;
}

nav ul {
  display: flex;
  list-style: none;
}

nav li {
  margin-left: 1rem;
}

nav li:first-child {
  margin-left: 0;
}

We can create a post layout by creating a post.html file under the _layouts folder:

_layouts/post.html
---
layout: default
---

<article>
  <header>
    <h1>{{- page.title -}}</h1>
    <p class="post-meta">{{- page.date | date: "%b %-d, %Y" -}}</p>
  </header>
  {{- content -}}
</article>

We can also create a home layout for our main page by creating a home.html file under the _layouts folder:

_layouts/home.html
---
layout: default
---

<section class="post-list">
  <h1>Latest Posts</h1>
  {%- for post in site.posts -%}
  <div class="post-container">
    <h2><a href="{{- post.url -}}">{{- post.title -}}</a></h2>
    <p class="post-date">{{- post.date | date: "%b %d, %Y" -}}</p>
    <div class="post-excerpt">{{- post.excerpt -}}</div>
    {%- if post.content.size > post.excerpt.size -%}
    <p><a href="{{- post.url -}}">(more...)</a></p>
    {%- endif -%}
  </div>
  {%- endfor -%}
</section>

If you run bundle exec jekyll serve and go to http://127.0.0.1:4000/, you will see that we have a very basic site running locally.
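
A typical local run looks roughly like this (output abbreviated; exact messages vary between Jekyll versions):

$ bundle exec jekyll serve
...
Build Warning: Layout 'page' requested in about.markdown does not exist.
...
    Server address: http://127.0.0.1:4000/
  Server running... press ctrl-c to stop.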

In the terminal, you will notice a warning that the page layout for about.markdown doesn’t exist. We can fix that quickly by creating a page.html file in the _layouts folder:

_layouts/page.html
---
layout: default
---

<article>
  <h1>{{- page.title -}}</h1>
  {{- content -}}
</article>

We can stop thinking about Jekyll for now. Towards the end of the post, I will give a couple of tips about Jekyll.

AWS S3

To use AWS, we need to create a new S3 bucket that will host our blog later. Open the AWS S3 page and click on Create bucket.

A screenshot of the AWS S3 management console homepage.
AWS S3 homepage.

Choose a bucket name and select the region where you want your bucket to be located.

A screenshot of the AWS S3 bucket creation wizard, with fields for naming the bucket, selecting a region, and setting configuration options.
Creating a new AWS S3 bucket by filling out the fields in the bucket creation wizard.
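
If you prefer the command line over the console, the same bucket setup can be sketched with the AWS CLI. This assumes the bucket name idle-babushka (the name used later in this post) and the eu-central-1 region; substitute your own values:

$ aws s3api create-bucket --bucket idle-babushka --region eu-central-1 \
    --create-bucket-configuration LocationConstraint=eu-central-1
$ aws s3api put-public-access-block --bucket idle-babushka \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

The second command mirrors the Block all public access setting we keep enabled in the console.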

AWS Certificate Manager

If you want to set your domain to the CloudFront distribution we will create, you need to generate a certificate for your domain. Go to the AWS Certificate Manager page and click on Request a certificate.

A screenshot of the AWS Certificate Manager management console homepage.
AWS Certificate Manager homepage.

Since we want to request a public certificate, click on Next:

A screenshot of the AWS Certificate Manager certificate request wizard.
Requesting an SSL/TLS certificate from AWS Certificate Manager.

Enter your Fully Qualified Domain Name (FQDN). For this example, I will use idle-babushka.burakcankus.com. You can add more domains for this certificate if you’d like. I will keep the DNS validation - recommended method and change the key algorithm to ECDSA P 256. I chose ECDSA P 256 mainly for its performance.

A screenshot of the AWS Certificate Manager certificate request wizard, with fields for selecting the domain name, verifying ownership, and configuring the certificate.
Requesting an SSL/TLS certificate from AWS Certificate Manager by filling out the fields in the certificate request wizard.
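
One detail worth knowing: a certificate used with CloudFront must be issued in the us-east-1 (N. Virginia) region, regardless of where your bucket lives. If you prefer the CLI, the equivalent request would look roughly like this (EC_prime256v1 is ACM’s name for ECDSA P 256):

$ aws acm request-certificate \
    --domain-name idle-babushka.burakcankus.com \
    --validation-method DNS \
    --key-algorithm EC_prime256v1 \
    --region us-east-1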

AWS Lightsail DNS (or Route 53)

Now, we need to prove ownership of the domain we entered in the last step. I have previously done this using Route 53, but by default, if you create a Route 53 zone, you get charged 0.50 euros per month (+ 0.10 euros tax). I have since moved to Lightsail DNS, which lets you host three DNS zones for free; however, it doesn’t have the easy integration with other parts of AWS. We can’t just click on Create records, as in Route 53, and have the records ready. It will take a couple more steps. First, let’s create a DNS zone in Lightsail.

A screenshot of the AWS Lightsail management console showing the Domains and DNS page.
AWS Lightsail Domains and DNS page.

Click on Create DNS Zone and enter your domain name.

A screenshot of the AWS Lightsail DNS zone creation wizard, with a field for specifying the domain name.
Creating a DNS zone for a domain in AWS Lightsail.
A screenshot of the AWS Lightsail management console showing the newly created DNS zone page, displaying the name servers that must be set.
AWS Lightsail DNS zone page.

Update your nameservers on your registrar (I use Namecheap).

A screenshot of the Namecheap domain management console, displaying the nameservers.
Namecheap domain management console displaying the nameservers.

Back on Lightsail, go to the DNS records tab and click Add record. Now we need to take the CNAME record details from AWS Certificate Manager.

A screenshot of the AWS Certificate Manager management console displaying the status of an SSL/TLS certificate as 'pending validation'.
Viewing the status of an SSL/TLS certificate as ‘pending validation’ in AWS Certificate Manager.

Click on your certificate in AWS Certificate Manager and view the Domains section. There you will find the CNAME name and CNAME value, which we need to copy to create a CNAME record.

A small sidetrack here: fully qualified DNS names end with a dot; however, some providers (in this case, Lightsail DNS) remove the dot from the end of the record. That is not an issue; I recommend adding the dot and trying it out that way first.

A screenshot of the AWS Certificate Manager management console showing details for the created certificate emphasizing the status as 'pending validation.'
AWS Certificate Manager domain validation status for an SSL/TLS certificate showing as ‘pending validation’.
A screenshot of the AWS Lightsail DNS zone records page, displaying a CNAME record for domain validation of an SSL/TLS certificate.
Creating a CNAME record in AWS Lightsail DNS zone records for domain validation of an SSL/TLS certificate.
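
For completeness, the same record can also be added from the CLI. Lightsail’s domain APIs only work in the us-east-1 region, and the placeholders below stand for the CNAME name and value copied from the ACM Domains section:

$ aws lightsail create-domain-entry --region us-east-1 \
    --domain-name burakcankus.com \
    --domain-entry '{"name":"<CNAME name from ACM>","type":"CNAME","target":"<CNAME value from ACM>"}'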

Once we save this record, it shouldn’t take too long for our certificate to be accepted. You can go to AWS Certificate Manager and refresh to see the status change from Pending validation to Issued.

A screenshot of the AWS Certificate Manager management console displaying the status of an SSL/TLS certificate as 'issued'.
Viewing the status of an SSL/TLS certificate as ‘issued’ in AWS Certificate Manager.

For me, it took about 15 minutes.

AWS CloudFront distribution

Next, we need to create a CloudFront distribution for this S3 bucket. Head over to the AWS CloudFront page and click Create a CloudFront distribution.

A screenshot of the AWS CloudFront management console homepage.
AWS CloudFront homepage.

For the Origin domain, choose your S3 bucket. If you would like to have your blog under a directory, you can specify it under Origin path, but in this post, I will keep it in the root. For Origin access, select Origin access control settings (recommended). This is because we left Block all public access enabled while creating our bucket; we don’t want any public access, or any access other than through our CloudFront distribution. Next, click on Create control setting under Origin access, and click Create. The default values are okay.

A screenshot of the AWS CloudFront distribution creation wizard, showing configuration options for granting access to the S3 bucket origin.
Configuring access to the S3 bucket origin in AWS CloudFront distribution creation wizard.

After creating the Origin access control, you will see the warning:

You must update the S3 bucket policy
CloudFront will provide you with the policy statement after creating the distribution.

This is not an issue, and we will fix it later. For the rest of the settings, I changed them to:

Viewer protocol policy: Redirect HTTP to HTTPS
Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
Response headers policy - optional: SecurityHeadersPolicy
Price class: Use only North America and Europe
Alternate domain name (CNAME) - optional: domain name I would like to use (blog.burakcankus.com)
Custom SSL certificate - optional: Certificate we created previously
Supported HTTP versions: HTTP/2 & HTTP/3
Default root object - optional: index.html
A screenshot of the AWS CloudFront distribution creation wizard, displaying options for creating a new distribution.
Creating a new distribution in AWS CloudFront distribution wizard.

Now that we have created our distribution, we need to go back to Lightsail DNS and add a CNAME record that points to the distribution. Click on your distribution name and copy the Distribution domain name.

A screenshot of the AWS CloudFront distribution settings page emphasizing the distribution domain name assigned to the distribution.
Viewing the domain name assigned to an AWS CloudFront distribution.

In Lightsail DNS, create a new CNAME record and fill in the details.

A screenshot of the AWS Lightsail DNS zone records page displaying a CNAME record created for an AWS CloudFront distribution.
Creating a CNAME record in AWS Lightsail DNS zone records for an AWS CloudFront distribution.
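
Once the record has propagated, you can sanity-check it from the terminal; it should print your distribution’s domain name (the dxxxxxxxxxxxxx.cloudfront.net address we just copied):

$ dig +short blog.burakcankus.com CNAME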

Connection between S3 and CloudFront

In CloudFront, there are two important tabs. The first is Origins, which controls how the CloudFront distribution fetches files. The second is Behaviors, which controls how users request files. Now, we need to update our S3 bucket policy so that CloudFront is allowed to fetch files. For this, go to the Origins tab, select your origin, and click Edit.

A screenshot of the AWS CloudFront distribution settings page displaying the default origin configured for the distribution.
Viewing the list of origins configured for an AWS CloudFront distribution in the distribution settings page.

Scroll down, and under Bucket policy, click on the Copy policy button.

A screenshot of the AWS CloudFront distribution origin settings page with the 'Copy policy' button highlighted. This button is used to add a policy to an S3 bucket to grant access to the AWS CloudFront distribution origin.
Adding a policy to the AWS S3 bucket permissions to grant access to the AWS CloudFront distribution origin by copying the policy from AWS CloudFront origin settings

You will get a policy similar to the following:

{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
      {
          "Sid": "AllowCloudFrontServicePrincipal",
          "Effect": "Allow",
          "Principal": {
              "Service": "cloudfront.amazonaws.com"
          },
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::idle-babushka/*",
          "Condition": {
              "StringEquals": {
                "AWS:SourceArn": "arn:aws:cloudfront::261649116962:distribution/E2YEMAQQRE4QBD"
              }
          }
      }
  ]
}

Now, go to S3, enter your bucket settings, click on the Permissions tab, and under Bucket policy, click Edit.

A screenshot of the permissions tab of the S3 bucket properties page.
S3 bucket permissions tab.

Paste the copied policy and Save changes.

A screenshot of the S3 edit bucket policy page showing an imported policy allowing access from the CloudFront distribution
Importing a policy to the S3 bucket to allow access from the CloudFront distribution
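
Alternatively, if you saved the copied policy to a local file (bucket-policy.json here is a hypothetical name), the same change can be applied with the CLI:

$ aws s3api put-bucket-policy --bucket idle-babushka --policy file://bucket-policy.json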

If you open your blog URL now, you will see a 403 response. This is because we only gave our CloudFront distribution the GetObject permission. When the distribution requests an object that doesn’t exist, S3 responds with 403 Access Denied rather than 404, because without the ListBucket permission it won’t reveal whether an object exists. We don’t need to grant extra permissions to fix this. Instead, we can go to our CloudFront distribution, open the Error pages tab, and click on Create custom error response.

A screenshot of the custom error page response creation wizard of AWS CloudFront distribution, customize error page option is selected and response error path is set to 404.html
Setting a custom error page to reply with a 404 when CloudFront distribution gets 403 from the origin

Select 403, and customize the response. We don’t have anything in our bucket yet, but we will store our 404 page in the root directory. For Response page path, type in /404.html, and for HTTP Response code, select 404: Not Found. This way, each time CloudFront gets 403 Access Denied, it will return 404.html to users.

I will also create a new response for 404 and respond with the same file.

A screenshot of the custom error page response creation wizard of AWS CloudFront distribution, customize error page option is selected and response error path is set to 404.html
Setting a custom error page to reply with a 404 when the CloudFront distribution gets 404 from the origin

Now that we have fixed our error pages, the only thing left is to upload the _site directory to S3. Just for testing things out, let’s do this manually.

S3 Upload

To build the _site directory, run the following command:

$ bundle exec jekyll build

Next, go to your S3 bucket and click on the Upload button. Open the _site directory that was created by the previous command, select all the files, and drag and drop them onto the S3 Upload page.

A screenshot of the AWS S3 bucket showing the blog post ready to be uploaded
Uploading the static blog files to the AWS S3 bucket

After the upload is successful, you can click on Close.

A screenshot of the AWS S3 bucket showing a successful upload message for the blog files
Verifying the successful upload of the static blog files to the AWS S3 bucket
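
The same upload can also be done from the terminal, which is essentially what our GitHub Actions workflow will run later (again assuming the idle-babushka bucket name):

$ aws s3 sync _site/ s3://idle-babushka --delete

The --delete flag removes objects from the bucket that no longer exist in _site, keeping the bucket in sync with the build output.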

Now, if you go to your domain, you should see the blog. However, if you click on the About Me page, it will fail with a 404 error. This is because the link is hardcoded to /about/ in our default.html file, and S3 has no object with that exact key (the page actually lives at /about/index.html):

_layouts/default.html
          <li><a href="{{- '/' | relative_url -}}">Blog</a></li>
          <li><a href="{{- '/about/' | relative_url -}}">About Me</a></li>

We could manually change this link to /about/index.html, or we could write a CloudFront Function so that whenever our CloudFront distribution receives a request for a folder, it actually requests the index.html file under that folder from S3.

CloudFront Functions

While creating this function, I followed this post, and you are welcome to follow along there.

Head over to CloudFront and click on Functions in the sidebar, and click on the Create function button.

A screenshot of the Functions page in AWS CloudFront
Viewing the Functions page in AWS CloudFront

Enter any name (I will use RewriteDefaultIndexRequest) and click Create function. For the function code, I will copy the code from the blog post:

function handler(event) {
    var request = event.request;
    var uri = request.uri;

    // Check whether the URI is missing a file name.
    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    }
    // Check whether the URI is missing a file extension.
    else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }

    return request;
}

Paste this into the Development section and click on Save changes.

A screenshot of the code editor in AWS CloudFront Functions
Editing the code for a function in AWS CloudFront Functions

After saving the changes, go to the Publish tab and click on Publish function.

A screenshot of the publish dialog in AWS CloudFront Functions
Publishing a function in AWS CloudFront Functions

After publishing the function, we can go back to our distribution’s Behaviors tab, select the Default (*) behavior, and click Edit.

A screenshot of the Behaviors page in AWS CloudFront
Viewing the Behaviors page in AWS CloudFront

Scroll down to the bottom of the page, and under Function associations - optional for Viewer request, change Function type to CloudFront Functions and change Function ARN / Name to the function we just created.

A screenshot of the settings for a behavior in AWS CloudFront, showing the association with a CloudFront Function
Associating a CloudFront Function with a behavior in AWS CloudFront

After this selection, click Save changes.

Now, if you go to your About Me page, it will load /about/index.html.
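
You can verify both the rewrite and the error handling from the terminal (assuming blog.burakcankus.com as the domain); the first request should return a 200 status and the second a 404:

$ curl -sI https://blog.burakcankus.com/about/ | head -n 1
$ curl -sI https://blog.burakcankus.com/this-does-not-exist/ | head -n 1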

The most difficult part is done; now we just need to connect it to GitHub Actions so that each time we push, we can upload our site to S3.

AWS IAM

Head over to the AWS IAM page and click on Policies. We will add a user for GitHub Actions to use, and we will make it quite strict so that it can only perform a few actions; in case our secrets are leaked, the damage will be minimized. First, we need to create two policies that we will attach to our user.

Click on Create policy. Click on the JSON tab and paste the following policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::your-s3-bucket-name",
                "arn:aws:s3:::your-s3-bucket-name/*"
            ]
        }
    ]
}

Replace both occurrences of your-s3-bucket-name with your actual S3 bucket name.

A screenshot of the AWS IAM Policies page showing the first step of creating a new policy
Creating a new policy in AWS IAM

Click on Next: Tags, then Next: Review, and give your policy a name and a description. I will name it github-s3-write-delete-policy and describe it as It allows the policyholder to ListBucket, PutObject and DeleteObject from S3 bucket. Access is only granted for your-s3-bucket-name S3 bucket. Then click Create policy.

After that, we need to create another policy for invalidating CloudFront caches. Invalidating the cache removes the cached copies from CloudFront’s edge locations, so after a change the next requests are served fresh from S3 instead of from stale caches. Again, click on Create policy; in the visual editor, select the service CloudFront, the action CreateInvalidation, and for Resources, paste your CloudFront distribution ID into the Distribution id field. In the end, the JSON policy should be similar to:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "cloudfront:CreateInvalidation",
            "Resource": "arn:aws:cloudfront::<account-id>:distribution/<distribution-id>"
        }
    ]
}

A screenshot of the AWS IAM Policies page showing the Visual editor ARN setting for CloudFront distribution
Continuing to create a new policy in AWS IAM

Once more, click on Next: Tags, then Next: Review, and name your policy. I chose the name github-cloudfront-cache-invalidation-policy and the description It allows the policyholder to create CloudFront Cache Invalidation for CloudFront distribution E2YEMAQQRE4QBD., and clicked Create policy.

A screenshot of the AWS IAM Policies page showing a list of policies
Viewing a list of policies in AWS IAM
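
As a CLI sketch: if you save the two JSON documents above as s3-policy.json and cloudfront-policy.json (hypothetical filenames), the same policies can be created like this:

$ aws iam create-policy --policy-name github-s3-write-delete-policy \
    --policy-document file://s3-policy.json
$ aws iam create-policy --policy-name github-cloudfront-cache-invalidation-policy \
    --policy-document file://cloudfront-policy.json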

Now that we have our two policies, we can create a user and attach them. Click on the Users tab and click on Add users.

A screenshot of the AWS IAM Users page
Creating a user in AWS IAM

I will name my user github-actions-user and click Next.

Screenshot of the first step in creating a new IAM user in the AWS Management Console, asking for user name and if user access to the aws console should be permitted
Step 1: Start creating a new IAM user

Select Attach policies directly and check the policies we have created previously. If you don’t see the policies, you might need to order by Type and look at Customer managed policies.

Screenshot of the second step in creating a new IAM user in the AWS Management Console, 'Attach policies directly' is selected and two customer managed policies are also selected
Step 2: Configure user policies

After selecting the policies, click Next and Create user.

Screenshot of the review page in creating a new IAM user in the AWS Management Console
Step 3: Review user details and create user

After creating the user, we need to generate secrets for this user. Click on your user, go to the Security credentials tab, scroll down, and click on Create access key.

Screenshot of the Access keys section in IAM user details page in the AWS Management Console
Access keys section in IAM user details page

Click on Other, and then click Next. You can set a description tag, such as GitHub Actions user Access Key or something similar, and then click on Create access key.

Screenshot of the process to create an Access key for an IAM user in the AWS Management Console, asking for a description tag
Creating a new Access key for an IAM user

Now that we have our Access Key, we can continue with the next step. Don’t close this page yet.

A screenshot of the AWS IAM console showing the newly created access key for the user with buttons to copy the access key and the secret access key
AWS IAM console showing the newly created access key for the user
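
The console flow above can also be reproduced with the CLI; a rough sketch, with <account-id> standing in for your AWS account ID:

$ aws iam create-user --user-name github-actions-user
$ aws iam attach-user-policy --user-name github-actions-user \
    --policy-arn arn:aws:iam::<account-id>:policy/github-s3-write-delete-policy
$ aws iam attach-user-policy --user-name github-actions-user \
    --policy-arn arn:aws:iam::<account-id>:policy/github-cloudfront-cache-invalidation-policy
$ aws iam create-access-key --user-name github-actions-user

The last command prints the access key ID and secret access key that we will store as GitHub secrets next.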

GitHub Actions

On GitHub, create a new repository and push your blog to this repository:

$ git init .
$ git add --all
$ git commit -m "Initial commit"
$ git branch -M main
$ git remote add origin git@github.com:<your-github-username>/<your-git-repository-name>.git
$ git push -u origin main

Now, on GitHub, go to Settings > Security > Secrets and variables > Actions.

A screenshot of the GitHub repository settings page showing the Secrets with no secrets added
GitHub repository settings page showing the Secrets tab

Next, click on New repository secret, name the secret AWS_ACCESS_KEY_ID, and copy the Access Key from the previous page to the value field.

A screenshot of the GitHub repository settings page showing the process of adding an AWS access key ID secret
GitHub repository settings page showing the process of adding an AWS_ACCESS_KEY_ID secret

Then, create another secret named AWS_SECRET_ACCESS_KEY, and copy the Secret access key from the previous page to the value field.

A screenshot of the GitHub repository settings page showing the process of adding an AWS secret access key secret
GitHub repository settings page showing the process of adding an AWS_SECRET_ACCESS_KEY secret

After adding these two secrets, you can close the IAM page by pressing Done. It will show the warning Continue without viewing or downloading?, but we have already copied the values to GitHub secrets, and we don’t want to access them again.

(Of course, you shouldn’t share your secret key with anyone. It is in the screenshot for me, but that Access key is already deactivated and deleted.)

Create another secret, name it CLOUDFRONT_DISTRIBUTION_ID, and paste in your CloudFront distribution ID.

A screenshot of the GitHub repository settings page showing the process of adding a CloudFront distribution ID secret
GitHub repository settings page showing the process of adding a CLOUDFRONT_DISTRIBUTION_ID secret

Lastly, create a new secret called S3_BUCKET and paste in your S3 bucket name with the s3:// prefix.

A screenshot of the GitHub repository settings page showing the process of adding an S3 bucket name secret
GitHub repository settings page showing the process of adding an S3_BUCKET secret
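
If you use the GitHub CLI, the same four secrets can be set from the terminal instead of the web UI; each command prompts for the value:

$ gh secret set AWS_ACCESS_KEY_ID
$ gh secret set AWS_SECRET_ACCESS_KEY
$ gh secret set CLOUDFRONT_DISTRIBUTION_ID
$ gh secret set S3_BUCKET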

Now we can create a new file in our repository, .github/workflows/deploy_website.yml, and paste the following:

.github/workflows/deploy_website.yml
name: Deploy website to S3
on:
  workflow_dispatch:
  push:
    branches:
      - main
jobs:
  deploy-blog:
    runs-on: ubuntu-latest
    steps:
      - name: Install minify
        uses: awalsh128/cache-apt-pkgs-action@1850ee53f6e706525805321a3f2f863dcf73c962
        with:
          packages: minify
          version: 1.0
      - name: Checkout repository
        uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # v3.5.2
      - name: Set up Ruby
        uses: ruby/setup-ruby@55283cc23133118229fd3f97f9336ee23a179fcf # v1.146.0
        with:
          bundler-cache: true # runs 'bundle install' and caches installed gems automatically
      - name: Build Jekyll site
        run: |
          bundle exec jekyll build --destination _site
      - name: Minify .html files
        run: |
          minify -r -o ./ --html-keep-document-tags --html-keep-end-tags --html-keep-default-attrvals --match="\.html$" _site
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@e1e17a757e536f70e52b5a12b2e8d1d1c60e04ef # v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Deploy to S3
        run: |
          aws s3 sync _site ${{ secrets.S3_BUCKET }} --delete
      - name: Invalidate CloudFront Cache
        run: |
          aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"

And create a .ruby-version file to set up Ruby:

$ echo "3.2.2" > .ruby-version

Then push it using:

$ git add --all
$ git commit -m "Create CI/CD"
$ git push

After pushing this file, the blog should automatically build, get uploaded to S3, and invalidate the CloudFront cache.

Jekyll Minifying CSS

In our workflow, I added the minify package (with caching) so that we can minify the .html files, but we don’t minify our .css files yet. Jekyll has a built-in method to minify CSS files. Open your Jekyll _config.yml file and add the following:

_config.yml
# ...
sass:
  style: compressed
  sourcemap: never
# ...

Now, whenever there are Sass (Syntactically Awesome Style Sheets) files, Jekyll will automatically compress them. We can turn our main.css file into Sass format quite easily: rename main.css to main.scss and add an empty front matter block (two lines of three dashes) at the top of the file.

$ mv assets/css/main.css assets/css/main.scss
$ sed -i '1 i\---\n---\n' assets/css/main.scss

Now, each time Jekyll builds the site, it will compile and compress the SCSS files.
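
To confirm the compression works, build locally and peek at the generated file; the CSS should come out as a single minified line:

$ bundle exec jekyll build
$ head -c 200 _site/assets/css/main.css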

Conclusion

Now we have a simple blog with CI/CD that builds and deploys our website automatically. While there are several potential areas for improvement, such as using CloudFormation to automate the AWS side of things, I am satisfied with the current state.