So my girlfriend and I have been playing with the idea of starting a blog for quite some time now. Having a day off, I've decided to put it to good use and start creating one. Being a developer, I had a few concerns when it came to choosing a blog framework:

  • I’m a control freak so I’d prefer hosting it myself. Blogging platforms are out of the question.
  • I like markdown - the blog must support it.
  • It must be lightweight.
  • It must be easily deployable to S3 static website hosting.
  • I should not need anything other than S3 for hosting - just a text editor and a CLI for a good blogging experience.

After a bit of research and a tip from a friend, I've landed on Hexo. It seems to tick all of my boxes, and the plugin support seems excellent as well. Since it's based on Node, I'll mostly be able to find anything my heart might desire - and if not, I can code it myself.

Let’s get to work!

A quick disclaimer - you'll need an AWS account if you want to follow along, and (maybe?) some coding knowledge.

Installing hexo

First things first - installing hexo. That's as easy as following the first few steps in their docs. After running npm install -g hexo-cli && hexo init blog, I check it out with hexo generate && hexo server. Looks pretty good, but it only contains the hello world post, and I want something with images in it to see what those would look like.

Creating a post

Let’s create one then, shall we? hexo new post test creates a new file under source/_posts/. I fill it up with some random words and add an image. Do note that assets are added as follows:

{% asset_img test_image_thumb.jpg Alt text. %}

Upon completing this step and verifying it works on localhost, I immediately realize that I don't like the file structure hexo uses by default. If all the post assets land directly in source/_posts/, it's going to become a mess quite rapidly. A quick Google search reveals that changing a flag in _config.yml makes hexo create a sub-directory under source/_posts/ for each new post. A simple switch of the post_asset_folder flag makes me a happy panda. I recreate the post, with the image now lying snugly in the post's asset folder.
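For reference, this is the relevant line in _config.yml (the flag ships disabled by default):

```yaml
# _config.yml
# With this enabled, `hexo new post test` also creates a matching
# source/_posts/test/ folder to hold that post's images and other assets.
post_asset_folder: true
```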


Then I switch my attention to getting the blog hosted on AWS S3 static website hosting ASAP. Why S3? Mostly because I'm used to AWS, and they have good documentation on how to host a static website there. When creating the bucket, all that's needed is to set the name - I'll change the settings later. The configuration itself is not hard: under the Properties tab I enable static website hosting, then on the Permissions tab I open up the bucket policy and paste this into it:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::exampleBucket/*"
    }
  ]
}

Be sure to change exampleBucket to your bucket name if you are following along. This policy allows public read access to everything in the bucket - exactly what I want. With this, it's time to deploy my blog.
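If you prefer the CLI over the console, the same policy can be generated and applied from a shell script. This is a sketch - the bucket name is a placeholder you'd swap for your own, and the final step assumes the AWS CLI is installed and configured:

```shell
#!/bin/sh
# BUCKET is a placeholder - substitute your actual bucket name.
BUCKET=exampleBucket

# Write the public-read policy with the bucket name filled in.
cat > policy.json <<EOF
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    }
  ]
}
EOF

# Apply it instead of pasting into the console
# (requires AWS credentials; uncomment to run):
# aws s3api put-bucket-policy --bucket "$BUCKET" --policy file://policy.json
```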

Deploying to S3

The problem with hexo, though, is that it doesn't come with S3 deploy support out of the box. Time to see how good the plugin library is. A Google search reveals a candidate:

A quick npm i -S hexo-deployer-aws-s3 installs the dependency. While that's running, it's time to get my keys from AWS - I'll need those to be able to upload to S3. AWS is kind enough to provide documentation on how to create them. Once I have the keys downloaded, I export them in my terminal as environment variables:

export AWS_SECRET_ACCESS_KEY=yourkeyhere
export AWS_ACCESS_KEY_ID=youridhere

All that’s left to do is append a section to our _config.yml by adding these 3 lines:

type: aws-s3
region: yourregion # eu-west-1 < this is mine
bucket: yourbucketname

And bam - we're ready to deploy. I first run hexo generate to generate the static content, then hexo deploy to push it to S3. With the content deployed, I open the S3 website endpoint URL to check that everything works. It works like a charm, but it's not time to blog yet.


Speeding things up with CloudFront

What's great about AWS is how seamlessly its tools work together. Since the bucket sits in a single region, we might encounter slow load times if, say, the bucket is hosted in Asia and the client is opening the website from Europe. To avoid that, I leverage the AWS CloudFront CDN. It distributes the blog to edge locations in other regions, resulting in faster load times for visitors far from the bucket's region. It also comes with many great features out of the box: caching, HTTP -> HTTPS redirects, compression and more.

I create a web distribution, select my bucket as the origin domain name and leave basically all the other settings at their defaults for now - I'll play with them a bit later. Once the distribution is created, it takes 10-30 minutes to roll out to the edge locations; the status changes to Deployed once that is complete. The distribution comes with an ugly domain name that CloudFront provides - to find it, open your CloudFront distribution and look under Domain Name. Once the distribution is deployed, open that URL and check that everything works. The domain name still needs fixing though.


Setting up a domain with Route 53

Luckily, AWS has tools for everything. Amazon Route 53 is their cloud DNS service. I didn't have a domain name registered yet, so I could register one with AWS, which makes things easier. If you own a domain registered somewhere else, it might be a good idea to transfer it to Route 53; to do that, follow the instructions provided by Amazon. Once the domain gets verified (there may be a couple of extra steps if you transfer it from another registrar), it's time to point it at the CloudFront distribution. First, open the CloudFront distribution and edit its configuration. What needs changing is the Alternate Domain Names field - fill in every domain name you want your blog to be accessible through.


Save the distribution and head to Route 53. Once there, a couple of DNS records need to be added. I create a new record set: type A (IPv4 address), Alias set to Yes, and under Alias Target I pick the CloudFront distribution from the dropdown. Once created, the record set takes some time to propagate, so you might not see the changes at first - give it a while and it will work. I can now access my blog through my own domain. Nice.
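The same record can be created from the CLI. A sketch with placeholder names - the domain, the CloudFront domain and the hosted zone ID below are all stand-ins you'd replace with your own, except Z2FDTNDATAQYW2, which is the fixed hosted zone ID AWS uses for every CloudFront alias target:

```shell
#!/bin/sh
# Placeholders - substitute your own domain and distribution domain name.
DOMAIN=blog.example.com
CF_DOMAIN=d1234abcd.cloudfront.net   # your distribution's Domain Name

# Build the change batch for an alias A record pointing at CloudFront.
cat > change-batch.json <<EOF
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "${DOMAIN}",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "${CF_DOMAIN}",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
EOF

# Submit it (requires credentials and your real hosted zone ID;
# uncomment to run):
# aws route53 change-resource-record-sets \
#   --hosted-zone-id YOUR_ZONE_ID --change-batch file://change-batch.json
```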


Adding SSL

In 2017 there is no excuse for not having SSL on your website. It boosts SEO ratings and makes your website look much more professional, and it is super easy to do with AWS Certificate Manager. I go to my CloudFront distribution and edit it again. There I switch to Custom SSL Certificate and click Request or Import a Certificate with ACM (note that certificates used by CloudFront must be requested in the us-east-1 region). I add the same domain names I listed under Alternate Domain Names on the distribution.


Since my domain is hosted in Route 53, I select DNS validation. ACM makes it easy to create the validation records via a button on the validation screen. It takes a couple of minutes, but once the certificate is issued I can apply it to my CloudFront distribution. After that is complete, the blog becomes accessible over HTTPS. To ensure that users use HTTPS instead of HTTP, I edit the distribution's behaviour and change the Viewer Protocol Policy to Redirect HTTP to HTTPS. Now anyone opening the website over HTTP gets redirected to HTTPS.


Hexo seems to be a perfect choice for a simple blog for the techy types. It's easy to get into and super easy to deploy, since all it does is bake static webpages. With AWS S3 it's trivial to host a static website, make it fast in all regions and set up both a domain name and SSL. With this done, it's time to start blogging, right? Well... I'll probably follow this up with my quest for PageSpeed Insights. Edit: the post on page speed optimization is now live.