

How to deploy a blog with 2 commands using Terraform and Ansible

I am running my blog on Linode, behind Cloudflare and deployed with Ansible. To speed up deployment and make it easily repeatable, I wrote a ton of Python to automate the Cloudflare and Linode configurations. Some for fun, and some because I thought it was a good learning experience.

Don’t get me wrong, I learned a ton about creating repeatable configurations. My issue is that when people say Infrastructure as Code (IaC), they don’t mean infrastructure as Python; they mean IaC tools. I got sick of reinventing the wheel, so I thought it was time for a change.

I had 2 goals in mind:

  1. Moving away from Python for managing my infra
  2. Getting end-to-end HTTPS from Cloudflare to Linode’s edge

By the end of it, I wanted it all to look like this:


So when I said “deploy a blog with 2 commands”, I failed to mention the bit of configuration needed beforehand. We’ll get to the 2 commands a bit later, but let’s address the configuration first.

Automated initial server configuration

Linode, sadly, doesn’t support cloud-init (at the time of writing, anyway), so I use their own offering, StackScripts.

I showed off the StackScript in my last post, so I won’t go into too much detail again here: stackscripts/configurealpineweb.sh

StackScript to upload to Linode
#!/bin/sh

# Create the default user
adduser -D -s /bin/ash -h /home/<your username> <your username again> 
# Create empty password for SSH key only auth 
usermod -p '*' <your username>
# Make an SSH folder
mkdir -p /home/<your username>/.ssh
# Add public keys
wget https://github.com/<your github username>.keys -O /home/<your username>/.ssh/authorized_keys

# Configure openssh (it is preinstalled on the image)
sed -i -e 's/PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i -e 's/PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i -e 's/#UseDNS no/UseDNS no/' /etc/ssh/sshd_config
/etc/init.d/sshd restart

# Install and configure docker
apk add docker
addgroup <your username> docker
rc-update add docker boot
echo "cgroup /sys/fs/cgroup cgroup defaults 0 0" >> /etc/fstab
service docker start
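If you would rather not hard-code the StackScript’s ID in Terraform later, the Linode provider can manage the script itself. This is only a sketch (the label, description, and file path here are my assumptions, not my actual config):

```hcl
# Sketch: manage the StackScript in Terraform instead of uploading it by hand.
# Its ID is then available as linode_stackscript.alpine_web.id.
resource "linode_stackscript" "alpine_web" {
  label       = "configure-alpine-web"
  description = "Initial Alpine web server configuration"
  script      = file("../stackscripts/configurealpineweb.sh")
  images      = ["linode/alpine3.16"]
}
```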

Cloud services configuration

I had heard of this magical tool, Terraform, before but for some reason had never used it. Something popped up at work that gave me an opportunity to play with it, and it was love at first sight. It’s such a smooth experience, and being able to rapidly scale deployments is incredibly satisfying. I have attached my configurations below and explained them in the accompanying YouTube video.

terraform {
    required_providers {
        linode = {
            source  = "linode/linode"
        }
        cloudflare = {
            source  = "cloudflare/cloudflare"
            version = "~> 3.0"
        }
    }
}

provider "linode" {
    # A file with your Linode API key
    token = chomp(file("../keys/linode_terraformkey.key"))
}
# Linodes
resource "linode_instance" "blog_vms" {
    # Create 2 VMs
    count = 2
    # Set to your local region. Full list here: https://api.linode.com/v4/regions
    region = "ap-southeast"
    # The specs of the VM to use. Full list here: https://api.linode.com/v4/linode/types
    type = "g6-standard-1"
    # The VM names
    label = "web-alpine-${count.index + 1}"
    # If you want to assign them to a group
    group = "blog"
    # This is required for the load balancer
    private_ip = true
    # Sends you emails if it is rebooted
    watchdog_enabled = true
    # Boots the machine on start up
    booted = true
    # The base image you want to use. Full list here:
    image = "linode/alpine3.16"
    # The root password for the VM. I'll never use it, so it's a giant random string
    root_pass = chomp(file("../keys/cloudtemplateroot.key"))
    # The ID of the StackScript you created before
    stackscript_id = 1048966
}
# NodeBalancers
resource "linode_nodebalancer" "blog_nb" {
    # A label in Linode
    label = "blog_nb"
    # Regions again, see above for the list
    region = "ap-southeast"
    # Clients can only make 20 connections per second before being throttled
    client_conn_throttle = 20
}
resource "linode_nodebalancer_config" "blog_nb_config" {
    nodebalancer_id = linode_nodebalancer.blog_nb.id
    # The port to expose from the load balancer
    port = 443
    protocol = "https"
    # The status check from the load balancer to your blog
    check = "http"
    check_path = "/"
    check_attempts = 3
    stickiness = "http_cookie"
    algorithm = "source"
    check_timeout = 10
    check_interval = 60
    # The SSL cert to upload to the load balancer
    ssl_cert = cloudflare_origin_ca_certificate.cf_origin_cert.certificate
    ssl_key = tls_private_key.cf_origin_tls_key.private_key_pem
}
resource "linode_nodebalancer_node" "blog_nb_node" {
    # Creates a record for each VM you create
    count = length(linode_instance.blog_vms)
    nodebalancer_id = linode_nodebalancer.blog_nb.id
    config_id = linode_nodebalancer_config.blog_nb_config.id
    # The private address that your blog traffic is served from
    address = "${element(linode_instance.blog_vms.*.private_ip_address, count.index)}:80"
    label = "blog_nb_node"
    weight = 50
}
# Firewall
resource "linode_firewall" "blog_fw" {
    label = "blog_fw"
    inbound {
      label    = "allow-all-home"
      action   = "ACCEPT"
      protocol = "TCP"
      ipv4     = ["<my home IP address>/32"]
    }
    inbound {
      label    = "allow-node-balancers"
      action   = "ACCEPT"
      protocol = "TCP"
      # NodeBalancers connect to their backends from this private range
      ipv4     = ["192.168.255.0/24"]
    }
    inbound_policy = "DROP"
    outbound_policy = "ACCEPT"
    linodes = linode_instance.blog_vms.*.id
}
provider "cloudflare" {
    # More API tokens in files
    api_token = chomp(file("../keys/cloudflare_tf.key"))
    api_user_service_key = chomp(file("../keys/cloudflareoriginca.key"))
}
# A preconfigured DNS zone in Cloudflare
data "cloudflare_zone" "cf_zone" {
  name = "<DNS Zone name>"
}
# Create an A record for the root of your domain
resource "cloudflare_record" "cf_root_record" {
    zone_id = data.cloudflare_zone.cf_zone.id
    name    = "<your domain name>"
    # This IP comes from the load balancer
    value   = linode_nodebalancer.blog_nb.ipv4
    type    = "A"
    proxied = true
    allow_overwrite = true
}
# Add a CNAME record for blog.<your domain>
resource "cloudflare_record" "cf_blog_record" {
  zone_id = data.cloudflare_zone.cf_zone.id
  name    = "blog"
  value   = cloudflare_record.cf_root_record.name
  type    = "CNAME"
  proxied = true
  allow_overwrite = true
}
# Origin SSL keys
resource "tls_private_key" "cf_origin_tls_key" {
  algorithm = "RSA"
}

resource "tls_cert_request" "origin_tls_cert" {
  private_key_pem = tls_private_key.cf_origin_tls_key.private_key_pem
  subject {
    organization = "<add this if you like>"
  }
}
resource "cloudflare_origin_ca_certificate" "cf_origin_cert" {
    csr                = tls_cert_request.origin_tls_cert.cert_request_pem
    hostnames          = [ "*.<your domain name>", "<your domain name>" ]
    request_type       = "origin-rsa"
    requested_validity = 5475
}

These don’t have to be in separate files; I just split them up so things are easier to find later.
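One small addition I find handy: an output block, so `terraform apply` prints the load balancer’s public IP when it finishes. A minimal sketch:

```hcl
# Print the NodeBalancer's public IPv4 address after apply
output "nodebalancer_ip" {
  value = linode_nodebalancer.blog_nb.ipv4
}
```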

The Github action and Ansible

The Ansible and GitHub Actions configurations haven’t changed since my initial post. I’ll just put the configurations here, and you can learn more from the original post or from watching the video.
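For context, the GitHub Action just builds the Jekyll container image and pushes it to ghcr. A rough sketch of such a workflow looks like this (the trigger, action versions, and step layout here are my assumptions, not my exact config — see the original post for the real one):

```yaml
# Sketch: build the site image and push it to ghcr on every push to main
name: build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      packages: write
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v3
        with:
          push: true
          tags: ghcr.io/aidanhall34/jekyll-site:main
```

And the Ansible tasks file: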

# tasks file for blog
- name: Login to ghcr
  community.docker.docker_login:
    username: ""
    password: ""
    registry: ghcr.io
    state: present

- name: Pulling and starting latest build from ghcr
  community.docker.docker_container:
    hostname: jekyell-site
    pull: true
    image: ghcr.io/aidanhall34/jekyll-site:main
    restart_policy: unless-stopped
    memory: 1.5g
    name: jekyell-site
    state: started

- name: Log out of ghcr
  community.docker.docker_login:
    state: absent
    registry: ghcr.io
    username: ""
    password: ""

- name: Install pip
  community.general.apk:
    name: py3-pip
    update_cache: yes
  become: true
  become_method: su

- name: Install docker-compose
  community.general.apk:
    name: docker-compose
    update_cache: yes
    state: present
  become: true
  become_method: su

- name: Copy the requirements file over
  ansible.builtin.copy:
    src: requirements.txt
    dest: /tmp/requirements.txt

- name: Install the python modules
  ansible.builtin.pip:
    requirements: /tmp/requirements.txt

- name: Clean up the requirements file
  ansible.builtin.file:
    state: absent
    path: /tmp/requirements.txt
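For reference, the tasks above live in a role, and the playbook the deploy command runs just applies that role to the web hosts. A minimal sketch (the host group and role names here are assumptions):

```yaml
# ansible/playbooks/deploy_cloudinfra.yml (sketch)
- hosts: webservers
  roles:
    - blog
```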

The commands

Finally, we are ready for the commands. To deploy the cloud servers, we need to run:

terraform apply

Then to start the ansible job:

ansible-playbook ansible/playbooks/deploy_cloudinfra.yml -i ansible/inventories/production/ --vault-password-file keys/vaultpass.key
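The -i flag there points at a directory-style inventory. At its simplest, it could contain a single hosts file like this sketch (the group name is an assumption):

```ini
# ansible/inventories/production/hosts (sketch)
[webservers]
<vm 1 IP address>
<vm 2 IP address>
```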

Now the blog is deployed and your infrastructure is managed. When you push new content to the blog, re-run the Ansible playbook and let it do its magic.

I hope you all enjoyed this and someone finds it useful. Please email me or send me a DM on Twitter to let me know what you think!

Have a good one everybody!

This post is licensed under CC BY 4.0 by the author.