GitLab Publishing Pipeline for Dreamhost, Part Two
Build a custom Docker image to speed up the deployment of your Jekyll site
Welcome back! In Part One we looked at a deliberately simple deployment pipeline that traded speed and efficiency for ease of reading and understanding. In this article we're going to optimize it for speed and efficiency.
Oooo! Are We Using Alpine Linux?!
Sorry my friend, but I generally don't use Alpine Linux, and it largely boils down to musl. Alpine ships its own C standard library, musl, which is lighter weight than glibc (the library most longer-established Linux distributions use). I've run into numerous issues with code that segfaults on Alpine but compiles and runs perfectly fine on Ubuntu, Debian, RHEL, etc. I'm not here to poo-poo anyone's OS choices, and Alpine is a really great lightweight OS. All that said, it's been the source of enough problems for me personally that I stopped using it and went back to Debian derivatives.
So what’s the plan then?
I thought I'd walk you through building your own Docker image that you can then leverage to build your static Jekyll site. If you want to skip straight to the code, you can head over to my GitLab Repository to poke around.
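To set expectations for where this ends up: once the image exists on Docker Hub, the site's own pipeline (the one from Part One) can simply point its build job at it. A hypothetical sketch, with an illustrative image name and job name:

```yaml
# Hypothetical: in the Jekyll site's own .gitlab-ci.yml, point the build
# job at the image this article produces. "myuser/jekyll-builder" is an
# illustrative name, not the actual image from my repo.
build_site:
  image: myuser/jekyll-builder:latest
  script:
    - bundle install
    - bundle exec jekyll build
```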
Building the Docker image
If you strip everything that isn't documentation out of my code repo, you're left with just a Dockerfile and a GitLab pipeline. If you want a good introduction to writing a Dockerfile, the Docker Docs have you covered. But let's pick apart mine:
```dockerfile
FROM debian:bookworm AS base
RUN echo 'APT::Install-Suggests "0";' >> /etc/apt/apt.conf.d/00-jekyll-builder
RUN echo 'APT::Install-Recommends "0";' >> /etc/apt/apt.conf.d/00-jekyll-builder
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y ruby-full build-essential curl rsync openssh-client git
RUN curl -fsSL https://deb.nodesource.com/setup_22.x -o nodesource_setup.sh
RUN bash nodesource_setup.sh
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y nodejs
RUN gem install rubygems-update
RUN update_rubygems
RUN gem install jekyll bundler
```
Line by line I’m doing the following:
- Telling Docker that I want the current (at the time of this article) version of Debian, Debian Bookworm, as my base image. This pins me to that release, but I'll continue to pull newer builds of Bookworm as the tag is updated.
- Telling APT not to install suggested packages, since we're trying to keep things as small as possible.
- Telling APT not to install recommended packages, for the same reason.
- Running apt-get update, telling it this is a non-interactive shell (since I'm running in a pipeline).
- Running apt-get upgrade, again non-interactively.
- Installing the APT packages Jekyll needs.
- Downloading the current (at the time of this article) repo setup script for the LTS version of NodeJS.
- Running the NodeJS repo setup script.
- Installing NodeJS.
- Installing the rubygems-update package and running it, so I can pull any critical gem updates more easily.
- Installing Jekyll and Bundler.
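If you want to kick the tires on the image locally before wiring it into CI, a quick smoke test might look like the following. (The tag `jekyll-builder` is just an illustrative local name; these commands assume you're in the directory containing the Dockerfile and have a working Docker install.)

```shell
# Build the image from the Dockerfile in the current directory.
docker build -t jekyll-builder .

# Confirm the toolchain landed: each command should print a version string.
docker run --rm jekyll-builder ruby --version
docker run --rm jekyll-builder node --version
docker run --rm jekyll-builder jekyll --version
```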
Running the GitLab Pipeline
The pipeline now takes the Dockerfile, builds it on a runner, and then pushes it to Docker Hub:
```yaml
---
stages:
  - build

default:
  image: docker:28
  services:
    - name: docker:28-dind
      variables:
        HEALTHCHECK_TCP_PORT: "2375"
  before_script:
    - docker info

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""

build:
  stage: build
  tags:
    - shared
  allow_failure: false
  rules:
    - if: $CI_COMMIT_BRANCH == 'main'
  before_script:
    - echo ${DOCKER_PAT} | docker login --username ${DOCKER_USER} --password-stdin
    - docker info
  script:
    - docker build -t "${DOCKER_USER}/${DOCKER_PROJECT}" .
    - docker push "${DOCKER_USER}/${DOCKER_PROJECT}"
```
In this case, since I'm using a runner installed in a Kubernetes cluster, I need both the docker image and the "Docker in Docker" (dind) service image. The front matter sets the port the system can use to verify a Docker Engine is available, and the rest is the build job itself. It says: on a push to the main branch, log in to Docker Hub using a PAT, print docker info to record version information in the job log, and then move on to building. The build is all of two commands: build the container, push it to the Hub. This setup keeps a "latest" tag constantly moving forward; however, best practice is to tag your images with version information. Because this was a simple tool I built just for my own use (and to write this article) I did not do so, but there are many articles out there on tagging taxonomies for Docker images that I strongly advise you to follow if you plan to implement anything like this in a production capacity.
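As a rough sketch of what that tagging could look like, GitLab's predefined `CI_COMMIT_SHORT_SHA` variable can be folded into the tag. The version number and fallback names below are purely illustrative, not part of my actual pipeline:

```shell
# Illustrative tagging scheme: a hand-maintained version plus the commit SHA.
# CI_COMMIT_SHORT_SHA is predefined by GitLab CI; we fall back to "local"
# so the snippet also works outside a pipeline.
VERSION="1.0.0"
TAG="${VERSION}-${CI_COMMIT_SHORT_SHA:-local}"

echo "Would tag as: ${DOCKER_USER:-me}/${DOCKER_PROJECT:-jekyll-builder}:${TAG}"

# In the pipeline's script: section this would become something like:
#   docker build -t "${DOCKER_USER}/${DOCKER_PROJECT}:${TAG}" .
#   docker push "${DOCKER_USER}/${DOCKER_PROJECT}:${TAG}"
```

With a scheme like this, `latest` can still float forward while each pipeline run also leaves behind an immutable, traceable tag.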