All about Jenkins architecture

Motivation

Recently, I’ve delved deeper into self-hosted CI/CD systems. Throughout my career, I’ve primarily used managed CI/CD solutions such as GitHub Actions, Azure Pipelines, and AWS CodePipeline.

I’ve also noticed a trend where many companies are transitioning to managed solutions, yet they still ask about Jenkins during interviews.

I’m familiar with the CI/CD philosophy, which is a broad topic that could be discussed at length, and I’ve used Jenkins pipelines before: writing a Jenkinsfile, storing it in version control, and running the pipeline via the GUI either manually or through webhooks. However, I wanted to dig deeper into the architecture of Jenkins itself and understand how it handles pipeline processes. To spice things up, everything will be running as containers on my WSL2 machine, so I get to explore a bit more of WSL2 this time as well.

You can find my project here:
https://github.com/ptisma/jenkins-agents/

Different types of architecture

Generally, we can distinguish two main types of architecture: master-only and master-slave (in current Jenkins terminology, controller-only and controller-agent). In the master-only architecture, we run our Jenkins controller (Jenkins, the application itself) on one node and also perform all our builds on it. In production this is very bad practice, but for smaller projects it is fast, cheap, and easy to set up. In the master-slave architecture, we use worker nodes to run our builds: the Jenkins controller receives the build requests and distributes them to the workers.

These worker nodes can come in all shapes and forms: static VMs, cloud VMs, local/remote Docker hosts, local/remote Kubernetes pods, etc.

Single master instance

First, I wanted to set up the simplest variant: running a master-only Jenkins controller inside a Docker container that uses the Docker daemon from the host machine.

Note that the container running our Jenkins controller instance doesn’t have the Docker daemon installed, only the Docker CLI. Through an environment variable we inject the Docker host configuration, so the images built by our Jenkins jobs are actually built on the host machine.
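As a rough sketch of how that can look, the controller could be started like this. Mounting the host’s Docker socket is one way to hand the host’s daemon to the container; the image name is a placeholder for a custom image (Jenkins plus the Docker CLI), not the exact name used in the repository.

# Placeholder image name: Jenkins plus the Docker CLI, no dockerd inside.
# The socket mount and DOCKER_HOST hand the host's Docker daemon to the CLI.
docker run -d --name jenkins-controller \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e DOCKER_HOST=unix:///var/run/docker.sock \
  jenkins-with-docker-cli:latest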

The picture shows the following architecture:

You can find the Dockerfile and commands here:
https://github.com/ptisma/jenkins-agents/tree/main/docker/single-instance

Static VM nodes

Naturally, the next step would be to offload the workload from the Jenkins controller instance, which handles builds, and move it to the worker nodes.

I’ve opted to run my static nodes as separate containers using the jenkins/ssh-agent:alpine-jdk17 image. A prerequisite was to generate an SSH key pair for each static node: the public key goes into the agent container, and the private key is added to the Jenkins controller as a credential through the GUI.
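Roughly, spinning up one such static node could look like the sketch below. The key file name and container name are made up for illustration; JENKINS_AGENT_SSH_PUBKEY is the variable the jenkins/ssh-agent image uses to install the public key for the jenkins user.

# Generate a key pair for the node; the private key later becomes an SSH
# credential on the controller, the public key goes into the agent container.
ssh-keygen -t ed25519 -f jenkins_agent_key -N ""

docker run -d --name static-node-1 \
  -e "JENKINS_AGENT_SSH_PUBKEY=$(cat jenkins_agent_key.pub)" \
  jenkins/ssh-agent:alpine-jdk17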

Similar to the previous setup, the static worker node also utilizes the Docker daemon (dockerd) located on the host machine (in our case, WSL2). The following image illustrates this setup:

Note that, as in the previous case, we had to configure the environment variables during the static node setup in the Jenkins controller GUI rather than at docker run, since the jenkins user’s session started over SSH might not see variables set when the container was started. This configuration ensures that Docker builds in our pipeline do not fail.

DOCKER_HOST=tcp://docker:2376
DOCKER_CERT_PATH=/certs/client
DOCKER_TLS_VERIFY=1

Because I no longer wanted to use the Jenkins master for builds, I disabled it:

Navigate to Manage Jenkins » Nodes and Clouds, select Built-In Node from the list, and choose Configure from the menu. Then set the number of executors to 0 and save the configuration.

More info here:
https://github.com/ptisma/jenkins-agents/tree/main/docker/nodes

Cloud

The static nodes are functional, but they have their limitations. One downside is that you have to predict how many nodes you’ll need based on your future workload. And let’s face it, most of the time they’ll just be sitting there idle, not doing much.

But don’t worry! That’s where the Cloud setup comes in handy. It allows Jenkins to automatically set up agents in the cloud or virtual environments when there’s work to be done, and then scale them down when things quiet down.

Now, we could take the easy route and use any cloud provider’s managed container service and link it up with our Jenkins controller. But where’s the fun in that? We’re all about customization here. So, instead, we’re going to set up our own custom version of this setup right on our WSL2 machine.

Take a look at the sketch below to see how it all fits together:

We first set up a Docker network and spin up two containers: one with the Jenkins controller, just like in the Single Master Instance chapter, and the other one running DIND (Docker in Docker), which is basically a container that has dockerd installed. The DIND container serves as our Docker build server: all the Docker image builds initiated by the worker containers happen inside it. This was done so that we do not clutter the base image used for our dynamically spun-up worker containers (they do not contain dockerd).
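A rough sketch of those two long-lived containers is below; the container names, volume names, and certificate paths are assumptions rather than the exact values from the repository.

docker network create jenkins

# The DIND build server; "docker" is the network alias the agents will use as
# the Docker host name, and --privileged is required to run dockerd in a container.
docker run -d --name jenkins-dind \
  --network jenkins --network-alias docker \
  --privileged \
  -e DOCKER_TLS_CERTDIR=/certs \
  -v jenkins-docker-certs:/certs/client \
  -v jenkins-dind-data:/var/lib/docker \
  docker:dind

# The Jenkins controller, same idea as in the Single Master Instance chapter.
docker run -d --name jenkins-controller \
  --network jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts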

Once that is done, we set up a cloud with a Docker host in the Jenkins controller using the “Docker” plugin, which allows Jenkins to dynamically provision build agents as Docker containers.
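To give an idea of how a dynamically provisioned agent reaches that shared build server, a build step inside such an agent would effectively do something like the sketch below, assuming the agent template mounts the DIND client certificates; the image tag is just a placeholder.

# "docker" resolves to the DIND container over the shared Docker network.
export DOCKER_HOST=tcp://docker:2376
export DOCKER_CERT_PATH=/certs/client
export DOCKER_TLS_VERIFY=1

# The build itself is executed by dockerd inside the DIND container.
docker build -t my-app:latest .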

More info here:
https://github.com/ptisma/jenkins-agents/tree/main/docker/cloud

Kubernetes

This approach is very similar to the previous one and is also part of the “Cloud” setup. However, since it involves Kubernetes, I wanted to dedicate a separate chapter to it. Our Jenkins controller is now deployed as a pod managed by the Helm release, and it persists its configuration and settings on Kubernetes volumes. Every time we trigger a build, the Jenkins controller spawns a Kubernetes pod (via the Kubernetes plugin) in which our working container resides. After the build is complete, the pod is shut down and the Jenkins job is finished.

We deploy the Jenkins controller using the Helm chart jenkinsci/jenkins with a slightly modified values.yaml file. Specifically, we add a DIND sidecar to every worker pod.
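Deploying the chart then boils down to something like the following; the repository alias, release name, and values file path are assumptions.

helm repo add jenkins https://charts.jenkins.io
helm repo update

# Install the chart with the customized values.yaml (DIND sidecar added to
# the agent pod template).
helm install jenkins jenkins/jenkins -f values.yaml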

In contrast to Docker Cloud, where all the working containers share the same build server, this time we have a different architecture:

Every worker pod runs a DIND sidecar next to the working container, and that sidecar is where we perform our Docker image builds.
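Inside such a worker pod, a build step talks to its own sidecar over the pod’s loopback interface. A rough sketch, assuming the sidecar is configured to listen without TLS on port 2375 and using a placeholder image tag:

# localhost points at the DIND sidecar running in the same pod.
export DOCKER_HOST=tcp://localhost:2375

docker build -t my-app:latest .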

More info here:
https://github.com/ptisma/jenkins-agents/tree/main/kubernetes/helm