A quick glance at the CNCF Landscape reveals the wide range of CI/CD solutions that currently exist. Each of these has its own API, configuration (YAML) format, definitions, pricing structures and quirks for creating pipelines.
In spite of all this variety, all solutions have certain requirements in common. They need to:
- access source code to execute tests and tasks
- declare the order of executing tests and builds
- pull and push build artifacts based on the tested source code
- publish the built source code to a live environment
Instead of declaring these underlying concepts using a domain-specific/company-specific implementation, wouldn’t it be nice to map these CI/CD pipeline components to Kubernetes objects?
Introducing Tekton
Tekton has its roots in the Knative serverless platform, but has since been spun off into its own project. Tekton was released earlier in 2019 to help users build flexible, extensible workflows for CI/CD pipelines. Tekton achieves this by mapping CI/CD components to Kubernetes primitives; its components live in a Kubernetes cluster, and can be used to deploy to Kubernetes, VMs, bare metal and other platforms.
Tekton[1] was one of four projects donated to the CD Foundation at the time of its launch earlier in the year, alongside Spinnaker (Jetstack blog post), Jenkins, and Jenkins X. The CD Foundation serves to promote best practices around Continuous Delivery and establish specifications that will help to ensure portability and interoperability between CI/CD solutions.
Tekton’s APIs are currently in Alpha, so are liable to change, but the 2019 Roadmap includes a 1.0 release.
A GitHub repo accompanies this article for readers who wish to try out these concepts for themselves on Google Cloud Platform: tekton-demo
How Tekton builds on Kubernetes primitives
The table below outlines how Tekton uses Kubernetes objects, primarily Pod and CustomResourceDefinition objects, to form the building blocks of a CI/CD pipeline:
| Functionality | Implementation as a Kubernetes primitive |
| --- | --- |
| Task defines steps that need to be executed | A Task is effectively a Pod, while each step is a container within that Pod |
| ClusterTask is available across all of the cluster's namespaces | Same as a Task, but can be referenced from any namespace in the cluster |
| TaskRun takes the name(s) of Task object(s) and executes them | CRD referencing Task objects |
| Pipeline takes the name(s) and order of execution of Task object(s) | CRD referencing Task objects |
| PipelineRun takes the name(s) of Pipeline object(s) and executes them | CRD referencing Pipeline objects. Spawns TaskRun objects |
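To make the mapping concrete, a minimal Task might be declared as follows; the name, image and command are illustrative, assuming Tekton's v1alpha1 API:

```yaml
# A minimal Task: each step becomes a container in the Pod
# that Tekton creates when the Task is executed.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: run-tests-task       # illustrative name
spec:
  steps:
    - name: unit-tests
      image: golang:1.12     # each step runs in its own container image
      command: ["go", "test", "./..."]
```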
The inputs and outputs of a Pipeline are defined using the PipelineResource CRD. A PipelineResource can be:
- source code (either a Pull Request or a specified git-repo’s branch and revision)
- a container image to be pulled or pushed
- a cloud storage bucket
- a separate cluster to which you want to deploy
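As an illustration, a git-type PipelineResource pointing at a repository's branch might look like this (the URL and revision are placeholders):

```yaml
# A PipelineResource describing source code as a pipeline input.
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: example-source
spec:
  type: git
  params:
    - name: url
      value: https://github.com/example/example-repo  # placeholder repository
    - name: revision
      value: master
```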
As you can see from the table above, PipelineRun takes the name of a Pipeline and creates TaskRun objects that run Task objects (Pods), which run steps (containers). Definitions can be nested, for example a Task could be defined inside a TaskRun, but it's generally easier to keep track of them if they are defined as separate objects and applied by reference.
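To sketch what "applied by reference" means, a Pipeline can point at an existing Task by name rather than embedding its definition; the names below are illustrative:

```yaml
# A Pipeline that references a Task applied separately to the cluster.
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: test-pipeline
spec:
  tasks:
    - name: run-tests          # this step's name within the pipeline
      taskRef:
        name: run-tests-task   # references a Task object by name
```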
Since a Task is little more than a Kubernetes Pod, we can define Pod scheduling rules in a TaskRun, so that when the TaskRun spawns a Pod, annotations are added to it for the benefit of kube-scheduler[2]. Also, as a Tekton run is just another Kubernetes object, its output can be read like that of any other resource using kubectl get <POD_NAME> -o yaml, and we can follow the Pod's logs using kubectl logs -f. This means that we don't need to log in to the website of a CI/CD provider to view build logs.
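As a sketch of the scheduling point above, recent Tekton releases allow a TaskRun to carry a podTemplate that is copied onto the spawned Pod; the node pool label below is illustrative:

```yaml
# A TaskRun that constrains where its Pod is scheduled.
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: run-tests-once
spec:
  taskRef:
    name: run-tests-task         # the Task to execute
  podTemplate:
    nodeSelector:                # copied onto the Pod this TaskRun spawns
      cloud.google.com/gke-nodepool: ci-pool   # illustrative label
```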
Tekton components
When Tekton is installed in a Kubernetes cluster, two Pods are deployed to the tekton-pipelines namespace:
- a Webhook Service for resource validation
- a Controller Service that handles scheduling the pipeline events and creating the Pods
Triggering builds
Now that we have these CI/CD primitives, we need a way of triggering tests, builds and deployments. From the table above, we can see that this is equivalent to creating TaskRun or PipelineRun objects in the cluster. Regrettably, this is where Tekton is (currently) limited. Triggering runs based on CloudEvents is on the Tekton 2019 Roadmap, and development of these features has recently moved into a new triggers repo, but they aren't actually in place yet. In order to do Continuous Deployment, a separate CD application (such as Prow, Jenkins X or Zuul) is required to trigger an action based on an event. Currently we are required to apply a TaskRun or PipelineRun manually.
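Triggering a pipeline by hand then amounts to applying a PipelineRun object; the names here are illustrative:

```yaml
# A PipelineRun that executes an existing Pipeline.
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: test-pipeline-run-1
spec:
  pipelineRef:
    name: test-pipeline    # the Pipeline to execute; spawns TaskRun objects
```

Applying this manifest with kubectl apply causes Tekton's Controller Service to spawn the corresponding TaskRun objects and Pods.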
We can, however, perform Continuous Development (manually triggering tests and builds). Using a Makefile and a templating solution[3], we can incrementally bump patch/minor/major versions as we make improvements to our code and trigger an image build. At the same time, we can ensure that every successfully built image is deployed to a live environment.
Tekton is an extremely interesting project that has already made significant advances in functionality since its release earlier in 2019. The 2019 Roadmap highlights the project's ambitions.
Demonstration
In the accompanying tekton-demo repo, a simple website is served from a GKE cluster, with the container image built (using Tekton) in a different namespace on the same cluster.
Tips on getting started
- Tekton has established an open-source library of pipeline definitions, maintained in the Tekton catalog.
- Commands for watching and debugging builds: https://github.com/tektoncd/pipeline/blob/master/docs/labels.md#finding-pods-for-a-specific-pipelinerun
- Make use of Kubernetes labels to keep track of your Tekton resources - this was a major takeaway from a Meetup talk given by IBM’s Andrea Frittoli at the Bristol SRE Meetup.
1. Tekton is Greek for 'carpenter' or 'builder'.
2. This can be seen in the tekton-demo repo, where Tekton Pods are scheduled on GCP f1-micro nodes.
3. In this case I used jsonnet and kubecfg.