30. 06. 2025 Oscar Zambotti Automation, Development, DevOps

“Pipeline as Code” Quest Unlocked: A Grizzled Beginner Leveling Up in CI/CD

After 17 years in software development, mostly crafting UIs (do you know Google Web Toolkit? Or Angular, since version 1? And now Vue.js? I do), occasionally diving into mobile apps, and even wearing the sysadmin hat, I thought I’d seen my fair share of tech. But recently, I stepped into a completely new arena: Pipeline as Code using Tekton Pipelines on OpenShift.

In tech, there’s always a new dungeon to explore, and truth be told, that’s what keeps it exciting!

This post is not meant to be a tutorial or a step-by-step guide. There are already great resources out there for that. Instead, this is a reflection on my first encounter with these tools: what felt intuitive, what made me scratch my head, and what I learned along the way. If you’re a seasoned developer stepping into this world for the first time, maybe you’ll find something here that resonates.

The Goal

The big picture included:

  1. Deploy the application with a database alongside it
  2. Make the deployment section reusable, even in a production setting
  3. Run automated tests against the deployed app

Simple in theory. In practice? A bit more like a boss fight.

The Tools of the Trade

To achieve my goal, I used:

  • Tekton Pipelines to define the CI/CD workflow as code
  • PipelineRun to execute the pipeline with specific parameters
  • Custom Tekton Tasks to run oc CLI commands, interact with the OpenShift cluster, and execute tests
  • OpenShift Templates, with Deployments and Jobs, to manage the lifecycle of the app and define reusable configurations for the application and database

Understanding the Pipeline Logic: Crafting the Spellbook

Coming from a frontend background, I liked the declarative nature of Tekton. Defining tasks and steps in YAML felt similar to configuring build tools or defining components. It’s a great approach, even if you have to be careful with indentation: one misplaced block and the pipeline fails miserably.

One of the first things I appreciated was the PipelineRun metadata annotations. These annotations let you define Tekton-specific options – like run conditions – right at the top level. Even more useful, they let you reference and import external tasks from remote bundles or catalogs, or from files located alongside your pipeline definition in the same repository.

This made it easier to reuse existing logic (like applying OpenShift templates or running tests) without reinventing the wheel, and it also helped keep my pipeline files shorter and easier to manage, two things I care a lot about when building and maintaining any kind of codebase.
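As a sketch of what that top-level metadata can look like – assuming the Pipelines-as-Code flavor shipped with OpenShift Pipelines, and with illustrative event, branch, task, and file names – a PipelineRun header might read:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: deploy-and-test
  annotations:
    # Run conditions: trigger only on pushes to the main branch
    pipelinesascode.tekton.dev/on-event: "[push]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
    # Import tasks: one from the hub/catalog, one from a file in this repo
    pipelinesascode.tekton.dev/task: "[git-clone, .tekton/tasks/apply-template.yaml]"
```

The annotation keys are real Pipelines-as-Code conventions; everything between the brackets is made up for the example.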

It took me just one failed run to realize that parameters and workspaces defined in the spec need to be explicitly exposed to the pipelineSpec. Just declaring them isn’t enough: they have to be referenced properly in the pipeline structure to be usable by tasks. It’s one of those small details that can easily trip you up when you’re new to Tekton.
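A minimal sketch of the pattern that bit me (all names invented): the values provided on the PipelineRun have to be declared again inside pipelineSpec before any task can consume them:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: deploy-run
spec:
  params:
    - name: namespace        # value provided here...
      value: my-project
  workspaces:
    - name: source
      emptyDir: {}
  pipelineSpec:
    params:
      - name: namespace      # ...but it must be declared again here
        type: string
    workspaces:
      - name: source         # same story for workspaces
    tasks:
      - name: deploy
        taskRef:
          name: apply-template
        params:
          - name: namespace
            value: $(params.namespace)
        workspaces:
          - name: source
            workspace: source
```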

That said, building the pipeline itself required a shift in mindset. Tekton doesn’t assume task order: you have to explicitly define the execution flow. If a task depends on another one, you need to declare that, even with conditional logic (e.g., only run tests if deployment succeeds). This level of control is powerful, but it also means you need to think like an orchestrator. Forgetting a runAfter or misplacing a condition can lead to subtle bugs or unexpected behavior.
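For example (task and parameter names are illustrative), the ordering and the condition both have to be spelled out on the task that depends on them:

```yaml
tasks:
  - name: deploy
    taskRef:
      name: apply-template
  - name: run-tests
    runAfter:
      - deploy             # without this, Tekton may schedule both in parallel
    when:                  # skip the task entirely unless the guard holds
      - input: $(params.run-tests)
        operator: in
        values: ["true"]
    taskRef:
      name: integration-tests
```

Note that runAfter also gives you the "only if deployment succeeds" behavior for free: if deploy fails, run-tests never starts.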

Defining a task directly within the pipeline – rather than referencing an external one – initially left me perplexed, particularly regarding the params section. This section appears in both the taskSpec and the task usage, yet it serves different purposes in each context.

In the taskSpec, you declare which parameters the task expects, essentially defining its interface. Then, when you use the task in a pipeline, you provide the actual values for those parameters. It’s a simple concept once it clicks, but I’ll admit the similar structure in both places made me lose more time than I wanted to just trying to figure out why my parameters weren’t behaving as expected.
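In hindsight, the two params sections side by side look like this sketch (names invented): the inner one declares the task’s interface, the outer one supplies the actual values:

```yaml
tasks:
  - name: deploy
    params:                  # usage: actual values passed into the task
      - name: namespace
        value: $(params.namespace)
    taskSpec:
      params:                # declaration: the interface the task expects
        - name: namespace
          type: string
      steps:
        - name: apply
          image: quay.io/openshift/origin-cli:latest
          script: |
            #!/bin/sh
            # Inside taskSpec, $(params.namespace) resolves to the task's own param
            echo "Deploying to $(params.namespace)"
```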

Aside from the quirks I mentioned, the rest of the setup was quite smooth. I also really liked the finally section in the PipelineRun: it gave me a clean way to define cleanup and post-run tasks, like tearing down resources and sending notifications, without cluttering up the main pipeline logic.
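A sketch of how that finally block keeps post-run work out of the main flow (task names invented):

```yaml
spec:
  pipelineSpec:
    tasks:
      - name: deploy
        taskRef:
          name: apply-template
    finally:                 # runs after all tasks, even when one of them fails
      - name: teardown
        taskRef:
          name: delete-resources
      - name: notify
        taskRef:
          name: send-notification
```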

Applying Templates with oc: A Side Quest in Shell Scripting

One of my tasks used the oc CLI to apply an OpenShift template, a quick way to deploy both the app and its database. The command itself was simple (like oc process -f template.yaml | oc create -f -), but there was a catch: there’s no built-in way to wait for the resources to be fully ready.

Tekton doesn’t track the status of what oc create triggers, so I had to script a manual check to poll the deployment or job status before moving on. It worked, but it felt a bit unintuitive; I definitely missed having a more declarative way to simply say “wait until ready.”
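One way to approximate that missing “wait until ready” in a task step – resource names and image are invented here – is to lean on oc rollout status and oc wait instead of a hand-rolled polling loop:

```yaml
steps:
  - name: wait-for-ready
    image: quay.io/openshift/origin-cli:latest
    script: |
      #!/bin/sh
      set -e
      # Block until the Deployment reports all replicas available (or time out)
      oc rollout status deployment/my-app --timeout=300s
      # Block until the database-seeding Job finishes successfully
      oc wait --for=condition=complete job/db-init --timeout=300s
```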

Along the way, a helpful tip came from my colleague Alessandro, who’s more experienced with Tekton. While I could’ve passed parameters directly into the script, he suggested converting them into environment variables instead, mainly for security reasons. This kept sensitive values out of the command line and made them easier to use inside the script.
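Alessandro’s suggestion translates to something like this sketch (param and template names invented): instead of interpolating $(params.db-password) directly into the script text, surface it as an environment variable and reference that from the shell:

```yaml
params:
  - name: db-password
    type: string
steps:
  - name: deploy
    image: quay.io/openshift/origin-cli:latest
    env:
      - name: DB_PASSWORD
        value: $(params.db-password)   # param exposed as an env var
    script: |
      #!/bin/sh
      # $DB_PASSWORD is read at runtime, so the value never gets
      # substituted into the rendered script text
      oc process -f template.yaml -p DB_PASSWORD="$DB_PASSWORD" | oc create -f -
```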

OpenShift Templates: Another Manual Waiting Challenge

Part of the goal was to create an OpenShift template that could be reused in a production scenario, something clean, repeatable, and environment-agnostic. The template included both a Deployment and a Job. The idea was to have the deployment initialize a volume, and then run a job that depended on that volume being ready.
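The shape of such a template, heavily trimmed and with invented names and images, is roughly:

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: app-with-db
parameters:
  - name: APP_NAME
    required: true
  - name: APP_IMAGE
    required: true
objects:
  # The Deployment mounts and initializes the shared volume
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ${APP_NAME}
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ${APP_NAME}
      template:
        metadata:
          labels:
            app: ${APP_NAME}
        spec:
          containers:
            - name: app
              image: ${APP_IMAGE}
              volumeMounts:
                - name: data
                  mountPath: /var/data
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: ${APP_NAME}-data
  # The Job depends on that volume being ready
  - apiVersion: batch/v1
    kind: Job
    metadata:
      name: ${APP_NAME}-init
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: init
              image: ${APP_IMAGE}
              command: ["/bin/sh", "-c", "echo init"]
```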

In theory, this should’ve been a smooth, declarative flow. But once again, I hit a familiar snag: there’s no built-in way to wait for the volume to be fully created and mounted before the job starts. I had to script around it, polling the deployment status and checking for volume readiness before triggering the job.

Again, it worked, but I found myself wishing for a more declarative way to express these kinds of dependencies: something like “run this job only after the volume is ready” would’ve made the whole setup feel more production-friendly and easier to maintain.
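The workaround boiled down to a loop along these lines (PVC name and image invented), polling until the claim is bound before kicking off the job:

```yaml
steps:
  - name: wait-for-volume
    image: quay.io/openshift/origin-cli:latest
    script: |
      #!/bin/sh
      # Poll until the PVC backing the deployment reports phase=Bound
      until [ "$(oc get pvc my-app-data -o jsonpath='{.status.phase}')" = "Bound" ]; do
        echo "Waiting for volume..."
        sleep 5
      done
```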

Wrapping Up the Quest: Gained XP in Patience and Pipelines

Honestly? Watching the pipeline and seeing each task light up green as it runs smoothly is like rolling a natural 20. That moment when everything just works? Pure satisfaction.

Stepping into the world of Tekton Pipelines and OpenShift was like unlocking a whole new skill tree. It wasn’t always smooth, but it was absolutely worth it. I enjoyed the challenge more than I expected. Even the manual waiting phases, which at first felt like a chore, became part of the rhythm.

I’m still no DevOps wizard, but I’ve definitely leveled up, and next time I face a boss fight in YAML or a readiness check, I’ll be a bit more prepared.

These Solutions are Engineered by Humans

Did you find this article interesting? Are you an “under the hood” kind of person? We’re really big on automation and we’re always looking for people in a similar vein to fill roles like this one as well as other roles here at Würth Phoenix.

Oscar Zambotti

Software Engineer in the R&D Team of the IT System & Service Management Solutions group, Würth Phoenix. Random trivia: from Trentino (Italy), in love with everything that is technology, gamer, TV shows enthusiast, AC Milan supporter, once a DJ, but most of all father and husband.
