31. 03. 2026 Csaba Remenar NetEye

Automating Icinga 2 Agent Builds for IBM Power (ppc64le)

Not long ago, I received an interesting request from one of our client’s Unix teams: They wanted a URL where the latest version of the Icinga 2 agent is always available. An important requirement was that this version should stay in sync with the current NetEye server version, enabling fully automated installation and updates. I found this to be a great idea, and the R&D team agreed – they started working on a solution right away.

There was just one catch: The requirement also applied to IBM Power-based (ppc64le) systems. From a professional standpoint, this made perfect sense. More and more AIX systems (where we previously relied solely on SSH-based checks instead of the Icinga agent) have recently been phased out, and it has become clear that, going forward, all AIX systems will be replaced by RHEL on ppc64le.

Since Icinga officially supports only the x86_64 architecture, we had been building ppc64le packages manually whenever the need arose. However, with the growing demand for automation and continuous updates – and seeing the increasing number of ppc64le systems – manual work was no longer sustainable. We needed an automated solution that could detect new releases, trigger the build process, and verify that the resulting packages actually work as expected.

In this post, I’ll show how we built a CI/CD pipeline that uses emulation on x86_64 hosts to produce these critical RPM packages.

Why Emulation?

It’s a fair question. If we already have native ppc64le machines available, why not run the build process directly on them? The answer lies in the network infrastructure and the maturity of the environment. Our x86_64 build host is already a well-established, fully configured setup: the firewall and corporate proxy are tuned to allow seamless access to all required external URLs.

This stability is critical for several reasons:

  • Docker Hub: We pull base OS images from here (such as AlmaLinux 8/9), which are binary-compatible with the corresponding RHEL versions. This lets us benefit from free distributions without dealing with licensing in the build environment.
  • Repositories: Access to EPEL, CRB, and AlmaLinux repositories is essential for installing build dependencies.
  • Source servers: Access to NetEye mirrors is required to download the Icinga 2 source RPMs – although these are also reachable from client machines.
  • Jenkins plugins: And here’s the reveal – the whole process is driven by Jenkins, so access to and updates from the jenkins.io repository are a must.

Since all these integrations were already working flawlessly on the build server, it was faster and simpler to extend the existing x86_64 host for cross-platform builds, leveraging its stable connectivity and ready-to-use configuration. And, to be honest, there was also some professional curiosity involved – I wanted to see how well emulation and Docker perform together in a setup like this. That said, moving the entire build process to a native IBM Power environment remains a possible next step in the future.

Preparing the Host

Next, let’s look at how we prepared the host for the build. The heart of the setup is QEMU user-mode emulation (qemu-user-static). For the x86_64 host kernel to be able to run ppc64le binaries, we need to register the appropriate binfmt_misc handlers. The binfmt_misc mechanism lets the kernel recognize binaries built for another architecture and hand them over to the right interpreter, in this case QEMU.

docker run --privileged --rm tonistiigi/binfmt --install ppc64le

But how does it work? By default, the kernel doesn’t know how to interpret machine code compiled for a different architecture. With this setup, Docker effectively “introduces” the emulator to the kernel, which then recognizes ppc64le files and runs them through QEMU automatically.

Since many modern binfmt containers, including tonistiigi/binfmt, register the handler with the fix-binary (F) flag, the kernel loads the emulator into memory at registration time, so it remains available even after the container that registered it has stopped and been removed.

cat /proc/sys/fs/binfmt_misc/qemu-ppc64le                       
enabled
interpreter /usr/bin/qemu-ppc64le-static
flags: F
offset 0
magic 7f454c4602010100000000000000000002001500
mask ffffffffffffff00fffffffffffffffffeffff00

It’s important to keep in mind that this configuration lives only in kernel memory, so a system reboot clears it and the Docker command has to be run again. The Jenkins agent responsible for orchestrating the process will also run on this host, inside another Docker container.
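Because the registration lives only in kernel memory, one way to reapply it automatically after a reboot is a small systemd oneshot unit. The following is a sketch under our assumptions (the unit name and the exact image are ours; any equivalent boot-time hook works just as well):

```ini
# /etc/systemd/system/qemu-binfmt-ppc64le.service (illustrative name)
[Unit]
Description=Register ppc64le binfmt_misc handlers via QEMU
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
# Same command as the manual registration step
ExecStart=/usr/bin/docker run --privileged --rm tonistiigi/binfmt --install ppc64le
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload && systemctl enable --now qemu-binfmt-ppc64le.service`, the handler is re-registered on every boot.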

By mounting the host’s Docker socket into this container, it controls the builds using DooD (Docker outside of Docker). This way, the build containers launched by the Jenkins agent are created directly on the host and can access the server’s CPU, memory, and the previously configured QEMU emulator without any additional virtualization layer or performance overhead.
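Sketched as a compose fragment, the agent container might look like this (the controller URL, agent name, and secret are placeholders; note that the stock inbound-agent image ships without a Docker CLI, so in practice a derived image that adds one is used):

```yaml
services:
  jenkins-agent:
    # Assumption: an image based on jenkins/inbound-agent with the docker CLI added
    image: jenkins/inbound-agent
    restart: unless-stopped
    environment:
      JENKINS_URL: https://jenkins.example.com      # placeholder
      JENKINS_AGENT_NAME: ppc64le-builder           # placeholder
      JENKINS_SECRET: ${JENKINS_AGENT_SECRET}
    volumes:
      # DooD: the agent talks to the host's Docker daemon,
      # so build containers run directly on the host
      - /var/run/docker.sock:/var/run/docker.sock
```
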

Automation in Jenkins

Our goal was to create a Jenkins pipeline that triggers the build process fully automatically, without any manual intervention, as soon as a new Icinga 2 software version becomes available. The main components are the following:

  • Jenkins cron: Every day at 3 a.m., the pipeline starts and checks whether a new version is available.
triggers {
    // Poll once a day during the 3 a.m. hour (H hashes the exact minute)
    cron('H 3 * * *')
}
  • Dynamic version tracking: The pipeline queries the latest available version via a custom JSON API. An embedded jq script filters out only the stable releases, and the build is triggered only if the version number differs from the latest successfully built one, which is stored in a STATE_FILE.
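The version-tracking logic can be sketched roughly like this; the JSON shape, field names, and the STATE_FILE path below are illustrative, not the exact API we query:

```shell
# Hypothetical JSON shape; the real endpoint and field names differ.
RELEASES='{"releases":[
  {"version":"2.14.5","state":"stable"},
  {"version":"2.15.0-rc1","state":"testing"},
  {"version":"2.14.6","state":"stable"}]}'

STATE_FILE="/var/lib/jenkins/icinga2_last_built"   # stores the last successfully built version

# Keep only stable releases, then pick the highest one by version sort
LATEST=$(printf '%s' "$RELEASES" \
  | jq -r '.releases[] | select(.state == "stable") | .version' \
  | sort -V | tail -n 1)

LAST_BUILT=$(cat "$STATE_FILE" 2>/dev/null || echo "none")

if [ "$LATEST" != "$LAST_BUILT" ]; then
  echo "New version ${LATEST} detected (last built: ${LAST_BUILT}) - triggering build"
else
  echo "Already up to date (${LATEST})"
fi
```

The STATE_FILE is only updated after a fully successful build, so a failed run is retried on the next scheduled poll.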
  • Matrix build and stability: When a new version is detected, the build is automatically started for both RHEL 8 and RHEL 9. Using the Matrix feature, we build the packages in parallel within a single pipeline.
stage('Build RPMs') {
    matrix {
        axes {
            axis {
                name 'OS_TARGET'
                // Target el8 and el9
                values 'el8', 'el9'
            }
        }
        stages {
            stage('Compile RPMs (ppc64le)') {
                steps {
                    sh "docker run -d --name ${CONTAINER_NAME} --platform linux/ppc64le ${IMAGE_NAME} sleep infinity"
                    sh "docker exec ${CONTAINER_NAME} rpmbuild -v -ba --define '_smp_mflags -j1' /root/rpmbuild/SPECS/icinga2.spec"
                }
            }
        }
    }
}

During the build, we pass the RPM flag --define '_smp_mflags -j1'. This is necessary because parallel builds (compilation across multiple threads) in the emulated environment can be extremely unstable, often leading to memory exhaustion or unexpected QEMU failures. The -j1 flag forces serial execution, which is slower but ensures a successful build. Since the build starts at 3 a.m., execution time is not a critical factor.

The Dockerfile

Instead of cluttering the Jenkins host with various compilers and development libraries, we moved the entire build process into an isolated Docker container. Our Dockerfile acts like an intelligent recipe: for every build, it spins up a fresh, clean environment through the following steps:

  1. Based on parameters passed from Jenkins, the script pulls the appropriate AlmaLinux base image (v8 or v9) and configures proxy rules required by the corporate network.
  2. It enables the necessary repositories (such as EPEL and CRB) and installs the essential build environment – GCC, CMake, and various libraries.
  3. Here comes the most interesting part: The container doesn’t rely on hardcoded links; when it runs, it dynamically discovers the latest source RPM from the NetEye mirror based on the version number received from a JSON call, then downloads it. It unpacks the package, sets up the standard RPM directory structure, and prepares the files for the actual build.
# 3. Dynamic web scraping to find the exact source RPM
ARG ICINGA_VERSION
ARG NETEYE_VERSION
ARG RHEL_VER
# Mirror host, passed in from Jenkins
ARG NETEYE_REPO

WORKDIR /root/rpmbuild/SOURCES
RUN echo "Fetching source RPM for Icinga2 ${ICINGA_VERSION} from NetEye ${NETEYE_VERSION} (RHEL ${RHEL_VER})..." && \
    # Construct the directory URL
    BASE_URL="https://$NETEYE_REPO/icinga2-agents/neteye-${NETEYE_VERSION}/subscription/rhel-${RHEL_VER}/Packages/i/" && \
    # Scrape HTML directory index, to STRICTLY match the ICINGA_VERSION from the JSON!
    EXACT_RPM=$(curl -sL "${BASE_URL}" | grep -oE "href=\"icinga2-${ICINGA_VERSION}-[0-9]+[^\"]*\.src\.rpm\"" | cut -d'"' -f2 | sort -V | tail -n 1) && \
    # Fail fast if nothing is found
    if [ -z "$EXACT_RPM" ]; then echo "ERROR: Could not find any icinga2-${ICINGA_VERSION}-*.src.rpm at ${BASE_URL}"; exit 1; fi && \
    echo "Found requested source file: ${EXACT_RPM}" && \
    # Download the exact file
    wget "${BASE_URL}${EXACT_RPM}" -O icinga2.src.rpm && \
    # Extract the downloaded src.rpm
    rpm2cpio icinga2.src.rpm | cpio -idmv && \
    mv icinga2.spec ../SPECS/

This approach ensures that we always build from the official, up‑to‑date source and that required changes are automatically applied with every single build.
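The Dockerfile receives these values as build arguments; on the Jenkins side, the image build might look roughly like the following stage (the stage, variable, and image names are ours, not the exact pipeline):

```groovy
stage('Build Image') {
    steps {
        // ICINGA_VERSION comes from the JSON version check;
        // RHEL_VER is derived from the matrix axis (el8 -> 8, el9 -> 9)
        sh """
            docker build --platform linux/ppc64le \\
                --build-arg ICINGA_VERSION=${ICINGA_VERSION} \\
                --build-arg NETEYE_VERSION=${NETEYE_VERSION} \\
                --build-arg RHEL_VER=${RHEL_VER} \\
                --build-arg NETEYE_REPO=${NETEYE_REPO} \\
                -t ${IMAGE_NAME} .
        """
    }
}
```

Thanks to the --platform flag, every RUN instruction in the Dockerfile executes under the registered QEMU emulator.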

Deployment and Icinga API validation

At the end of the build process, the generated RPMs are automatically deployed to the target ppc64le VMs over SSH. However, deployment by itself is not enough; we don’t judge success with a simple “is the process running?” check.
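As a rough sketch, the deployment step looks something like this (host addressing, RPM paths, and the service restart are illustrative placeholders, not our exact stage):

```groovy
stage('Deploy to ppc64le VM') {
    steps {
        // targetIp and the RPM glob are placeholders
        sh """
            scp -o StrictHostKeyChecking=no RPMS/ppc64le/icinga2-*.rpm root@${targetIp}:/tmp/
            ssh -o StrictHostKeyChecking=no root@${targetIp} '
                dnf install -y /tmp/icinga2-*.rpm &&
                systemctl restart icinga2
            '
        """
    }
}
```
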

The pipeline concludes with a Verify stage, where we confirm actual, functional connectivity through the Icinga 2 API itself:

  • We use a curl call to test whether the service is alive and the API responds at all after authentication. Instead of simple text-based searches in the raw output, we parse the response with jq to examine the data in a structured, reliable way.
  • This is the most important part, where we implement a dual safety check. The script first verifies that num_conn_endpoints (the number of active endpoints) is greater than zero, confirming that the agent has successfully reached its parent zone. Then it also checks num_not_conn_endpoints (the number of failed connections). If we have at least one active connection and the number of failed connections is strictly zero, we can be confident that the TLS handshake succeeded, there are no “disconnected” endpoints, and the agent has cleanly joined the zone.
def connectionCheckStatus = sh(
    script: """
        ssh -o StrictHostKeyChecking=no root@${targetIp} '
            set -e
            
            # Execute API call. 
            RESPONSE=\$(curl -k -s -u "${env.ICINGA_API_CREDS_USR}:${env.ICINGA_API_CREDS_PSW}" \\
                -H "Accept: application/json" \\
                "https://localhost:5665/v1/status/ApiListener")
            
            echo "Raw Response: \$RESPONSE"
            
            # Use JQ to perform the strict double-check:
            # 1. Agent has active connections (num_conn_endpoints > 0)
            # 2. Agent has ZERO disconnected/failed endpoints (num_not_conn_endpoints == 0)
            echo "\$RESPONSE" | jq -e ".results[0].status.api.num_conn_endpoints > 0 and .results[0].status.api.num_not_conn_endpoints == 0" > /dev/null
        '
    """,
    returnStatus: true 
)

Summary: Was it worth it?

This project perfectly shows how creatively combining modern DevOps tools – Docker, QEMU, and Jenkins – can remove hardware architecture differences as a bottleneck. Even though the build still runs on an x86_64 server due to our infrastructure constraints, the end result is a completely uncompromised, stable, native RPM package optimized for ppc64le systems.

The key takeaways:

  • Builds in the emulated environment are slower (especially because of the serial execution forced by -j1), but in return we get a stable solution that integrates smoothly into the existing corporate network. Containerization also makes the setup highly portable.
  • Instead of static configurations, we rely on dynamic source handling: We have automatic version tracking and downloads, so we can follow the official Icinga updates immediately and without manual intervention.
  • A CI/CD pipeline without proper validation is only a partial solution. The real confidence comes from intelligent, API-based checks after deployment – especially the thorough inspection of active and failed connections – so these packages can be deployed to production with peace of mind.

I hope you found this summary useful and that it gave you a couple of new ideas for improving your own environment as well.

These Solutions are Engineered by Humans

Did you find this article interesting? Does it match your skill set? Our customers often present us with problems that need customized solutions. In fact, we’re currently hiring for roles just like this and others here at Würth IT Italy.

Author

Csaba Remenar

Technical Consultant at Würth IT Italy
