Not long ago, I received an interesting request from one of our client’s Unix teams: They wanted a URL where the latest version of the Icinga 2 agent is always available. An important requirement was that this version should stay in sync with the current NetEye server version, enabling fully automated installation and updates. I found this to be a great idea, and the R&D team agreed – they started working on a solution right away.
There was just one catch: The requirement also applied to IBM Power-based (ppc64le) systems. From a professional standpoint, this made perfect sense. Recently, more and more AIX systems have been phased out (where we previously relied solely on SSH-based checks instead of using the Icinga agent), and it has become clear that, going forward, all AIX systems will be replaced with RHEL on ppc64le.
Since Icinga officially supports only the x86_64 architecture, we had been building ppc64le packages manually whenever the need arose. However, with the growing demand for automation and continuous updates – and seeing the increasing number of ppc64le systems – manual work was no longer sustainable. We needed an automated solution that could detect new releases, trigger the build process, and verify that the resulting packages actually work as expected.
In this post, I’ll show how we built a CI/CD pipeline that uses emulation on x86_64 hosts to produce these critical RPM packages.
Why emulation instead of building natively? It’s a fair question: if we already have native ppc64le machines available, why not run the build process directly on them? The answer lies in the network infrastructure and the maturity of the environment. Our x86_64 build host is already a well-established, fully configured setup: the firewall and corporate proxy are tuned to allow seamless access to all required external URLs.
This stability is critical for several reasons:
Since all these integrations were already working flawlessly on the build server, it was faster and simpler to extend the existing x86_64 host for cross-platform builds, leveraging its stable connectivity and ready-to-use configuration. And, to be honest, there was also some professional curiosity involved – I wanted to see how well emulation and Docker perform together in a setup like this. That said, moving the entire build process to a native IBM Power environment remains a possible next step.
Next, let’s look at how we prepared the host for the build. The heart of the setup is QEMU user-static emulation. For the x86_64 host kernel to be able to run ppc64le binaries, we need to register the appropriate binfmt_misc handlers. The binfmt_misc mechanism lets the kernel recognize binaries built for another architecture and hand them over to the right interpreter, in this case QEMU.
docker run --privileged --rm tonistiigi/binfmt --install ppc64le
But how does it work? By default, the kernel doesn’t know how to interpret machine code compiled for a different architecture. With this setup, Docker effectively “introduces” the emulator to the kernel, which then recognizes ppc64le files and runs them through QEMU automatically.
Since many modern binfmt containers, including tonistiigi/binfmt, register the handler with the F (“fix binary”) flag, the kernel loads the emulator binary into memory at registration time, so it remains available even after the container that registered it has stopped and been removed.
cat /proc/sys/fs/binfmt_misc/qemu-ppc64le
enabled
interpreter /usr/bin/qemu-ppc64le-static
flags: F
offset 0
magic 7f454c4602010100000000000000000002001500
mask ffffffffffffff00fffffffffffffffffeffff00
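The magic and mask above are nothing mysterious: they describe the first 20 bytes of a ppc64le ELF header. Byte 4 is 02 (64-bit class), byte 5 is 01 (little-endian), and bytes 18–19 hold the machine field 15 00 (0x0015 = 21 = EM_PPC64). We can reproduce exactly what the kernel keys on with a canned header:

```shell
# Emit the first 20 bytes of a ppc64le ELF header and dump them; compare the
# hex bytes with the "magic" value in /proc/sys/fs/binfmt_misc/qemu-ppc64le.
printf '\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x15\x00' \
  | od -A d -t x1
```

Any file whose header matches this pattern (after applying the mask, which ignores fields like the OS ABI byte) is handed to QEMU instead of being executed directly.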
It’s important to keep in mind that this configuration lives only in kernel memory, so a system reboot clears it and the Docker command has to be run again. The Jenkins agent responsible for orchestrating the process will also run on this host, inside another Docker container.
By mounting the host’s Docker socket into this container, it controls the builds using DooD (Docker outside of Docker). This way, the build containers launched by the Jenkins agent are created directly on the host and can access the server’s CPU, memory, and the previously configured QEMU emulator without any additional virtualization layer or performance overhead.
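As an illustration, launching such an agent looks roughly like this (the image name and container name are placeholders, not our actual setup):

```shell
# Hypothetical agent launch; jenkins/inbound-agent stands in for the real image.
# Mounting the host's docker.sock is the DooD trick: any "docker run" issued
# inside this container talks to the host's Docker daemon, so build containers
# become siblings on the host and inherit its binfmt/QEMU configuration.
docker run -d --name jenkins-ppc64le-agent \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/inbound-agent:latest
```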
Our goal was to create a Jenkins pipeline that triggers the build process fully automatically, without any manual intervention, as soon as a new Icinga 2 software version becomes available. The main components are the following:
triggers {
// Poll daily at 3:00 AM CET
cron('H 3 * * *')
}
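The cron trigger only starts the job; deciding whether there is actually something new to build boils down to a version comparison. A minimal sketch of that idea – the version values and variable names here are illustrative, not our pipeline’s actual code:

```shell
# Pick the newest version from a list of candidates.
# sort -V understands version ordering (e.g. 2.14.10 > 2.14.9).
latest_version() {
  printf '%s\n' "$@" | sort -V | tail -n 1
}

LAST_BUILT="2.14.2"                # persisted by the pipeline between runs
AVAILABLE="2.13.9 2.14.2 2.14.3"   # scraped from the repository index
LATEST=$(latest_version $AVAILABLE)

if [ "$LATEST" != "$LAST_BUILT" ]; then
  echo "New version detected: $LATEST"   # proceed with the build stages
else
  echo "Already up to date"
fi
```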
stage('Build RPMs') {
    matrix {
        axes {
            axis {
                name 'OS_TARGET'
                // Target el8 and el9
                values 'el8', 'el9'
            }
        }
        stages {
            stage('Compile RPMs (ppc64le)') {
                steps {
                    sh "docker run -d --name ${CONTAINER_NAME} --platform linux/ppc64le ${IMAGE_NAME} sleep infinity"
                    sh "docker exec ${CONTAINER_NAME} rpmbuild -v -ba --define '_smp_mflags -j1' /root/rpmbuild/SPECS/icinga2.spec"
                }
            }
        }
    }
}
During the build, we use the --define '_smp_mflags -j1' RPM flag. This is necessary because parallel builds (compilation across multiple threads) in the emulated environment can be extremely unstable, often leading to out-of-memory errors or unexpected QEMU failures. The -j1 flag forces serial execution, which is slower but ensures a successful build. Since the build starts at 3 a.m., execution time is not a critical factor.
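For the curious, you can check what the override expands to on any machine with the rpm tooling installed:

```shell
# Print the effective make flags after the override. Without the --define,
# %{_smp_mflags} would normally expand to something like -j8 on a multi-core host.
rpm --define '_smp_mflags -j1' --eval '%{_smp_mflags}'
```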
Instead of cluttering the Jenkins host with various compilers and development libraries, we moved the entire build process into an isolated Docker container. Our Dockerfile acts like an intelligent recipe: for every build, it spins up a fresh, clean environment through the following steps:
# 3. Dynamic Web-Scraping to find the exact Source RPM
ARG ICINGA_VERSION
ARG NETEYE_VERSION
ARG RHEL_VER
WORKDIR /root/rpmbuild/SOURCES
RUN echo "Fetching source RPM for Icinga2 ${ICINGA_VERSION} from NetEye ${NETEYE_VERSION} (RHEL ${RHEL_VER})..." && \
# Construct the directory URL
BASE_URL="https://$NETEYE_REPO/icinga2-agents/neteye-${NETEYE_VERSION}/subscription/rhel-${RHEL_VER}/Packages/i/" && \
# Scrape HTML directory index, to STRICTLY match the ICINGA_VERSION from the JSON!
EXACT_RPM=$(curl -sL "${BASE_URL}" | grep -oE "href=\"icinga2-${ICINGA_VERSION}-[0-9]+[^\"]*\.src\.rpm\"" | cut -d'"' -f2 | sort -V | tail -n 1) && \
# Fail fast if nothing is found
if [ -z "$EXACT_RPM" ]; then echo "ERROR: Could not find any icinga2-${ICINGA_VERSION}-*.src.rpm at ${BASE_URL}"; exit 1; fi && \
echo "Found requested source file: ${EXACT_RPM}" && \
# Download the exact file
wget "${BASE_URL}${EXACT_RPM}" -O icinga2.src.rpm && \
# Extract the downloaded src.rpm
rpm2cpio icinga2.src.rpm | cpio -idmv && \
mv icinga2.spec ../SPECS/
This approach ensures that we always build from the official, up‑to‑date source and that required changes are automatically applied with every single build.
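The version-matching line is the most fragile part of the Dockerfile, so it helps to be able to exercise it offline. Here is a sketch of the same grep/cut/sort pipeline run against a canned directory index – the filenames are made up for the example:

```shell
ICINGA_VERSION="2.14.3"
# Canned stand-in for the HTML directory index the real build fetches with curl:
HTML='<a href="icinga2-2.14.2-1.el9.src.rpm">x</a>
<a href="icinga2-2.14.3-1.el9.src.rpm">x</a>
<a href="icinga2-2.14.3-2.el9.src.rpm">x</a>'

# Same logic as the Dockerfile: strict match on the requested version,
# then sort -V so the highest release number wins.
EXACT_RPM=$(printf '%s\n' "$HTML" \
  | grep -oE "href=\"icinga2-${ICINGA_VERSION}-[0-9]+[^\"]*\.src\.rpm\"" \
  | cut -d'"' -f2 | sort -V | tail -n 1)
echo "$EXACT_RPM"   # -> icinga2-2.14.3-2.el9.src.rpm
```

Note that 2.14.2 is filtered out entirely, and of the two 2.14.3 packages the higher release (-2) is selected.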
At the end of the build process, the generated RPMs are automatically deployed to the target ppc64le VMs over SSH. However, deployment by itself is not enough; we don’t judge success with a simple “is the process running?” check.
The pipeline concludes with a Verify stage, where we confirm actual, functional connectivity through the Icinga 2 API itself:
First, the check confirms that num_conn_endpoints (the number of active endpoints) is greater than zero, meaning the agent has successfully reached its parent zone. It then also checks num_not_conn_endpoints (the number of failed connections). If we have at least one active connection and the number of failed connections is strictly zero, we can be confident that the TLS handshake succeeded, there are no “disconnected” endpoints, and the agent has cleanly joined the zone.
def connectionCheckStatus = sh(
script: """
ssh -o StrictHostKeyChecking=no root@${targetIp} '
set -e
# Execute API call.
RESPONSE=\$(curl -k -s -u "${env.ICINGA_API_CREDS_USR}:${env.ICINGA_API_CREDS_PSW}" \\
-H "Accept: application/json" \\
"https://localhost:5665/v1/status/ApiListener")
echo "Raw Response: \$RESPONSE"
# Use JQ to perform the strict double-check:
# 1. Agent has active connections (num_conn_endpoints > 0)
# 2. Agent has ZERO disconnected/failed endpoints (num_not_conn_endpoints == 0)
echo "\$RESPONSE" | jq -e ".results[0].status.api.num_conn_endpoints > 0 and .results[0].status.api.num_not_conn_endpoints == 0" > /dev/null
'
""",
returnStatus: true
)
This project perfectly shows how creatively combining modern DevOps tools – Docker, QEMU, and Jenkins – can remove hardware architecture differences as a bottleneck. Even though the build still runs on an x86_64 server due to our infrastructure constraints, the end result is a completely uncompromised, stable, native RPM package optimized for ppc64le systems.
The key takeaways:
I hope you found this summary useful and that it gave you a couple of new ideas for improving your own environment as well.
Did you find this article interesting? Does it match your skill set? Our customers often present us with problems that need customized solutions. In fact, we’re currently hiring for roles just like this and others here at Würth IT Italy.