The year had just come to an end, and after a long stretch of work I really needed some rest. But before closing it out completely, there was one last thing I wanted to share, and it needed a bit of context about the project.
We’re building a distributed application made of several components — database, backend, frontend, along with a few other supporting components — each packaged as its own Docker image. We also rely on separate images that run as jobs for database initialization, test data loading, and automated test execution.
Everything is deployed on OpenShift. Not just development and production environments, but also our CI/CD pipelines. Because of this, we made an explicit decision early on: for local development, we didn’t want to rely on Docker Compose or a parallel setup. Instead, we wanted developers to work against an OpenShift cluster, using the same Helm charts and the same deployment logic across all environments, with only minimal configuration differences.
For local development, we use CodeReady Containers (CRC). This gives us a real OpenShift cluster on our machines, and it allows us to deploy the entire stack exactly as we do elsewhere. From an architectural and operational point of view, this has been a big win. There is a single execution path, a single set of manifests, and far fewer “it works on my machine” surprises.
However, this choice immediately raises a practical question: how do you actually develop code efficiently if everything is already running in the cluster as prebuilt container images?
When you deploy on Kubernetes or OpenShift, the development loop often becomes the first real problem you feel. Change some code, rebuild the image, push it, redeploy, wait. Repeat. The environment is realistic, but the feedback loop is slow: hot reload is gone, and attaching a debugger is awkward at best.
In other words, we had production parity, but we were paying for it with developer experience.
This is where Telepresence comes in.
Telepresence allows you to connect your local machine to a Kubernetes or OpenShift cluster in a way that makes your local processes behave as if they were running inside the cluster.
In our case, the key feature is the replace command.
With Telepresence, we can take an existing workload, for example the backend deployment, and replace its pod with a Telepresence proxy. From the cluster’s point of view, that pod is still there. It has the same service, the same labels, the same network identity. But instead of running the original container, traffic is forwarded to a process running locally on the developer’s machine.
This means that when the frontend calls the backend service, or when the backend connects to Keycloak or the database, all of that traffic flows through the cluster exactly as before. The only difference is that the backend code is running locally, for example with the classic cargo run for the Rust backend or npm run dev for the Vue.js frontend.
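As a sketch of what this looks like in practice: once the laptop is connected to the cluster, Telepresence makes cluster-internal DNS resolve locally, so a process started on the developer’s machine can reach in-cluster services by their usual names. The namespace, service name, and port below are illustrative, not from our actual setup:

```shell
# Connect the local machine to the cluster
# (requires an active oc/kubectl context).
telepresence connect

# Cluster-internal DNS now resolves from the laptop, so a backend
# started locally with `cargo run` can talk to in-cluster services
# by their normal names, e.g. (hypothetical service and port):
curl http://keycloak.my-namespace.svc.cluster.local:8080/
```

Nothing in the application’s configuration has to change: the same service URLs that work inside the cluster work for the local process.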
Feels like home, huh? From the application’s perspective, nothing changed. From the developer’s perspective, everything changed.
The workflow is intentionally simple.
You run the telepresence replace command targeting that deployment. At that point, the cluster routes traffic to your local process. You can use your editor, your debugger, your hot reload setup. You can set breakpoints, inspect variables, and iterate quickly, while still interacting with real cluster services, real configuration, and real networking.
If you stop Telepresence, the original pod comes back. There is no need to redeploy or clean up manually.
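The whole loop can be sketched as a handful of commands. The workload name and port flag here are assumptions for illustration — check telepresence replace --help for the exact flags in your version:

```shell
# 1. Connect to the cluster using the current kubeconfig context.
telepresence connect

# 2. Replace the backend pod with the Telepresence proxy.
#    Traffic addressed to the backend service is now forwarded
#    to a process on this machine (workload name is illustrative).
telepresence replace backend

# 3. Run the backend locally with the usual tooling:
#    hot reload, breakpoints, and variable inspection all work.
cargo run

# 4. When finished, disconnect; the original pod is restored
#    automatically, with no manual redeploy or cleanup.
telepresence quit
```

From here on, the edit-compile-test cycle is entirely local, while the rest of the stack keeps running in the cluster untouched.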
The biggest advantage is that we don’t maintain two execution paths. There is no “Docker Compose version” of the system and no special local-only wiring. Helm charts are the source of truth everywhere.
Telepresence lets us keep that consistency without sacrificing productivity. We get fast local builds, proper debugging, and realistic integration with the rest of the system.
Another important benefit is confidence. When something works locally, it is already working in an OpenShift environment. The gap between development and deployment is much smaller, which reduces surprises later in the pipeline.
Finally, the cognitive load for developers is low. You don’t need to understand every detail of Kubernetes networking to be productive. Replacing a service is a single command, and after that you just run your code like you always have.
Using OpenShift even for local development can sound intimidating, and without the right tools it can be frustrating. Telepresence, especially the replace workflow, has been the missing piece for us.
It allows us to combine production-like environments with a fast, pleasant development experience. For teams that already deploy on Kubernetes or OpenShift and want to avoid maintaining parallel local setups, this approach is well worth exploring.
That was the last thing I wanted to share before fully stepping into the new year. The rest has already started, and if there are still a few days left, it’s the perfect time to take advantage of them and come back renewed.
Did you find this article interesting? Does it match your skill set? Programming is at the heart of how we develop customized solutions.
In fact, we’re currently hiring for roles just like this and others here at Würth IT.