As NetEye.cloud, our company’s NetEye cloud solution, expands, we’re deploying compute nodes not only in our own data centers but also on customer premises across the globe – connected through satellite links. This hybrid, geo-distributed model creates a very tough challenge:
How can we manage configuration across hundreds of remote machines reliably, and at scale?
Today our team uses Ansible to configure and update the machines that power our satellite-distributed cloud solution, NetEye.cloud. It’s served us well, but as our fleet grows, we’re hitting some very real scaling pain:
These issues are affecting our day-to-day operations now, so we’re actively searching for a better long-term approach.
One promising path we’re evaluating – and that we’ve tried in some of our POCs – is to re-platform our configuration management on Kubernetes. The idea is to treat each customer site as a small Kubernetes cluster (for example with lightweight distros like k3s, MicroK8s or MicroShift). Kubernetes would give us:
If we go down this road, Argo CD looks like the natural companion. By storing all manifests in Git and letting each cluster pull its desired state, we could eliminate many of Ansible’s scale and latency issues:
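To make the pull-based idea concrete, here is a minimal sketch of an Argo CD Application manifest. The repository URL, paths, and names are placeholders for illustration, not our actual setup:

```yaml
# Hypothetical Argo CD Application: a satellite cluster pulls its
# desired state from Git and self-heals when it drifts.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: site-config
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/neteye/site-configs.git  # placeholder
    targetRevision: main
    path: sites/site-001
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: neteye
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual changes made on the cluster
    syncOptions:
      - CreateNamespace=true
```

Because each satellite cluster reconciles against Git on its own schedule, a slow or intermittent satellite link only delays convergence – it doesn’t block a central push the way an Ansible run would.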
Better yet, Argo CD can even manage other Argo CD instances. This “App-of-Apps” pattern would let us run a central, master Argo CD in our core environment that defines and updates the configuration of all the satellite clusters’ own Argo CD installations.
Each remote cluster would still reconcile locally – preserving the pull-based, GitOps model – but we’d gain a single control plane to orchestrate policies, bootstrap new sites, and roll out global and local changes across the entire constellation.
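The App-of-Apps pattern itself is also just a manifest: a parent Application whose source path contains one child Application per satellite. A sketch, again with hypothetical repository and folder names:

```yaml
# Hypothetical "App-of-Apps": the central Argo CD syncs a Git folder
# containing one child Application manifest per satellite cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: satellite-fleet
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/neteye/fleet.git  # placeholder
    targetRevision: main
    path: apps/   # each file here bootstraps one satellite's Argo CD
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Onboarding a new site would then amount to committing one new child manifest to `apps/` – the central Argo CD picks it up and bootstraps the satellite from there.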
Customer sites will inevitably need their own tweaks: network ranges, optional features, performance tuning, and so on. One of Kubernetes’ biggest strengths is the sheer variety of ways to express and layer configurations.
Tools such as Kustomize overlays, ConfigMap generators, Helm charts, or even custom operators give us a huge toolbox for structuring manifests. The beauty of this ecosystem is that we don’t have to pick just one approach: we can mix and match the methods that best fit each scenario, while still keeping everything declarative and version-controlled in Git.
This flexibility means we can support hundreds – or even thousands – of unique deployments without drowning in duplicated files or brittle scripting.
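As one example of this layering, a Kustomize overlay lets a site inherit everything from a shared base and override only what differs. The file layout, resource names, and values below are illustrative assumptions:

```yaml
# overlays/site-001/kustomization.yaml (illustrative)
# Directory layout assumed:
#   base/               <- manifests shared by every site
#   overlays/site-001/  <- this file plus site-specific patches
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: neteye-agent        # hypothetical workload name
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 2                # per-site performance tuning
configMapGenerator:
  - name: site-settings
    literals:
      - NETWORK_RANGE=10.42.0.0/16   # per-site network tweak
```

Everything stays declarative: the base changes once, and each overlay carries only the handful of lines that make its site unique.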
To be clear, Kubernetes and Argo CD are, so far, only under consideration. We’ve run a couple of proof-of-concept tests, but the production environment is still entirely Ansible today.
What’s certain is that Ansible’s limits are real and pressing, and any future architecture will need to handle:
Exploring Kubernetes + Argo CD is our way of testing whether a declarative, pull-based model can meet those needs.
Did you find this article interesting? Are you an “under the hood” kind of person? We’re really big on automation and we’re always looking for people in a similar vein to fill roles like this one as well as other roles here at Würth Phoenix.