30. 09. 2025 Luigi Miazzo DevOps, Kubernetes, Uncategorized

Envisioning Satellite-Distributed Management with Kubernetes and Argo CD for NetEye.cloud

As our company’s NetEye cloud solution, NetEye.cloud, expands, we’re deploying compute nodes not only in our own data centers but also on customer premises across the globe – connected through satellite links. This hybrid, geo-distributed model poses a very tough challenge:

How can we manage configuration across hundreds of remote machines reliably, and at scale?

Why and Where Ansible Began to Struggle

Today our team uses Ansible to configure and update the machines that power our satellite-distributed cloud solution, NetEye.cloud. It’s served us well, but as our fleet grows, we’re hitting some very real scaling pain:

  • High latency and flaky links – satellite connections make centralized, push-based runs slow and sometimes unreliable
  • Logic complexity – coordinating hundreds of hosts requires increasingly elaborate playbooks to handle the dozens of customizations made to meet individual customer needs
  • Drift risk – we have little to no drift detection to ensure that what should be up and running actually is

These issues are affecting our day-to-day operations now, so we’re actively searching for a better long-term approach.

Our Possible Next Step: Kubernetes as the Control Plane

One promising path we’re evaluating – and one that we’ve tried in some of our POCs – is to re-platform our configuration management on Kubernetes. The idea is to treat each customer site as a small Kubernetes cluster (for example with lightweight distros like k3s, MicroK8s or MicroShift). Kubernetes would give us:

  • A declarative API for workloads and configuration
  • Continuous reconciliation so each site can self-heal even when connectivity drops
  • A thriving ecosystem of operators and custom resources
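To make the declarative model concrete, here’s a minimal sketch of what a workload definition for a site could look like. The workload name and image are purely hypothetical – the point is that we describe the desired state once, and the cluster continuously reconciles toward it, even after a connectivity outage:

```yaml
# Hypothetical example – names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neteye-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: neteye-agent
  template:
    metadata:
      labels:
        app: neteye-agent
    spec:
      containers:
        - name: agent
          image: registry.example.com/neteye/agent:1.0
```

If the pod crashes or a node reboots while the satellite link is down, the local control plane restores this state on its own – no central push required.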

GitOps with Argo CD

If we go down this road, Argo CD looks like the natural companion. By storing all manifests in Git and letting each cluster pull its desired state, we could eliminate many of Ansible’s scale and latency issues:

  • Clusters sync on their own schedule, which is ideal for satellite links
  • Every change is version-controlled and auditable
  • Rollbacks become as simple as reverting a Git commit
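In practice, each site would be described by an Argo CD Application resource pointing at its folder in Git. The repository URL and paths below are illustrative only; the automated sync policy with self-heal and unbounded retries is what makes the pull model tolerant of flaky links:

```yaml
# Illustrative sketch – repo URL and paths are placeholders
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: site-config
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/neteye/site-configs.git
    targetRevision: main
    path: sites/customer-a
  destination:
    server: https://kubernetes.default.svc
    namespace: neteye
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift on the cluster
    retry:
      limit: -1        # keep retrying over intermittent links
      backoff:
        duration: 30s
        factor: 2
        maxDuration: 10m
```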

Better yet, Argo CD can even manage other Argo CD instances. This “App-of-Apps” pattern would let us run a central, master Argo CD in our core environment that defines and updates the configuration of all the satellite clusters’ own Argo CD installations.

Each remote cluster would still reconcile locally – preserving the pull-based, GitOps model – but we’d gain a single control plane to orchestrate policies, bootstrap new sites, and roll out global and local changes across the entire constellation.
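The App-of-Apps pattern itself is just another Application: the central Argo CD syncs a Git folder whose contents are themselves Application manifests, one per satellite cluster. Again, the repository and folder layout here are hypothetical:

```yaml
# Hypothetical "parent" app – each file under clusters/ is itself
# an Application that bootstraps or updates one satellite site
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: satellite-clusters
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/neteye/fleet.git
    targetRevision: main
    path: clusters
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: {}
```

Adding a new site then becomes a single Git commit: drop a new Application manifest into the folder, and the parent app rolls it out.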

Managing Hundreds of Variations

Customer sites will inevitably need their own tweaks: network ranges, optional features, performance tuning, and so on. One of Kubernetes’ biggest strengths is the sheer variety of ways to express and layer configurations.

Tools such as Kustomize overlays, ConfigMap generators, Helm charts, or even custom operators give us a huge toolbox for structuring manifests. The beauty of this ecosystem is that we don’t have to pick just one approach: we can mix and match the methods that best fit each scenario, while still keeping everything declarative and version-controlled in Git.

This flexibility means we can support hundreds – or even thousands – of unique deployments without drowning in duplicated files or brittle scripting.

Where We Stand Now

To be clear, Kubernetes and Argo CD are still only under consideration. We’ve run a couple of proof-of-concept tests, but the production environment is still entirely Ansible today.

What’s certain is that Ansible’s limits are real and pressing, and any future architecture will need to handle:

  • Large, geo-distributed fleets
  • Intermittent, high-latency links
  • Fine-grained, site-specific customization

Exploring Kubernetes + Argo CD is our way of testing whether a declarative, pull-based model can meet those needs.

These Solutions are Engineered by Humans

Did you find this article interesting? Are you an “under the hood” kind of person? We’re really big on automation and we’re always looking for people in a similar vein to fill roles like this one as well as other roles here at Würth Phoenix.

Luigi Miazzo

Software Developer - IT System & Service Management Solutions at Würth Phoenix


