12. 05. 2025 Matteo Cipolletta Log Management, Log-SIEM

Keeping Elastic Agents Updated in the Dark: A Fully Offline Upgrade Workflow

Updating Elastic Agents is usually straightforward, unless you're working in a secure, air-gapped environment where machines can't access the internet (and therefore the Elastic Artifact Repository). That was exactly our challenge: we needed a way to keep Elastic Agents up to date across a fleet of systems without exposing production servers to the outside world.

This builds on our previous blog post: Enabling Elastic Agents Upgrades in Restricted or Closed Networks | www.neteye-blog.com.
We evolved that approach by moving the architecture “down” to the NetEye Satellites instead of the NetEye Master Nodes, and by removing the use of the NetEye Share for security purposes.

Here’s how we built a reliable and automated system to handle agent updates using Python, Ansible, and NGINX, leveraging NetEye Satellites as controlled distribution points.


The Challenge

Elastic’s standard upgrade mechanisms assume internet connectivity—whether to pull packages, fetch signatures, or validate versions. In highly regulated or secure environments, that’s not always an option. We needed a way to:

  • Fetch Elastic Agent binaries.
  • Verify the integrity of the packages.
  • Distribute them internally in a consistent and secure way.
  • Automate the process to minimize manual effort.

The Architecture

To solve this, we adopted a hybrid approach that separates responsibilities across three layers:

A Python script for downloading and verifying artifacts

The script performs the following actions:

  • Connects to the local Elasticsearch API to determine the current version.
  • Downloads Elastic Agent packages for Linux, Windows, and macOS.
  • Verifies their SHA-512 checksums and GPG signatures.
  • Organizes everything into a local artifact repository that mimics the structure of the public Elastic downloads.

The script is fully automated, so once it’s run, you’re left with a clean and verified set of agent installers—ready to distribute.
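As a rough sketch of the verification step, the core of such a script can be quite small. The function and variable names below are illustrative, not our actual script, and it assumes Elastic's usual `.sha512` file layout (`<hex digest>  <filename>`); GPG signature verification is omitted for brevity:

```python
import hashlib
import urllib.request

# Hypothetical base URL -- in an air-gapped setup this would be reachable
# only from the machine allowed to fetch artifacts.
ELASTIC_BASE = "https://artifacts.elastic.co/downloads/beats/elastic-agent"

def sha512_hex(data: bytes) -> str:
    """Return the SHA-512 digest of a package as a hex string."""
    return hashlib.sha512(data).hexdigest()

def verify_checksum(data: bytes, sha512_file_text: str) -> bool:
    """Compare a package against its published .sha512 file.

    Elastic's .sha512 files contain '<hexdigest>  <filename>'.
    """
    expected = sha512_file_text.split()[0]
    return sha512_hex(data) == expected

def fetch_artifact(version: str, platform: str = "linux-x86_64") -> bytes:
    """Download one agent package and its checksum, failing on mismatch."""
    name = f"elastic-agent-{version}-{platform}.tar.gz"
    with urllib.request.urlopen(f"{ELASTIC_BASE}/{name}") as resp:
        data = resp.read()
    with urllib.request.urlopen(f"{ELASTIC_BASE}/{name}.sha512") as resp:
        sha_text = resp.read().decode()
    if not verify_checksum(data, sha_text):
        raise ValueError(f"SHA-512 mismatch for {name}")
    return data
```

The current version can be read from the local Elasticsearch API beforehand, and the verified files written into a directory tree that mirrors the public download paths.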

An Ansible Playbook for Internal Distribution

Next, we created a simple Ansible playbook that:

  • Syncs the verified artifact directory to one or more satellite nodes.
  • Adjusts file permissions so NGINX can serve the files.
  • Adds an NGINX location block to expose the internal repository as an HTTPS endpoint.
  • Restarts NGINX to make the changes active.
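A minimal sketch of such a playbook is shown below; the host group, paths, and template name are assumptions for illustration, not our actual playbook:

```yaml
---
- name: Distribute verified Elastic Agent artifacts to satellites
  hosts: neteye_satellites        # hypothetical inventory group
  become: true
  tasks:
    - name: Sync the verified artifact repository to the satellite
      ansible.posix.synchronize:
        src: /opt/elastic-artifacts-registry/
        dest: /var/www/elastic-artifacts-registry/
        delete: true

    - name: Ensure NGINX can read the files
      ansible.builtin.file:
        path: /var/www/elastic-artifacts-registry
        owner: nginx
        group: nginx
        mode: "0755"
        recurse: true

    - name: Install the artifact repository location block
      ansible.builtin.template:
        src: elastic-artifacts-registry.conf.j2
        dest: /etc/nginx/conf.d/elastic-artifacts-registry.conf
      notify: Restart NGINX

  handlers:
    - name: Restart NGINX
      ansible.builtin.service:
        name: nginx
        state: restarted
```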

NGINX as the Internal Mirror

Each satellite machine effectively becomes a mirror of Elastic’s public repository—but one that is 100% internal. Agents in air-gapped networks can point to this mirror to retrieve updates, install new versions, or bootstrap themselves into a Fleet setup.
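The NGINX side can be as small as a single location block serving static files; the URI prefix and filesystem path below are illustrative:

```nginx
# Hypothetical location block -- prefix and root are assumptions.
location /elastic-artifacts-registry/ {
    alias /var/www/elastic-artifacts-registry/;
    autoindex on;   # optional: lets you browse the mirror
    # tarballs, .sha512 and .asc files are served as plain static content
}
```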

The Flow in Action

  1. Run the Python script to download and verify the agent packages.
  2. Use the Ansible playbook to push the files to your satellites.
  3. Access the internal repository from your agents using a standard HTTPS URL like:
https://satellite-host/elastic-artifacts-registry/beats/elastic-agent/elastic-agent-<version>-linux-x86_64.tar.gz
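On the client side, step 3 boils down to a download plus a checksum check. A sketch of what an agent host would do (`satellite-host` and the version are placeholders; the stand-in file only demonstrates the `sha512sum -c` verification offline):

```shell
MIRROR="https://satellite-host/elastic-artifacts-registry/beats/elastic-agent"
PKG="elastic-agent-9.0.0-linux-x86_64.tar.gz"

# On a real agent host you would run:
#   curl -fsSLO "${MIRROR}/${PKG}"
#   curl -fsSLO "${MIRROR}/${PKG}.sha512"

# The integrity check is a plain sha512sum -c, shown here on a stand-in file:
printf 'stand-in package bytes' > "${PKG}"
sha512sum "${PKG}" > "${PKG}.sha512"
sha512sum -c "${PKG}.sha512"   # prints "<filename>: OK" on success
rm -f "${PKG}" "${PKG}.sha512"
```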

It’s fast, repeatable, and completely offline.
You also need to make sure your Fleet settings are properly configured with the new Elastic Artifact Repository as the agent download source.
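One way to do this (assuming Kibana's Fleet agent download sources API, available in recent 8.x versions; check the docs for your version) is to register the mirror via the Fleet UI under Settings → Agent Binary Download, or by POSTing a body like the following to `/api/fleet/agent_download_sources`; the name and host are placeholders:

```json
{
  "name": "neteye-satellite-mirror",
  "host": "https://satellite-host/elastic-artifacts-registry",
  "is_default": true
}
```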

Wrap up

Why It Works Well

This solution gave us a number of important benefits:

  • Security First: No internet access is required on critical systems.
  • Automation-Friendly: Once set up, everything is a single command away.
  • Flexible: Works across multiple platforms and operating systems.
  • Scalable: Additional satellites can be added with minimal effort.

Future Improvements

There’s room to evolve the system further:

  • Automating notifications when new agent versions are released.
  • Integrating CI/CD pipelines for approval before release.
  • Supporting agent upgrades directly from the Master Nodes instead of the Satellites (for small deployments without satellites).

Conclusion

If you’re running Elastic in an air-gapped or tightly controlled environment, this approach can save time, reduce risk, and bring more consistency to how you manage agents. It’s lightweight, auditable, and built entirely with open source tools—no black boxes, no surprises.

Author

Matteo Cipolletta

I'm an IT professional with a strong knowledge of Security Information and Event Management solutions. I have proven experience in multiple Enterprise contexts with managing, designing, and administering Security Information and Event Management (SIEM) solutions (including log source management, parsing, alerting and data visualizations), its related processes and on-premises and cloud architectures, as well as implementing Use Cases and Correlation Rules to enable SOC teams to detect and respond to cyber threats.
