A Simple Way to Deploy Linux Agents Using the Icinga 2 API
Agent distribution is probably one of the most time-consuming tasks. This can be for various reasons: different operating systems, network segregation, administrative credentials that are difficult to obtain, or simply a large number of Agents to install.
We know that Agent installation on Windows servers is made easier by the PowerShell script made available by the community: https://github.com/Icinga/icinga2-powershell-module. In addition, it's possible to generate an authentication token at the Host Template level, which clearly simplifies deployment.
For Linux operating systems the situation is more complicated: it is not possible to generate a token at the host template level, so each host object will have a different authentication token. This significantly increases installation times.
Fortunately, the APIs that Icinga makes available will help us.
When creating a Linux host, it becomes possible to download a bash script by accessing the “Agent” tab on the host screen. There is a dedicated script for every single host.
The two parameters to be customized are the following:
ICINGA2_NODENAME='linux_agent.domain' (the FQDN of the remote server)
ICINGA2_CA_TICKET='aq1sw2de3fr4gt5hy6ju7ki8lo9' (the ticket released by the NetEye master)
The value for the first field is easy to find (for example, it could correspond to the output of the hostname -f command executed on the remote server).
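The value for the second field, the ticket, can be requested from the NetEye master through the Icinga 2 API's generate-ticket action. A minimal sketch follows; the master's host name, the API port (5665 is the Icinga 2 default), and the API user credentials are assumptions to adapt to your environment:

```shell
# Ask the Icinga 2 API on the NetEye master to generate a PKI ticket
# for the agent's common name (its FQDN).
# 'neteye-master.domain' and the root API credentials are placeholders.
curl -k -s -u root:icinga2-api-password \
     -H 'Accept: application/json' \
     -X POST 'https://neteye-master.domain:5665/v1/actions/generate-ticket' \
     -d '{ "cn": "linux_agent.domain" }'
```

The -k flag skips certificate verification, which is convenient in a lab but should be replaced with a proper CA bundle in production.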
The curl command will return a JSON response like this:
{
  "results": [
    {
      "code": 200.0,
      "status": "Generated PKI ticket 'aq1sw2de3fr4gt5hy6ju7ki8lo9aq1sw2de3fr4' for common name 'linux_agent.domain'.",
      "ticket": "aq1sw2de3fr4gt5hy6ju7ki8lo9aq1sw2de3fr4"
    }
  ]
}
We only have to parse this content to extract just the authentication token.
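For example, the ticket field can be pulled out of the response directly on the command line. The sample below reuses the JSON shown above; jq is the cleanest option if it is installed, while the sed fallback needs no extra tools:

```shell
# Sample response, mirroring the JSON returned by the API above.
response='{ "results": [ { "code": 200.0, "status": "Generated PKI ticket", "ticket": "aq1sw2de3fr4gt5hy6ju7ki8lo9aq1sw2de3fr4" } ] }'

# With jq (if available):
#   ticket=$(printf '%s' "$response" | jq -r '.results[0].ticket')

# Portable fallback with sed: capture the value of the "ticket" key.
ticket=$(printf '%s' "$response" | sed -n 's/.*"ticket": *"\([^"]*\)".*/\1/p')
echo "$ticket"   # -> aq1sw2de3fr4gt5hy6ju7ki8lo9aq1sw2de3fr4
```

The extracted value can then be substituted into the ICINGA2_CA_TICKET variable of the downloaded bash script.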
The host object doesn't need to already be present in Director; we can create it later. If you have a large number of hosts to set up, I recommend using a configuration management tool (Puppet, Rundeck, etc.) that can execute commands on all the remote servers.
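The per-host steps above can also be batched in a small loop before handing the values to your configuration management tool. This is only a sketch: the API URL, the credentials, and the host list are all placeholders.

```shell
# Generate one ticket per host from a plain list of FQDNs.
# URL and credentials are assumptions -- adapt them to your master.
API='https://neteye-master.domain:5665/v1/actions/generate-ticket'

while read -r fqdn; do
    # Request the ticket and extract it from the JSON response.
    ticket=$(curl -k -s -u root:icinga2-api-password \
                  -H 'Accept: application/json' \
                  -X POST "$API" \
                  -d "{ \"cn\": \"$fqdn\" }" \
             | sed -n 's/.*"ticket": *"\([^"]*\)".*/\1/p')
    echo "$fqdn $ticket"
done <<'EOF'
linux_agent01.domain
linux_agent02.domain
EOF
```

Each output line pairs an FQDN with its ticket, ready to be injected into the per-host installation script.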
Author
Stefano Bruno
Dear all, I'm Stefano and I was born in Milano.
Since I was a little boy I've always been fascinated by the IT world. My first approach was with a 286 laptop with a 16-color graphics adapter (in the early '90s).
Before joining Würth Phoenix as an SI consultant, I worked first as an IT Consultant, and then for several years as an Infrastructure Project Manager, with strong knowledge of global IT scenarios: datacenter consolidation/migration, VMware, monitoring systems, disaster recovery, and backup systems.
My various ITIL and TOGAF certifications allowed me to cooperate in the writing of many ITSM processes.
I like playing guitar, soccer, and cycling, but... my true passions are my 3 kids and my lovely wife, who has always encouraged me and helped me realize my dreams.