Do you ever need to reboot or do maintenance on your Windows Server? If the server is monitored by NetEye, you'll surely want to put it into downtime so that notifications aren't sent out for problems arising during the maintenance, and so that the correct SLA is recorded for your Server Uptime and Service Availability.
You'll have to make a few modifications to the script in order to get it to work (a sketch of the resulting configuration block follows this list):
Set the NetEye/Icinga 2 hostname (or IP) [$icingaApiHost]
Set the NetEye/Icinga 2 API username [$icingaApiUser]
Set the NetEye/Icinga 2 API user password [$icingaApiPassword]
Add the contents of the NetEye Root CA certificate (.crt) to the script between the BEGIN/END certificate lines, indented as shown in the file. This must be the same CA that the API certificate was issued with.
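To give you an idea of what this looks like, here is a minimal sketch of the configuration block. The variable names are the ones listed above; the values and the here-string used to hold the certificate are placeholders, and the actual script may embed the certificate differently:

# Connection settings for the NetEye/Icinga 2 API (placeholder values)
$icingaApiHost     = "neteye.example.com"    # NetEye/Icinga 2 hostname or IP
$icingaApiUser     = "downtime-api-user"     # API username
$icingaApiPassword = "ChangeMe!"             # API user password

# NetEye Root CA certificate, pasted between the BEGIN/END lines.
# It must be the CA that the API certificate was issued with.
$icingaCACert = @"
-----BEGIN CERTIFICATE-----
...contents of the NetEye Root CA .crt file...
-----END CERTIFICATE-----
"@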
The script accepts the following parameters:
DMIN: the downtime duration in minutes, starting from now
COMMENT: the downtime comment to add
HOSTNAME: optional; if not given, the script will use the local FQDN host name in lower case
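As an illustration, assuming the script is saved as SetNetEyeDowntime.ps1 (a hypothetical name) and accepts the parameters above positionally, a call might look like this:

# 60 minutes of downtime with a comment; HOSTNAME defaults to the local FQDN
.\SetNetEyeDowntime.ps1 60 "Monthly Windows patching"

# Explicitly naming the host, e.g. when the monitored name differs
.\SetNetEyeDowntime.ps1 60 "Monthly Windows patching" "winsrv01.example.com"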
Now, calling the script manually or from within a reboot or maintenance script will put the host and ALL of its services into downtime via the NetEye/Icinga 2 API.
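Under the hood this boils down to a single call to the standard Icinga 2 actions API. The sketch below shows one way to schedule a fixed downtime for a host and all of its services using Invoke-RestMethod; the endpoint and fields are those of the documented /v1/actions/schedule-downtime action, while $hostname, $comment and $dmin stand in for the parameters above (certificate validation against the embedded Root CA is omitted for brevity):

# Basic-Auth header for the API user
$pair    = "{0}:{1}" -f $icingaApiUser, $icingaApiPassword
$headers = @{
    Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
    Accept        = "application/json"
}

$now  = [DateTimeOffset]::Now.ToUnixTimeSeconds()
$body = @{
    type         = "Host"
    filter       = 'host.name == hostname'
    filter_vars  = @{ hostname = $hostname }   # lower-case FQDN by default
    all_services = $true                       # put ALL of the host's services into downtime too
    author       = $icingaApiUser
    comment      = $comment                    # the downtime comment
    start_time   = $now                        # starting from now...
    end_time     = $now + ($dmin * 60)         # ...for DMIN minutes
    fixed        = $true
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://$($icingaApiHost):5665/v1/actions/schedule-downtime" `
    -Headers $headers -ContentType "application/json" -Body $body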
I hope you can use and enjoy this script for your daily work.
Author
Juergen Vigna
I have over 20 years of experience in the IT sector. After my first experience in the field of software development for public transport companies, I decided to join the young and growing team of Würth Phoenix. Initially, I was responsible for the internal Linux/Unix infrastructure and the management of CVS software. Afterwards, my main challenge was to establish the now well-known IT System Management Solution WÜRTHPHOENIX NetEye. As a Product Manager I started building NetEye from scratch, analyzing existing open source projects, extending them and finally joining them into one single powerful solution. After that, my job turned into a passion: constant development, customer installations and support became a personal matter. Today I use my knowledge as a NetEye Senior Consultant and NetEye Solution Architect at Würth Phoenix.