First of all, I’ll briefly explain what the “Tornado” in NetEye actually is.
Tornado is a Complex Event Processor that receives reports of events from data sources such as monitoring, email, and SNMP traps. It matches them against the rules you’ve configured and executes the actions associated with the matching rules, such as sending notifications, logging to files, or annotating events in a time series graphing system like Grafana.
Recently I had a customer who wanted to display their incoming SNMP traps as alerts in NetEye monitoring, and at the same time store them in their Elastic database. (I should mention in advance that the SNMP traps reach the master monitoring system via various NetEye satellites.)
To implement this requirement with the standard NetEye installation, I decided to use Tornado. As a first step, I created a new data stream called snmptraps-archive in Kibana and gave the user “Tornado” write access to it.
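For reference, the same preparation can also be done in the Kibana Dev Tools console instead of the UI. The following is only a minimal sketch under my own assumptions: the index template, the role name snmptraps_writer and the chosen privileges are illustrative, not the exact objects from my installation.

    # Index template that makes snmptraps-archive a data stream (template name and mapping are illustrative)
    PUT _index_template/snmptraps-archive
    {
      "index_patterns": ["snmptraps-archive*"],
      "data_stream": {},
      "template": {
        "mappings": {
          "properties": {
            "@timestamp": { "type": "date" }
          }
        }
      }
    }

    # Role that allows writing into the data stream; assign it to the "Tornado" user (role name is an assumption)
    PUT _security/role/snmptraps_writer
    {
      "indices": [
        {
          "names": ["snmptraps-archive*"],
          "privileges": ["create_doc", "create_index", "auto_configure"]
        }
      ]
    }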
I then created an additional rule in Tornado under the ruleset snmptrap and set up the new Elasticsearch action. The action definition can be seen in the following screenshot:
Let me annotate what you’re seeing here.
#endpoint: The local Elasticsearch server must be specified as the endpoint.
#index: The index in which Tornado should store the SNMP traps is specified — here it’s the new snmptraps-archive data stream we defined above.
#data: Here you define the content of the document in Elastic, i.e. which fields of the SNMP trap are written to Elastic. In my example I add the @timestamp and a username (so I know the document was written by Tornado) plus the entire trap.
#auth: In this section we have to set up authentication with Elastic. Since the certificates for the Tornado user are already defined in NetEye, I use them to authenticate myself in Elastic.
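Since a screenshot is hard to copy from, here is a rough JSON sketch of what such an Elasticsearch action can look like. The four fields match the annotations above; the endpoint hostname, the ${...} expansions, the auth method name and the certificate paths are assumptions based on a typical NetEye setup, so adjust them to your own installation (JSON allows no comments, hence the explanation stays up here).

    {
      "id": "elasticsearch",
      "payload": {
        "endpoint": "https://elasticsearch.neteyelocal:9200",
        "index": "snmptraps-archive",
        "data": {
          "@timestamp": "${event.created_ms}",
          "user": "tornado",
          "trap": "${event.payload}"
        },
        "auth": {
          "type": "PemCertificatePath",
          "certificate_path": "/neteye/shared/tornado/conf/certs/tornado.crt.pem",
          "private_key_path": "/neteye/shared/tornado/conf/certs/private/tornado.key.pem",
          "ca_certificate_path": "/neteye/shared/tornado/conf/certs/root-ca.crt"
        }
      }
    }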
As soon as this rule is activated, the traps are written to the desired index.
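To check that everything works, you can for example trigger a test trap and then look for the newest document in Kibana Dev Tools; the query below is just a quick way to peek at the most recent entry in the data stream.

    GET snmptraps-archive/_search
    {
      "size": 1,
      "sort": [
        { "@timestamp": "desc" }
      ]
    }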
Have fun trying it out.
These Solutions are Engineered by Humans
Did you find this article interesting? Does it match your skill set? Our customers often present us with problems that need customized solutions. In fact, we’re currently hiring for roles just like this and others here at Würth Phoenix.
Author
Tobias Goller
I started my professional career as a system administrator.
Over the years, my responsibilities shifted from day-to-day administration to the architectural planning of systems.
During my time at Würth Phoenix, my focus moved to installing and consulting on the IT system management solution WÜRTHPHOENIX NetEye.
Today I take care of planning and implementing customer projects around our unified monitoring solution.