Some time ago I published an article about how to store the NetEye SMS Protocol log in an ELK environment. After using it for a while, I discovered that it was not completely correct, as the time/date handling in the Logstash filters is a bit more complicated than I thought. In particular, the date is written in the SMS protocol file like this:
Wed Jun 29 10:30:22 CEST 2016
And we used this Logstash date filter to convert it:
date {
  locale => "en"
  match => [ "sms_timestamp_text", "EEE MMM dd HH:mm:ss" ]
}
It seemed to work, but after some time (a few days later, when the next month began) we discovered that the date in the first days of the month looks like this:
Fri Jul 1 10:30:22 CEST 2016
As we had a textual time zone, which the date filter does not support, in the first draft we used this grok rule to parse sms_timestamp_text:
match =>[ "message", "%{SMS_TIMESTAMP_SHORT:sms_timestamp_text}
%{WORD:timezone} %{YEAR}:%{INT:sms_phonenumber}:%{GREEDYDATA:sms_text}"
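SMS_TIMESTAMP_SHORT is a custom pattern defined in a grok patterns file. Its exact definition is not shown here, but a sketch along these lines, built only from standard grok patterns (an assumed reconstruction, not the literal pattern we used), would match a timestamp without time zone and year:

# Matches e.g. "Wed Jun 29 10:30:22" (day name, month, day of month, time)
SMS_TIMESTAMP_SHORT %{DAY} %{MONTH} %{MONTHDAY} %{TIME}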
We then discovered that our date filter no longer worked, because "dd" matches only a two-digit day. So how do we handle a day that cannot always be matched by "d", nor always by "dd"? After studying the filter documentation I found the solution: the date filter accepts several patterns ("or" rules), so it can match more than one date format. We changed the filter like this:
date {
  locale => "en"
  match => [ "sms_timestamp_text", "EEE MMM dd HH:mm:ss Z yyyy", "EEE MMM d HH:mm:ss Z yyyy" ]
}
Note the new Z and yyyy parameters: if we do not match the complete date, the conversion does not work correctly. To be able to parse the full timestamp, the grok pattern match also had to change, like this:
match => [ "message", "%{SMS_TIMESTAMP:sms_timestamp_text}:%{INT:sms_phonenumber}:%{GREEDYDATA:sms_text}" ]
As I said earlier, the Logstash date filter cannot parse textual time zones, but that is exactly what we have here. What should we do? We know that our dates are always in Western Europe, so we can solve this with a mutate filter that rewrites the textual zone into a numeric offset.
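A minimal sketch of such a mutate filter, assuming CET/CEST are the only zone names that ever appear in the log (the replacements would need to be extended for other zones), placed in the pipeline before the date filter:

mutate {
  # Replace the textual zone with the numeric offset that the "Z" token
  # in the date patterns can parse. CEST must be handled before CET,
  # because gsub applies the replacements in order and CET is a
  # substring of CEST.
  gsub => [
    "sms_timestamp_text", "CEST", "+0200",
    "sms_timestamp_text", "CET",  "+0100"
  ]
}

With the zone rewritten this way, both date patterns shown above match, and the event timestamp ends up correct in Elasticsearch.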
Author
Juergen Vigna
I have over 20 years of experience in the IT branch. After first experiences in the field of software development for public transport companies, I finally decided to join the young and growing team of Würth Phoenix. Initially, I was responsible for the internal Linux/Unix infrastructure and the management of CVS software. Afterwards, my main challenge was to establish the now well-known IT System Management Solution WÜRTHPHOENIX NetEye. As a Product Manager I started building NetEye from scratch, analyzing existing open source models, extending and finally joining them into one single powerful solution. After that, my job turned into a passion: constant development, customer installations and support became a personal matter. Today I use my knowledge as a NetEye Senior Consultant as well as NetEye Solution Architect at Würth Phoenix.