Grok is a plug-in installed by default in Logstash, which ships with the Elastic Stack (ELK – Elasticsearch, Logstash and Kibana), one of the integrated modules in our NetEye Unified Monitoring solution.
What is this plug-in for?
First of all, Grok is an English neologism that means “to understand profoundly, intuitively or by empathy, to establish a rapport with”. Grok (the plug-in) is the best way to parse an unstructured log by translating it into structured data.
In fact, it’s thanks to Grok that Logstash is able to interpret the logs it receives: Grok extracts the fields that will be indexed in Elasticsearch and displayed in Kibana.
Grok works by pattern matching: it detects correspondences in each line of a log and stores the data we need in named, structured fields.
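For example, consider this hypothetical syslog line (the line, the pattern and the field names below are purely illustrative):

Raw line:
  Mar 12 10:05:01 fw01 sshd[2412]: Accepted password for admin

Grok pattern:
  %{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{WORD:program}\[%{POSINT:pid}\]: %{GREEDYDATA:message}

Extracted fields:
  timestamp => "Mar 12 10:05:01"
  logsource => "fw01"
  program   => "sshd"
  pid       => "2412"
  message   => "Accepted password for admin"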
When does Grok help us?
Suppose we add a new firewall to NetEye Log Management that will send its logs to rsyslog. According to the applied configuration, these logs will be saved in the directory /var/log/rsyslog/ and read by Logstash. However, Logstash on its own is not able to interpret a log: Grok reads the content line by line and extracts the fields that we have defined.
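As a sketch of that reading step, the Logstash input side could look like the snippet below (the exact path pattern and type value are assumptions based on the directory above, not the actual NetEye configuration):

input {
  file {
    # Read every log file that rsyslog writes for our devices
    path => "/var/log/rsyslog/**/*.log"
    type => "syslog"
    # On the first run, start from the beginning of existing files
    start_position => "beginning"
  }
}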
How are a Logstash configuration file and the Grok syntax structured?
I admit that the syntax of a Logstash file is not that simple, but with a little work, we can soon understand it.
filter {
  # Process only the syslog events generated by the Safed agent
  if [type] == "syslog" and [message] =~ /.*Safed\[\d+\]\[\d+\].*/ {
    grok {
      # Extract the timestamp, the source host and the actual message
      match => [ "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} Safed\[%{POSINT}\]\[%{POSINT}\]:%{SPACE}+ %{GREEDYDATA:message}" ]
      add_field => [ "received_at", "%{@timestamp}" ]
      add_tag => [ "SAFED" ]
      # Replace the original raw message with the extracted one
      overwrite => [ "message" ]
      break_on_match => false
    }
  }
}
In this excerpt from the NetEye Logstash files, we see how the configuration file is structured:
filter – identifies a Logstash component that is able to process the logs it receives
grok – identifies the plug-in used within Logstash to interpret the log and extract the data
A tool like the Grok Debugger (available in Kibana under Dev Tools) will allow us to build our Grok rules by checking the match against a sample log line in real time.
One word of advice: always start with %{GREEDYDATA} (which matches everything) and then gradually replace pieces of the log with specific patterns, working from the beginning of the line.
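To make this concrete, here is how such a refinement could proceed on the hypothetical line from before, one piece at a time:

# Step 1 – match the whole line
%{GREEDYDATA:message}

# Step 2 – peel off the timestamp
%{SYSLOGTIMESTAMP:timestamp} %{GREEDYDATA:message}

# Step 3 – peel off the source host
%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{GREEDYDATA:message}

...and so on, until only the free-text part of the line remains in %{GREEDYDATA:message}.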
With a little practice, you’ll realize that it’s simpler than it seems.
Author
Massimiliano De Luca
Hi, my name is Massimiliano and I'm the youngest SI Consultant at Würth Phoenix (or else my colleagues are very old).
I like: my son Edoardo (when he doesn’t cry), my pet-son Charlie, photography, mountains, Linux OS, open-source technology and everything I don't know.
I don't like: giving up, the Windows blue screen, buffering while I’m watching a movie, latecomers, and fake news on the internet.
I worked on the VEGA project of the European Space Agency, and now I'm very happy to have landed at this company.
I'm ready to share all of my knowledge and passion with our customers.