Some time ago I had the opportunity to use the machine learning functionality in Elastic for the first time. I was astonished at how easy it is to use, and how quickly it analyzes historical data.
In my particular case, I loaded Netflow data into Elasticsearch. I wanted to use this data to evaluate the utilization of the various WAN lines, to recognize problems promptly, and to calculate forecasts.
In order to achieve these goals, in addition to machine learning, I used the Netflow module from Logstash, which provides you with the standard Netflow dashboards.
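Enabling the Netflow module is mostly a matter of configuration. As a sketch, the module can be declared in `logstash.yml`; the UDP port here (2055) is an assumption and must match whatever your flow exporters actually send to:

```yaml
# logstash.yml (sketch): enable the Netflow module
# var.input.udp.port is an assumption -- use the port your exporters send to
modules:
  - name: netflow
    var.input.udp.port: 2055
```

Starting Logstash with the `--setup` flag (`bin/logstash --modules netflow --setup`) additionally loads the index template and the standard Netflow dashboards into Kibana.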
By using the Logstash Netflow module, the Netflow information is stored in Elasticsearch with the required fields. Based on these fields, I created a “single metric” job over the “bytes” field in Elastic’s machine learning module. When creating such a job, you can easily specify the bucket span, i.e. the interval over which the data is aggregated for analysis, for example 5 minutes.
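A single metric job is most easily created in the Kibana wizard, but the equivalent request against the anomaly detection API looks roughly like the following sketch. The job name `wan-bytes`, the field name `netflow.bytes`, and the exact API path (which varies between Elastic versions) are assumptions here:

```console
PUT _ml/anomaly_detectors/wan-bytes
{
  "analysis_config": {
    "bucket_span": "5m",
    "detectors": [
      { "function": "sum", "field_name": "netflow.bytes" }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```

The `sum` function aggregates the bytes per 5-minute bucket, which is what you want for line-utilization analysis; a `mean` or `max` detector would model different aspects of the same data.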
Finally, you define a name for the job and the period over which the analysis should run. After a short time, you can already evaluate the results in the “Anomaly Explorer”. As soon as you click on a time bucket, you get the details for that period and can open the corresponding “Single Metric Viewer”, which graphically displays the data line (the actual values) and the model bounds (the upper and lower limits predicted by the machine learning algorithm) as calculated by Elastic.
In the “Single Metric Viewer”, you can immediately see the “Forecast” button, which makes it easy to calculate forecasts.
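The same forecast can also be triggered through the API, which is useful for automation. As a sketch, again assuming the hypothetical job name `wan-bytes` and a one-week forecast horizon:

```console
POST _ml/anomaly_detectors/wan-bytes/_forecast
{
  "duration": "7d"
}
```

The forecast results are written alongside the job’s other results and then appear in the “Single Metric Viewer” as a shaded projection beyond the last ingested data point.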
I especially appreciate the possibility of using the machine learning functionality in Elastic to create an analysis with forecasts over all stored historical data.
To conclude, I just can’t emphasize enough how simple and intuitive it is to use machine learning in Elastic – one of those rare times when a surprise is truly a positive thing.