NetEye & EriZone User Group
Challenges and Opportunities for IT Management 4.0
Connectbay, Mantova, Thursday 19 October, 11:00 – 17:00
We are pleased to invite you to the NetEye & EriZone User Group on 19 October. The event offers a unique opportunity to discover the latest developments in IT System & Service Management, identify the requirements needed to comply with the GDPR (General Data Protection Regulation), and actively take part in shaping the next evolutionary phase of our solutions.
Machine learning and anomaly detection are being mentioned with increasing frequency in performance monitoring. But what are they and why is interest in them rising so quickly?
From Statistics to Machine Learning
There have been several attempts to explicitly differentiate between machine learning and statistics. It is not so easy to draw a line between them, though.
For instance, different experts have said:
- “There is no difference between Machine Learning and Statistics.” (in terms of the maths, the books, the teaching, and so on)
- “Machine Learning is completely different from Statistics.” (and is the only future of both)
- “Statistics is the true and only one.” (Machine Learning is just a different name used for part of statistics by people who do not understand what they are really doing)
In short, we will not settle this question here. For people working in monitoring, however, it is still relevant that the machine learning and statistics communities currently focus in different directions, and that it can be convenient to use methods from both fields. The statistics community focuses on inference: it wants to infer the process by which the data were generated. The machine learning community puts the emphasis on predicting what future data are expected to look like. Obviously the two interests are not independent: knowledge about the generating model can be used to build an even better predictor or anomaly detection algorithm.
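To make the connection concrete, here is a minimal sketch of how a simple statistical model (mean and standard deviation of recent history) can be turned into a prediction-oriented anomaly detector for a performance metric. The data, window size, and threshold are invented for illustration; production systems use far more sophisticated models.

```python
import statistics

def detect_anomalies(series, window=20, threshold=3.0):
    """Flag points that deviate more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and abs(series[i] - mean) > threshold * stdev:
            anomalies.append(i)
    return anomalies

# Stable response times (around 100 ms) with one sudden spike
data = [100 + (i % 5) for i in range(40)]
data[30] = 250
print(detect_anomalies(data))  # the spike at index 30 is flagged
```

The statistical part (fitting mean and deviation to past data) describes the generating process; the machine-learning flavour comes from using that model to judge each new observation as it arrives.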
Network traffic is becoming more and more heterogeneous. In many cases it is no longer enough to monitor a system as we have done in the past. Here I will present the key ingredients, according to Würth Phoenix, for successful state-of-the-art performance monitoring and proactive analysis of those applications that are critical for your business.
Combining User Experience and Performance Metrics for new Insights
User experience is a very important factor. If your measurements seem to be in the right range BUT end users complain about slow applications, you need to act. For this reason, user experience combined with an overview of all monitored servers is the right place to start. In our opinion it is of vital importance to know when critical business applications begin to slow down, before your users start to complain.

You can achieve this by running continuous checks via Alyvix, our active user experience monitoring solution. Test cases can be written specifically for the most vital parts of your applications, and the functionality and speed of those parts can be checked as often as needed. The measured performance of each tested user interaction is then saved into the same central time series database as the performance metrics collected from all other sources of interest (such as Perfmon data, ESX performance data, etc.). It is then possible to perform a multi-server zoom and, with a single click, navigate to the most interesting servers during the time periods in which Alyvix detected problems.
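The core idea of that workflow can be sketched in a few lines: find the time periods in which the user experience degraded, then rank servers by their load during exactly those periods. The data shapes and the `alyvix_ms` / `cpu_pct` samples below are invented for illustration; in practice both series would come from the central time series database.

```python
# Hypothetical sketch: correlate slow user transactions with
# server-side metrics sampled at the same timestamps.

SLOW_MS = 2000  # transactions slower than this mark a "problem period"

# (timestamp, duration in ms), as a UX check might record per test case
alyvix_ms = [(1, 300), (2, 320), (3, 2500), (4, 2600), (5, 310)]

# per-server CPU usage sampled at the same timestamps
cpu_pct = {
    "srv-app01": {1: 35, 2: 40, 3: 95, 4: 97, 5: 38},
    "srv-db01":  {1: 50, 2: 52, 3: 55, 4: 54, 5: 51},
}

# 1. find the time periods in which the user experience degraded
problem_ts = [ts for ts, ms in alyvix_ms if ms > SLOW_MS]

# 2. rank servers by average CPU load during those periods
ranking = sorted(
    ((sum(samples[ts] for ts in problem_ts) / len(problem_ts), name)
     for name, samples in cpu_pct.items()),
    reverse=True,
)
for load, name in ranking:
    print(f"{name}: {load:.0f}% CPU during slow transactions")
```

Here the application server stands out immediately, which is the kind of "zoom to the interesting servers" navigation the dashboard automates.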
Synthetic Application Monitoring:
Allows monitoring applications from the user’s point of view by simulating transaction sequences, then measuring and recording the perceived performance.
Would you like to be independent of subjective statements such as “application XY is slow”, or of outage reports from your users? In that case, the concept of synthetic application monitoring and the corresponding monitoring tool Alyvix are right for you. If you are interested in getting to know this concept and tool, the Synthetic Monitoring Training offered by Würth Phoenix might be the right choice.
Synthetic Monitoring Training 2017
13th to 14th June – Bolzano/Italy
20th to 21st June – Niedernhall/Germany
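The concept behind a synthetic check can be sketched very simply: drive a transaction the way a user would, time it, and map the elapsed time to a monitoring state. Alyvix does this at the GUI level; in this sketch a plain function stands in for the simulated transaction, and the thresholds are invented for illustration.

```python
import time

def run_transaction():
    """Stand-in for a scripted user interaction (login, search, ...)."""
    time.sleep(0.05)  # pretend the application needs 50 ms to respond

def synthetic_check(transaction, warn_s=1.0, crit_s=3.0):
    """Run one transaction and return its state and perceived duration."""
    start = time.perf_counter()
    transaction()
    elapsed = time.perf_counter() - start
    state = "OK" if elapsed < warn_s else "WARNING" if elapsed < crit_s else "CRITICAL"
    return state, elapsed

state, elapsed = synthetic_check(run_transaction)
print(f"{state}: transaction took {elapsed:.3f}s")
```

Running such a check on a schedule turns "the application feels slow" into an objective, recorded measurement.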
Who really knows which protocols are used in the local network? Usually with NetFlow you can distinguish traffic by L4 port (80 = HTTP, 443 = HTTPS, …), but this is no longer sufficient. Some applications use dynamic ports (see NFS, FTP, routed SAP, …), and several applications share the same ports. How can we distinguish them?
Applications grow and change really fast (like everything in the IT world), and it is not easy to keep your NetFlow analysis tool aligned with this evolution.
Ntopng is able to automatically detect the applications that are generating the traffic, without requiring you to define and maintain filters.