So you have MSSQL databases and you’d like to keep an eye on their performance. With NetEye this is quite easy: the tools you need are already available on your NetEye server — InfluxDB, the Telegraf agent, and Grafana for visualizing your dashboards.
The SQL Server Input Plugin provides metrics for your SQL Server instance. It currently works with SQL Server 2008 and later. The recorded metrics are lightweight and rely on the Dynamic Management Views supplied by SQL Server. What you need first is a login on your SQL Server, created more or less with the following commands:
USE master;
GO
CREATE LOGIN [neteye] WITH PASSWORD = N'mystrongpassword';
GO
GRANT VIEW SERVER STATE TO [neteye];
GO
GRANT VIEW ANY DEFINITION TO [neteye];
GO
On your NetEye server, create a config file (/etc/telegraf/telegraf_mssql.conf) for Telegraf using this template:
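As a minimal sketch (the server address, port, and credentials below are placeholders you must adapt to your environment), the config file could look like this:

```toml
# Collect metrics from SQL Server via Telegraf's sqlserver input plugin
[[inputs.sqlserver]]
  # Connection string for the monitoring login created above;
  # adjust Server and Password to match your installation
  servers = [
    "Server=mysqlserver.example.com;Port=1433;User Id=neteye;Password=mystrongpassword;app name=telegraf;log=1;"
  ]

# Write the collected metrics to the local InfluxDB instance
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"
```

The agent can then be pointed at this file, e.g. with `telegraf --config /etc/telegraf/telegraf_mssql.conf`.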
Now start a Telegraf agent using that config file. Telegraf will immediately begin sending data to InfluxDB. Then go to Grafana and import the official dashboard with ID 4730. You will now have a very nice dashboard with all the performance data for your DB instance.
The dashboard provides KPIs and graphs for the metrics collected in real time by the Telegraf agent and stored in InfluxDB.
Author
Juergen Vigna
I have over 20 years of experience in the IT industry. After early work in software development for public transport companies, I decided to join the young and growing team of Würth Phoenix (now Würth IT Italy). Initially, I was responsible for the internal Linux/Unix infrastructure and the management of CVS software. Afterwards, my main challenge was to establish the now well-known IT System Management Solution WÜRTHPHOENIX NetEye. As a Product Manager I built NetEye from scratch, analyzing existing open source tools, extending them and finally combining them into one single powerful solution. After that, my job turned into a passion: constant development, customer installations and support became a personal matter. Today I apply my knowledge as a NetEye Senior Consultant and NetEye Solution Architect at Würth Phoenix.