Sending Monitoring Notifications via Telegram or Microsoft Teams
Problems that arise are usually reported to the responsible IT staff by e-mail or SMS. But are those really all the channels available for sending notifications?
Of course not. Here are two particularly clever examples:
Notifications via the Telegram app (CLI)
Notifications via the Office 365 Web API (Microsoft Teams)
Telegram
As you probably know, the WhatsApp-like app "Telegram" lets you send messages to contacts, groups and channels. Unlike WhatsApp, Telegram provides an installable command line interface that allows you to script the sending of messages from a computer. To use it, download the Telegram binaries and install them on your monitoring server. Then define the notification command in your monitoring system as follows:
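A minimal sketch of such a notification command, assuming the telegram-cli client is installed on the monitoring server; the recipient name NetEye_Admins and the message text are hypothetical placeholders, not taken from the original article:

```shell
# Hypothetical notification command for the monitoring server.
# Assumes telegram-cli is installed and the monitoring host is
# already registered as a Telegram sender.
# -W waits for the dialog list to load before executing,
# -e runs the given client command and exits.
# "msg <peer> <text>" sends a message to a contact or group known
# to the client (spaces in peer names are written as underscores).
telegram-cli -W -e "msg NetEye_Admins Host $HOSTNAME is DOWN"
```

In a monitoring system such as NetEye/Nagios, this line would typically be wrapped in a notification command definition, with the peer and message built from the usual host and service macros.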
Of course, you should first register your monitoring host as a Telegram sender (just as you would on your smartphone). You can use the same phone number for this as you use for sending SMS.
Office 365
In Office 365 you can register groups. For the notification you need exactly this group's GUID; with it, you can use the following command to send your notifications:
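A common way to deliver such a message is an incoming-webhook URL that embeds the group's GUID. The sketch below uses curl against a placeholder webhook URL; the GUID, the URL tail and the message text are illustrative assumptions, not values from the original article:

```shell
# Hypothetical example: POST a simple JSON payload to an
# Office 365 / Microsoft Teams incoming webhook.
# Replace <GROUP-GUID> and the rest of the URL with the webhook
# address generated for your registered group.
curl -H "Content-Type: application/json" \
     -d '{"text": "NetEye alert: service CPU_Load is CRITICAL"}' \
     "https://outlook.office.com/webhook/<GROUP-GUID>/IncomingWebhook/..."
```

The webhook accepts a JSON body whose `text` field is rendered as the message in the Teams channel, so the same command can be parameterized with monitoring macros just like an e-mail notification command.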
Author
Juergen Vigna
I have over 20 years of experience in the IT sector. After my first experience in software development for public transport companies, I decided to join the young and growing team of Würth Phoenix (now Würth IT Italy). Initially, I was responsible for the internal Linux/Unix infrastructure and the management of CVS software. Afterwards, my main challenge was to establish the now well-known IT System Management solution WÜRTHPHOENIX NetEye. As a Product Manager I built NetEye from scratch, analyzing existing open source projects, then extending and finally joining them into one single powerful solution. After that, my job turned into a passion: continuous development, customer installations and support became a personal commitment. Today I apply this knowledge as a NetEye Senior Consultant and NetEye Solution Architect at Würth Phoenix.