Georg Kostner introduces the latest NetEye enhancements at the Open Source Conference in Milan
Last week Georg presented the enhanced IT Service Management offering built on NetEye at the yearly Open Source Conference in Milan. The breakout session highlighted the advantages of implementing NetEye for Network Traffic Monitoring and also focused on Al'exa. Al'exa, which has recently been integrated into the NetEye offering, simulates end-user behavior to verify the availability and reliability of all IT services. With this recent enhancement, NetEye can identify, even for outsourced services, which applications are consuming which bandwidth and could be the cause of possible performance losses.
Luca Deri, founder of ntop, showed the main features offered by his solution, which is integrated with NetEye. Network monitoring with ntop allows users to improve their network visibility, extending the standard metrics (e.g., packets, bytes) and analyzing in more detail the protocols used in the network (e.g., email, VoIP, Citrix / RDC).
With Elastic Observability we can create alerts on all the data we collect, such as logs, metrics, application services and synthetic monitoring. However, NetEye represents the main operational console from which to monitor the entire infrastructure. By sending alarms from Elastic… Read More
Node export in the Tornado Processing Tree was broken on Firefox. The bug was caused by a divergence between Firefox and Chrome in blob handling with CSP. The issue is resolved, and behavior is now consistent across both browsers. List of updated packages… Read More
Processing Tree Rendering Issue. We shipped a fix for a rendering bug in the Tornado UI Processing Tree. Under specific conditions, navigating back to the dashboard after expanding tree nodes caused the tree to render incorrectly: nodes would appear collapsed… Read More
Role Search Now Works in Access Control. We've fixed the search functionality in the Roles view under Configuration → Access Control, so you can now find roles instantly without any errors. List of updated packages to solve the issues mentioned… Read More
Running Ollama locally or on dedicated hardware is straightforward until you need to know whether a model is actually loaded in RAM, how fast it generates tokens under load, or when memory consumption reaches a threshold that affects other workloads. Read More
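The teaser above mentions checking whether a model is actually loaded in RAM. A minimal sketch of one way to do that, assuming Ollama's documented `/api/ps` endpoint (which lists models currently held in memory); the sample payload below is illustrative, not captured from a real server:

```python
import json

def summarize_loaded_models(ps_response: dict) -> list[dict]:
    """Return the name and approximate memory footprint (GiB) of each model
    currently loaded, given a parsed /api/ps response."""
    summary = []
    for model in ps_response.get("models", []):
        summary.append({
            "name": model.get("name"),
            "size_gib": round(model.get("size", 0) / 1024**3, 2),
        })
    return summary

# Illustrative payload mimicking the shape of an /api/ps response
sample = {
    "models": [
        {"name": "llama3:8b", "size": 8 * 1024**3},
    ]
}

print(summarize_loaded_models(sample))
# In practice the payload would come from the local Ollama server, e.g.:
#   requests.get("http://localhost:11434/api/ps").json()
```

Polling a summary like this on a schedule is one way to feed model load state and memory consumption into a monitoring threshold, as the post describes.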