NetEye Install and Upgrades: Moving to a Parallel Architecture
Hello everyone!
Today, I’d like to share an exciting improvement we’ve made to the installation and upgrade procedures in NetEye, introducing a faster and more efficient parallel architecture!
Why Modernize the Installation and Upgrade Processes?
At Würth Phoenix, we strive to make NetEye not only powerful but also highly efficient and reliable for our users. Yet until recently, the core installation and upgrade procedures of NetEye had grown inefficient and hard to maintain.
The original architecture relied on a sequential execution of scripts, resulting in long execution times, difficult maintenance, and challenges in scaling. For instance, installing NetEye in a cluster setup required manual intervention to orchestrate the scripts node-by-node, a time-intensive process.
We realized that in today’s fast-paced IT environments, where businesses demand agility and resilience, we needed a more modern, maintainable approach. That’s why we embarked on a journey to transition these operations to a parallel architecture.
How a Parallel Approach Transformed NetEye
The foundation of our transformation lies in Ansible, an agentless IT automation tool that enables repeatable, idempotent configurations. We began by replacing our sequential scripting framework with Ansible playbooks, ensuring consistency and reliability regardless of the system’s state.
The highlight of this effort was the creation of a Python-based module that orchestrates the parallel execution of services while respecting interdependencies. Each service defines its dependencies in a JSON file, and the module builds a dependency graph to determine the optimal execution sequence.
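To make the idea concrete, here is a minimal Python sketch of that pattern: service dependencies (as they might be declared in per-service JSON files) are fed into a topological sorter, and each service runs in a thread pool as soon as everything it depends on has finished. The service names, the `run_service` callable, and the batch-by-batch scheduling are illustrative assumptions, not the actual NetEye module.

```python
# Sketch only: parallel service execution driven by a dependency graph.
# Service names and the run_service callable are hypothetical.
import json
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

def run_services_in_parallel(dependencies, run_service):
    """Execute each service once all of its dependencies are done.

    dependencies: dict mapping a service name to the list of services it
                  depends on, e.g. parsed from per-service JSON files.
    run_service:  callable that performs the actual work for one service.
    """
    sorter = TopologicalSorter(dependencies)
    sorter.prepare()  # raises CycleError if the graph contains a cycle
    with ThreadPoolExecutor() as pool:
        while sorter.is_active():
            ready = sorter.get_ready()  # services whose deps are all satisfied
            futures = {pool.submit(run_service, s): s for s in ready}
            # Simplification: wait for the whole batch before scheduling more.
            for future, service in futures.items():
                future.result()      # propagate failures to the caller
                sorter.done(service)  # unblock dependent services

# Example with dependencies as they might appear in JSON files.
deps = json.loads(
    '{"icingaweb2": ["icinga2", "mariadb"], "icinga2": ["mariadb"], "mariadb": []}'
)
order = []
run_services_in_parallel(deps, order.append)
print(order)  # → ['mariadb', 'icinga2', 'icingaweb2']
```

Independent services end up in the same "ready" batch and run concurrently, which is where the speedup over a strictly sequential script comes from.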
This approach dramatically reduced execution times for us: processes that previously took minutes to complete now finish in just a few seconds.
Challenges Along the Way
Transitioning to a parallel architecture was not without its hurdles. As we migrated from our legacy system, ensuring team-wide adoption and minimizing disruptions became key priorities. We tackled these challenges using Agile methodologies:
Iterative Migration: We introduced the new command neteye install alongside the existing neteye_secure_install for a seamless transition
Continuous Feedback: We conducted frequent reviews and tests to validate changes before deprecating legacy components
Documentation and Training: We produced clear guides and tutorials to ensure both developers and users could adapt to the new procedures
These practices allowed us to manage change effectively and keep our development teams aligned.
What’s Next for NetEye?
While the transition to a parallel architecture has delivered significant performance gains and simplified maintenance, there are several areas where further improvements could enhance the system even more:
Comprehensive Documentation: Ensuring thorough and up-to-date documentation is crucial for maintaining continuity and for easing the onboarding process for new team members. Clear, detailed guides will support both developers and users in navigating the updated procedures effectively.
Continuous Optimization of Parallelization: Although the current implementation has significantly reduced execution times, there is still potential for refinement. Enhancing the logic behind dependency management and service configuration could lead to even greater reductions in execution time and resource utilization.
Advanced Monitoring and Logging: Introducing a more sophisticated monitoring and logging system for installation and upgrade processes will improve reliability by enabling quicker identification and resolution of potential issues. Enhanced logs and dashboards will provide deeper insights into performance and errors, making the system even more robust.
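As a rough illustration of the kind of structured, per-step logging this could involve, the sketch below times each installation step and emits one machine-readable JSON line with its name, outcome, and duration. The `run_logged` helper and the step name are hypothetical examples, not part of NetEye.

```python
# Illustrative sketch of structured per-step logging; names are hypothetical.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("install")

def run_logged(name, step_fn):
    """Run one installation step and log a JSON record with its name,
    outcome, and duration; a failure is recorded rather than re-raised."""
    start = time.monotonic()
    try:
        step_fn()
        outcome = "ok"
    except Exception:
        outcome = "failed"
    record = {
        "step": name,
        "outcome": outcome,
        "duration_s": round(time.monotonic() - start, 3),
    }
    log.info(json.dumps(record))  # one machine-readable line per step
    return record

run_logged("configure-mariadb", lambda: None)
```

Records in this shape are straightforward to ship to a dashboard, making it quicker to spot which step of an installation or upgrade failed and how long each one took.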
Our work demonstrates how modernizing infrastructure can unlock new efficiencies and make life easier for both developers and users.
Conclusion
The shift to a parallel architecture for NetEye installation and upgrades wasn’t just about improving performance – it was also about building a foundation for scalability, reliability, and innovation. At Würth Phoenix, we’re committed to continuously improving NetEye to meet the evolving needs of our customers.
Have you undertaken similar migrations in your own projects? Share your experiences with us in the comments!