Blog Entries

21. 03. 2026 Andrea Mariani AI

Reflections on Running LLMs Locally: Why It Is Worth Running Them on Your Own Infrastructure

Model selection, infrastructure sizing, vertical fine-tuning and MCP server integration. All explained without the fluff. Why Run AI on Your Own Infrastructure? Let’s be honest: over the past two years, LLMs have evolved from a tool perceived as experimental and reserved for researchers into something companies use every day for concrete, practical tasks. And with…

Read More
26. 12. 2025 Davide Sbetti AI, NetEye

The Model Context Protocol (MCP): Hands-on with NetEye!

Hi! Today I’d like to discuss a rather hot topic in this world newly full of LLMs, namely MCP servers! We’ll first see what MCP is and why it was created, then move on to a short hands-on with NetEye, in particular the Elastic Stack feature module. Wait, what? MCP? What are we…

Read More
26. 09. 2025 Simone Ragonesi AI, Artificial Intelligence, Offensive Security, Red Team

The Evolving Security Landscape of MCP

Introduction: What is MCP? The Model Context Protocol is an emerging open standard that defines how large language models and AI agents interact with external tools, services, and data sources. Instead of every AI provider building its own proprietary “tool calling” system, MCP provides a common protocol (typically over JSON-RPC) to expose capabilities such as…
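To make that "common protocol" concrete, here is a minimal sketch of the kind of JSON-RPC 2.0 exchange MCP standardizes: the client asks the server to list its tools, and the server advertises its capabilities. The tool name `search_logs` and its schema are hypothetical, invented purely for illustration.

```python
import json

# A minimal MCP-style JSON-RPC 2.0 request asking a server
# to list the tools it exposes (MCP defines a "tools/list" method).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A sketch of the kind of response such a server might return,
# advertising one hypothetical tool named "search_logs".
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_logs",
                "description": "Search indexed log entries",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# On the wire, both sides simply exchange these JSON payloads.
print(json.dumps(request))
print(json.dumps(response["result"]["tools"][0]["name"]))
```

Because every provider speaks the same envelope, an agent can discover and call tools on any MCP server without provider-specific glue code.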

Read More