30. 03. 2026 Davide Sbetti APM, Log Management, Log-SIEM, NetEye

Sending OTel Data to Elasticsearch: Tenant Segregation through OAuth

Hi everyone!

Today I’d like to share with you an investigation we undertook related to ingesting OpenTelemetry data in Elasticsearch, while maintaining tenant segregation from end to end.

The Scenario

Let’s imagine we have multiple customers, where in this case “multiple” may well be in the order of hundreds, who would like to send OTel data to our Elasticsearch cluster. Of course, to grant each customer access to only their own data, we need to write each customer’s data to a different namespace.
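
Concretely, Elasticsearch data streams follow the `<type>-<dataset>-<namespace>` naming scheme, so one namespace per tenant would produce data streams along these lines (tenant names here are purely illustrative):

```text
logs-generic-customer_a
logs-generic-customer_b
metrics-generic-customer_a
```

Access control then becomes a matter of granting each customer’s role read privileges only on the patterns ending in their own namespace.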

How to Tackle It?

When looking at the Elastic world, there are a couple of options that can achieve our goal, such as using an APM Server.

This is of course great: it works well and is also the standard solution that is often applied in NetEye. With one caveat, though: each APM Server can generally write only to a specific namespace (which for us is the equivalent of a tenant), and hence we can see it as being “dedicated” to a specific customer.

Running multiple APM Servers? Yes, that works, except that we can run only one APM integration per Elastic Agent, which means we would need a separate Elastic Agent for each customer. This may already be the case, and it’s a good thing if other data is also being collected from that specific customer, but if the only goal is to receive OTel data, this approach may not scale well.

An Alternative Approach: OTel Collector + Keycloak

We saw how the standard solution may not really scale well in all cases, which is what brought us to investigate the topic. Clearly, the main goal is not only to find a solution that scales well in terms of resources, but that is also secure and easy to manage.

That’s when we started looking at standard OTel Collectors and found that, in the OTel Collector contrib image, the OIDC authenticator extension (which enables OAuth-based authentication) is present by default. This gave us an alternative approach to look at!

We already have a central tool to manage authentication in NetEye (namely Keycloak) and, in any case, each customer’s users need to be present in Keycloak anyway to grant them access to the system and hence the ability to visualize their data. What if we could also use it for this purpose?

It turns out that we actually can. The OIDC extension of the OTel contrib Collector not only allows clients to authenticate to the collector using the OAuth protocol, but also exposes the claims present in the JWT token used in the process.
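
For illustration, the decoded payload of an access token issued by Keycloak might look like the following, where `tenant` is the claim we’ll rely on (the claim name and all values here are hypothetical):

```json
{
  "iss": "https://<your-issuer>",
  "aud": "account",
  "username": "customer-a-collector",
  "tenant": "customer_a"
}
```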

The Final Tested Approach

This brought us to the final approach that we tried: what about creating a client in Keycloak for each customer who needs to send data to our collector, with a claim representing the tenant hard-coded in the client configuration?
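
In Keycloak, this can be achieved with a “Hardcoded claim” protocol mapper on the customer’s client. As a sketch (the mapper name, claim name and value below are just placeholders), the relevant fragment of a client export could look like:

```json
{
  "name": "tenant-mapper",
  "protocol": "openid-connect",
  "protocolMapper": "oidc-hardcoded-claim-mapper",
  "config": {
    "claim.name": "tenant",
    "claim.value": "customer_a",
    "jsonType.label": "String",
    "access.token.claim": "true"
  }
}
```

Since the claim value is fixed in the client configuration, a customer cannot change the tenant their data ends up in, no matter what they send.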

In this way, applications (or collectors) running on the client side can authenticate to our collector(s) using the OAuth protocol, and the signed JWT token shares the destination namespace with us. We can then inject that value into each event we process, routing the data directly to the correct namespace.
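
On the client side, for example, an OTel Collector could use the `oauth2client` extension (also available in the contrib distribution) to obtain and attach the token transparently. A minimal sketch, where all endpoints and credentials are placeholders:

```yaml
extensions:
  oauth2client:
    client_id: customer-a-client
    client_secret: <client-secret>
    token_url: https://<keycloak-host>/realms/<realm>/protocol/openid-connect/token

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp:
    endpoint: https://<central-collector>:4317
    auth:
      authenticator: oauth2client

service:
  extensions: [oauth2client]
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlp]
```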

The collector configuration on our side will thus resemble the following:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
        auth:
          authenticator: oidc
        tls:
          cert_file: /path/to/cert.crt
          key_file: /path/to/cert.key

extensions:
  oidc:
    issuer_url: https://<your-issuer>
    issuer_ca_path: /path/to/issuer-CA.crt
    username_claim: username
    audience: account

processors:
  batch:
    send_batch_size: 1000
    timeout: 1s
    send_batch_max_size: 1500
  batch/metrics:
    send_batch_max_size: 0
    timeout: 1s
  attributes/tenant:
    actions:
      - key: data_stream.namespace
        from_context: auth.claims.tenant
        action: upsert

service:
  extensions: [oidc]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [attributes/tenant, batch/metrics]
      exporters: [elasticsearch]
    logs:
      receivers: [otlp]
      processors: [attributes/tenant, batch]
      exporters: [elasticsearch]
    traces:
      receivers: [otlp]
      processors: [attributes/tenant, batch]
      exporters: [elasticsearch]

exporters:
  elasticsearch:
    endpoints:
      - 'https://<elasticsearch-url>:9200'
    api_key: '<api-key-able-to-write>'
    mapping:
      mode: otel

This solution thus achieves secure multi-tenant ingestion even with a single collector, while still leaving room for horizontal scaling: since each collector can process data coming from any tenant, multiple collectors can be added to split the load when the volume of incoming data is high.

Furthermore, since the routing decision doesn’t rely on fields in the documents themselves (which a client could forge), it’s more secure than routing events based on document content.

Conclusion

In this blog post we saw how it’s possible to use OTel Collectors, together with the OIDC authenticator extension, to ensure data is indexed in the correct namespace, maintaining tenant segregation throughout the whole process while reducing the amount of resources needed for ingestion.

These Solutions are Engineered by Humans

Did you find this article interesting? Does it match your skill set? Our customers often present us with problems that need customized solutions. In fact, we’re currently hiring for [roles just like this](https://www.wuerth-phoenix.com/en/job/customer-solution-engineer/) and [others](https://www.wuerth-phoenix.com/en/all-job-offers/) here at Würth IT Italy.

Davide Sbetti

Hi! I'm Davide and I'm a Software Developer with the R&D Team in the "IT System & Service Management Solutions" group here at Würth IT Italy. IT has been a passion for me ever since I was a child, and so the direction of my studies was...never in any doubt! Lately, my interests have focused in particular on data science techniques and the training of machine learning models.
