16. 09. 2025 Davide Sbetti DevOps, Kubernetes

Monitoring DBs through PMM: a Migration to OpenShift

Hi 😀

Today I’d like to walk you through a migration we performed on a service that’s used internally to monitor the performance of various DBs, gathering data that’s especially useful for troubleshooting.

This tool is the Percona Monitoring and Management (PMM) platform, which uses agents or direct access to the supported DBMSs (MySQL, MongoDB and PostgreSQL) to monitor DB performance, gathering data not just about the machine itself but also about the queries being executed, enabling in-depth analysis to guide the optimization process.

Initially, we deployed PMM as a Podman container on a VM, but at some point we decided, together with the team performing the analysis, to migrate it to our OpenShift cluster for better scalability and easier update handling.

Deploying PMM in OpenShift

To deploy PMM in OpenShift, we decided to use the official Helm Chart, provided by Percona on their GitHub repo.

Given that we often use Kustomize, our base file is similar to this:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: pmm
    releaseName: pmm
    repo: https://percona.github.io/percona-helm-charts/
    namespace: <our-namespace>
    version: 1.4.7
    valuesFile: values.yaml
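Since Helm chart inflation in Kustomize is an opt-in feature, rendering and applying this looks roughly like the following (assuming the kustomize and oc CLIs are available):

# Render the Kustomization (including the Helm chart) and apply it to the cluster
kustomize build . --enable-helm | oc apply -f -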

Our values.yaml file is then used to specify some variables that allow us to slightly tune certain aspects of the deployment, such as avoiding the auto-generated secret for the admin password and the type of service used to expose it.
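To give an idea, the overrides look something like this; the exact key names depend on the chart version, so treat them as illustrative:

# values.yaml (illustrative; key names depend on the chart version)
secret:
  create: false        # don't auto-generate the admin password secret
  name: pmm-secret     # reference an existing secret instead
service:
  type: ClusterIP      # keep the Service internal; a Route handles external traffic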

One notable aspect is that PMM needs Application-Layer Protocol Negotiation (ALPN) to be active, given that we have an Nginx instance in front of various services. On OpenShift this means that, when using a Route to expose the service, we cannot use edge TLS termination, since ALPN is not supported there: to be able to use ALPN, the route must be set to either re-encrypt or passthrough.

In our case, we decided on a passthrough for simplicity, and mounted the certificates and dhparams on the PMM container using a projected volume in our values.yaml:

extraVolumes:
  - name: pmm-certificates
    projected:
      sources:
        - secret:
            name: pmm-certs
            items:
              - key: certificate.crt
                path: certificate.crt
              - key: certificate.key
                path: certificate.key
              - key: dhparam.pem
                path: dhparam.pem
        - configMap:
            name: your-ca
            items:
              - key: your-ca.pem
                path: ca-certs.pem
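The projected volume then needs to end up where PMM looks for its certificates; a minimal sketch, assuming the chart also exposes an extraVolumeMounts value and that PMM reads its certificates from /srv/nginx:

extraVolumeMounts:
  - name: pmm-certificates
    mountPath: /srv/nginx    # where PMM expects certificate.crt, certificate.key, dhparam.pem and ca-certs.pem
    readOnly: true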

This works since PMM does not overwrite or re-generate the certificates if it finds them already there.
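For reference, the passthrough Route could look roughly like this (host, names and port are placeholders for our actual setup):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: pmm
  namespace: <our-namespace>
spec:
  host: new-pmm.mydomain
  to:
    kind: Service
    name: pmm-service        # the Service created by the Helm chart
  port:
    targetPort: https        # the HTTPS port name exposed by that Service
  tls:
    termination: passthrough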

Migrating Data from the Old PMM

Okay, once we had successfully deployed the PMM instance on OpenShift, how could we migrate the data from the previous instance? We weren’t sure whether that was actually needed or whether the team performing the analysis would prefer a fresh installation, so we prepared a procedure to migrate the data just in case.

It turned out that we needed to first migrate the configurations and then the data, so that they would be linked correctly in the dashboards.

Percona offers a tool called PMM Dump, which we could use to export the data and the service configurations. However, it doesn’t allow us to also export the node configurations that we would like to link to the services. For that, we used the REST API that PMM offers!

To retrieve the list of nodes currently configured, we can use the following GET request:

curl -u '<user>:<password>' \
  https://old-pmm.mydomain/v1/inventory/nodes

We can then re-import each node into the new PMM using a POST request with the same content returned by the previous GET request, similar to this example:

curl -u "<user>:<password>" \ 
  https://new-pmm.mydomain/v1/inventory/nodes \ 
  -X POST -d '
{
    "generic": {
          "node_id":  "the-id-generated-by-the-pmm",
          "node_name":  "pmm",
          "address":  "<address>",
          "machine_id":  "the-id-of-the-machine",
          "distro":  "linux",
          "node_model":  "",
          "region":  "",
          "az":  "",
          "custom_labels":  {}
      }
}'

Unfortunately, the node_id is not preserved during the insertion and thus, to allow a match with the data imported afterwards, we need to modify it by executing the following command directly in the PMM container:

psql -U postgres -d pmm-managed -c \
  "update nodes set node_id = '<node-id>' where node_name = '<node-name>';"

After having successfully imported the nodes, we can use the PMM Dump tool to export the data from the old PMM instance and import it into the new one.

To export the data we can run:

pmm-dump export --dump-path=old-pmm-data \
  --pmm-url="https://<user>:<pass>@old-pmm.mydomain" \
  --ignore-load --export-services-info

And then to re-import them:

pmm-dump import --dump-path=old-pmm-data \
  --pmm-host https://new-pmm.mydomain --pmm-user <user> \
  --pmm-pass <password> --ignore-load

A not-so-side note: in the export command we used the --export-services-info flag, which lets us also export some metadata about the services being monitored. We found that this option actually had an issue with PMM 3 which led the export process to fail.

But since we ❤️ Open Source and believe in sharing knowledge, after fixing the issue locally in the original Go implementation and adjusting some tests, we opened a PR on the official PMM Dump GitHub repository, currently under review, to push the fix upstream.

Happy monitoring 📊 !



Author

Davide Sbetti

Hi! I'm Davide and I'm a Software Developer with the R&D Team in the "IT System & Service Management Solutions" group here at Würth Phoenix. IT has been a passion for me ever since I was a child, and so the direction of my studies was...never in any doubt! Lately, my interests have focused in particular on data science techniques and the training of machine learning models.
