Recently, we needed to upload build artifacts to allow developers to visualize Playwright test recordings.
Initially, we used a simple PVC and an NGINX server with basic authentication, but this approach has a major drawback: it doesn’t allow uploads from different namespaces. As a result, we had to choose whether to deploy this service and its PVC in every namespace, or else completely change our approach.
After an initial investigation, we decided to use the S3 functionality provided by OpenShift Data Foundation (ODF).
To support our use case, we needed storage for the artifacts, a way to upload them from our pipelines, automatic cleanup, and a way to expose the results to developers.
Storage
ODF provides a default storage class named ocs-storagecluster-ceph-rgw, so the only resource we needed to create was an ObjectBucketClaim, which can be done using the following manifest:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: build-artifacts-bucket
spec:
  bucketName: build-artifacts-bucket
  storageClassName: ocs-storagecluster-ceph-rgw
  additionalConfig:
    maxObjects: "1000"
    bucketMaxSize: "1Gi"
The resulting bucket can be managed using any S3-compatible client. It's important to note that the authentication credentials are automatically generated in a Secret with the same name as the ObjectBucketClaim, and you need access to this Secret in the namespace from which you intend to upload artifacts.
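For reference, the generated Secret looks roughly like the sketch below (the values are placeholders); ODF also creates a ConfigMap with the same name that holds the bucket coordinates, such as BUCKET_HOST and BUCKET_NAME.

apiVersion: v1
kind: Secret
metadata:
  name: build-artifacts-bucket   # same name as the ObjectBucketClaim
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <base64-encoded access key>       # placeholder
  AWS_SECRET_ACCESS_KEY: <base64-encoded secret key>   # placeholder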
Uploading Artifacts
Since our OBC and pipeline run in different namespaces, we had to clone the Secret from one namespace to another. This can be done manually or by using a Kyverno ClusterPolicy:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: bucket-secret-to-my-pipeline
spec:
  generateExisting: true
  rules:
    - name: bucket-secret-to-my-pipeline
      match:
        resources:
          kinds:
            - Namespace
          names:
            - my-pipeline
      generate:
        apiVersion: v1
        kind: Secret
        name: build-artifacts-bucket
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        clone:
          namespace: web-proxy
          name: build-artifacts-bucket
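If you prefer the manual route instead, a one-off copy with oc and jq is enough; this is a sketch that strips the namespace-specific metadata before re-applying the Secret in the pipeline namespace (adjust the namespaces to your setup):

oc get secret build-artifacts-bucket -n web-proxy -o json \
  | jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.ownerReferences)' \
  | oc apply -n my-pipeline -f -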
To upload artifacts, we used the amazon/aws-cli image and invoked the AWS CLI. In our case, the integration is part of a Tekton pipeline, but the same approach works in any standard Pod. We use envFrom to mount the cloned Secret from above, because the AWS CLI expects the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. The target bucket is passed as S3_BUCKET_NAME, and we create a directory named after the pipeline run to separate artifacts per run, making it easier for developers to find their results. If you use the built-in storage class, the default S3_ENDPOINT_URL is http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc:
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: upload-to-s3
spec:
  params:
    - name: pipelineRunName
      type: string
    - name: sourceDir
      type: string
    - name: workingDir
      type: string
    - name: s3-bucket-name
      type: string
    - name: s3-endpoint-url
      type: string
  workspaces:
    - name: output
      description: A workspace that contains the files to be uploaded
  steps:
    - name: s3-uploader
      image: amazon/aws-cli
      workingDir: $(params.workingDir)
      envFrom:
        - secretRef:
            name: $(params.s3-bucket-name)
      env:
        - name: S3_BUCKET_NAME
          value: $(params.s3-bucket-name)
        - name: S3_ENDPOINT_URL
          value: $(params.s3-endpoint-url)
        - name: PIPELINE_RUN_NAME
          value: $(params.pipelineRunName)
        - name: SOURCE_DIR
          value: $(params.sourceDir)
      script: |
        #!/usr/bin/env bash
        set -ex
        aws s3 cp ${SOURCE_DIR} s3://${S3_BUCKET_NAME}/${PIPELINE_RUN_NAME} --endpoint-url ${S3_ENDPOINT_URL} --recursive
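For completeness, this is roughly how we wire the Task into a Pipeline. The Pipeline name, workspace name, and results directory below are illustrative, while $(context.pipelineRun.name) is the standard Tekton variable for the current run name:

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: e2e-tests                      # illustrative name
spec:
  workspaces:
    - name: shared-data
  tasks:
    # ... test tasks that produce the Playwright recordings ...
    - name: upload-artifacts
      taskRef:
        name: upload-to-s3
      workspaces:
        - name: output
          workspace: shared-data
      params:
        - name: pipelineRunName
          value: $(context.pipelineRun.name)  # one folder per pipeline run
        - name: workingDir
          value: /workspace/output            # default mount path of the 'output' workspace
        - name: sourceDir
          value: test-results                 # illustrative directory containing the recordings
        - name: s3-bucket-name
          value: build-artifacts-bucket
        - name: s3-endpoint-url
          value: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc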
Cleanup
S3 allows setting up a lifecycle configuration to delete files after a specified number of days. You can configure this in various ways, but the easiest method is via the API.
In our case, we apply the policy after each upload so that the retention period remains easy to change through a parameter, although configuring it once would be enough.
echo '{"Rules":[{"ID":"BucketRetentionPolicy","Filter":{"Prefix":""},"Status":"Enabled","Expiration":{"Days":'$S3_RETENTION_DAYS'},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":1}}]}' > lifecycle.json
aws s3api put-bucket-lifecycle-configuration --bucket $S3_BUCKET_NAME --lifecycle-configuration file://lifecycle.json --endpoint-url $S3_ENDPOINT_URL
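You can verify that the policy is in place with the matching get call, assuming the same credential and endpoint environment variables are set:

aws s3api get-bucket-lifecycle-configuration --bucket $S3_BUCKET_NAME --endpoint-url $S3_ENDPOINT_URL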
Exposing Artifacts
Once the artifacts are available in S3, we needed a simple way to allow authenticated browsing. There are several S3 proxies available; we chose Oxyno-Zeta's s3-proxy because it supports OIDC authentication and group-based authorization rules, which is exactly what we configure below.
To run it, you need a Deployment that mounts the proxy configuration and wires in the bucket credentials:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-proxy
  template:
    metadata:
      labels:
        app: web-proxy
    spec:
      containers:
        - name: web-proxy
          image: oxynozeta/s3-proxy
          envFrom:
            - secretRef:
                name: web-proxy-oidc
          env:
            - name: MY_BUILD_AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: build-artifacts-bucket
                  key: AWS_ACCESS_KEY_ID
            - name: MY_BUILD_AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: build-artifacts-bucket
                  key: AWS_SECRET_ACCESS_KEY
            - name: SSL_CERT_FILE
              value: /etc/ssl/custom-ca.pem
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: proxy-conf
              mountPath: /proxy/conf
            - name: custom-ca
              mountPath: /etc/ssl/custom-ca.pem
              subPath: custom-ca.pem
      volumes:
        - name: proxy-conf
          configMap:
            name: web-proxy-conf
        - name: custom-ca
          configMap:
            name: custom-ca
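The web-proxy-oidc Secret referenced via envFrom only needs to provide the OIDC client secret that the configuration below reads from the WEB_PROXY_OAUTH_SECRET environment variable; a minimal sketch with a placeholder value looks like this:

apiVersion: v1
kind: Secret
metadata:
  name: web-proxy-oidc
type: Opaque
stringData:
  WEB_PROXY_OAUTH_SECRET: <your-oidc-client-secret>   # placeholder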
The configuration can be complex. Refer to the official documentation to tailor it to your circumstances.
Here’s the YAML configuration, which is stored in the web-proxy-conf ConfigMap mounted at /proxy/conf in the Deployment above:
# Require authentication also for base path
listTargets:
  enabled: true
  mount:
    path:
      - /
  resource:
    path: /
    provider: myoidc
    methods:
      - "GET"
    oidc:
      authorizationAccesses:
        - group: system:authenticated

# Expose results from bucket build-artifacts-bucket under /build-artifacts
# Allow only group 'devs' to access it and configure env vars for credentials
targets:
  build-artifacts:
    mount:
      path:
        - "/build-artifacts/"
    resources:
      - path: "/build-artifacts/*"
        provider: myoidc
        methods:
          - "GET"
        oidc:
          authorizationAccesses:
            - group: devs
    bucket:
      name: build-artifacts-bucket
      s3Endpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
      credentials:
        accessKey:
          env: MY_BUILD_AWS_ACCESS_KEY_ID
        secretKey:
          env: MY_BUILD_AWS_SECRET_ACCESS_KEY

# Basic setup for server and logging
server:
  ssl:
    enabled: false
  port: 8080
log:
  level: info
  format: text

# Setup an auth provider to grant authentication
authProviders:
  oidc:
    myoidc:
      clientID: web-proxy
      clientSecret:
        env: WEB_PROXY_OAUTH_SECRET
      issuerUrl: https://myoidc.example.com
      redirectUrl: https://web-proxy.example.com
      scopes:
        - openid
        - email
        - profile
        - groups
      groupClaim: groups
      emailVerified: false # keep this false if emails are not marked as verified
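To make the proxy reachable at the redirectUrl configured above, you still need to expose port 8080. A minimal sketch with a Service and an OpenShift Route follows; the hostname is a placeholder matching the example redirectUrl, and edge termination is assumed since the proxy itself serves plain HTTP:

apiVersion: v1
kind: Service
metadata:
  name: web-proxy
spec:
  selector:
    app: web-proxy
  ports:
    - name: http
      port: 8080
      targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: web-proxy
spec:
  host: web-proxy.example.com     # placeholder, must match the redirectUrl
  to:
    kind: Service
    name: web-proxy
  port:
    targetPort: http
  tls:
    termination: edge             # TLS terminates at the router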
Using the S3 functionality provided by OpenShift Data Foundation, combined with the s3-proxy from Oxyno-Zeta, offers a practical and flexible solution for managing temporary file uploads. It allows developers to easily store, access, and share build artifacts across namespaces, while keeping access controlled and lifecycle-managed.
Did you find this article interesting? Are you an “under the hood” kind of person? We’re really big on automation and we’re always looking for people in a similar vein to fill roles like this one as well as other roles here at Würth Phoenix.