Commit f2f17aa

Merge pull request #127 from cybertec-postgresql/syncupdatefixes
updates on helm-chart and doc
2 parents 59a5c95 + 8560531

11 files changed

Lines changed: 226 additions & 94 deletions


charts/postgres-operator/values.yaml

Lines changed: 5 additions & 1 deletion
@@ -131,6 +131,8 @@ configKubernetes:
   enable_pod_disruption_budget: true
   # toggles readiness probe for database pods
   enable_readiness_probe: true
+  # toggles liveness probe for database pods
+  enable_liveness_probe: false
   # enables sidecar containers to run alongside Spilo in the same pod
   enable_sidecars: true

@@ -203,7 +205,9 @@ configKubernetes:

   # group ID with write-access to volumes (required to run Spilo as non-root process)
   # spilo_fsgroup: 103
-
+
+  # whether the containers should run with readonly_root_filesystem
+  readonly_root_filesystem: true
   # whether the Spilo container should run in privileged mode
   spilo_privileged: false
   # whether the Spilo container should run with additional permissions other than parent.
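The toggles touched by this change are plain chart values. A minimal sketch of a values override that sets them together, assuming only the key names visible in this diff (the file name `values-override.yaml` is hypothetical):

```yaml
# Hypothetical values override for the chart changed above.
configKubernetes:
  # toggles readiness probe for database pods
  enable_readiness_probe: true
  # toggles liveness probe for database pods (new in this change, off by default)
  enable_liveness_probe: false
  # run containers with a read-only root filesystem (new in this change)
  readonly_root_filesystem: true
```

Such an override would be passed on install/upgrade, e.g. `helm upgrade ... -f values-override.yaml`.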

docs/hugo/content/en/_index.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ date: 2024-03-11T14:26:51+01:00
 draft: false
 weight: 1
 ---
-Current Release: 0.9.0 (3.12.2025) [Release Notes](release_notes)
+Current Release: 0.9.1 (22.01.2026) [Release Notes](release_notes)

 <img src="https://raw.githubusercontent.com/cybertec-postgresql/CYBERTEC-pg-operator/fac724618ea1395ed49cb1db7f3429f5b4324337/docs/diagrams/cpo_logo.svg" alt="drawing" width="350" />

docs/hugo/content/en/backup/aws.md

Lines changed: 50 additions & 50 deletions
@@ -5,70 +5,70 @@ draft: false
 weight: 2
 ---

-This chapter describes the use of pgBackRest in combination with with AWS S3 or S3-compatible storage such as MinIO, Cloudian HyperStore or SwiftStack. It is not absolutely necessary to operate a Kubernetes on the AWS Cloud Platform. However, as with any cloud storage, the efficiency and therefore the duration of a backup depends on the connection.
+This chapter describes the use of pgBackRest in combination with AWS S3 or S3-compatible storage such as MinIO, Cloudian HyperStore, or SwiftStack. While it is not mandatory to operate Kubernetes on the AWS Cloud Platform, the efficiency and duration of a backup depend on the network connection to your storage provider.

-This Chapter will use AWS S3 for the example, the usage of different s3-compatible Storage is similiar.
+{{< hint type=important >}} Precondition: A S3 bucket and a privileged role/user with valid credentials are required before proceeding. {{< /hint >}}

-{{< hint type=important >}} Precondition: a S3-bucket and a priviledged role with credentials is needed for this chapter. {{< /hint >}}
-
-### Create a s3-bucket on the AWS console
-
-### Create a priviledged service-role
-
-### Modifying the Cluster
-As soon as all requirements are met:
-
-- A S3 bucket
-- Access-Token and Secret-Access-Key for the service role with the required authorisations for the bucket
-
-the cluster can be modified. Firstly, a secret containing the Credentials is created and the cluster manifest is adapted accordingly.
-
-The first step is to create the required secret. This is most easily done storing the needed data in a file called s3.conf and using a `kubectl` command.
+1. Create the Authentication Secret

+The operator needs access to your S3 bucket. The credentials and the encryption passphrase are stored in a Kubernetes Secret. This is most easily done by creating a file named s3.conf:
 ```
-# Create a file with name s3.conf and add the following infos. Please replace the placeholder by the credentials
 [global]
 repo1-s3-key=YOUR_S3_ACCESS_KEY
 repo1-s3-key-secret=YOUR_S3_KEY_SECRET
 repo1-cipher-pass=YOUR_ENCRYPTION_PASSPHRASE
+```
+{{< hint type=info >}} repo1-cipher-pass is only required if you want to use the backup encryption feature of pgBackRest. {{< /hint >}}
+
+Then, create the secret using `kubectl`:

-# Create the secret with the credentials
+```
+# Create the secret in the same namespace as your cluster
 kubectl create secret generic cluster-1-s3-credentials --from-file=s3.conf=s3.conf
 ```

-In the next step, the secret name ais stored in the secret in the cluster manifest. In addition, global settings, such as the retention time of the backups in the global object, are defined, the image for `pgBackRest` is specified and the necessary information for the repository is added. This includes both the desired storage path in the bucket and the times for automatic backups based on the cron syntax.
+2. Modifying the Cluster Manifest
+
+Once the secret is created, the cluster manifest must be adapted. This involves defining the repository settings, the backup schedule, and the S3-specific parameters.
+S3 Addressing Styles (Host vs. Path)
+
+A critical parameter for S3 compatibility is the repo1-s3-uri-style.
+
+host: (Default) Accesses the bucket via https://bucket-name.s3.endpoint.com. Used by standard AWS S3.
+
+path: Accesses the bucket via https://s3.endpoint.com/bucket-name. Often required for MinIO, Ceph, or other on-premise S3 implementations.
+
+{{< hint type=info >}} The default value is host, so it does not necessarily have to be set unless path is required. {{< /hint >}}
+

 ```
-apiVersion: cpo.opensource.cybertec.at/v1
-kind: postgresql
-metadata:
-  name: cluster
-  namespace: cpo
-spec:
-  backup:
-    pgbackrest:
-      image: 'docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-16.4-1'
-      repos:
-        - endpoint: 'https://s3-zurich.cyberlink.cloud:443'
-          name: repo1
-          region: zurich
-          resource: cpo-cluster-bucket
-          schedule:
-            full: 30 2 * * *
-            incr: '*/30 * * * *'
-          storage: s3
-      configuration:
-        secret: cluster-1-s3-credential
-      global:
-        repo1-path: /cluster/repo1/
-        repo1-retention-full: '7'
-        repo1-retention-full-type: count
+apiVersion: cpo.opensource.cybertec.at/v1
+kind: postgresql
+metadata:
+  name: cluster
+  namespace: cpo
+spec:
+  backup:
+    pgbackrest:
+      image: 'docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-18.1-1'
+      repos:
+        - endpoint: 's3.eu-central-1.amazonaws.com'
+          name: repo1
+          region: eu-central-1
+          resource: cpo-cluster-bucket
+          schedule:
+            full: 30 2 * * *
+            incr: '*/30 * * * *'
+          storage: s3
+      configuration:
+        secret: cluster-1-s3-credential
+      global:
+        repo1-path: /cluster/repo1/
+        repo1-retention-full: '7'
+        repo1-retention-full-type: count
+        repo1-s3-uri-style: host
 ```

-This example creates a backup in the defined S3 bucket. In addition to the above configurations, a secret is also required which contains the access data for the S3 storage. The name of the secret must be stored in the `spec.backup.pgbackrest.configuration.secret` object and the secret must be located in the same namespace as the cluster.
-Information required to address the S3 bucket:
-- `Endpoint`: S3 api endpoint
-- `Region`: Region of the bucket
-- `resource`: Name of the bucket
+{{< hint type=info >}} Each pgBackRest parameter can be used by adding it to the global section. See [pgbackrest documentation](https://pgbackrest.org/configuration.html). {{< /hint >}}

-An [example](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/pgbackrest_with_s3) with a sercret generator is also available in the tutorials. Enter your access data in the s3.conf file and transfer the tutorial to your Kubernetes with kubectl apply -k cluster-tutorials/pgbackrest_with_s3/.
+An [example](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/pgbackrest_with_s3) with a secret generator is also available in the tutorials. Enter your access data in the s3.conf file and transfer the tutorial to your Kubernetes with kubectl apply -k cluster-tutorials/pgbackrest_with_s3/.
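The host-vs-path distinction introduced in this diff matters mainly for S3-compatible backends. A minimal sketch of the same repo block pointed at a self-hosted MinIO endpoint, using path-style addressing; the hostname, port, and region string are hypothetical, and only the keys shown in the diff above are used:

```yaml
# Hypothetical MinIO variant of the repo configuration shown above.
spec:
  backup:
    pgbackrest:
      repos:
        - name: repo1
          storage: s3
          endpoint: 'https://minio.example.internal:9000'  # assumed on-prem endpoint
          region: us-east-1                                # MinIO accepts an arbitrary region string
          resource: cpo-cluster-bucket
      global:
        repo1-path: /cluster/repo1/
        # path-style: https://minio.example.internal:9000/cpo-cluster-bucket
        repo1-s3-uri-style: path
```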

docs/hugo/content/en/backup/azure_blob.md

Lines changed: 2 additions & 2 deletions
@@ -9,9 +9,9 @@ This chapter describes the use of pgBackRest in combination with Azure Blob Stor

 {{< hint type=important >}} Precondition: a blob-storage-volume and a priviledged role is needed for this chapter. {{< /hint >}}

-### Create a blob-storage-volume on the Azure console
+<!-- ### Create a blob-storage-volume on the Azure console

-### Create a priviledged service-role
+### Create a priviledged service-role -->

 ### Modifying the Cluster
 As soon as all requirements are met:

docs/hugo/content/en/backup/gcs.md

Lines changed: 2 additions & 2 deletions
@@ -9,9 +9,9 @@ This chapter describes the use of pgBackRest in combination with Google Cloud St

 {{< hint type=important >}} Precondition: a gcs-bucket and a priviledged role is needed for this chapter. {{< /hint >}}

-### Create a gcs-bucket on the google cloud console
+<!-- ### Create a gcs-bucket on the google cloud console

-### Create a priviledged service-role
+### Create a priviledged service-role -->

 ### Modifying the Cluster
 As soon as all requirements are met:

docs/hugo/content/en/installation/install_operator.md

Lines changed: 2 additions & 2 deletions
@@ -52,10 +52,10 @@ helm install -n cpo cpo helm/operator/.

 The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.

-### Apply
+<!-- ### Apply

 The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.

 ### Operatorhub

-The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.
+The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements. -->

docs/hugo/content/en/pg_versioning/_index.md renamed to docs/hugo/content/en/lifecycle/_index.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-title: "PG versioning"
+title: "Lifecycle"
 date: 2023-12-28T14:26:51+01:00
 draft: false
 weight: 2100

docs/hugo/content/en/pg_versioning/major_upgrades.md renamed to docs/hugo/content/en/lifecycle/major_upgrades.md

Lines changed: 36 additions & 35 deletions
@@ -5,12 +5,44 @@ draft: false
 weight: 2120
 ---

-CPO enables the use of the in-place upgrade, which makes it possible to upgrade a cluster to a new PG major. For this purpose, pg_upgrade is used in the background.
+The CYBERTEC PostgreSQL Operator (CPO) enables in-place upgrades, allowing you to upgrade a cluster to a new PostgreSQL major version. This process utilizes pg_upgrade in the background to minimize downtime and data movement.

-{{< hint type=info >}}Note that an in-place upgrade generates both a pod restore in the form of a rolling update and an operational interruption of the cluster during the actual execution of the restore.{{< /hint >}}
+{{< hint type=info >}} Note: An in-place upgrade triggers a pod restart (rolling update) and causes a brief operational interruption during the actual execution of the data migration. {{< /hint >}}

+## How to trigger an In-Place Upgrade ##

-## How does the upgrade work?
+To trigger the upgrade, simply increase the version number in your cluster manifest. If the version is valid, the Operator automatically initiates the procedure described below.
+
+```
+spec:
+  postgresql:
+    version: "18"
+```
+You can also apply this change via kubectl:
+
+```sh
+kubectl patch postgresqls.cpo.opensource.cybertec.at cluster-1 --type='merge' -p \
+'{"spec":{"postgresql":{"version":"18"}}}'
+```
+
+## Alternative Upgrade Methods ##
+
+### Upgrade on cloning
+When cloning, the new cluster manifest must have a higher version number than the source cluster and is created from a base backup. Depending on the cluster size, the downtime can be considerable in this case, as write operations in the database should be stopped and all WAL files should be archived first before cloning is started. Therefore, only use cloning to test major version upgrades and to check the compatibility of your app with the Postgres server of a higher version.
+
+### Manual upgrade via the PostgreSQL container
+
+In this scenario, the major version upgrade can be run by a user from within the primary pod. Exec into the container and run:
+
+```
+python3 /scripts/inplace_upgrade.py N
+```
+where `N` is the number of members of your cluster (see `numberOfInstances`). The upgrade is usually fast, well under one minute for most DBs.
+
+{{< hint type=info >}} Note that changes become irreversible once pg_upgrade is called. {{< /hint >}}
+
+
+## Under the Hood: How the Upgrade Works ##

 ### Preconditions:
 1. Pod restart - Use the rolling update strategy to replace all pods based on the new ENV `PGVERSION` with the version you want to update to.
@@ -53,35 +85,4 @@
 ### How a rollback is working?
 1. Stop rsyncd if it's running
 2. Disable the maintenance mode for the Cluster
-3. Drop directory `data_new`
-
-
-## How to trigger a In-Place-Upgrade with cpo?
-
-```
-spec:
-  postgresql:
-    version: "18"
-```
-To trigger an In-Place-Upgrade you have just to increase the parameter `spec.postgresql.version`. If you choose a valid number the Operator will start with the prozedure, described above.
-
-```sh
-kubectl patch postgresqls.cpo.opensource.cybertec.at cluster-1 --type='merge' -p \
-'{"spec":{"postgresql":{"version":"18"}}}'
-```
-
-## Upgrade on cloning
-
-When cloning, the new cluster manifest must have a higher version number than the source cluster and is created from a base backup. Depending on the cluster size, the downtime can be considerable in this case, as write operations in the database should be stopped and all WAL files should be archived first before cloning is started. Therefore, only use cloning to test major version upgrades and to check the compatibility of your app with the Postgres server of a higher version.
-
-## manual upgrade via the PostgreSQL container
-
-In this scenario the major version could then be run by a user from within the primary pod. Exec into the container and run:
-
-```
-python3 /scripts/inplace_upgrade.py N
-```
-
-where `N` is the number of members of your cluster (see `numberOfInstances`). The upgrade is usually fast, well under one minute for most DBs.
-
-{{< hint type=Info >}}Note, that changes become irrevertible once pg_upgrade is called.{{< /hint >}}
+3. Drop directory `data_new`
File renamed without changes.
Lines changed: 96 additions & 0 deletions
@@ -0,0 +1,96 @@
+---
+title: "Updating the Operator"
+date: 2025-12-28T14:26:51+01:00
+draft: false
+weight: 2100
+---
+
+This chapter describes the recommended process for updating the CYBERTEC PostgreSQL Operator (CPO). To ensure a smooth transition and compatibility with new features, updates should be performed using our official Helm repository.
+
+{{< hint type=important >}} CRD Update Requirement: Due to how Helm handles the crds/ directory, helm upgrade will not automatically update or patch existing Custom Resource Definitions (CRDs). You must manually apply the updated CRDs before upgrading the Helm release. {{< /hint >}}
+
+## Using Helm-Chart
+
+1. Update the Helm Repository
+
+First, ensure your local Helm chart cache is up to date with the latest versions from the CYBERTEC repository:
+```
+helm repo update cpo
+```
+
+2. Update the Custom Resource Definitions (CRDs)
+
+Before upgrading the Helm release, you must manually apply the latest CRDs from the CYBERTEC-operator-tutorials repository. This is a safety measure because Helm does not touch existing CRDs to prevent accidental data loss.
+
+Apply the definitions for the Postgres clusters and the operator configuration directly from the source:
+
+```
+# Update the PostgreSQL Cluster CRD
+kubectl apply -f https://raw.githubusercontent.com/cybertec-postgresql/CYBERTEC-operator-tutorials/refs/heads/main/setup/helm/operator/crds/postgresql.crd.yaml
+
+# Update the Operator Configuration CRD
+kubectl apply -f https://raw.githubusercontent.com/cybertec-postgresql/CYBERTEC-operator-tutorials/refs/heads/main/setup/helm/operator/crds/operatorconfiguration.crd.yaml
+```
+
+3. Execute the Helm Upgrade
+
+Once the CRDs are up to date, you can upgrade the operator deployment. This process replaces the operator pod with the new version and updates the necessary RBAC roles and service accounts.
+```
+# Upgrade the CPO release in the 'cpo' namespace
+helm upgrade cpo cpo/cybertec-pg-operator \
+  --namespace cpo \
+  --reuse-values
+```
+
+## Using CPO-Tutorial-Repository
+
+1. Clone or Update the Tutorial Repo
+```
+git clone https://github.com/$GITHUB_USER/CYBERTEC-operator-tutorials.git
+cd CYBERTEC-operator-tutorials
+```
+
+2. Patch CRDs
+
+```
+# Update the PostgreSQL Cluster CRD
+kubectl apply -f setup/helm/operator/crds/postgresql.crd.yaml
+
+# Update the Operator Configuration CRD
+kubectl apply -f setup/helm/operator/crds/operatorconfiguration.crd.yaml
+```
+
+3. Execute the Helm Upgrade
+```
+# Upgrade the CPO release in the 'cpo' namespace
+helm upgrade cpo setup/helm/operator/. \
+  --namespace cpo \
+  --reuse-values
+```
+
+## Verification
+
+To ensure the update was successful, perform the following checks:
+
+1. Pod Status: Verify that the new operator pod is running.
+```
+kubectl get pods -n cpo -l app.kubernetes.io/name=cybertec-pg-operator
+```
+
+2. Version Check: Check the logs to see the version string during startup.
+```
+kubectl logs -n cpo deployment/cybertec-pg-operator | grep "Starting operator"
+```
+
+3. CRD Integrity: Ensure the new CRD fields are recognized by the Kubernetes API.
+```
+kubectl describe crd postgresqls.cpo.opensource.cybertec.at
+```
+
+## Why manual CRD patching? ##
+
+The CRDs are located in the helm/operator/crds/ folder. By design, Helm only installs these during the initial helm install. During an upgrade, Helm ignores this folder to protect the cluster from unintended schema changes. Therefore, manual application via kubectl apply is the standard and safest path for CPO updates.
+
+## Compatibility ##
+
+Always ensure that your postgresql manifests are compatible with the new operator version by checking the [Release Notes](release_notes) in the Documentation.
