6 changes: 5 additions & 1 deletion charts/postgres-operator/values.yaml
Original file line number Diff line number Diff line change
@@ -131,6 +131,8 @@ configKubernetes:
enable_pod_disruption_budget: true
# toggles readiness probe for database pods
enable_readiness_probe: true
# toggles liveness probe for database pods
enable_liveness_probe: false
# enables sidecar containers to run alongside Spilo in the same pod
enable_sidecars: true

@@ -203,7 +205,9 @@ configKubernetes:

# group ID with write-access to volumes (required to run Spilo as non-root process)
# spilo_fsgroup: 103


# whether the containers should run with readonly_root_filesystem
readonly_root_filesystem: true
# whether the Spilo container should run in privileged mode
spilo_privileged: false
# whether the Spilo container should run with additional permissions other than parent.
5 changes: 0 additions & 5 deletions docker/Dockerfile
@@ -1,5 +1,4 @@
ARG BASE_IMAGE
#=registry.opensource.zalan.do/library/alpine-3.15:latest
ARG VERSION=latest

FROM ${BASE_IMAGE} as builder
@@ -21,13 +20,9 @@ LABEL maintainer="Opensource @ CYBERTEC <[email protected]>"

# We need root certificates to deal with teams api over https
RUN ${PACKAGER} -y update && ${PACKAGER} -y install ca-certificates && ${PACKAGER} clean all;
#RUN apk --no-cache add ca-certificates

COPY --from=builder /go/src/github.com/cybertec-postgresql/cybertec-pg-operator/build/* /

# RUN addgroup -g 1000 pgo
# RUN adduser -D -u 1000 -G pgo -g 'Postgres Operator' pgo

RUN groupadd -g 1000 cpo
RUN useradd cpo -u 1000 -g 1000

7 changes: 1 addition & 6 deletions docker/build_operator.sh
@@ -1,19 +1,14 @@
#!/bin/bash

export DEBIAN_FRONTEND=noninteractive

arch=$(dpkg --print-architecture)

set -ex

# Install dependencies

# apt-get update
# apt-get install -y wget

(
cd /tmp
wget -q "https://go.dev/dl/go1.25.2.linux-${arch}.tar.gz" -O go.tar.gz
wget -q "https://go.dev/dl/go1.25.6.linux-${arch}.tar.gz" -O go.tar.gz
tar -xf go.tar.gz
mv go /usr/local
ln -s /usr/local/go/bin/go /usr/bin/go
2 changes: 1 addition & 1 deletion docs/hugo/content/en/_index.md
@@ -4,7 +4,7 @@ date: 2024-03-11T14:26:51+01:00
draft: false
weight: 1
---
Current Release: 0.9.0 (3.12.2025) [Release Notes](release_notes)
Current Release: 0.9.1 (22.01.2026) [Release Notes](release_notes)

<img src="https://raw.githubusercontent.com/cybertec-postgresql/CYBERTEC-pg-operator/fac724618ea1395ed49cb1db7f3429f5b4324337/docs/diagrams/cpo_logo.svg" alt="drawing" width="350" />

100 changes: 50 additions & 50 deletions docs/hugo/content/en/backup/aws.md
@@ -5,70 +5,70 @@ draft: false
weight: 2
---

This chapter describes the use of pgBackRest in combination with with AWS S3 or S3-compatible storage such as MinIO, Cloudian HyperStore or SwiftStack. It is not absolutely necessary to operate a Kubernetes on the AWS Cloud Platform. However, as with any cloud storage, the efficiency and therefore the duration of a backup depends on the connection.
This chapter describes the use of pgBackRest in combination with AWS S3 or S3-compatible storage such as MinIO, Cloudian HyperStore, or SwiftStack. While it is not mandatory to operate Kubernetes on the AWS Cloud Platform, the efficiency and duration of a backup depend on the network connection to your storage provider.

This Chapter will use AWS S3 for the example, the usage of different s3-compatible Storage is similiar.
{{< hint type=important >}} Precondition: A S3 bucket and a privileged role/user with valid credentials are required before proceeding. {{< /hint >}}

{{< hint type=important >}} Precondition: a S3-bucket and a priviledged role with credentials is needed for this chapter. {{< /hint >}}

### Create a s3-bucket on the AWS console

### Create a priviledged service-role

### Modifying the Cluster
As soon as all requirements are met:

- A S3 bucket
- Access-Token and Secret-Access-Key for the service role with the required authorisations for the bucket

the cluster can be modified. Firstly, a secret containing the Credentials is created and the cluster manifest is adapted accordingly.

The first step is to create the required secret. This is most easily done storing the needed data in a file called s3.conf and using a `kubectl` command.
1. Create the Authentication Secret

The operator needs access to your S3 bucket. The credentials and the encryption passphrase are stored in a Kubernetes Secret. This is most easily done by creating a file named s3.conf:
```
# s3.conf: replace the placeholders with your credentials
[global]
repo1-s3-key=YOUR_S3_ACCESS_KEY
repo1-s3-key-secret=YOUR_S3_KEY_SECRET
repo1-cipher-pass=YOUR_ENCRYPTION_PASSPHRASE
```
{{< hint type=info >}} repo1-cipher-pass is only required if you want to use the backup encryption feature of pgBackRest. {{< /hint >}}

Then, create the secret using `kubectl`:

# Create the secret with the credentials
```
# Create the secret in the same namespace as your cluster
kubectl create secret generic cluster-1-s3-credentials --from-file=s3.conf=s3.conf
```
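
Before referencing the secret in the manifest, it can help to verify what was stored (a quick sanity check; the secret name follows the example above):

```sh
# Decode the stored s3.conf to confirm the credentials landed correctly
kubectl get secret cluster-1-s3-credentials -o jsonpath='{.data.s3\.conf}' | base64 -d
```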

In the next step, the secret name ais stored in the secret in the cluster manifest. In addition, global settings, such as the retention time of the backups in the global object, are defined, the image for `pgBackRest` is specified and the necessary information for the repository is added. This includes both the desired storage path in the bucket and the times for automatic backups based on the cron syntax.
2. Modify the Cluster Manifest

Once the secret is created, the cluster manifest must be adapted. This involves defining the repository settings, the backup schedule, and the S3-specific parameters.
### S3 Addressing Styles (Host vs. Path)

A critical parameter for S3 compatibility is `repo1-s3-uri-style`:

- `host` (default): accesses the bucket via `https://bucket-name.s3.endpoint.com`. Used by standard AWS S3.
- `path`: accesses the bucket via `https://s3.endpoint.com/bucket-name`. Often required for MinIO, Ceph, or other on-premise S3 implementations.

{{< hint type=info >}} The default value is host, so it only needs to be set explicitly when path is required. {{< /hint >}}


```
apiVersion: cpo.opensource.cybertec.at/v1
kind: postgresql
metadata:
  name: cluster
  namespace: cpo
spec:
  backup:
    pgbackrest:
      image: 'docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-16.4-1'
      repos:
        - endpoint: 'https://s3-zurich.cyberlink.cloud:443'
          name: repo1
          region: zurich
          resource: cpo-cluster-bucket
          schedule:
            full: 30 2 * * *
            incr: '*/30 * * * *'
          storage: s3
      configuration:
        secret: cluster-1-s3-credential
      global:
        repo1-path: /cluster/repo1/
        repo1-retention-full: '7'
        repo1-retention-full-type: count
apiVersion: cpo.opensource.cybertec.at/v1
kind: postgresql
metadata:
  name: cluster
  namespace: cpo
spec:
  backup:
    pgbackrest:
      image: 'docker.io/cybertecpostgresql/cybertec-pg-container:pgbackrest-18.1-1'
      repos:
        - endpoint: 's3.eu-central-1.amazonaws.com'
          name: repo1
          region: eu-central-1
          resource: cpo-cluster-bucket
          schedule:
            full: 30 2 * * *
            incr: '*/30 * * * *'
          storage: s3
      configuration:
        secret: cluster-1-s3-credential
      global:
        repo1-path: /cluster/repo1/
        repo1-retention-full: '7'
        repo1-retention-full-type: count
        repo1-s3-uri-style: host
```
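
For an S3-compatible on-premise store such as MinIO, the same manifest shape applies; typically only the endpoint and the URI style differ. A sketch of the differing lines (the endpoint below is a hypothetical placeholder, and MinIO generally accepts any region string):

```
      repos:
        - endpoint: 'https://minio.example.internal:9000'
          name: repo1
          region: us-east-1
          resource: cpo-cluster-bucket
          storage: s3
      global:
        repo1-s3-uri-style: path
```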

This example creates a backup in the defined S3 bucket. In addition to the above configurations, a secret is also required which contains the access data for the S3 storage. The name of the secret must be stored in the `spec.backup.pgbackrest.configuration.secret` object and the secret must be located in the same namespace as the cluster.
Information required to address the S3 bucket:
- `Endpoint`: S3 api endpoint
- `Region`: Region of the bucket
- `resource`: Name of the bucket
{{< hint type=info >}} Each pgBackRest parameter can be used by adding it to the global section. See [pgbackrest documentation](https://pgbackrest.org/configuration.html). {{< /hint >}}

An [example](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/pgbackrest_with_s3) with a sercret generator is also available in the tutorials. Enter your access data in the s3.conf file and transfer the tutorial to your Kubernetes with kubectl apply -k cluster-tutorials/pgbackrest_with_s3/.
An [example](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/pgbackrest_with_s3) with a secret generator is also available in the tutorials. Enter your access data in the s3.conf file and transfer the tutorial to your Kubernetes with kubectl apply -k cluster-tutorials/pgbackrest_with_s3/.
4 changes: 2 additions & 2 deletions docs/hugo/content/en/backup/azure_blob.md
@@ -9,9 +9,9 @@ This chapter describes the use of pgBackRest in combination with Azure Blob Stor

{{< hint type=important >}} Precondition: a blob-storage volume and a privileged role are needed for this chapter. {{< /hint >}}

### Create a blob-storage-volume on the Azure console
<!-- ### Create a blob-storage-volume on the Azure console

### Create a priviledged service-role
### Create a priviledged service-role -->

### Modifying the Cluster
As soon as all requirements are met:
4 changes: 2 additions & 2 deletions docs/hugo/content/en/backup/gcs.md
@@ -9,9 +9,9 @@ This chapter describes the use of pgBackRest in combination with Google Cloud St

{{< hint type=important >}} Precondition: a gcs-bucket and a privileged role are needed for this chapter. {{< /hint >}}

### Create a gcs-bucket on the google cloud console
<!-- ### Create a gcs-bucket on the google cloud console

### Create a priviledged service-role
### Create a priviledged service-role -->

### Modifying the Cluster
As soon as all requirements are met:
4 changes: 2 additions & 2 deletions docs/hugo/content/en/installation/install_operator.md
@@ -52,10 +52,10 @@ helm install -n cpo cpo helm/operator/.

The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.

### Apply
<!-- ### Apply

The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.

### Operatorhub

The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.
The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements. -->
@@ -1,5 +1,5 @@
---
title: "PG versioning"
title: "Lifecycle"
date: 2023-12-28T14:26:51+01:00
draft: false
weight: 2100
@@ -5,12 +5,44 @@ draft: false
weight: 2120
---

CPO enables the use of the in-place upgrade, which makes it possible to upgrade a cluster to a new PG major. For this purpose, pg_upgrade is used in the background.
The CYBERTEC PostgreSQL Operator (CPO) enables in-place upgrades, allowing you to upgrade a cluster to a new PostgreSQL major version. This process utilizes pg_upgrade in the background to minimize downtime and data movement.

{{< hint type=info >}}Note that an in-place upgrade generates both a pod restore in the form of a rolling update and an operational interruption of the cluster during the actual execution of the restore.{{< /hint >}}
{{< hint type=info >}} Note: An in-place upgrade triggers a pod restart (rolling update) and causes a brief operational interruption during the actual execution of the data migration. {{< /hint >}}

## How to trigger an In-Place Upgrade ##

## How does the upgrade work?
To trigger the upgrade, simply increase the version number in your cluster manifest. If the version is valid, the Operator automatically initiates the procedure described below.

```
spec:
postgresql:
version: "18"
```
You can also apply this change via kubectl:

```sh
kubectl patch postgresqls.cpo.opensource.cybertec.at cluster-1 --type='merge' -p \
'{"spec":{"postgresql":{"version":"18"}}}'
```
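
After patching, the rolling update can be observed while the operator replaces the pods. The label selector and pod name below are assumptions for illustration; check the labels your operator actually sets on database pods:

```sh
# Watch pods being recreated with the new PGVERSION
kubectl get pods -l cluster-name=cluster-1 -w

# Once the cluster is back, confirm the server version
kubectl exec cluster-1-0 -- psql -U postgres -Atc 'SHOW server_version;'
```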

## Alternative Upgrade Methods ##

### Upgrade on cloning
When cloning, the new cluster manifest must specify a higher version number than the source cluster; the new cluster is created from a base backup. Depending on the cluster size, the downtime can be considerable, as write operations should be stopped and all WAL files archived before cloning starts. Therefore, use cloning only to test major version upgrades and to check the compatibility of your application with the higher Postgres server version.

### Manual upgrade via the PostgreSQL container

In this scenario, the major version upgrade can be run manually by a user from within the primary pod. Exec into the container and run:

```
python3 /scripts/inplace_upgrade.py N
```
where `N` is the number of members of your cluster (see `numberOfInstances`). The upgrade is usually fast, well under one minute for most DBs.
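
The same script can also be invoked without an interactive shell via `kubectl exec` (the pod name and member count below are illustrative assumptions):

```sh
# Run the in-place upgrade on the primary pod of a two-member cluster
kubectl exec -it cluster-1-0 -- python3 /scripts/inplace_upgrade.py 2
```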

{{< hint type=info >}}Note that changes become irreversible once pg_upgrade is called.{{< /hint >}}


## Under the Hood: How the Upgrade Works ##

### Preconditions:
1. Pod restart - Use the rolling update strategy to replace all pods based on the new ENV `PGVERSION` with the version you want to update to.
@@ -53,35 +85,4 @@ CPO enables the use of the in-place upgrade, which makes it possible to upgrade
### How does a rollback work?
1. Stop rsync if it is running
2. Disable the maintenance mode for the Cluster
3. Drop directory `data_new`


## How to trigger a In-Place-Upgrade with cpo?

```
spec:
postgresql:
version: "18"
```
To trigger an In-Place-Upgrade you have just to increase the parameter `spec.postgresql.version`. If you choose a valid number the Operator will start with the prozedure, described above.

```sh
kubectl patch postgresqls.cpo.opensource.cybertec.at cluster-1 --type='merge' -p \
'{"spec":{"postgresql":{"version":"18"}}}'
```

## Upgrade on cloning

When cloning, the new cluster manifest must have a higher version number than the source cluster and is created from a base backup. Depending on the cluster size, the downtime can be considerable in this case, as write operations in the database should be stopped and all WAL files should be archived first before cloning is started. Therefore, only use cloning to test major version upgrades and to check the compatibility of your app with the Postgres server of a higher version.

## manual upgrade via the PostgreSQL container

In this scenario the major version could then be run by a user from within the primary pod. Exec into the container and run:

```
python3 /scripts/inplace_upgrade.py N
```

where `N` is the number of members of your cluster (see `numberOfInstances`). The upgrade is usually fast, well under one minute for most DBs.

{{< hint type=Info >}}Note, that changes become irrevertible once pg_upgrade is called.{{< /hint >}}
3. Drop directory `data_new`