`docs/hugo/content/en/backup/aws.md`
This chapter describes the use of pgBackRest in combination with AWS S3 or S3-compatible storage such as MinIO, Cloudian HyperStore, or SwiftStack. While it is not mandatory to operate Kubernetes on the AWS Cloud Platform, the efficiency and duration of a backup depend on the network connection to your storage provider.
{{< hint type=important >}} Precondition: An S3 bucket and a privileged role/user with valid credentials are required before proceeding. {{< /hint >}}
1. Create the Authentication Secret
The operator needs access to your S3 bucket. The credentials and the encryption passphrase are stored in a Kubernetes Secret. This is most easily done by creating a file named s3.conf:
```
[global]
repo1-s3-key=YOUR_S3_ACCESS_KEY
repo1-s3-key-secret=YOUR_S3_KEY_SECRET
repo1-cipher-pass=YOUR_ENCRYPTION_PASSPHRASE
```
{{< hint type=info >}} `repo1-cipher-pass` is only required if you want to use the backup encryption feature of pgBackRest. {{< /hint >}}
Then, create the secret using `kubectl`:
```
# Create the secret in the same namespace as your cluster.
# The secret name "pgbackrest-secret" and the namespace are placeholders;
# reference the chosen name later in spec.backup.pgbackrest.configuration.secret.
kubectl create secret generic pgbackrest-secret --from-file=s3.conf -n <your-namespace>
```
In the next step, the secret name is stored in the cluster manifest. In addition, global settings, such as the retention time of the backups, are defined in the `global` object, the image for `pgBackRest` is specified, and the necessary information for the repository is added. This includes both the desired storage path in the bucket and the schedule for automatic backups in cron syntax.
2. Modifying the Cluster Manifest
Once the secret is created, the cluster manifest must be adapted. This involves defining the repository settings, the backup schedule, and the S3-specific parameters.
### S3 Addressing Styles (Host vs. Path)
A critical parameter for S3 compatibility is `repo1-s3-uri-style`.
- `host` (default): accesses the bucket via `https://bucket-name.s3.endpoint.com`. Used by standard AWS S3.
- `path`: accesses the bucket via `https://s3.endpoint.com/bucket-name`. Often required for MinIO, Ceph, or other on-premise S3 implementations.
{{< hint type=info >}} The default value is `host`, so the parameter only needs to be set explicitly when `path` is required. {{< /hint >}}
This example creates a backup in the defined S3 bucket. In addition to the above configurations, a secret is also required which contains the access data for the S3 storage. The name of the secret must be stored in the `spec.backup.pgbackrest.configuration.secret` object and the secret must be located in the same namespace as the cluster.
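A minimal sketch of the relevant manifest section is shown below. It only illustrates the parameters discussed in this chapter; the field layout, secret name, bucket, endpoint, and schedule are placeholders and the exact schema may differ between operator versions, so consult the linked tutorial for an authoritative example:

```
spec:
  backup:
    pgbackrest:
      configuration:
        secret: pgbackrest-secret          # secret in the cluster's namespace
      global:
        repo1-retention-full: "7"          # example retention setting
        repo1-path: /backup/cluster-1      # storage path inside the bucket
        repo1-s3-uri-style: host           # or "path" for MinIO/Ceph
      repos:
        - name: repo1
          storage: s3
          endpoint: s3.eu-central-1.amazonaws.com   # S3 API endpoint
          region: eu-central-1                      # region of the bucket
          resource: my-backup-bucket                # name of the bucket
          schedule:
            full: "30 2 * * *"             # cron syntax for automatic backups
```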
{{< hint type=info >}} Each pgBackRest parameter can be used by adding it to the global section. See [pgbackrest documentation](https://pgbackrest.org/configuration.html). {{< /hint >}}
An [example](https://github.com/cybertec-postgresql/CYBERTEC-operator-tutorials/tree/main/cluster-tutorials/pgbackrest_with_s3) with a secret generator is also available in the tutorials. Enter your access data in the s3.conf file and apply the tutorial to your Kubernetes cluster with `kubectl apply -k cluster-tutorials/pgbackrest_with_s3/`.
The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.
<!--### Apply
The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.
### Operatorhub
The installation uses a standard configuration. On the following page you will find more information on how to [configure cpo](/documentation/how-to-use/configuration/) and thus adapt it to your requirements.-->
`docs/hugo/content/en/lifecycle/major_upgrades.md`
The CYBERTEC PostgreSQL Operator (CPO) enables in-place upgrades, allowing you to upgrade a cluster to a new PostgreSQL major version. This process utilizes pg_upgrade in the background to minimize downtime and data movement.
{{< hint type=info >}}Note: An in-place upgrade triggers a pod restart (rolling update) and causes a brief operational interruption during the actual execution of the data migration. {{< /hint >}}
## How to trigger an In-Place Upgrade ##
To trigger the upgrade, simply increase the version number in your cluster manifest. If the version is valid, the Operator automatically initiates the procedure described below.
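The manifest change can be as small as increasing `spec.postgresql.version` (version "18" is only an example target):

```
spec:
  postgresql:
    version: "18"
```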
When cloning, the new cluster manifest must specify a higher version number than the source cluster, and the new cluster is created from a base backup. Depending on the cluster size, the downtime can be considerable in this case, as write operations in the database should be stopped and all WAL files archived before cloning is started. Therefore, use cloning only to test major version upgrades and to check the compatibility of your application with a higher Postgres server version.
### Manual upgrade via the PostgreSQL container
34
+
35
+
In this scenario, the major version upgrade can be run manually by a user from within the primary pod. Exec into the container and run:
36
+
37
+
```
python3 /scripts/inplace_upgrade.py N
```
where `N` is the number of members of your cluster (see `numberOfInstances`). The upgrade is usually fast, well under one minute for most DBs.
{{< hint type=info >}}Note that changes become irreversible once pg_upgrade is called.{{< /hint >}}
## Under the Hood: How the Upgrade Works ##
### Preconditions:
1. Pod restart - The rolling update strategy replaces all pods, setting the ENV `PGVERSION` to the version you want to upgrade to.
### How does a rollback work?
1. Stop rsync if it is running
2. Disable the maintenance mode for the Cluster
3. Drop directory `data_new`
This chapter describes the recommended process for updating the CYBERTEC PostgreSQL Operator (CPO). To ensure a smooth transition and compatibility with new features, updates should be performed using our official Helm repository.
{{< hint type=important >}} CRD Update Requirement: Due to how Helm handles the crds/ directory, helm upgrade will not automatically update or patch existing Custom Resource Definitions (CRDs). You must manually apply the updated CRDs before upgrading the Helm release. {{< /hint >}}
## Using the Helm Chart
1. Update the Helm Repository
First, ensure your local Helm chart cache is up to date with the latest versions from the CYBERTEC repository:
```
helm repo update cpo
```
2. Update the Custom Resource Definitions (CRDs)
Before upgrading the Helm release, you must manually apply the latest CRDs from the CYBERTEC-operator-tutorials repository. This is a safety measure because Helm does not touch existing CRDs to prevent accidental data loss.
Apply the definitions for the Postgres clusters and the operator configuration directly from the source:
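For example (the file names and branch below are assumptions for illustration; verify the current CRD file locations in the CYBERTEC-operator-tutorials repository before applying):

```
# Hypothetical CRD file paths - check the repository for the actual names
kubectl apply -f https://raw.githubusercontent.com/cybertec-postgresql/CYBERTEC-operator-tutorials/main/helm/operator/crds/postgresqls.yaml
kubectl apply -f https://raw.githubusercontent.com/cybertec-postgresql/CYBERTEC-operator-tutorials/main/helm/operator/crds/operatorconfigurations.yaml
```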
Once the CRDs are up to date, you can upgrade the operator deployment. This process replaces the operator pod with the new version and updates the necessary RBAC roles and service accounts.
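A sketch of the upgrade command, assuming the release, repository alias, and chart are named as below (adjust the release name, chart name, and namespace to match your installation):

```
# Hypothetical release/chart names - adapt to your setup
helm upgrade cpo cpo/postgres-operator -n cpo
```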
The CRDs are located in the helm/operator/crds/ folder. By design, Helm only installs these during the initial helm install. During an upgrade, Helm ignores this folder to protect the cluster from unintended schema changes. Therefore, manual application via kubectl apply is the standard and safest path for CPO updates.
## Compatibility ##
Always ensure that your postgresql manifests are compatible with the new operator version by checking the [Release Notes](release_notes) in the Documentation.