Commit 7d78d57

Merge pull request #2 from prlaurence/doc-patch-design
Doc patch design
2 parents 6470fef + 37a4318 commit 7d78d57

7 files changed: +373 −281 lines changed

hugo/content/Security/_index.md

Lines changed: 1 addition & 0 deletions
@@ -4,3 +4,4 @@ date:
 draft: false
 weight: 7
 ---
+

Lines changed: 163 additions & 0 deletions
@@ -0,0 +1,163 @@
## pgbackrest Configuration

The PostgreSQL Operator integrates various features of the [pgbackrest backup and restore project](https://pgbackrest.org).

The *pgo-backrest-repo* container acts as a pgBackRest remote repository that the PostgreSQL cluster uses for storing archive files and backups.

The following diagram depicts some of the integration features:

![alt text](/operator-backrest-integration.png "Operator Backrest Integration")

In this diagram, starting from left to right, we see the following:

* when a user enters *pgo backup mycluster --backup-type=pgbackrest*, a pgo-backrest container is run as a Job; that container executes a *pgbackrest backup* command in the pgBackRest repository container to perform the backup
* when a user enters *pgo show backup mycluster --backup-type=pgbackrest*, a *pgbackrest info* command is executed on the pgBackRest repository container, and the *info* output is sent directly back to the user to view
* the PostgreSQL container itself uses an archive command, *pgbackrest archive-push*, to send archives to the pgBackRest repository container
* when a user enters *pgo create cluster mycluster --pgbackrest*, a pgBackRest repository container deployment is created; that repository is used exclusively for this PostgreSQL cluster
* lastly, when a user enters *pgo restore mycluster*, a *pgo-backrest-restore* container is created as a Job; that container executes the *pgbackrest restore* command
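
For convenience, the commands from the walkthrough above are collected here (the cluster name `mycluster` is illustrative):

```bash
# Create a cluster with its own dedicated pgBackRest repository
pgo create cluster mycluster --pgbackrest

# Take a pgBackRest backup, then view the pgbackrest info output
pgo backup mycluster --backup-type=pgbackrest
pgo show backup mycluster --backup-type=pgbackrest

# Restore the cluster from its pgBackRest repository
pgo restore mycluster
```
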
### pgbackrest Restore

The pgbackrest restore command is implemented as the *pgo restore* command. This command is destructive in the sense that it is meant to *restore* a PG cluster, reverting it to a restore point kept in the pgbackrest repository. The prior primary data is not deleted, but is left in a PVC to be manually cleaned up by a DBA. The restored PG cluster works against a new PVC created by the restore workflow.

When you run a *pgo restore*, the PostgreSQL Operator executes the following workflow:

* turn off autofail if it is enabled for this PG cluster
* allocate a new PVC to hold the restored PG data
* delete the current primary database deployment
* update the pgbackrest repo for this PG cluster with the data path of the new PVC
* create a pgo-backrest-restore Job; this Job mounts the newly created PVC and executes the *pgbackrest restore* command from the pgo-backrest-restore container
* once the restore Job completes, create a new primary Deployment that mounts the restored PVC volume

At this point the PostgreSQL database is back in a working state. DBAs are still responsible for re-enabling autofail using *pgo update cluster* and for performing a pgBackRest backup after the new primary is ready. This version of the PostgreSQL Operator also does not handle any errors in the PG replicas after a restore; that is left for the DBA to handle.
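
A minimal sketch of that DBA follow-up, assuming autofail is re-enabled via an `--autofail` flag on *pgo update cluster* (the exact flag name may differ by Operator version, so verify with `pgo update cluster --help`):

```bash
# Re-enable autofail once the restored primary is ready
# (the --autofail flag is an assumption; confirm it for your pgo version)
pgo update cluster mycluster --autofail=true

# Take a fresh pgBackRest backup of the restored cluster
pgo backup mycluster --backup-type=pgbackrest
```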

Other things to take into account before you do a restore:

* if a schedule has been created for this PostgreSQL cluster, delete that schedule prior to performing a restore
* if your database has been paused after the target restore was completed, you will need to run the psql command *select pg_wal_replay_resume()* to complete the recovery; on PostgreSQL 9.6/9.5 systems, the command to use is *select pg_xlog_replay_resume()* (see the example after this list). You can confirm the status of your database by using the built-in PostgreSQL admin functions found [here](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-RECOVERY-CONTROL-TABLE)
* a pgBackRest restore is destructive in the sense that it deletes the existing primary deployment for the cluster prior to creating a new deployment containing the restored primary database. However, in the event that the pgBackRest restore job fails, the `pgo restore` command can be run again, and instead of first deleting the primary deployment (since one no longer exists), a new primary will simply be created according to any options specified. Additionally, even though the original primary deployment is deleted, the original primary PVC will remain
* there is currently no Operator validation of user-entered pgBackRest command options; you will need to make sure these are entered correctly, otherwise the pgBackRest restore command can fail
* the restore workflow does not perform a backup after the restore, nor does it verify that any replicas are in a working status after the restore; you might have to take action on the replicas to get them replicating with the new restored primary again
* pgbackrest.org suggests running a pgbackrest backup after a restore; this needs to be done by the DBA as part of a restore
* when performing a pgBackRest restore, the **node-label** flag can be utilized to target a specific node for both the pgBackRest restore job and the new (i.e. restored) primary deployment that is then created for the cluster. If a node label is not specified, the restore job will not target any specific node, and the restored primary deployment will inherit any node labels defined for the original primary deployment
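
A minimal sketch of resuming a paused recovery, as referenced in the list above; the psql connection parameters are illustrative and will depend on how you connect to the restored primary:

```bash
# Resume WAL replay if recovery was paused at the restore target
psql -h mycluster -U postgres -c "SELECT pg_wal_replay_resume();"

# On PostgreSQL 9.6/9.5, use the older function name instead:
# psql -h mycluster -U postgres -c "SELECT pg_xlog_replay_resume();"

# Confirm whether the database is still in recovery
psql -h mycluster -U postgres -c "SELECT pg_is_in_recovery();"
```
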
### pgbackrest AWS S3 Support

The PostgreSQL Operator supports the use of AWS S3 storage buckets for the pgbackrest repository in any pgbackrest-enabled cluster. When S3 support is enabled for a cluster, all archives are automatically pushed to a pre-configured S3 storage bucket, and that same bucket can then be utilized for the creation of any backups as well as when performing restores. Please note that once a storage type has been selected for a cluster during cluster creation (specifically `local`, `s3`, or _both_, as described in detail below), it cannot be changed.

The PostgreSQL Operator allows for the configuration of a single storage bucket, which can then be utilized across multiple clusters. Once S3 support has been enabled for a cluster, pgbackrest will create a `backrestrepo` directory in the root of the configured S3 storage bucket (if it does not already exist), and subdirectories will then be created under the `backrestrepo` directory for each cluster created with S3 storage enabled.

#### S3 Configuration

In order to enable S3 storage, you must provide the required AWS S3 configuration information prior to deploying the Operator. First, you will need to add the proper S3 bucket name, AWS S3 endpoint and AWS S3 region to the `Cluster` section of the `pgo.yaml` configuration file (additional information regarding the configuration of the `pgo.yaml` file can be found [here](/configuration/pgo-yaml-configuration/)):

```yaml
Cluster:
  BackrestS3Bucket: containers-dev-pgbackrest
  BackrestS3Endpoint: s3.amazonaws.com
  BackrestS3Region: us-east-1
```

You will then need to specify the proper credentials for authenticating into the S3 bucket specified by adding a **key** and **key secret** to the `$PGOROOT/conf/pgo-backrest-repo/aws-s3-credentials.yaml` configuration file:

```yaml
---
aws-s3-key: ABCDEFGHIJKLMNOPQRST
aws-s3-key-secret: ABCDEFG/HIJKLMNOPQSTU/VWXYZABCDEFGHIJKLM
```

Once the above configuration details have been provided, you can deploy the Operator per the [PGO installation instructions](/installation/operator-install/).

#### Enabling S3 Storage in a Cluster

With S3 storage properly configured within your PGO installation, you can now select either local storage, S3 storage, or _both_ when creating a new cluster. The type of storage selected upon creation of the cluster determines the type of storage that can subsequently be used when performing pgbackrest backups and restores. A storage type is specified using the `--pgbackrest-storage-type` flag, and can be one of the following values:

* `local` - pgbackrest will use volumes local to the container (e.g. Persistent Volumes) for storing archives, creating backups and locating backups for restores. This is the default value for the `--pgbackrest-storage-type` flag.
* `s3` - pgbackrest will use the pre-configured AWS S3 storage bucket for storing archives, creating backups and locating backups for restores
* `local,s3` (both) - pgbackrest will use both volumes local to the container (e.g. persistent volumes) and the pre-configured AWS S3 storage bucket for storing archives. This also allows the use of local and/or S3 storage when performing backups and restores.

For instance, the following command enables both `local` and `s3` storage in a new cluster:

```bash
pgo create cluster mycluster --pgbackrest-storage-type=local,s3 -n pgouser1
```

As described above, this will result in pgbackrest pushing archives to both local and S3 storage, while also allowing both local and S3 storage to be utilized for backups and restores. However, you could also enable S3 storage only when creating the cluster:

```bash
pgo create cluster mycluster --pgbackrest-storage-type=s3 -n pgouser1
```

Now all archives for the cluster will be pushed to S3 storage only, and local storage will not be utilized for storing archives (nor can local storage be utilized for backups and restores).

#### Using S3 to Backup & Restore

As described above, once S3 storage has been enabled for a cluster, it can also be used when backing up or restoring a cluster. Here, both local and S3 storage are selected when performing a backup:

```bash
pgo backup mycluster --backup-type=pgbackrest --pgbackrest-storage-type=local,s3 -n pgouser1
```

This results in pgbackrest creating a backup in a local volume (e.g. a persistent volume), while also creating an additional backup in the configured S3 storage bucket. However, a backup can be created using S3 storage only:

```bash
pgo backup mycluster --backup-type=pgbackrest --pgbackrest-storage-type=s3 -n pgouser1
```

Now pgbackrest will create a backup in the S3 storage bucket only.

When performing a restore, either `local` or `s3` must be selected (selecting both for a restore will result in an error). For instance, the following command specifies S3 storage for the restore:

```bash
pgo restore mycluster --pgbackrest-storage-type=s3 -n pgouser1
```

This will result in a full restore utilizing the backups and archives stored in the configured S3 storage bucket.

_Please note that because `local` is the default storage type for the `backup` and `restore` commands, `s3` must be explicitly set using the `--pgbackrest-storage-type` flag when performing backups and restores on clusters where only S3 storage is enabled._

#### AWS Certificate Authority

The PostgreSQL Operator installation includes a certificate bundle that is utilized by default to establish trust between pgbackrest and the AWS S3 endpoint used for S3 storage. Please modify or replace this certificate bundle as needed prior to deploying the Operator if another certificate authority is required to properly establish trust between pgbackrest and your S3 endpoint.

The certificate bundle can be found here: `$PGOROOT/pgo-backrest-repo/aws-s3-ca.crt`.

When modifying or replacing the certificate bundle, please be sure to maintain the same path and filename.
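
As a minimal sketch, replacing the bundle before deploying the Operator might look like the following; `my-ca.crt` is a hypothetical file containing your certificate authority chain:

```bash
# Replace the default bundle, keeping the same path and filename
cp my-ca.crt $PGOROOT/pgo-backrest-repo/aws-s3-ca.crt
```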

hugo/content/gettingstarted/Design/custom-config-ssl.md

Lines changed: 5 additions & 8 deletions
@@ -5,16 +5,14 @@ draft: false
 weight: 5
 ---
 
-## Custom Postgres SSL Configurations
+## Custom PostgreSQL SSL Configurations
 
-The Crunchy Data Postgres Operator can create clusters that use SSL authentication by
-utilizing custom configmaps.
+The PostgreSQL Operator can create clusters that use SSL authentication by utilizing custom configmaps.
 
 #### Configuration Files for SSL Authentication
 
-Users and administrators can specify a
-custom set of Postgres configuration files to be used when creating
-a new Postgres cluster. This example uses the files below-
+Users and administrators can specify a custom set of PostgreSQL configuration files to be used when creating
+a new PostgreSQL cluster. This example uses the files below-
 
 * postgresql.conf
 * pg_hba.conf
@@ -24,8 +22,7 @@ along with generated security certificates, to setup a custom SSL configuration.
 
 #### Config Files Purpose
 
-The *postgresql.conf* file is the main Postgresql configuration file that allows
-the definition of a wide variety of tuning parameters and features.
+The *postgresql.conf* file is the main PostgreSQL configuration file that allows the definition of a wide variety of tuning parameters and features.
 
 The *pg_hba.conf* file is the way Postgresql secures client access.
 

0 commit comments