Table of Contents
This module provides tuning, maintenance, and backups for PE PostgreSQL.
## What does this module provide?
This module provides the following functionality:
1. Customized settings for PE PostgreSQL
1. Maintenance to keep the `pe-puppetdb` database lean and fast
1. Backups for all PE PostgreSQL databases, disabled by default
    - The `pe-puppetdb` database is backed up every week
    - Other databases are backed up every night
## Usage
In order to use this module, classify the node running PE PostgreSQL with the `pe_databases` class.
The Primary Server and Replica run PE PostgreSQL in most instances, but there may be one or more servers with this role in an XL deployment.
To classify via the PE Console, create a new node group called "PE Database Maintenance".
Then pin the node(s) running pe-postgresql to that node group.
It is not recommended to classify using a pre-existing node group in the PE Console.
## Items you may want to configure
### Backup Schedule
Backups are not activated by default but can be enabled by setting the following parameter:
Hiera classification example:
```
pe_databases::manage_database_backups: true
```
You can modify the default backup schedule by providing an array of hashes that describe the databases to back up and their backup schedules.
Please refer to the [hieradata_examples](https://github.com/puppetlabs/puppetlabs-pe_databases/tree/main/hieradata_examples) directory of this repository for examples.
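As an illustration only, a custom schedule might be expressed in hieradata along these lines. The hash keys below are hypothetical and not verified against the module; treat the linked hieradata_examples directory as the authoritative format:

```
# Hypothetical sketch: key and value names are illustrative only;
# see the hieradata_examples directory for real, supported examples.
pe_databases::manage_database_backups: true
pe_databases::backup::databases_to_backup:
  - database: 'pe-puppetdb'
    schedule: 'weekly'
  - database: 'pe-classifier'
    schedule: 'daily'
```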
> IMPORTANT NOTE: If you change the default schedule, the module will stop managing the associated crontab entries, and there is not a clean way to automatically remove unmanaged crontab entries.
> So, if you change the default schedule, you should delete all pe-postgres crontab entries via `crontab -r -u pe-postgres` and let Puppet repopulate them.
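A sketch of that cleanup, assuming you run it as root on the node hosting pe-postgresql:

```
# Inspect the current pe-postgres crontab entries before removing them
crontab -l -u pe-postgres

# Remove all pe-postgres crontab entries
crontab -r -u pe-postgres

# Run the agent so Puppet repopulates the managed entries
puppet agent -t
```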
You can configure the retention policy by setting `pe_databases::backup::retenti…`
### Disable Maintenance
The maintenance systemd timers will perform a `pg_repack` on various `pe-puppetdb` tables to keep them lean and fast.
`pg_repack` is a non-blocking operation and should have no impact on the operation of Puppet Enterprise; however, if for some reason you experience issues, you can disable the maintenance systemd timers.
You can do so by setting `pe_databases::disable_maintenance: true` in your hieradata.
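Hiera classification example:

```
pe_databases::disable_maintenance: true
```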
# General PostgreSQL Recommendations
This module provides a script for backing up PE PostgreSQL databases and two def…
## Maintenance
This module provides systemd timers to pg_repack tables in the `pe-puppetdb` database:

- facts tables are pg_repack'd Tuesdays and Saturdays at 4:30AM
- catalogs tables are pg_repack'd Sundays and Thursdays at 4:30AM
- the reports table is pg_repack'd on the 10th of the month at 5:30AM on systems with PE 2019.7.0 or earlier
- the resource_events table is pg_repack'd on the 15th of the month at 5:30AM on systems with PE 2019.3.0 or earlier
- other tables are pg_repack'd on the 20th of the month at 5:30AM
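The schedules above are regular systemd timers, so you can inspect their next scheduled runs on the node. The exact unit names the module installs are not listed in this README, so look for them in the output:

```
# Show upcoming runs of all timers on the node;
# the module-managed maintenance units should appear in this list
systemctl list-timers --all
```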
> Note: You may be able to improve the performance (reduce time to completion) of maintenance tasks by increasing the [maintenance_work_mem](#maintenance_work_mem) setting.
Please note that when using `pg_repack` as part of the pe_databases module, unclean exits can leave behind the pg_repack schema that would otherwise have been cleaned up. This can result in messages similar to the following:
```
WARNING: the table "public.fact_paths" already has a trigger called "repack_trig…
DETAIL: The trigger was probably installed during a previous attempt to run pg_repack on the table which was interrupted and for some reason failed to clean up the temporary objects. Please drop the trigger or drop and recreate the pg_repack extension altogether to remove all the temporary objects left over.
```
The module now contains a task, `reset_pgrepack_schema`, to mitigate this issue. Run it against your Primary or PE PostgreSQL server; it will drop and recreate the pg_repack extension, removing the temporary objects.
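As an illustrative sketch, the task can be invoked with the standard PE task runner; the target host below is a placeholder for your Primary or PE PostgreSQL server:

```
puppet task run pe_databases::reset_pgrepack_schema --nodes <primary-or-postgresql-host>
```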