* sync all resources to cluster fields (CronJob, Streams, Patroni resources)
* separated sync and delete logic for Patroni resources
* align delete streams and secrets logic with other resources
* rename gatherApplicationIds to getDistinctApplicationIds
* improve slot check before syncing streams CRD
* add ownerReferences and annotations diff to Patroni objects (see the sketch below this list)
* add extra sync code for config service so it does not get too ugly
* some bugfixes when comparing annotations and return an error when a diff is found
* sync Patroni resources on update event and extended unit tests
* add config service/endpoint owner references check to e2e test
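
For orientation, here is a minimal sketch of how such an ownerReferences/annotations diff check could look. The names and structure (`needsSync`, plain `metav1.ObjectMeta` values, the annotation key) are assumptions for illustration only, not the operator's actual implementation.

```go
// A minimal, hypothetical sketch of the kind of diff check described above:
// compare the owner references and annotations the operator wants on a child
// resource with what the cluster currently has, and report whether a sync is
// needed. Names and structure are assumptions, not the operator's real code.
package main

import (
	"fmt"
	"reflect"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// needsSync returns true when either the owner references or the desired
// annotations on the live object differ from the desired metadata.
func needsSync(desired, actual metav1.ObjectMeta) bool {
	if !reflect.DeepEqual(desired.OwnerReferences, actual.OwnerReferences) {
		return true
	}
	// Only require that desired annotations are present with matching values;
	// extra annotations set by other controllers are tolerated here.
	for key, value := range desired.Annotations {
		if actual.Annotations[key] != value {
			return true
		}
	}
	return false
}

func main() {
	controller := true
	desired := metav1.ObjectMeta{
		Annotations: map[string]string{"example.org/managed-by": "postgres-operator"}, // illustrative key
		OwnerReferences: []metav1.OwnerReference{{
			APIVersion: "acid.zalan.do/v1",
			Kind:       "postgresql",
			Name:       "acid-minimal-cluster",
			Controller: &controller,
		}},
	}
	// e.g. a config endpoint created by Patroni that carries no owner reference yet
	actual := metav1.ObjectMeta{}

	fmt.Println("sync needed:", needsSync(desired, actual)) // prints: sync needed: true
}
```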
docs/administrator.md (2 additions & 3 deletions)

@@ -252,17 +252,16 @@ will differ and trigger a rolling update of the pods.
 ## Owner References and Finalizers
 
 The Postgres Operator can set [owner references](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/) to most of a cluster's child resources to improve
-monitoring with GitOps tools and enable cascading deletes. There are three
+monitoring with GitOps tools and enable cascading deletes. There are two
 exceptions:
 
 * Persistent Volume Claims, because they are handled by the [PV Reclaim Policy](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/) of the Stateful Set
-* The config endpoint + headless service resource because it is managed by Patroni
 * Cross-namespace secrets, because owner references are not allowed across namespaces by design
 
 The operator would clean these resources up with its regular delete loop
 unless they got synced correctly. If for some reason the initial cluster sync
 fails, e.g. after a cluster creation or operator restart, a deletion of the
-cluster manifest would leave orphaned resources behind which the user has to
+cluster manifest might leave orphaned resources behind which the user has to
 clean up manually.
 
 Another option is to enable finalizers which first ensures the deletion of all
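
As a companion to the documentation change above, the following sketch shows how an operator could set the cluster as owner of a child Secret it creates; the helper name and field values are assumptions for illustration, not the postgres-operator's actual code. Kubernetes only cascades deletes through owner references within a single namespace, which is why cross-namespace secrets remain an exception.

```go
// Minimal, hypothetical sketch (not the operator's actual helper): attach the
// Postgres cluster as the controlling owner of a child Secret so that deleting
// the cluster manifest cascades to the Secret via Kubernetes garbage collection.
// This only works when owner and child live in the same namespace.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// withClusterOwner replaces the Secret's owner references with a single
// controller reference pointing at the Postgres cluster object.
func withClusterOwner(secret *corev1.Secret, clusterName string, clusterUID types.UID) {
	controller := true
	secret.OwnerReferences = []metav1.OwnerReference{{
		APIVersion: "acid.zalan.do/v1",
		Kind:       "postgresql",
		Name:       clusterName,
		UID:        clusterUID,
		Controller: &controller,
	}}
}

func main() {
	secret := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{
		Name:      "example-credentials", // illustrative name
		Namespace: "default",
	}}
	withClusterOwner(secret, "acid-minimal-cluster", "11111111-2222-3333-4444-555555555555")
	fmt.Println("owned by:", secret.OwnerReferences[0].Name) // owned by: acid-minimal-cluster
}
```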