Description
Describe the bug
When a cluster has a pgpool, deployments are created with `crunchy-pgpool: true`. They won't get a `service-name` label, so they can't be reached through the corresponding service (selector `service-name: <cluster_name>`). This means that pgpool can't access them, that replicas are stuck waiting for the master node, and that the master node is waiting for connections that no one can make (except by direct ClusterIP).
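One way to see the mismatch (a sketch; the label and service names come from this report, and the concrete resource names are placeholders):

```shell
# The replica service selects pods by their service-name label
kubectl get svc <cluster_name>-replica -o jsonpath='{.spec.selector}'

# The affected pods carry crunchy-pgpool=true but no service-name label,
# so the selector above matches nothing
kubectl get pods -l "crunchy-pgpool=true" --show-labels
```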
As far as I understand the code, `controller/podcontroller.go::isPostgresPod` will return false if the pod has `crunchy-pgpool` equal to `true`. However, any pod in a cluster with pgpool (except for the primary deployment after an update) will have the `crunchy-pgpool` label.
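To confirm which pods carry the label, a quick check (the pod names in the comments are assumptions based on the operator's usual naming):

```shell
# Not only the pgpool pod matches; the postgres pods do too
kubectl get pods -l "crunchy-pgpool=true" -o name
#   pod/<cluster_name>-...          primary postgres pod (may lack the label after an update)
#   pod/<cluster_name>-replica-...  replica postgres pods
#   pod/<cluster_name>-pgpool-...   pgpool pod
```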
To Reproduce
Steps to reproduce the behavior:
For my cluster:
- `pgo create cluster <cluster_name> --pgpool --replica-count=2`, or `pgo create cluster <cluster_name>` followed by `pgo add pgpool <cluster_name>`
- Delete a pod; the newly spawned pod will never be tagged with `service-name: <cluster_name>-replica` (see the snippet below)
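A way to reproduce that last step (the pod name is a placeholder):

```shell
# Delete one replica pod and inspect its replacement
kubectl delete pod <replica_pod_name>
kubectl get pods --show-labels
# The new pod has crunchy-pgpool=true but never gets
# service-name=<cluster_name>-replica
```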
Expected behavior
Deployments and pods should be correctly tagged with `service-name: <cluster_name>` or `service-name: <cluster_name>-replica`.
Replicas use the `<cluster_name>` service to reach the primary. pgpool uses the `<cluster_name>` service for the primary and the `<cluster_name>-replica` service for the replicas.
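Once the pods are tagged, both services should have endpoints again; a check, plus a manual-label workaround I would expect to restore connectivity in the meantime (the pod name is a placeholder):

```shell
# Both services should now select the tagged pods
kubectl get endpoints <cluster_name> <cluster_name>-replica

# Presumed interim workaround: apply the missing label by hand
kubectl label pod <replica_pod_name> "service-name=<cluster_name>-replica"
```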
Please tell us about your environment:
- Operating System: Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u6 (2018-10-08) x86_64 GNU/Linux
- Where is this running ( Local, Cloud Provider): Local
- Storage being used (NFS, Hostpath, Gluster, etc): CephFS (with initContainers for fs permissions)
- Container Image Tag:
  - crunchydata/crunchy-postgres:centos7-11.2-2.3.1
  - crunchydata/crunchy-collect:centos7-11.2-2.3.1
  - crunchydata/crunchy-pgpool:centos7-11.2-2.3.1
  - crunchydata/pgo-apiserver:centos7-3.5.1
  - crunchydata/postgres-operator:centos7-3.5.1
  - crunchydata/pgo-scheduler:centos7-3.5.1
- PostgreSQL Version: 11.2 (per the image tags above)
- Platform (Docker, Kubernetes, OpenShift): Kubernetes
- Platform Version: v1.10.11