Please answer a few short questions to help us understand your problem/question better:
Which image of the operator are you using? e.g. registry.opensource.zalan.do/acid/postgres-operator:v1.6.2
registry.opensource.zalan.do/acid/postgres-operator:v1.6.2
Where do you run it - cloud or metal? Kubernetes or OpenShift? [AWS K8s | GCP ... | Bare Metal K8s]
Bare Metal K8s
Are you running Postgres Operator in production? [yes | no]
not yet
Type of issue? [Bug report, question, feature request, etc.]
Bug report
Operator is doing rolling update every sync (every 30 mins by default).
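For context, the 30-minute interval mentioned above is the operator's default resync period. A sketch of the relevant OperatorConfiguration excerpt (parameter name taken from the operator's default configuration; values assumed):

```yaml
# Excerpt (sketch) of the OperatorConfiguration.
# resync_period defaults to 30m, so the spurious
# rolling update recurs every half hour.
configuration:
  resync_period: 30m
```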
time="2021-04-08T06:04:15Z" level=info msg="SYNC event has been queued" cluster-name=staging/postgre-cluster pkg=controller worker=0
time="2021-04-08T06:04:15Z" level=info msg="there are 1 clusters running" pkg=controller
time="2021-04-08T06:04:15Z" level=info msg="syncing of the cluster started" cluster-name=staging/postgre-cluster pkg=controller worker=0
time="2021-04-08T06:04:15Z" level=warning msg="cannot initialize a new manifest robot role with the name of the protected user \"admin\"" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="team API is disabled" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=info msg="syncing secrets" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="secret staging/kong.postgre-cluster.credentials already exists, fetching its password" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="secret staging/postgres.postgre-cluster.credentials already exists, fetching its password" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="secret staging/standby.postgre-cluster.credentials already exists, fetching its password" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="syncing master service" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="syncing replica service" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="No load balancer created for the replica service" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="syncing volumes using \"pvc\" storage resize mode" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=info msg="volume claims do not require changes" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="syncing statefulsets" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="Generating Spilo container, environment variables" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="[{SCOPE postgre-cluster nil} {PGROOT /home/postgres/pgdata/pgroot nil} {POD_IP &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_NAMESPACE &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {PGUSER_SUPERUSER postgres nil} {KUBERNETES_SCOPE_LABEL cluster-name nil} {KUBERNETES_ROLE_LABEL spilo-role nil} {PGPASSWORD_SUPERUSER &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:postgres.postgre-cluster.credentials,},Key:password,Optional:nil,},}} {PGUSER_STANDBY standby nil} {PGPASSWORD_STANDBY &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:standby.postgre-cluster.credentials,},Key:password,Optional:nil,},}} {PAM_OAUTH2 https://info.example.com/oauth2/tokeninfo?access_token= uid realm=/employees nil} {HUMAN_ROLE zalandos nil} {PGVERSION 13 nil} {KUBERNETES_LABELS {\"application\":\"spilo\"} nil} {SPILO_CONFIGURATION {\"postgresql\":{\"pg_hba\":[\"local all all trust\",\"hostssl all +zalandos 127.0.0.1/32 pam\",\"host all all 127.0.0.1/32 md5\",\"hostssl all +zalandos ::1/128 pam\",\"host all all ::1/128 md5\",\"hostssl replication standby all md5\",\"hostnossl all all all md5\",\"hostssl all +zalandos all pam\",\"hostssl all all all md5\"]},\"bootstrap\":{\"initdb\":[{\"auth-host\":\"md5\"},{\"auth-local\":\"trust\"}],\"users\":{\"zalandos\":{\"password\":\"\",\"options\":[\"CREATEDB\",\"NOLOGIN\"]}},\"dcs\":{\"maximum_lag_on_failover\":33554432}}} nil} {DCS_ENABLE_KUBERNETES_API true nil} {ENABLE_WAL_PATH_COMPAT true nil}]" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=info msg="statefulset staging/postgre-cluster is not in the desired state and needs to be updated" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- terminationMessagePath: /dev/termination-log," cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- terminationMessagePolicy: File," cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- restartPolicy: Always," cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- dnsPolicy: ClusterFirst," cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- serviceAccount: postgres-pod," cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- securityContext: {}," cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- schedulerName: default-scheduler" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="+ securityContext: {}" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- kind: PersistentVolumeClaim," cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- apiVersion: v1," cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- status: {" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- phase: Pending" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- }" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="+ status: {}" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- }," cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- revisionHistoryLimit: 10" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="+ }" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="metadata.annotation are different" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="-{" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="- zalando-postgres-operator-rolling-update-required: true" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="-}" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg=+null cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=info msg="reason: new statefulset's annotations do not match the current one" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="updating statefulset" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="patching statefulset annotations" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="syncing pod disruption budgets" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="syncing roles" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="closing database connection" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="syncing databases" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="closing database connection" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="syncing prepared databases with schemas" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="syncing connection pooler from (nil, nil) to (nil, nil)" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=debug msg="could not get connection pooler secret pooler.postgre-cluster.credentials: secrets \"pooler.postgre-cluster.credentials\" not found" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=info msg="cluster version up to date. current: 130002, min desired: 130000" cluster-name=staging/postgre-cluster pkg=cluster
time="2021-04-08T06:04:15Z" level=info msg="cluster has been synced" cluster-name=staging/postgre-cluster pkg=controller worker=0
Operator is installed with the default manifests, except that it is in a custom namespace (not default).
We have only one cluster (for now) and it is in the staging namespace.
Cluster is configured with the following manifest:

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: postgre-cluster
  namespace: staging
spec:
  resources:
    requests:
      memory: "250Mi"
      cpu: "10m"
    limits:
      memory: "1Gi"
      cpu: "1"
  teamId: "test"
  volume:
    size: 10Gi
  numberOfInstances: 2
  users:
    admin:  # database owner
    - superuser
    - createdb
    test: []  # role for application foo
  databases:
    test: test
  postgresql:
    version: "13"
  patroni:
    pg_hba:
    - local all all trust
    - hostssl all +zalandos 127.0.0.1/32 pam
    - host all all 127.0.0.1/32 md5
    - hostssl all +zalandos ::1/128 pam
    - host all all ::1/128 md5
    - hostssl replication standby all md5
    - hostnossl all all all md5
    - hostssl all +zalandos all pam
    - hostssl all all all md5
    maximum_lag_on_failover: 33554432
```
If you need anything else please let me know and I will provide it.
Looks like it's because of the rolling-update flag on the statefulset. With the latest release we moved it to the pod level. During a rolling update the statefulset is updated/replaced, but afterwards we propagate the annotations of the previous version again, producing the diff on the next sync.
However, I think this should only affect the statefulset, right? I see no logs regarding the rolling of pods.
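If it helps, here is a minimal sketch (in Python, purely illustrative; the operator itself is written in Go) of why the comparison never converges: the freshly generated statefulset has no rolling-update annotation, while the live one still carries the flag seen in the debug diff above, so every sync reports a mismatch.

```python
# Illustrative sketch only, not the operator's actual code.
# The annotation key is taken from the debug diff in the logs above.

ROLLING_UPDATE_FLAG = "zalando-postgres-operator-rolling-update-required"

# Desired statefulset, generated fresh from the manifest each sync:
# the flag now lives on the pods, so it is absent here.
desired_annotations: dict[str, str] = {}

# Live statefulset: the flag was propagated from the previous version.
current_annotations = {ROLLING_UPDATE_FLAG: "true"}

def annotations_differ(desired: dict, current: dict) -> bool:
    """Mimic the sync comparison: any mismatch marks the statefulset stale."""
    return desired != current

# The stale flag is re-propagated after each statefulset update,
# so the same diff (and another update) recurs on every sync.
assert annotations_differ(desired_annotations, current_annotations)
```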