tests/longevity/results/1.1.0/1.1.0.md (11 additions, 10 deletions)
@@ -86,22 +86,23 @@ Further investigation is out of scope of this test.
```text
resource.type="k8s_container"
-resource.labels.cluster_name="ciara-1"
+resource.labels.cluster_name="<CLUSTER_NAME>"
resource.labels.namespace_name="nginx-gateway"
resource.labels.container_name="nginx-gateway"
severity=ERROR
SEARCH("error")
```

-There were 53 error logs across 3 pod instances. They came in 3 almost identical batches, starting just over 24 hours
-after the initial deployment, and then each subsequent error batch just over 24 hours after the last. They were all
-relating to leader election loss, and subsequent restart (see https://github.com/nginxinc/nginx-gateway-fabric/issues/1100).
+There were 53 error logs, and 6 restarts, across 3 pod instances. The error logs came in 3 almost identical batches,
+starting just over 24 hours after the initial deployment, and then each subsequent error batch just over 24 hours after
+the last. They all related to leader election loss and the subsequent restart (see https://github.com/nginxinc/nginx-gateway-fabric/issues/1100). There were also 2 termination events, both of which occurred approximately 5 minutes
+after a leader election loss and successful restart.

-Each error batches caused the pod to restart, but not terminate. However, the first pod was terminated about 10 minutes
+Each error batch caused the pod to restart, but not terminate. The first pod was terminated about 5 minutes
after the first error batch and subsequent restart occurred. A similar occurrence happened after the third error batch.
-Exactly why these pods were terminated is not currently clear, but it looks to be a cluster event (perhaps an upgrade)
-as the coffee and tea pods were terminated at that time also. All the restarts happened roughly at the same time each
-day.
+There was no termination event after the second error batch. Exactly why these pods were terminated is not currently
+clear, but it looks to be a cluster event (perhaps an upgrade) as the coffee and tea pods were terminated at that time
+also. All the restarts happened roughly at the same time each day.
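
As a side note on reproducing the query: the error filter shown in the hunk above can also be run programmatically. Below is a minimal sketch using the google-cloud-logging Python client; the client setup, credentials, and the `<CLUSTER_NAME>` placeholder are assumptions, not part of the original results.

```python
# Minimal sketch (not part of the original results): list the same ERROR-level
# nginx-gateway logs via the Cloud Logging API. Assumes the google-cloud-logging
# package is installed and Application Default Credentials are configured.
from google.cloud import logging as cloud_logging

# Same filter as in the results doc; adjacent clauses are implicitly ANDed.
LOG_FILTER = """
resource.type="k8s_container"
resource.labels.cluster_name="<CLUSTER_NAME>"
resource.labels.namespace_name="nginx-gateway"
resource.labels.container_name="nginx-gateway"
severity=ERROR
SEARCH("error")
"""

client = cloud_logging.Client()

# Iterate over matching entries, newest first, printing when and what was logged.
for entry in client.list_entries(filter_=LOG_FILTER, order_by=cloud_logging.DESCENDING):
    print(entry.timestamp, entry.payload)
```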