Commit f0b91a7: Review feedback
Parent: 55f547d

1 file changed: +11, -10 lines

tests/longevity/results/1.1.0/1.1.0.md

Lines changed: 11 additions & 10 deletions

@@ -86,22 +86,23 @@ Further investigation is out of scope of this test.
 
 ```text
 resource.type="k8s_container"
-resource.labels.cluster_name="ciara-1"
+resource.labels.cluster_name="<CLUSTER_NAME>"
 resource.labels.namespace_name="nginx-gateway"
 resource.labels.container_name="nginx-gateway"
 severity=ERROR
 SEARCH("error")
 ```
 
-There were 53 error logs across 3 pod instances. They came in 3 almost identical batches, starting just over 24 hours
-after the initial deployment, and then each subsequent error batch just over 24 hours after the last. They were all
-relating to leader election loss, and subsequent restart (see https://github.com/nginxinc/nginx-gateway-fabric/issues/1100).
+There were 53 error logs, and 6 restarts, across 3 pod instances. The error logs came in 3 almost identical batches,
+starting just over 24 hours after the initial deployment, and then each subsequent error batch just over 24 hours after
+the last. They were all relating to leader election loss, and subsequent restart (see https://github.com/nginxinc/nginx-gateway-fabric/issues/1100). There were also 2 termination events, both of these occurred approximately 5 minutes
+after a leader election loss and successful restart.
 
-Each error batches caused the pod to restart, but not terminate. However, the first pod was terminated about 10 minutes
+Each error batches caused the pod to restart, but not terminate. The first pod was terminated about 5 minutes
 after the first error batch and subsequent restart occurred. A similar occurance happened after the third error batch.
-Exactly why these pods were terminated is not currently clear, but it looks to be a cluster event (perhaps an upgrade)
-as the coffee and tea pods were terminated at that time also. All the restarts happened roughly at the same time each
-day.
+There was no termination event after the second error batch. Exactly why these pods were terminated is not currently
+clear, but it looks to be a cluster event (perhaps an upgrade) as the coffee and tea pods were terminated at that time
+also. All the restarts happened roughly at the same time each day.
 
 ```text
 {"level":"info", "msg":"Starting manager", "ts":"2023-12-13T17:45:10Z"} -> Start-up
@@ -120,7 +121,7 @@ Errors:
 
 ```text
 resource.type=k8s_container AND
-resource.labels.cluster_name="ciara-1" AND
+resource.labels.cluster_name="<CLUSTER_NAME>" AND
 resource.labels.container_name="nginx" AND
 severity=ERROR AND
 SEARCH("`[warn]`") OR SEARCH("`[error]`")
@@ -134,7 +135,7 @@ Non-200 response codes in NGINX access logs:
 
 ```text
 resource.type=k8s_container AND
-resource.labels.cluster_name="ciara-1" AND
+resource.labels.cluster_name="<CLUSTER_NAME>" AND
 resource.labels.container_name="nginx"
 "GET" "HTTP/1.1" -"200"
 ```
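
Note: the Cloud Logging filters in the diff above can also be run outside the console. Below is a minimal sketch using the google-cloud-logging Python client for the NGF error-log query; the `<PROJECT_ID>` placeholder, the 7-day lookback, and the print format are assumptions added for illustration and are not part of the results document or this commit.

```python
# Minimal sketch (not part of the commit): query the same NGF error logs
# programmatically with the google-cloud-logging client library.
# pip install google-cloud-logging
from datetime import datetime, timedelta, timezone

from google.cloud import logging

PROJECT_ID = "<PROJECT_ID>"      # assumption: your GCP project
CLUSTER_NAME = "<CLUSTER_NAME>"  # same placeholder as in the diff above

# Filter copied from the results doc, plus an assumed 7-day time bound.
# If the API rejects the SEARCH() function, a textPayload:"error" substring
# match is a close substitute.
start = (datetime.now(timezone.utc) - timedelta(days=7)).isoformat()
log_filter = f"""
resource.type="k8s_container"
resource.labels.cluster_name="{CLUSTER_NAME}"
resource.labels.namespace_name="nginx-gateway"
resource.labels.container_name="nginx-gateway"
severity=ERROR
SEARCH("error")
timestamp>="{start}"
"""

client = logging.Client(project=PROJECT_ID)
for entry in client.list_entries(filter_=log_filter, order_by=logging.DESCENDING):
    # entry.payload is a dict for JSON logs and a plain string for text logs.
    print(entry.timestamp, entry.severity, entry.payload)
```

The NGINX `[warn]`/`[error]` filter and the non-200 access-log filter from the other two hunks can be swapped in for `log_filter` in the same way.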
