
Error logs from ReflectorRunnable caused by 'too old resource version' #1580

Closed
@FlorianBuchegger

Description


We are using the SharedIndexInformer and notice error logs appearing from time to time, e.g.:

2021-03-07 17:20:46.507 ERROR 1 --- [els.V1Service-1] i.k.c.informer.cache.ReflectorRunnable   : class io.kubernetes.client.openapi.models.V1Service#Reflector loop failed unexpectedly
java.lang.RuntimeException: got ERROR event and its status: class V1Status {
    apiVersion: v1
    code: 410
    details: null
    kind: Status
    message: too old resource version: 7370317 (7376352)
    metadata: class V1ListMeta {
        _continue: null
        remainingItemCount: null
        resourceVersion: null
        selfLink: null
    }
    reason: Expired
    status: Failure
}
	at io.kubernetes.client.informer.cache.ReflectorRunnable.watchHandler(ReflectorRunnable.java:198) ~[client-java-11.0.0.jar:na]
	at io.kubernetes.client.informer.cache.ReflectorRunnable.run(ReflectorRunnable.java:118) ~[client-java-11.0.0.jar:na]
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) ~[na:na]
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) ~[na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
	at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]

As this seems to be expected when there are no changes for some time (see kubernetes/kubernetes#22024), we wonder if there is a way to reduce the log level for these messages to INFO, since the watch recovers automatically. We could exempt this class from our log analysis, but doing so we might miss genuinely important error logs.
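For reference, the workaround we are considering is to raise the threshold for this one logger in our Logback configuration (the logger name is expanded from the abbreviated `i.k.c.informer.cache.ReflectorRunnable` in the log output above). A rough sketch, assuming Logback is the backend, which it is for us since we run Spring Boot:

```xml
<!-- logback-spring.xml: suppress only the Reflector's relist error logs.
     Setting the level to OFF drops the periodic 410 "Expired" messages,
     but would also hide any other errors this class might log. -->
<configuration>
    <logger name="io.kubernetes.client.informer.cache.ReflectorRunnable" level="OFF"/>
</configuration>
```

Equivalently, in a Spring Boot `application.properties` this would be `logging.level.io.kubernetes.client.informer.cache.ReflectorRunnable=OFF`. Our concern is exactly that this is all-or-nothing: it silences every error from that class, not just the expected 410s.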

Is using the SharedIndexInformer still the recommended way to avoid polling Kubernetes excessively?
Is it correct that it is not a problem for these errors to occur from time to time?
What is the recommended way of dealing with the log output?

Thanks for your help,
kind regards,
Florian
