**Describe the bug**

When reading from a compacted topic using the `FirstOffset` start-offset configuration without a consumer group, the library panics with the following stack trace:
```
panic: markRead: negative count

goroutine 68 [running]:
github.com/segmentio/kafka-go.(*messageSetReader).markRead(0xc0004587a0)
	external/com_github_segmentio_kafka_go/message_reader.go:345 +0x11a
github.com/segmentio/kafka-go.(*messageSetReader).readMessageV2(0xc0004587a0, 0x2c709, 0xc000507ac8, 0xc000507ab8, 0x2c708, 0x2c708, 0xc000059800, 0xc0005078f8, 0x416b3b, 0xc0005078f8, ...)
	external/com_github_segmentio_kafka_go/message_reader.go:329 +0x49d
github.com/segmentio/kafka-go.(*messageSetReader).readMessage(0xc0004587a0, 0x2c709, 0xc000507ac8, 0xc000507ab8, 0x2c708, 0xc000507a5c, 0x17fd1b29700, 0x0, 0xc0004922a0, 0x0, ...)
	external/com_github_segmentio_kafka_go/message_reader.go:136 +0xc5
github.com/segmentio/kafka-go.(*Batch).readMessage(0xc000195880, 0xc000507ac8, 0xc000507ab8, 0x0, 0x17fd1b29700, 0x100000001, 0xc0004922a0, 0xc0004922a0, 0xc000507ab8, 0x2)
	external/com_github_segmentio_kafka_go/batch.go:240 +0x79
github.com/segmentio/kafka-go.(*Batch).ReadMessage(0xc000195880, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	external/com_github_segmentio_kafka_go/batch.go:192 +0x11a
github.com/segmentio/kafka-go.(*reader).read(0xc000507ed8, 0xe30160, 0xc0004882c0, 0x2c709, 0xc0000e21e0, 0x0, 0x0, 0x0)
	external/com_github_segmentio_kafka_go/reader.go:1492 +0x3ec
github.com/segmentio/kafka-go.(*reader).run(0xc000507ed8, 0xe30160, 0xc0004882c0, 0x0)
	external/com_github_segmentio_kafka_go/reader.go:1310 +0x2d9
github.com/segmentio/kafka-go.(*Reader).start.func1(0xc0004d8000, 0xe30160, 0xc0004882c0, 0xc00004402c, 0x10, 0x0, 0xfffffffffffffffe, 0xc0004d8138)
	external/com_github_segmentio_kafka_go/reader.go:1211 +0x1d8
created by github.com/segmentio/kafka-go.(*Reader).start
	external/com_github_segmentio_kafka_go/reader.go:1191 +0x1a5
```
**Kafka Version**
2.4.0
**To Reproduce**

Sadly, I'm unable to reproduce the issue. But maybe you've already seen it in the past, or you can point me to something I could check.
To be precise, the application kept panicking even after being restarted over and over again. My only remedy was to truncate the topic, which brought the consumer back to life. The messages themselves were not special at all: I exported them and re-imported them into a different cluster, and I could not make the application fail against that other cluster. So the problem must (additionally?) be connected to some internal state of the Kafka cluster.
**Expected behavior**

No panic should happen at all.
**Additional context**

The version of the library in use is 0.4.30.
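For reference, the reader was configured roughly as sketched below (broker address, topic name, and partition are placeholders, not the real values); this is the `FirstOffset`-without-consumer-group setup described above:

```go
package main

import (
	"context"
	"fmt"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	// No GroupID is set, so the reader consumes a single partition
	// directly and starts at the earliest available offset.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers:     []string{"localhost:9092"}, // placeholder
		Topic:       "my-compacted-topic",       // placeholder
		Partition:   0,
		StartOffset: kafka.FirstOffset,
	})
	defer r.Close()

	for {
		m, err := r.ReadMessage(context.Background())
		if err != nil {
			fmt.Println("read error:", err)
			return
		}
		fmt.Printf("offset=%d key=%s\n", m.Offset, string(m.Key))
	}
}
```

The panic occurs inside the library's fetch loop while iterating such a partition, not in the application code itself.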