Spring Kafka Batch listener not receiving more than 1 or 2 messages
Our Spring Kafka batch consumer receives only one or two messages per batch. We have increased fetch.min.bytes to 9000 and fetch.max.wait.ms to 5000, [based on this answer](https://stackoverflow.com/questions/50283011/how-to-increase-the-number-of-messages-consumed-by-spring-kafka-consumer-in-each).
Even after increasing these values we are still receiving only 1 or 2 messages. Do we need to increase fetch.min.bytes and fetch.max.wait.ms further, add some other configuration, or reduce max.poll.records? In the local environment we were receiving 10 messages per batch, but against the AWS MSK cluster we receive only 1 or 2. (A sketch of the listener setup is shown after the config log below.)
Consumer config INFO log:
2022-05-13 16:15:48.117 INFO main org.apache.kafka.clients.consumer.ConsumerConfig:361 - ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [xyz.amazonaws.com:yyyy, xxxxxyzxx.amazonaws.com:yyyy, xxxxxxzzz.amazonaws.com:yyyy]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = consumer-consumer.group.qa-5
    client.rack =
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 5000
    fetch.min.bytes = 9000
    group.id = consumer.group.qa
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 200
    metadata.max.age.ms = 300000
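For context, a minimal batch-listener setup matching the configuration above might look like the sketch below. The topic name, bean names, broker port, and String deserialization are illustrative assumptions, not taken from the post:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.stereotype.Component;

@Configuration
class BatchConsumerConfig {

    @Bean
    ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "xyz.amazonaws.com:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer.group.qa");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // The values being tuned in the question: wait up to 5 s for at least 9000 bytes.
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 9000);
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 5000);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 200);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> batchFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setBatchListener(true); // deliver the whole poll() result to the listener as one list
        return factory;
    }
}

@Component
class BatchListener {

    @KafkaListener(topics = "some-topic", containerFactory = "batchFactory") // topic name is illustrative
    public void onBatch(List<String> records) {
        // max.poll.records only caps the batch size; the actual size is whatever
        // the underlying fetch returned, which is why batches can be much smaller.
        System.out.println("Received batch of " + records.size());
    }
}
```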
Solution 1:
Can you provide the following information?
- Max TPS (while producing events into the topic)
- Max message size of each event
These are the ideal consumer configuration values (a sketch applying them as a property map follows the list):
enable.auto.commit = false
auto.commit.interval.ms = 5000 <<up to the client, but this is ideal if you enable auto-commit>>
connections.max.idle.ms = 540000
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
heartbeat.interval.ms = 3000
max.poll.interval.ms = <<5 s is ideal>>
max.poll.records = <<up to the client>>
session.timeout.ms = 10000
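Expressed as raw consumer properties, this baseline might look like the following sketch. The values the answer leaves "up to the client" use illustrative placeholders, flagged in the comments:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;

class SuggestedConsumerProps {

    // The answer's suggested baseline as a consumer property map.
    static Map<String, Object> suggested() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 5000);   // only takes effect if auto-commit is enabled
        props.put(ConsumerConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG, 540000);
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 52428800);
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1);
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3000);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 5000);      // the answer suggests 5 s; the Kafka default is 300000
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);           // "up to the client" -- placeholder
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 10000);
        return props;
    }
}
```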
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | ChristDist |