Messages missing from the same topic when read with a different consumer group in Kafka
I have been encountering a strange issue with Kafka and the Confluent sink connectors in my setup. I run two Kafka Connect sinks against the same topic: an S3 sink and an Elasticsearch sink, each assigned its own consumer group. To my understanding, both should read the same data. What we observe instead is that far less data reaches Elasticsearch than is persisted to S3: a quick check shows that while S3 contains 100% of the data produced to the topic, Elasticsearch holds only about 10%.
I am new to Kafka and operating with minimal knowledge, so any pointers on what the possible issue could be, and how to debug it, would help. The setup has:
Kafka 2.5.0
Confluent S3 connector version: 5.5.1
Confluent Elasticsearch connector version: 5.5.1
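A first debugging step is to compare the Elastic sink's committed offsets with the topic's end offsets. This is a sketch that assumes a broker reachable at `localhost:9092` and a connector registered as `elastic-sink` (Kafka Connect names a sink's consumer group `connect-<connector-name>`); substitute your own broker address and connector name.

```shell
# Describe the Elastic sink's consumer group: one row per topic partition
# showing CURRENT-OFFSET (committed), LOG-END-OFFSET, and LAG.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group connect-elastic-sink
```

If the group is caught up (LAG near zero) yet Elasticsearch still holds only a fraction of the records, the records are being consumed but dropped or overwritten on the sink side rather than lost in Kafka.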
Config I have for the Elasticsearch connector:
topics: "topic1,topic2"
key.ignore: "true"
schema.ignore: "true"
timezone: "UTC"
connection.url: "https://elastic_search_url:9200"
offset.flush.timeout.ms: "180000"
session.timeout.ms: "600000"
connection.username: elastic
elastic.security.protocol: SSL
elastic.https.ssl.keystore.type: JKS
elastic.https.ssl.truststore.type: JKS
type.name: "_doc"
value.converter.schemas.enable: "false"
key.converter.schemas.enable: "false"
key.converter: "org.apache.kafka.connect.json.JsonConverter"
value.converter: "org.apache.kafka.connect.json.JsonConverter"
behavior.on.malformed.documents: "warn"
transforms: "routeTS"
transforms.routeTS.type: "org.apache.kafka.connect.transforms.TimestampRouter"
transforms.routeTS.topic.format: "${topic}-${timestamp}"
transforms.routeTS.timestamp.format: "yyyyMMdd"  # note: lowercase "yyyy"; uppercase "YYYY" is SimpleDateFormat's week-year and misroutes dates around year boundaries
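Two sink-side checks are worth running against a config like the one above. First, query the Connect REST API for failed tasks; second, remember that with `behavior.on.malformed.documents: "warn"` the connector logs rejected documents and skips them, so drops show up only in the worker logs. A sketch, assuming the Connect worker's REST API on `localhost:8083` and a connector named `elastic-sink`:

```shell
# Check connector and task state; failed tasks include a stack "trace" field.
curl -s http://localhost:8083/connectors/elastic-sink/status

# Malformed documents rejected under behavior.on.malformed.documents=warn
# are only logged, then skipped; grep the worker log for them.
grep -i "malformed" /var/log/kafka-connect/connect.log
```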
Appreciate any help or pointers.
Solution 1:[1]
The issue is definitely not with Kafka consumer groups: each group processes messages independently of the others, so neither connector can cause the other to miss or lose records. The issue appears to be with the Elasticsearch connector configuration. Check the following properties:
key.ignore = false (default value)
write.method = INSERT (default value)
Refer to the definitions here.
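The reason these two properties matter: with `key.ignore=false`, the record key becomes the Elasticsearch document `_id`, so repeated keys overwrite earlier documents instead of creating new ones, which looks exactly like "only 10% of the data arrived". One way to test this hypothesis is to compare the Elasticsearch document count against the topic's end offsets. A sketch, assuming Elasticsearch at `elastic_search_url:9200`, a broker at `localhost:9092`, and the daily indices produced by the TimestampRouter (`topic1-yyyyMMdd`):

```shell
# Total documents across the routed indices for topic1.
curl -s -u elastic "https://elastic_search_url:9200/topic1-*/_count"

# Sum of log end offsets per partition (-1 = latest) for topic1.
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic topic1 --time -1
```

If the summed end offsets far exceed the document count while the consumer group shows no lag, documents are being overwritten (shared `_id`s) or skipped as malformed, not lost in transit.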
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | kus |