Does "retention.bytes" apply to "compact" topic? The reason why I came here is that in lenses in my current project, I saw the partition bytes is 2GB which is w
I'm building a CDC pipeline to read the MySQL binlog through Maxwell and put the events into Kafka; my compression type is snappy in the Maxwell config. But at the consumer end …
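For what it's worth, the compression type configured in Maxwell is applied by its embedded Kafka producer, and the consumer client decompresses batches transparently, so nothing compression-specific is needed on the reading side. A minimal Java consumer sketch for the binlog events (broker address, group id and the default "maxwell" topic are assumptions):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class MaxwellEventReaderSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("group.id", "cdc-reader");              // hypothetical group id
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("maxwell")); // Maxwell's default topic name
                while (true) {
                    // snappy-compressed batches are decompressed inside the client library
                    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                        System.out.println(record.value()); // one JSON document per binlog event
                    }
                }
            }
        }
    }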
I have a 3-node Kafka cluster with a single ZooKeeper node. My question is: how can I add a new Kafka node to this cluster without downtime?
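As far as I know, a new broker with a unique broker.id pointing at the same ZooKeeper joins the cluster without any downtime, but it only receives data for newly created topics or for partitions explicitly reassigned to it. A Java AdminClient sketch of such a reassignment (topic name, partition and broker ids are assumptions; the kafka-reassign-partitions tool does the same thing):

    import java.util.List;
    import java.util.Map;
    import java.util.Optional;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewPartitionReassignment;
    import org.apache.kafka.common.TopicPartition;

    public class MovePartitionSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            try (Admin admin = Admin.create(props)) {
                // Existing partitions are not rebalanced automatically when a broker joins,
                // so move partition 0 of a hypothetical topic onto brokers 1, 2 and the new broker 4.
                admin.alterPartitionReassignments(Map.of(
                        new TopicPartition("orders", 0),
                        Optional.of(new NewPartitionReassignment(List.of(1, 2, 4)))
                )).all().get();
            }
        }
    }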
I have a simple stream processor (not a plain consumer/producer) that looks like this (Kotlin): @Bean fun processFoo(): Function<KStream<FooName, FooAddress>, KS…
We have a Debezium source connector working perfectly fine, and one of the properties set is, for example: "transforms.SetSchemaMetadata.schema.name": "myschem…
I'm using Spring Kafka and wrote a Producer class: @Component @RequiredArgsConstructor class Producer { private static final String TOPIC = "channels"; pri…
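A minimal sketch of how the truncated class is usually completed with Spring Kafka's KafkaTemplate (the String payload type and the send() method are assumptions based on the visible fragment):

    import lombok.RequiredArgsConstructor;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Component;

    @Component
    @RequiredArgsConstructor
    class Producer {
        private static final String TOPIC = "channels";

        // injected by Spring through the Lombok-generated constructor
        private final KafkaTemplate<String, String> kafkaTemplate;

        void send(String message) {
            // publishes the message asynchronously to the "channels" topic
            kafkaTemplate.send(TOPIC, message);
        }
    }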
I am trying to create a bare-bones skeleton integration test for Kafka with Testcontainers: just publish a message to a topic and check that it arrives (entire set…
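A bare-bones round trip of that kind can look roughly like the sketch below, assuming JUnit 5, the Testcontainers Kafka module, and a broker image with topic auto-creation enabled (image tag, topic name and group id are assumptions):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.KafkaContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;
    import org.testcontainers.utility.DockerImageName;
    import static org.junit.jupiter.api.Assertions.assertFalse;

    @Testcontainers
    class KafkaRoundTripIT {

        @Container
        static final KafkaContainer KAFKA =
                new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0")); // assumed image tag

        @Test
        void publishedMessageArrives() {
            Properties producerProps = new Properties();
            producerProps.put("bootstrap.servers", KAFKA.getBootstrapServers());
            producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
                producer.send(new ProducerRecord<>("test-topic", "key", "hello")); // hypothetical topic
                producer.flush();
            }

            Properties consumerProps = new Properties();
            consumerProps.put("bootstrap.servers", KAFKA.getBootstrapServers());
            consumerProps.put("group.id", "it-group");
            consumerProps.put("auto.offset.reset", "earliest");
            consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
                consumer.subscribe(List.of("test-topic"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
                assertFalse(records.isEmpty(), "expected the published message to arrive");
            }
        }
    }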
We have a "microservices" platform and we are using debezium for change data capture from databases on these platforms which is working nicely. Now, we'd like t
When I launch a Docker container with a Kafka broker it sometimes fails, but I can't understand from the logs what exactly happens; the logs are always: # docker-compose up b…
I have enabled "store.kafka.keys" : "true", "store.kafka.headers" : "true", "keys.format.class" : "io.confluent.connect.s3.format.json.JsonFormat", "headers.for
I'm deploying Zeebe using Helm. With the extraInitContainers directive I manage to include kafka-exporter 3.1.1 and it loads correctly. In the YAML file I set a…
I have code where I am aggregating data from a Kafka stream via: StreamsBuilder streamsBuilder = new StreamsBuilder(); streamsBuilder.table(AppConfigs.t…
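Since the snippet is cut off, here is a rough sketch of what a complete aggregation over a Streams topology typically looks like (topic names, String types and the count() aggregation are assumptions, not the original AppConfigs values):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Produced;

    public class AggregationSketch {
        public static void main(String[] args) {
            StreamsBuilder streamsBuilder = new StreamsBuilder();

            // read the source topic as a stream of <String, String> records
            KStream<String, String> source = streamsBuilder.stream("input-topic"); // hypothetical topic

            // aggregate: count records per key into a KTable backed by a state store
            KTable<String, Long> countsPerKey = source.groupByKey().count();

            // write the running counts to an output topic with a Long value serde
            countsPerKey.toStream().to("counts-topic", Produced.with(Serdes.String(), Serdes.Long()));

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "aggregation-sketch");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            new KafkaStreams(streamsBuilder.build(), props).start();
        }
    }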
I have multiple questions about the Kafka Connect S3 sink connector. 1. I was wondering if it's possible, using the S3 sink of Kafka Connect, to save records with mu…
I have a Kafka Connect task which fetches data from a topic with 3 partitions and sends the data to a Cassandra sink, so I have Kafka Connect in distributed mode with …
I have installed Confluent 6.2.0 on my 3 Kafka nodes and also installed confluentinc-kafka-connect-s3-10.0.1 on the 3 nodes, and modified the quickstart-s3.properties …
What the console of the Kafka consumer looks like: ["2017-12-31 16:06:01", 12472391, 1] ["2017-12-31 16:06:01", 12472097, 1] ["2017-12-31 16:05:59", 12471979, …
I'm developing Kafka producer code in Scala with these libraries (I have to use version >6.x of the Kafka Avro serializer to use TLS communication): <dependency>…
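A Java sketch of the producer configuration that usually goes with that setup, with TLS both to the brokers and to Schema Registry (paths, passwords and hostnames are assumptions, and the schema.registry.ssl.* prefix is worth double-checking for your exact serializer version):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AvroTlsProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker:9093");                             // assumed TLS listener
            props.put("security.protocol", "SSL");
            props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");  // hypothetical path
            props.put("ssl.truststore.password", "changeit");
            props.put("key.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
            props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
            props.put("schema.registry.url", "https://schema-registry:8081");          // assumed registry URL
            // TLS settings for the Schema Registry client, prefixed with schema.registry.
            props.put("schema.registry.ssl.truststore.location", "/etc/kafka/client.truststore.jks");
            props.put("schema.registry.ssl.truststore.password", "changeit");

            try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
                // buildAvroRecord() stands in for constructing a GenericRecord/SpecificRecord payload
                producer.send(new ProducerRecord<>("avro-topic", "key", buildAvroRecord())); // hypothetical topic
            }
        }

        private static Object buildAvroRecord() {
            return null; // placeholder only; a real payload would be an Avro record
        }
    }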
I have a large Confluent Kafka cluster comprising multiple sub-clusters: one for ZooKeeper, another for the Kafka brokers with Schema Registry and KSQL str…
First of all, I've seen this thread, but it's unrelated and describes a different issue. I have the following settings fragment in my Kafka properties file: ssl.keyst…
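For reference, the full set of client-side TLS keys around that fragment usually looks like the following Java sketch (paths, passwords and the listener address are assumptions; the same keys work verbatim in a properties file):

    import java.util.Properties;

    public class SslClientConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker:9093");                             // assumed TLS listener
            props.put("security.protocol", "SSL");
            // keystore: the client's own certificate and private key
            props.put("ssl.keystore.location", "/etc/kafka/client.keystore.jks");      // hypothetical path
            props.put("ssl.keystore.password", "changeit");
            props.put("ssl.key.password", "changeit");
            // truststore: the CA certificates the client should trust
            props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
            props.put("ssl.truststore.password", "changeit");

            props.forEach((k, v) -> System.out.println(k + "=" + v)); // print in properties-file form
        }
    }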