I have enabled "store.kafka.keys" : "true", "store.kafka.headers" : "true", "keys.format.class" : "io.confluent.connect.s3.format.json.JsonFormat", "headers.for
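For context, a minimal sketch of how these properties usually sit inside an S3 sink connector config; the bucket, region, topic, and flush settings are placeholders, and headers.format.class is my assumption for the property that is cut off above:

    {
      "name": "s3-sink-keys-headers",
      "config": {
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "topics": "my-topic",
        "s3.bucket.name": "my-bucket",
        "s3.region": "us-east-1",
        "storage.class": "io.confluent.connect.s3.storage.S3Storage",
        "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
        "flush.size": "100",
        "store.kafka.keys": "true",
        "store.kafka.headers": "true",
        "keys.format.class": "io.confluent.connect.s3.format.json.JsonFormat",
        "headers.format.class": "io.confluent.connect.s3.format.json.JsonFormat"
      }
    }

With the two store.* flags enabled, the connector writes keys and headers as separate objects alongside the value files, as far as I understand.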
I'm deploying Zeebe using Helm. With the extraInitContainers directive I managed to include the kafka-exporter 3.1.1 and it loads correctly. In the yml file I set a
I have code where I am aggregating data from a Kafka stream via: StreamsBuilder streamsBuilder = new StreamsBuilder(); streamsBuilder.table(AppConfigs.t
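A minimal, self-contained sketch of that kind of KTable aggregation, assuming String keys/values and a count per key; the topic names, serdes, and store name are placeholders for whatever AppConfigs provides:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.Produced;

    public class AggregationApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "aggregation-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            StreamsBuilder streamsBuilder = new StreamsBuilder();

            // Read the topic as a changelog-backed KTable
            // ("input-topic" stands in for AppConfigs.t...)
            KTable<String, String> table = streamsBuilder.table(
                    "input-topic", Consumed.with(Serdes.String(), Serdes.String()));

            // Example aggregation: re-group and count rows per key
            KTable<String, Long> counts = table
                    .groupBy((key, value) -> KeyValue.pair(key, value),
                             Grouped.with(Serdes.String(), Serdes.String()))
                    .count(Materialized.as("counts-store"));

            // Write the aggregate back out to a topic
            counts.toStream().to("counts-topic", Produced.with(Serdes.String(), Serdes.Long()));

            KafkaStreams streams = new KafkaStreams(streamsBuilder.build(), props);
            streams.start();
        }
    }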
I have multiple questions about the Kafka Connect S3 sink connector. 1. I was wondering if it's possible, using the S3 sink of Kafka Connect, to save records with mu
I have a Kafka Connect task which fetches data from a topic with 3 partitions and sends the data to a Cassandra sink, so I have Kafka Connect in distributed mode with
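Since a partition is consumed by at most one sink task, tasks.max=3 is the natural setting for a 3-partition topic; raising it further would only leave extra tasks idle. A hedged sketch of the distributed-mode connector JSON, with the Cassandra connector class and connection settings left as placeholders because they depend on which Cassandra sink is in use:

    {
      "name": "cassandra-sink",
      "config": {
        "connector.class": "<your Cassandra sink connector class>",
        "topics": "my-topic",
        "tasks.max": "3"
      }
    }

In distributed mode this config is typically submitted to the Connect REST API, e.g. with PUT /connectors/cassandra-sink/config.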
I have installed Confluent 6.2.0 on my 3 Kafka nodes, also installed confluentinc-kafka-connect-s3-10.0.1 on all 3 nodes, and modified the quickstart-s3.properties
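For reference, the entries in quickstart-s3.properties that usually need editing look roughly like this; the bucket, region, and topic are placeholders, and the remaining defaults shipped with the connector can generally stay as they are:

    name=s3-sink
    connector.class=io.confluent.connect.s3.S3SinkConnector
    tasks.max=1
    topics=my-topic
    s3.region=us-east-1
    s3.bucket.name=my-bucket
    s3.part.size=5242880
    flush.size=3
    storage.class=io.confluent.connect.s3.storage.S3Storage
    format.class=io.confluent.connect.s3.format.avro.AvroFormat
    partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
    schema.compatibility=NONE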
This is what the console of the Kafka consumer looks like: ["2017-12-31 16:06:01", 12472391, 1] ["2017-12-31 16:06:01", 12472097, 1] ["2017-12-31 16:05:59", 12471979,
I'm developing Kafka producer code in Scala with these libraries (I have to use a version >6.x of the Kafka Avro serializer to use TLS communication): <dependency&g
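Assuming the TLS side is the sticking point, the producer settings usually boil down to something like the properties below; paths, passwords, and hostnames are placeholders, and the schema.registry.ssl.* prefix for the serializer's Schema Registry client is my assumption based on how newer serializer versions expose those options:

    bootstrap.servers=broker:9093
    security.protocol=SSL
    ssl.truststore.location=/path/to/client.truststore.jks
    ssl.truststore.password=<truststore-password>
    key.serializer=org.apache.kafka.common.serialization.StringSerializer
    value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
    schema.registry.url=https://schema-registry:8081
    schema.registry.ssl.truststore.location=/path/to/client.truststore.jks
    schema.registry.ssl.truststore.password=<truststore-password>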
I have a large Confluent Kafka cluster comprising multiple sub-clusters: one for ZooKeeper, another for the Kafka brokers with Schema Registry and KSQL str
First of all, I've seen this thread, but it's unrelated and about a different issue. I have the following settings fragment in my Kafka properties file: ssl.keyst
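For reference, the fragment presumably resembles the standard keystore/truststore settings below; locations and passwords are placeholders:

    ssl.keystore.location=/var/private/ssl/kafka.keystore.jks
    ssl.keystore.password=<keystore-password>
    ssl.key.password=<key-password>
    ssl.truststore.location=/var/private/ssl/kafka.truststore.jks
    ssl.truststore.password=<truststore-password>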
We are planning to upgrade our existing Apache Kafka cluster to Confluent Kafka. While upgrading, will we have any data loss in the topics? And also, the main reason we
I started learning ksqlDB for Kafka and ran into a problem. I have a Product stream and the structure is the following: ID | VARCHAR(STRING) NAME
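For reference, a Product stream with that shape would typically be declared roughly as follows; the topic name, value format, and the NAME column's type are assumptions, since the listing above is cut off:

    CREATE STREAM PRODUCT (
      ID VARCHAR,
      NAME VARCHAR
    ) WITH (
      KAFKA_TOPIC = 'product',
      VALUE_FORMAT = 'JSON'
    );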
Setup: Multiple independent source systems push AVRO events into a Kafka topic. A Kafka S3 sink connector reads the AVRO events from this topic and writes them into S3 pa
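If the S3 paths are time-based, the partitioning part of such a connector config usually looks along these lines; all values here are placeholders rather than the actual setup:

    format.class=io.confluent.connect.s3.format.avro.AvroFormat
    partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
    partition.duration.ms=3600000
    path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
    locale=en-US
    timezone=UTC
    timestamp.extractor=Record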
I'm using OpenTelemetry to trace my applications and have a few microservices and a Kafka broker in my distributed system. I'm using Java/Spring Boot, and in the
Load multiple PostgreSQL tables into multiple Kafka topics in a Google Cloud environment using Pub/Sub or Kafka Connect.
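If the Kafka Connect route is taken, the JDBC source connector can fan each table out to its own topic via topic.prefix; a hedged sketch, with connection details, table list, mode, and incrementing column as placeholders:

    name=postgres-source
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    connection.url=jdbc:postgresql://<host>:5432/<database>
    connection.user=<user>
    connection.password=<password>
    table.whitelist=customers,orders,invoices
    mode=incrementing
    incrementing.column.name=id
    topic.prefix=postgres-

With this, each whitelisted table lands in its own topic, e.g. postgres-customers.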
I am trying to read a stream from Kafka using PySpark. I am using Spark version 3.0.0-preview2 and spark-streaming-kafka-0-10_2.12. Before this I just started zoo
I have a Stream on a topic with schema: --root --name: string --age: integer --accounts: Array --email I would like to select all root elements hav
Sometimes we encounter lag in the Kafka consumer due to external issues. The Flink job will always consume the Kafka history (delayed data) with exactly-once semantics