I have enabled "store.kafka.keys" : "true", "store.kafka.headers" : "true", "keys.format.class" : "io.confluent.connect.s3.format.json.JsonFormat", "headers.for
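For reference, a minimal sketch of how those properties usually sit together in an S3 sink connector config (the connector name, topic, bucket, and region below are placeholders, not taken from the question):

{
  "name": "s3-sink-keys-headers",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "my-topic",
    "s3.bucket.name": "my-bucket",
    "s3.region": "us-east-1",
    "flush.size": "100",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "store.kafka.keys": "true",
    "keys.format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "store.kafka.headers": "true",
    "headers.format.class": "io.confluent.connect.s3.format.json.JsonFormat"
  }
}

With this shape the connector writes keys and headers as separate objects alongside each value file in the bucket.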
I have multiple questions about the Kafka Connect S3 sink connector. 1. I was wondering if it's possible, using the S3 sink of Kafka Connect, to save records with mu
I have a Kafka Connect task which fetches data from a topic with 3 partitions and sends the data to a Cassandra sink, so I have Kafka Connect in distributed mode with
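As an aside on parallelism: in distributed mode the number of consuming tasks is capped by tasks.max, so for a 3-partition topic a value of 3 lets each task own one partition. A hedged fragment, assuming the DataStax Cassandra sink (the connector class, contact point, keyspace, table, and mapping below are assumptions, not from the question):

{
  "name": "cassandra-sink",
  "config": {
    "connector.class": "com.datastax.oss.kafka.sink.CassandraSinkConnector",
    "tasks.max": "3",
    "topics": "my-topic",
    "contactPoints": "cassandra-host",
    "loadBalancing.localDc": "datacenter1",
    "topic.my-topic.my_keyspace.my_table.mapping": "id=value.id, payload=value.payload"
  }
}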
I have installed Confluent 6.2.0 on my 3 Kafka nodes, also installed confluentinc-kafka-connect-s3-10.0.1 on all 3 nodes, and modified the quickstart-s3.properties
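For orientation, the stock quickstart-s3.properties looks roughly like the following; the values one typically edits are the topic, region, and bucket (the concrete values shown here are placeholders):

name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=s3_topic
s3.region=us-west-2
s3.bucket.name=my-test-bucket
s3.part.size=5242880
flush.size=3
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.avro.AvroFormat
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.compatibility=NONE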
I have a large Confluent Kafka cluster comprising multiple sub-clusters: one for Zookeeper, another for Kafka brokers with Schema Registry and KSQL str
Setup: Multiple independent source systems push Avro events into a Kafka topic. A Kafka S3 sink connector reads Avro events from this topic and writes into S3 pa
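For context, the S3 path layout is controlled by the sink's partitioner settings. A hedged sketch of a time-based layout (the duration and path format shown are assumptions, not from the question):

partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
partition.duration.ms=3600000
path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
locale=en-US
timezone=UTC
timestamp.extractor=Record

With these settings, objects land under prefixes like topic/year=2021/month=01/day=31/hour=09/ in the bucket.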
Load multiple PostgreSQL tables into multiple Kafka topics in a Google Cloud environment using Pub/Sub or Kafka Connect.
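One common shape for the Kafka Connect route is the Confluent JDBC source connector, which produces one topic per table (topic.prefix plus the table name). A minimal sketch; the host, credentials, table names, and column names below are placeholders:

{
  "name": "postgres-jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db-host:5432/mydb",
    "connection.user": "connect_user",
    "connection.password": "connect_password",
    "table.whitelist": "orders,customers,payments",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "updated_at",
    "incrementing.column.name": "id",
    "topic.prefix": "pg."
  }
}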
I have a topic that will eventually have lots of different schemas on it. For now it just has the one. I've created a connect job via REST like this: { "name"
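For reference, creating a connector over the Connect REST API is a POST of a name plus a config map; the FileStream sink below is only a self-contained placeholder, not the connector from the question:

curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
        "name": "demo-file-sink",
        "config": {
          "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
          "tasks.max": "1",
          "topics": "my-topic",
          "file": "/tmp/my-topic.out"
        }
      }'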
I am using the Debezium Oracle connector in Kafka Connect. While starting the connector I am getting the error below: java.lang.RuntimeException: Failed to resolve Oracle da
My pipeline is: Kerberized Kafka --> Logstash (hosted on a different server) --> Splunk. Can I replace the Logstash component with Kafka Connect? Could
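Assuming the Splunk Connect for Kafka sink (splunk/kafka-connect-splunk) is an option, a hedged sketch of what replacing Logstash with a sink connector might look like; the HEC endpoint, token, and index below are placeholders:

{
  "name": "splunk-sink",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "1",
    "topics": "my-topic",
    "splunk.hec.uri": "https://splunk-hec-host:8088",
    "splunk.hec.token": "00000000-0000-0000-0000-000000000000",
    "splunk.indexes": "main"
  }
}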
I am using Debezium as a CDC tool to stream data from MySQL. After installing the Debezium MySQL connector on a Confluent OSS cluster, I am trying to capture MySQL bi
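For comparison, a minimal Debezium MySQL source config in the 1.x style (hostname, credentials, server id/name, and topic names below are placeholders; Debezium 2.x renamed several of these properties, e.g. topic.prefix and schema.history.internal.*):

{
  "name": "mysql-cdc",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql-host",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz-password",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.include.list": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}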
I have been encountering a weird issue with Kafka and the Confluent sink connector which I am using in my setup. I have a system in which I have two Kafka Connect s
I get an error when running kafka-mongodb-source-connect. I was trying to run connect-standalone with connect-avro-standalone.properties and MongoSourceConnector
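For reference, the standalone invocation takes the worker properties first and the connector properties second, and a minimal connector file for the official MongoDB source connector might look like the sketch below (the URI, database, and collection are placeholders):

connect-standalone connect-avro-standalone.properties MongoSourceConnector.properties

# MongoSourceConnector.properties
name=mongo-source
connector.class=com.mongodb.kafka.connect.MongoSourceConnector
tasks.max=1
connection.uri=mongodb://localhost:27017
database=mydb
collection=mycollection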
I am trying to interpret an Avro record stored by Debezium in Kafka, using Python: { "name": "id", "type": {
I am trying to implement a CDC pipeline with the Debezium MySQL connector and Kafka. But the source connector is not able to publish events for insert and update operations in t
When I create a Kafka Connect connector with the Debezium connector, it results in four database connections. Three of them remain idle, while one works as the
I installed Confluent OSS 4.0 (Kafka) on a fresh CentOS 7 Linux machine but Kafka Connect failed to start. Steps to reproduce: - Install Oracle JDK 8 - Copy confluen
I have set up Debezium and Azure Event Hubs as a CDC engine from PostgreSQL. Exactly like in this tutorial: https://dev.to/azure/tutorial-set-up-a-change-data-captur
In MongoDB, the ObjectId is base64. I'm streaming these docs to Kafka using Debezium. How can I get the ObjectId to be written as a UUID in Kafka? Example Mongo doc:
I want to stream data from Kafka to MongoDB using a Kafka connector. I found this one: https://github.com/hpgrahsl/kafka-connect-mongodb. But there is no step t
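If it helps, a hedged guess at a minimal sink config for that repository; the connector class and property names below are recalled from the hpgrahsl project and should be checked against its README, and the URI and collection are placeholders:

{
  "name": "mongodb-sink",
  "config": {
    "connector.class": "at.grahsl.kafka.connect.mongodb.MongoDbSinkConnector",
    "tasks.max": "1",
    "topics": "my-topic",
    "mongodb.connection.uri": "mongodb://localhost:27017/mydb",
    "mongodb.collection": "mycollection"
  }
}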