I am doing a POC with Spring Boot & Kafka for a transactional project and I have the following question. Scenario: one microservice, MSPUB1, receives requests
I have a Kafka cluster running with a topic that has 2 partitions. I was looking for a way to increase the partition count to 3. However, I don't want to lose existing messages
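Increasing a topic's partition count is additive, so existing messages stay where they are (though new keyed messages may hash to different partitions afterwards). A minimal sketch using confluent-kafka's AdminClient; the broker address and topic name are assumptions for illustration:

```python
from confluent_kafka.admin import AdminClient, NewPartitions

# Broker address and topic name are illustrative assumptions.
admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Grow the topic to 3 partitions total; existing messages are untouched,
# but keyed messages may start landing on different partitions.
futures = admin.create_partitions([NewPartitions("my-topic", 3)])
for topic, future in futures.items():
    future.result()  # raises on failure
    print(f"Partition count for {topic} increased to 3")
```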
I installed Confluent OSS 4.0 (Kafka) on a fresh CentOS 7 Linux box, but Kafka Connect failed to start. Steps to reproduce: - Install Oracle JDK 8 - Copy confluent
Given the same hardware, should we use Kafka or our current solution (ServiceMix/Camel)? Is there any difference? Can Kafka handle "bigger" data than
I am trying to extract the VALUE part from a bunch of Kafka topic messages in Python. I am trying to subscribe to a Kafka topic, read the latest message, and parse
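A minimal sketch with kafka-python, assuming a topic named my-topic on localhost:9092 and JSON-encoded values (both assumptions for illustration):

```python
import json
from kafka import KafkaConsumer

# Topic name and broker address are illustrative assumptions.
consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="latest",  # start from the newest messages
    value_deserializer=lambda raw: raw.decode("utf-8"),
)

for msg in consumer:
    # msg.value is the VALUE part; key, topic, partition, and offset
    # are also available on the message object.
    payload = json.loads(msg.value)  # assuming the value is JSON
    print(payload)
```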
In MongoDB, the ObjectId is a 12-byte value (shown as 24 hex characters). I'm streaming these docs to Kafka using Debezium. How can I get the ObjectId written as a UUID in Kafka? Mongo example doc:
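Note that an ObjectId is 12 bytes while a UUID is 16, so there is no lossless 1:1 mapping without padding, and Debezium ships no built-in transform for this; you would need a custom SMT or a downstream conversion. A sketch of the padding idea in Python, where the zero-padding scheme is an illustrative choice, not a standard:

```python
import uuid

def objectid_to_uuid(oid_hex: str) -> uuid.UUID:
    """Pad a 24-hex-char ObjectId out to the 32 hex chars a UUID needs.

    The trailing zero padding is an assumption for illustration; pick one
    convention and apply it consistently on both the write and read sides.
    """
    if len(oid_hex) != 24:
        raise ValueError("ObjectId must be 24 hex characters")
    return uuid.UUID(hex=oid_hex + "0" * 8)

print(objectid_to_uuid("5a2f1c3d4e5f6a7b8c9d0e1f"))
```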
I'm a beginner with Kafka as well as Docker; I have been taking a course and working with a Kafka producer and consumer, but for some reason it is not working. When I
How do I find the Kafka version on Linux? Is there a way to find the installed Kafka version other than noting it when downloading?
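One common trick is that the broker jars embed the version in their file names (e.g. kafka_2.12-2.8.0.jar is Scala 2.12, Kafka 2.8.0). A sketch that parses this, assuming a standard install under /opt/kafka (the path is an assumption; adjust for your box):

```python
import glob
import os
import re

# Install path is an illustrative assumption.
KAFKA_LIBS = "/opt/kafka/libs"

for jar in glob.glob(os.path.join(KAFKA_LIBS, "kafka_*.jar")):
    # Jar names look like kafka_<scala-version>-<kafka-version>.jar
    match = re.match(r"kafka_(\d+\.\d+)-(.+)\.jar", os.path.basename(jar))
    if match:
        print(f"Scala {match.group(1)}, Kafka {match.group(2)}")
        break
```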
What is the difference between a partition and a replica of a topic in a Kafka cluster? Both store copies of the messages in a topic, so what is the real difference
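The short version: partitions split a topic's data for parallelism (each message lands in exactly one partition), while replicas are redundant copies of each partition on different brokers for fault tolerance. Creating a topic makes the distinction concrete; broker address and topic name below are assumptions:

```python
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# num_partitions=3: messages are spread across 3 separate logs for parallelism.
# replication_factor=2: each of those 3 partitions is stored on 2 brokers.
topic = NewTopic("orders", num_partitions=3, replication_factor=2)
for name, future in admin.create_topics([topic]).items():
    future.result()  # raises on failure
    print(f"{name}: 3 partitions x 2 replicas = 6 partition logs in the cluster")
```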
I am stuck on a problem while using Kafka in a microservice architecture. I am not able to understand how a microservice handling HTTP requests will be able to
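One common pattern here is request/reply over Kafka: the HTTP-handling service produces a message tagged with a correlation id and a reply topic, then completes the request when the matching reply arrives. A minimal sketch of the producing side with kafka-python; the topic names and header keys are illustrative assumptions:

```python
import json
import uuid
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def handle_http_request(order: dict) -> str:
    # Tag the message so the eventual reply can be matched to this request.
    correlation_id = str(uuid.uuid4())
    producer.send(
        "orders-requests",  # illustrative topic name
        value=order,
        headers=[("correlation-id", correlation_id.encode("utf-8")),
                 ("reply-topic", b"orders-replies")],
    )
    producer.flush()
    # Return 202 Accepted with the id; a consumer on "orders-replies"
    # (not shown) completes the request when the matching reply arrives.
    return correlation_id
```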
My Kafka producer is producing messages at a rate of about 350 MB per 30 seconds. Kafka setup: --> 1 ZooKeeper instance --> 3 Kafka brokers -->
I built a Spark Streaming application to keep receiving messages from Kafka and then write them into an HBase table. This app runs fine for the first 25 mins
I want to mirror from a Kafka source cluster to a Kafka destination cluster. Everything works fine if both my source and target clusters are on the same version
I want to stream data from Kafka to MongoDB using a Kafka connector. I found this one: https://github.com/hpgrahsl/kafka-connect-mongodb. But there is no step t
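Kafka Connect connectors are registered through the Connect REST API (port 8083 by default). A sketch of posting a sink config for that connector; the connector class and property names are assumptions based on that project's README, so verify them there before use:

```python
import json
import requests

# Connector class and property keys are assumed to match
# hpgrahsl/kafka-connect-mongodb's README; double-check before use.
config = {
    "name": "mongodb-sink",
    "config": {
        "connector.class": "at.grahsl.kafka.connect.mongodb.MongoDbSinkConnector",
        "topics": "my-topic",
        "mongodb.connection.uri": "mongodb://localhost:27017/mydb",
        "mongodb.collection": "my_collection",
    },
}

resp = requests.post(
    "http://localhost:8083/connectors",  # default Connect REST endpoint
    headers={"Content-Type": "application/json"},
    data=json.dumps(config),
)
resp.raise_for_status()
print(resp.json())
```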
I am using PostgreSQL as the database. I want to capture one table's data for each batch, convert it to a Parquet file, and store it in S3. I tried to connect using JDBC
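A sketch of one such batch using pandas with pyarrow and s3fs; the connection string, table name, and bucket are illustrative assumptions:

```python
import pandas as pd
from sqlalchemy import create_engine

# Connection string, table, and bucket are illustrative assumptions.
engine = create_engine("postgresql://user:password@localhost:5432/mydb")

# Pull one batch of the table into a DataFrame...
df = pd.read_sql("SELECT * FROM my_table", engine)

# ...and write it straight to S3 as Parquet
# (requires the pyarrow and s3fs packages to be installed).
df.to_parquet("s3://my-bucket/my_table/batch-0001.parquet", index=False)
```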
We have a Hadoop cluster with 3 Kafka machines and 3 ZooKeeper servers, Hadoop version 2.6.4 (Hortonworks). Under the ZooKeeper logs (/var/log/zookper) we saw a m
I was reading articles related to Kafka and StreamSets, and my understanding was that Kafka acts as a broker between a producer system and subscribers. Producers push t