Kafka Error while creating ACLs NoAuth for /kafka-acl/TransactionalId
I am trying to set up my very first Kafka cluster using Confluent 6.0.1 Community edition. I have three ZooKeeper and three Kafka nodes. The three server nodes are:
- kafkaserver1
- kafkaserver2
- kafkaserver3
Each node runs the ZooKeeper and Kafka services. Authentication is SASL_SSL using SCRAM-SHA-256.
Both the ZooKeeper and Kafka services seem to be working fine, but when I try to assign ACLs, I get the following error:
Error while executing ACL command: KeeperErrorCode = NoAuth for /kafka-acl/TransactionalId
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /kafka-acl/TransactionalId
at org.apache.zookeeper.KeeperException.create(KeeperException.java:120)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:564)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1646)
at kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2(KafkaZkClient.scala:1111)
at kafka.zk.KafkaZkClient.$anonfun$createAclPaths$2$adapted(KafkaZkClient.scala:1111)
at scala.collection.immutable.HashSet.foreach(HashSet.scala:932)
at kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1(KafkaZkClient.scala:1111)
at kafka.zk.KafkaZkClient.$anonfun$createAclPaths$1$adapted(KafkaZkClient.scala:1109)
at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:553)
at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:551)
at scala.collection.AbstractIterable.foreach(Iterable.scala:920)
at kafka.zk.KafkaZkClient.createAclPaths(KafkaZkClient.scala:1109)
at kafka.security.authorizer.AclAuthorizer.configure(AclAuthorizer.scala:169)
at kafka.admin.AclCommand$AuthorizerService.addAcls(AclCommand.scala:212)
at kafka.admin.AclCommand$.main(AclCommand.scala:70)
at kafka.admin.AclCommand.main(AclCommand.scala)
zookeeper.properties (same across all three servers)
tickTime=2000
dataDir=/var/lib/confluent/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=kafkaserver1:2888:3888
server.2=kafkaserver2:2888:3888
server.3=kafkaserver3:2888:3888
autopurge.snapRetainCount=3
autopurge.purgeInterval=24
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
authProvider.2=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
authProvider.3=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
server.properties (same across all nodes)
security.inter.broker.protocol=SASL_SSL
ssl.client.auth=required
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-512,PLAIN,SCRAM-SHA-256
ssl.keymanager.algorithm=SunX509
ssl.keystore.location=/opt/confluent-community/certs/kafka.server.keystore.jks
ssl.keystore.password=Password1
ssl.key.password=Password1
ssl.keystore.type=JKS
ssl.protocol=TLS
ssl.trustmanager.algorithm=PKIX
ssl.truststore.location=/opt/confluent-community/certs/kafka.server.truststore.jks
ssl.truststore.password=Password1
ssl.truststore.type=JKS
#authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:admin
zookeeper.set.acl=true
allow.everyone.if.no.acl.found=true
server.properties (node specific; for simplicity, only the "kafkaserver1" node is shown)
listeners=PLAINTEXT://kafkaserver1:9092,SSL://kafkaserver1:9093,SASL_SSL://kafkaserver1:9094
advertised.listeners=PLAINTEXT://kafkaserver1:9092,SSL://kafkaserver1,SASL_SSL://kafkaserver1:9094
zookeeper.connect=kafkaserver1:2181,kafkaserver2:2181,kafkaserver3:2181
ZooKeeper JAAS configuration file (same across all nodes)
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="Architecture@20"
user_kafka="Kafka@20";
};
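For completeness, this Server section only takes effect if the ZooKeeper JVM is pointed at the JAAS file. A minimal sketch with illustrative paths, assuming the file is saved as /etc/kafka/zookeeper_jaas.conf and ZooKeeper is started with the zookeeper-server-start.sh script that ships with Kafka (which forwards KAFKA_OPTS to the JVM); a standalone ZooKeeper install would use SERVER_JVMFLAGS instead:
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf"
$KAFKA_HOME/bin/zookeeper-server-start.sh /etc/kafka/zookeeper.properties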
Kafka JAAS configuration file (same across all nodes)
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="kafka"
password="kafka-secret"
user-admin="admin";
};
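Likewise, the broker JVM needs to be told where this file lives: the KafkaServer section is used for SASL on the broker listeners, and the Client section is what the broker uses when it authenticates to ZooKeeper. A minimal sketch with an illustrative path, assuming the file is saved as /etc/kafka/kafka_server_jaas.conf:
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
$KAFKA_HOME/bin/kafka-server-start.sh /etc/kafka/server.properties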
I started the ZooKeeper nodes and created the admin user's SCRAM credentials using the following:
$KAFKA_HOME/bin/kafka-configs.sh --zookeeper kafkaserver1:2181,kafkaserver2:2181,kafkaserver3:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
Then I created a demo user as seen below:
$KAFKA_HOME/bin/kafka-configs.sh --zookeeper kafkaserver1:2181,kafkaserver2:2181,kafkaserver3:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=demouser-secret],SCRAM-SHA-512=[password=demouser-secret]' --entity-type users --entity-name demouser
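A quick way to confirm that both sets of SCRAM credentials landed in ZooKeeper (using the same hosts as above):
$KAFKA_HOME/bin/kafka-configs.sh --zookeeper kafkaserver1:2181,kafkaserver2:2181,kafkaserver3:2181 --describe --entity-type users --entity-name admin
$KAFKA_HOME/bin/kafka-configs.sh --zookeeper kafkaserver1:2181,kafkaserver2:2181,kafkaserver3:2181 --describe --entity-type users --entity-name demouser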
So far, everything is working well.
Now, the next step is to assign ACLs to the demouser by executing the following (this should allow the user to create and describe the topic demo-topic):
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=kafkaserver1:2181,kafkaserver2:2181,kafkaserver3:2181 --add --allow-principal User:demouser --operation Create --operation Describe --topic demo-topic
When I execute the above command, it throws the error mentioned at the start of this post.
Solution 1:[1]
ssl.client.auth=none
- This should be none, since you are using SASL instead of SSL for client authentication.
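A minimal sketch of the suggested change in the common server.properties shown above (the brokers need a restart to pick it up):
ssl.client.auth=none
Once the original kafka-acls.sh command succeeds, the new ACLs can be checked with a listing, for example:
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=kafkaserver1:2181,kafkaserver2:2181,kafkaserver3:2181 --list --topic demo-topic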
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Harshal Patil |