Kafka + SSL: General SSLEngine problem: a client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings

Stack Overflow user
Asked on 2019-08-12 19:05:53
1 answer · 6.3K views · 0 following · 0 votes

I am trying to configure Kafka with SSL, but when I start Kafka I get this error:

[2019-08-12 12:28:15,506] INFO Awaiting socket connections on localhost:9093. (kafka.network.Acceptor)
[2019-08-12 12:28:17,014] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: org.apache.kafka.common.config.ConfigException: Invalid value javax.net.ssl.SSLHandshakeException: General SSLEngine problem for configuration A client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings.
    at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:73)
    at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
    at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
    at kafka.network.Processor.<init>(SocketServer.scala:726)
    at kafka.network.SocketServer.newProcessor(SocketServer.scala:367)
    at kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:261)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
    at kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:260)
    at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:223)
    at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:220)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:220)
    at kafka.network.SocketServer.startup(SocketServer.scala:120)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:255)
    at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:114)
    at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:66)
Caused by: org.apache.kafka.common.config.ConfigException: Invalid value javax.net.ssl.SSLHandshakeException: General SSLEngine problem for configuration A client SSLEngine created with the provided settings can't connect to a server SSLEngine created with those settings.
    at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:98)
    at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:71)
    ... 17 more
[2019-08-12 12:28:17,017] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)

Here is what I did:

- 1) Create a Certificate Authority. The generated CA is a public-private key pair and certificate used to sign other certificates. A CA is responsible for signing certificates.
openssl req -new -newkey rsa:4096 -days 365 -x509 -subj "/CN=Kafka-Security-CA" -keyout ca-key -out ca-cert -nodes

- 2) Create a kafka broker certificate:
keytool -genkey -keystore kafka.server.keystore.jks -validity 365 -storepass serversecret -keypass serversecret -dname "CN=localhost" -storetype pkcs12

- 3) Get the signed version of the certificate:
keytool -keystore kafka.server.keystore.jks -certreq -file cert-file -storepass serversecret -keypass serversecret

- 4) Sign the certificate with the CA:
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:serversecret

- 5) Create a truststore by importing the CA public certificate, so that the kafka broker trusts all certificates that have been issued by our CA:
keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert -storepass serversecret -keypass serversecret -noprompt

- 6) Import the signed certificate in the keystore:
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert -storepass serversecret -keypass serversecret -noprompt

- 7) Configure server.properties:
listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
zookeeper.connect=localhost:2181

ssl.keystore.location=/home/xrobot/confluent-5.3.0-community/kafka.server.keystore.jks
ssl.keystore.password=serversecret
ssl.key.password=serversecret
ssl.truststore.location=/home/xrobot/confluent-5.3.0-community/kafka.server.truststore.jks
ssl.truststore.password=serversecret

security.inter.broker.protocol=SSL
ssl.client.auth=required
ssl.endpoint.identification.algorithm=https
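
Since the broker above sets ssl.client.auth=required, any client connecting over the SSL listener will also need a truststore containing ca-cert, plus its own CA-signed keystore. A minimal client.properties sketch (the client file names and passwords here are illustrative assumptions, not from the original post; the client keystore would be built with the same CSR/sign/import cycle as steps 2-6):

```properties
security.protocol=SSL
ssl.truststore.location=/home/xrobot/kafka.client.truststore.jks
ssl.truststore.password=clientsecret
# Only needed because the broker sets ssl.client.auth=required:
ssl.keystore.location=/home/xrobot/kafka.client.keystore.jks
ssl.keystore.password=clientsecret
ssl.key.password=clientsecret
```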

Edit: I removed the https value from "ssl.endpoint.identification.algorithm=https", and now I get this error:

[2019-08-13 17:58:32,083] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2019-08-13 17:58:32,083] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2019-08-13 17:58:32,083] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2019-08-13 17:58:32,083] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2019-08-13 17:58:32,083] INFO Client environment:os.version=4.15.0-55-generic (org.apache.zookeeper.ZooKeeper)
[2019-08-13 17:58:32,083] INFO Client environment:user.name=xrobot (org.apache.zookeeper.ZooKeeper)
[2019-08-13 17:58:32,083] INFO Client environment:user.home=/home/xrobot (org.apache.zookeeper.ZooKeeper)
[2019-08-13 17:58:32,083] INFO Client environment:user.dir=/home/xrobot/confluent-5.3.0-community (org.apache.zookeeper.ZooKeeper)
[2019-08-13 17:58:32,084] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@8dbdac1 (org.apache.zookeeper.ZooKeeper)
[2019-08-13 17:58:32,096] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-13 17:58:32,104] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2019-08-13 17:58:32,106] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-08-13 17:58:32,176] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:32,180] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:32,236] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000dd1939b0001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:32,242] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-13 17:58:32,261] INFO Unable to read additional data from server sessionid 0x1000dd1939b0001, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:32,375] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-13 17:58:32,375] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-13 17:58:33,504] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-08-13 17:58:33,505] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:33,508] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:33,511] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000dd1939b0001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:33,516] INFO Unable to read additional data from server sessionid 0x1000dd1939b0001, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:33,617] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-13 17:58:33,618] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-13 17:58:34,823] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-08-13 17:58:34,824] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:34,825] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:34,829] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000dd1939b0001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:34,833] INFO Unable to read additional data from server sessionid 0x1000dd1939b0001, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:34,934] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-13 17:58:34,934] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-13 17:58:36,769] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-08-13 17:58:36,770] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:36,771] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:36,774] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000dd1939b0001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:36,778] INFO Unable to read additional data from server sessionid 0x1000dd1939b0001, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2019-08-13 17:58:36,879] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-08-13 17:58:36,879] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
^C[2019-08-13 17:58:37,292] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
[2019-08-13 17:58:37,294] INFO Shutting down SupportedServerStartable (io.confluent.support.metrics.SupportedServerStartable)
[2019-08-13 17:58:37,294] INFO Closing BaseMetricsReporter (io.confluent.support.metrics.BaseMetricsReporter)
[2019-08-13 17:58:37,295] INFO Waiting for metrics thread to exit (io.confluent.support.metrics.SupportedServerStartable)
[2019-08-13 17:58:37,295] INFO Shutting down KafkaServer (io.confluent.support.metrics.SupportedServerStartable)
[2019-08-13 17:58:37,297] INFO shutting down (kafka.server.KafkaServer)
[2019-08-13 17:58:37,304] ERROR Fatal error during KafkaServer shutdown. (kafka.server.KafkaServer)
java.lang.IllegalStateException: Kafka server is still starting up, cannot shut down!
    at kafka.server.KafkaServer.shutdown(KafkaServer.scala:584)
    at io.confluent.support.metrics.SupportedServerStartable.shutdown(SupportedServerStartable.java:147)
    at io.confluent.support.metrics.SupportedKafka$1.run(SupportedKafka.java:62)
[2019-08-13 17:58:37,305] ERROR Caught exception when trying to shut down KafkaServer. Exiting forcefully. (io.confluent.support.metrics.SupportedServerStartable)
java.lang.IllegalStateException: Kafka server is still starting up, cannot shut down!
    at kafka.server.KafkaServer.shutdown(KafkaServer.scala:584)
    at io.confluent.support.metrics.SupportedServerStartable.shutdown(SupportedServerStartable.java:147)
    at io.confluent.support.metrics.SupportedKafka$1.run(SupportedKafka.java:62)
xrobot@xrobot:~/confluent-5.3.0-community$ 

1 answer

Stack Overflow user

Answered on 2020-02-04 23:21:24

Try adding "-keyalg RSA" to step 2, like this:

- 2) Create a kafka broker certificate:
keytool -genkey -keyalg RSA -keystore kafka.server.keystore.jks -validity 365 -storepass serversecret -keypass serversecret -dname "CN=localhost" -storetype pkcs12

Then, in step 6, you should also add the signed server certificate to the server keystore:

- 6) Import the signed certificate in the keystore:

    keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert -storepass serversecret -keypass serversecret -noprompt
    keytool -keystore kafka.server.keystore.jks -alias mykey -import -file cert-signed -storepass serversecret -keypass serversecret -noprompt

The server keystore needs to hold it in order to serve it to clients.
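
If the import worked, the keystore should now contain both entries. A quick way to check, assuming the file and password from the steps above (this command only illustrates the expected output; it obviously needs the keystore built earlier):

```shell
keytool -list -v -keystore kafka.server.keystore.jks -storepass serversecret
# Look for a PrivateKeyEntry whose certificate chain now has length 2
# (the signed broker certificate plus the CA certificate),
# and a trustedCertEntry under the CARoot alias.
```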

Also note the following:

# From Kafka 2.0 onwards, host name verification of servers is enabled by default, and the errors were logged because
# the Kafka hostname didn't match the certificate CN. If your hostname and certificate don't match,
# then you can disable hostname verification by setting the property ssl.endpoint.identification.algorithm to an empty string.

Leave it empty for testing:

ssl.endpoint.identification.algorithm=
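
This Kafka setting has a direct analogue in most TLS stacks. As a small illustration of the semantics (using Python's standard ssl module, not Kafka itself): modern clients verify the hostname by default, and "empty algorithm" turns off only that check, not chain verification:

```python
import ssl

# A default client context verifies both the certificate chain and the
# hostname, matching Kafka >= 2.0 (ssl.endpoint.identification.algorithm=https).
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# Setting the Kafka property to an empty string is analogous to disabling
# only the hostname check; the certificate chain is still verified.
ctx.check_hostname = False
print(ctx.check_hostname)                    # False
```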

You should either edit /etc/hosts to give the server an FQDN and publish it in DNS so clients can resolve it properly, or add the Kafka FQDN and its IP to each client's /etc/hosts, because Kafka requires this.
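
Once the broker starts, the TLS listener can be checked end to end with openssl's built-in client (assuming the broker runs on localhost:9093 and ca-cert is in the current directory; this needs the live broker, so it is shown only as a sketch):

```shell
openssl s_client -connect localhost:9093 -CAfile ca-cert </dev/null
# A healthy setup ends with "Verify return code: 0 (ok)"; a hostname or
# trust problem shows up here before any Kafka client is involved.
```

Note that with ssl.client.auth=required the broker may drop the connection after the handshake because s_client presents no certificate, but the server certificate and trust chain are still printed.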

Votes: 1
Content originally from Stack Overflow.
Original link: https://stackoverflow.com/questions/57460012
