I am trying to use the JDBC connector to connect to a PostgreSQL database on a cluster (the database itself is not managed by the cluster).
I invoked Kafka Connect with the following command:
connect-standalone.sh worker.properties jdbc-connector.properties
This is the content of worker.properties:
class=io.confluent.connect.jdbc.JdbcSourceConnector
name=test-postgres-1
tasks.max=1
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/home/user/offest
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
connection.url=jdbc:postgresql://database-server.url:port/database?user=user&password=password
This is the content of jdbc-connector.properties:
mode=incrementing
incrementing.column.name=id
topic.prefix=test-postgres-jdbc-
When I try to start the connector with the command above, it crashes with the following error:
[2018-04-16 11:39:08,164] ERROR Failed to create job for jdbc.properties (org.apache.kafka.connect.cli.ConnectStandalone:88)
[2018-04-16 11:39:08,166] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:99)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector config {mode=incrementing, incrementing.column.name=pdv, topic.prefix=test-postgres-jdbc-} contains no connector type
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:80)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:67)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:96)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector config {mode=incrementing, incrementing.column.name=id, topic.prefix=test-postgres-jdbc-} contains no connector type
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:233)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:158)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:93)
After noticing that the connector config shown in the error only contains the entries from jdbc-connector.properties, I tried merging the two files into one, but then the command terminates immediately (without creating a topic or an offset file) with the following output:
[SLF4J infos...]
[2018-04-16 11:48:54,620] INFO Usage: ConnectStandalone worker.properties connector1.properties [connector2.properties ...] (org.apache.kafka.connect.cli.ConnectStandalone:59)
Posted on 2018-04-16 17:37:21
You need to put most of these properties in jdbc-connector.properties, not in worker.properties. See the connector configuration options documentation for the full list of options that belong in the connector config (jdbc-connector.properties in your example).
Try this:
worker.properties:
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/home/user/offest
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter

jdbc-connector.properties:
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
name=test-postgres-1
tasks.max=1
mode=incrementing
incrementing.column.name=id
topic.prefix=test-postgres-jdbc-
connection.url=jdbc:postgresql://database-server.url:port/database?user=user&password=password

You can see more Kafka Connect examples here:
https://stackoverflow.com/questions/49856692
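For reference, mode=incrementing with incrementing.column.name=id means the connector only ingests rows whose id is strictly greater than the largest id it has already published, and it persists that high-water mark (the file named by offset.storage.file.filename in standalone mode). A minimal Python sketch of that polling logic, using an in-memory SQLite table as a stand-in for the PostgreSQL source (this is an illustration of the concept, not the connector's actual code):

```python
# Conceptual sketch of JDBC-source "incrementing" mode: remember the largest
# value of the incrementing column seen so far, and on each poll fetch only
# rows beyond it. SQLite stands in for PostgreSQL here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "a"), (2, "b")])

def poll(conn, last_seen):
    """Fetch rows whose id exceeds the stored offset, like incrementing mode."""
    rows = conn.execute(
        "SELECT id, name FROM users WHERE id > ? ORDER BY id", (last_seen,)
    ).fetchall()
    # Advance the offset to the largest id returned (if any rows came back).
    new_offset = rows[-1][0] if rows else last_seen
    return rows, new_offset

last_seen = -1                               # initial offset: nothing seen yet
rows, last_seen = poll(conn, last_seen)      # first poll returns both rows
conn.execute("INSERT INTO users VALUES (3, 'c')")
new_rows, last_seen = poll(conn, last_seen)  # second poll returns only id=3
```

In the real connector each polled row becomes a record on the topic named topic.prefix plus the table name (test-postgres-jdbc-users in this sketch's terms).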