We run Kafka Connect in Kubernetes pods. We are seeing the following error in the worker logs. We have tried restarting the pods.
[2020-04-02 14:40:13,237] WARN Aborting multi-part upload with id '0vbfuRZCRIkc431LJN.hyaPuo3ZQzuAsfTrMSdBE_Q9.sZP8g-' (io.confluent.connect.s3.storage.S3OutputStream)
[2020-04-02 14:40:13,274] ERROR Multipart upload failed to complete for bucket 'my_bucket' key '/myfile': (io.confluent.connect.s3.TopicPartitionWriter)
org.apache.kafka.connect.errors.DataException: Multipart upload failed to complete.
at io.confluent.connect.s3.storage.S3OutputStream.commit(S3OutputStream.java:159)
at io.confluent.connect.s3.format.avro.AvroRecordWriterProvider$1.commit(AvroRecordWriterProvider.java:96)
at io.confluent.connect.s3.TopicPartitionWriter.commitFile(TopicPartitionWriter.java:503)
at io.confluent.connect.s3.TopicPartitionWriter.commitFiles(TopicPartitionWriter.java:483)
at io.confluent.connect.s3.TopicPartitionWriter.commitOnTimeIfNoData(TopicPartitionWriter.java:294)
at io.confluent.connect.s3.TopicPartitionWriter.write(TopicPartitionWriter.java:184)
at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:193)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Part upload failed:
at io.confluent.connect.s3.storage.S3OutputStream.uploadPart(S3OutputStream.java:136)
at io.confluent.connect.s3.storage.S3OutputStream.commit(S3OutputStream.java:153)
... 17 more
[2020-04-02 14:40:13,275] INFO Committing files after waiting for rotateIntervalMs time but less than flush.size records available. (io.confluent.connect.s3.TopicPartitionWriter)

Posted on 2020-04-02 23:15:11
DataException: Multipart upload failed to complete. java.io.IOException: Part upload failed
It looks like the connection to S3 was dropped.
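If the cause really is a transient loss of connectivity to S3, one mitigation is to let the connector retry failed part uploads before it aborts the whole multipart upload. A minimal sketch of the relevant Confluent S3 sink properties, as a fragment of the connector config; the values shown are illustrative assumptions, not from this thread:

"s3.part.retries": "5",
"s3.retry.backoff.ms": "1000"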
Posted on 2020-04-03 05:56:15
The root cause was the lack of an appropriate KMS key policy.
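For reference, a KMS key policy that unblocks SSE-KMS uploads typically grants the connector's IAM role the KMS actions listed in the answer below. A minimal sketch of such a statement; the account ID and role name are placeholders, and in a key policy "Resource": "*" refers to the key itself:

{
  "Sid": "AllowKafkaConnectUseOfKey",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::123456789012:role/kafka-connect-worker"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}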
Posted on 2021-01-26 15:30:57
In my case, it was two things:

1. KMS encryption was enabled on the bucket, so the connector's policy needed these KMS actions:

"Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
],

2. The account denied s3:PutObject for requests with "s3:x-amz-server-side-encryption": "false"; enabling server-side encryption on the uploads solved the problem (see the sketches after this answer).

This document turned out to be useful: https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
https://stackoverflow.com/questions/60994659