I am trying to deploy the Spark History Server on EKS following these instructions: https://github.com/helm/charts/tree/master/stable/spark-history-server. I want my Spark jobs to write their event logs to an S3 bucket, and I want the History Server to read from that bucket. Both need to authenticate with an access key and secret key. Writing logs from my application to the bucket works fine; however, I cannot get the Spark History Server to read from the bucket. I created a Kubernetes secret containing my access key and secret key, and I created the following configuration file:
# values.yaml
pvc:
  enablePVC: false
  existingClaimName: nfs-pvc
  eventsDir: "/"
nfs:
  enableExampleNFS: false
  pvName: nfs-pv
  pvcName: nfs-pvc
s3:
  enableS3: true
  enableIAM: false
  secret: aws-secrets
  accessKeyName: aws-access-key
  secretKeyName: aws-secret-key
  logDirectory: s3a://my-bucket-name/path-in-my-bucket

However, when I try to install the Helm chart, the pod running the History Server keeps crashing with an HTTP 400 error:
2021-01-18 13:02:13 INFO HistoryServer:2566 - Started daemon with process name: 7@spark-history-server-1610974923-66c4fd74f6-5xkjf
2021-01-18 13:02:13 INFO SignalUtils:54 - Registered signal handler for TERM
2021-01-18 13:02:13 INFO SignalUtils:54 - Registered signal handler for HUP
2021-01-18 13:02:13 INFO SignalUtils:54 - Registered signal handler for INT
2021-01-18 13:02:13 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2021-01-18 13:02:13 INFO SecurityManager:54 - Changing view acls to: root
2021-01-18 13:02:13 INFO SecurityManager:54 - Changing modify acls to: root
2021-01-18 13:02:13 INFO SecurityManager:54 - Changing view acls groups to:
2021-01-18 13:02:13 INFO SecurityManager:54 - Changing modify acls groups to:
2021-01-18 13:02:13 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
2021-01-18 13:02:13 INFO FsHistoryProvider:54 - History server ui acls disabled; users with admin permissions: ; groups with admin permissions
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:280)
at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: DEFB3BC73467356C, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: f1aPV+hj8fIrlSLpfmxwMblFWXr67PFfD4YtJ0ucx7RzUYJdUKVE9QwAzc3Hfn5DvtJb5qADLco=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:117)
at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:86)
        ... 6 more

I am stuck at this point, and I would appreciate any suggestions for further investigation or troubleshooting.
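For reference, the secret and the chart install were set up roughly as follows. The secret name and key names match the `secret`, `accessKeyName`, and `secretKeyName` fields in values.yaml above; the literal credential values and the Helm 3 release name are placeholders, and the exact helm invocation is an assumption on my part (I may be misremembering the precise commands I ran):

```shell
# Create the Kubernetes secret referenced by values.yaml.
# The data keys must match accessKeyName / secretKeyName.
kubectl create secret generic aws-secrets \
  --from-literal=aws-access-key='AKIA...' \
  --from-literal=aws-secret-key='...'

# Install the chart with the custom values file (Helm 3 syntax).
helm repo add stable https://charts.helm.sh/stable
helm install spark-history-server stable/spark-history-server \
  -f values.yaml
```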
Thanks!
https://stackoverflow.com/questions/65775330