I am trying to write a CSV file to an Amazon S3 bucket using the following code:
s3write_using(gene_read_counts, FUN = write.csv,
              object = "gene_read_counts_test.csv",
              bucket = "test-bioinformatics-dev-bkt/research/bioinformatics/colo_final/data/processed/colorectal",
              row.names = FALSE)

but I get the following error:
File size is 71619789. Consider setting 'multipart = TRUE'.
Error in parse_aws_s3_response(r, Sig, verbose = verbose) : Forbidden (HTTP 403).
Posted on 2019-11-12 17:01:00
Looking at the error, there are two aspects to address. First, implement a multipart upload:
- Split the object/file into small chunks.
- Initialize the upload using the `CreateMultipartUpload` operation of the S3 API.
- Upload each part using the `UploadPart` operation of the S3 API.
- Complete the upload using the `CompleteMultipartUpload` operation of the S3 API.
- Meanwhile, you must call `AbortMultipartUpload` if any part fails to upload; aborting frees the storage consumed by any previously uploaded parts. See the following AWS documentation:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html
https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html
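The steps above can be sketched with the AWS CLI (`aws s3api` wraps the same API operations). The bucket name, object key, chunk files, and upload ID below are placeholders, not values from the question:

```
# 1. Initialize the upload; the response contains an UploadId
#    that all later calls must reference.
aws s3api create-multipart-upload \
    --bucket my-example-bucket --key data/large-file.csv

# 2. Upload each chunk. Parts are numbered starting at 1, and every
#    part except the last must be at least 5 MB. Record the ETag
#    returned for each part.
aws s3api upload-part \
    --bucket my-example-bucket --key data/large-file.csv \
    --part-number 1 --body chunk-001 \
    --upload-id "<UploadId from step 1>"

# 3. Complete the upload, listing every part number with its ETag
#    (here collected in parts.json).
aws s3api complete-multipart-upload \
    --bucket my-example-bucket --key data/large-file.csv \
    --upload-id "<UploadId from step 1>" \
    --multipart-upload file://parts.json

# On failure, abort so the already-uploaded parts stop consuming storage.
aws s3api abort-multipart-upload \
    --bucket my-example-bucket --key data/large-file.csv \
    --upload-id "<UploadId from step 1>"
```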
Second, check the IAM role: the `Forbidden (HTTP 403)` response usually means the credentials lack the required S3 permissions on that bucket.
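As a rough illustration, the policy attached to the role would need at least the following S3 actions on the target bucket for a multipart upload to succeed (the bucket name here is a placeholder):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::my-example-bucket/*"
    }
  ]
}
```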
Since you are writing the file to S3 from R, I suggest using the `put_object` function with `multipart = TRUE` to upload the file in parts.
You can use the function with the signature below:

put_object(file, object, bucket, multipart = TRUE,
           acl = c("private", "public-read", "public-read-write",
                   "aws-exec-read", "authenticated-read",
                   "bucket-owner-read", "bucket-owner-full-control"),
           headers = list(), ...)

When you set `multipart = TRUE` in the function above, it splits the supplied object into parts (chunks) and uploads them to S3.
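For example, a minimal sketch with the `aws.s3` package, reusing the data frame from the question (the object key and bucket split below are an assumption about the intended layout):

```r
library(aws.s3)

# Write the data frame to a local temporary file first.
tmp <- tempfile(fileext = ".csv")
write.csv(gene_read_counts, tmp, row.names = FALSE)

# Upload it in parts; multipart = TRUE makes put_object use the
# S3 multipart-upload API under the hood.
put_object(
  file      = tmp,
  object    = "research/bioinformatics/colo_final/data/processed/colorectal/gene_read_counts_test.csv",
  bucket    = "test-bioinformatics-dev-bkt",
  multipart = TRUE
)
```

Note that in `aws.s3` the `bucket` argument should be only the bucket name; the folder prefix belongs in `object`. Passing the full path as the bucket, as in the original call, can itself produce request failures.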
https://stackoverflow.com/questions/58818454