I have a lot of files on S3 that need to be zipped, and the zip then has to be served from S3 again. Currently I stream-compress them into a local file and upload that file afterwards. This takes a lot of disk space, because each file is around 3-10 MB and I have to zip up to 100,000 files, so a single zip can exceed 1 TB. I would therefore like a solution along the lines of:
Create a zip file on S3 from files on S3 using Lambda Node
There, the zip is created directly on S3 without using local disk space. But I have not been able to port that solution to Java. I have also found conflicting information about the Java AWS SDK, saying that they were planning to change the streaming behavior in 2017.
I am not sure whether this helps, but here is what I have been doing so far (Upload is a local model holding the S3 information). For better readability I have removed logging and other things. I believe the download itself takes no disk space, since I "pipe" the InputStream directly into the zip. But as I said, I would also like to avoid the local zip file and create it directly on S3. That would probably require constructing the ZipOutputStream with S3 as its target instead of a FileOutputStream, and I am not sure how to do that.
public File zipUploadsToNewTemp(List<Upload> uploads) {
    byte[] buffer = new byte[1024];
    File tempZipFile;
    try {
        tempZipFile = File.createTempFile(UUID.randomUUID().toString(), ".zip");
    } catch (Exception e) {
        throw new ApiException(e, BaseErrorCode.FILE_ERROR, "Could not create Zip file");
    }
    try (FileOutputStream fileOutputStream = new FileOutputStream(tempZipFile);
         ZipOutputStream zipOutputStream = new ZipOutputStream(fileOutputStream)) {
        for (Upload upload : uploads) {
            try (InputStream inputStream = getStreamFromS3(upload)) {
                zipOutputStream.putNextEntry(new ZipEntry(upload.getFileName()));
                writeStreamToZip(buffer, zipOutputStream, inputStream);
                zipOutputStream.closeEntry(); // close each entry, not only the last one
            }
        }
        return tempZipFile;
    } catch (IOException e) {
        logError(type, e);
        if (tempZipFile.exists()) {
            FileUtils.delete(tempZipFile);
        }
        throw new ApiException(e, BaseErrorCode.IO_ERROR,
                "Error zipping files: " + e.getMessage());
    }
}
// I am not even sure, but I think this takes up memory and not disk space
private InputStream getStreamFromS3(Upload upload) {
    try {
        String filename = upload.getId() + "." + upload.getFileType();
        return s3FileService
                .getObject(upload.getBucketName(), filename, upload.getPath());
    } catch (ApiException e) {
        throw e;
    } catch (Exception e) {
        logError(type, e);
        throw new ApiException(e, BaseErrorCode.UNKOWN_ERROR,
                "Unknown error communicating with S3 for file: " + upload.getFileName());
    }
}
private void writeStreamToZip(byte[] buffer, ZipOutputStream zipOutputStream,
        InputStream inputStream) {
    try {
        int len;
        while ((len = inputStream.read(buffer)) > 0) {
            zipOutputStream.write(buffer, 0, len);
        }
    } catch (IOException e) {
        throw new ApiException(e, BaseErrorCode.IO_ERROR, "Could not write stream to zip");
    }
}

And finally the upload source code. The InputStream is created from the temporary zip file.
public PutObjectResult upload(InputStream inputStream, String bucketName, String filename, String folder) {
    String uploadKey = StringUtils.isEmpty(folder) ? "" : (folder + "/");
    uploadKey += filename;
    ObjectMetadata metaData = new ObjectMetadata();
    byte[] bytes;
    try {
        // Note: this buffers the entire stream in memory just to learn its
        // length, which cannot work for a multi-GB zip.
        bytes = IOUtils.toByteArray(inputStream);
    } catch (IOException e) {
        throw new ApiException(e, BaseErrorCode.IO_ERROR, e.getMessage());
    }
    metaData.setContentLength(bytes.length);
    ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(bytes);
    PutObjectRequest putObjectRequest = new PutObjectRequest(bucketPrefix + bucketName, uploadKey, byteArrayInputStream, metaData);
    putObjectRequest.setCannedAcl(CannedAccessControlList.PublicRead);
    try {
        return getS3Client().putObject(putObjectRequest);
    } catch (SdkClientException se) {
        throw s3Exception(se);
    } finally {
        IOUtils.closeQuietly(inputStream);
    }
}

I just found a similar question about exactly what I need, but it has no answer:
Upload ZipOutputStream to S3 without saving zip file (large) temporary to disk using AWS S3 Java
Posted on 2021-04-05 23:18:41
You can get an input stream for the S3 data, then zip those bytes and stream them straight back to S3:
long numBytes; // length of data to send in bytes...somehow you know it before processing the entire stream
PipedOutputStream os = new PipedOutputStream();
PipedInputStream is = new PipedInputStream(os);
ObjectMetadata meta = new ObjectMetadata();
meta.setContentLength(numBytes);
new Thread(() -> {
    // Write to os here; make sure to close it when you're done
    try (ZipOutputStream zipOutputStream = new ZipOutputStream(os)) {
        ZipEntry zipEntry = new ZipEntry("myKey");
        zipOutputStream.putNextEntry(zipEntry);
        S3ObjectInputStream objectContent = amazonS3Client.getObject("myBucket", "myKey").getObjectContent();
        byte[] bytes = new byte[1024];
        int length;
        while ((length = objectContent.read(bytes)) >= 0) {
            zipOutputStream.write(bytes, 0, length);
        }
        objectContent.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}).start();
amazonS3Client.putObject("myBucket", "myKey", is, meta);
is.close(); // always close your streams

Posted on 2019-07-03 14:21:13
I would recommend using an Amazon EC2 instance (as low as about 1c/hour, or even less with Spot Instances). Smaller instance types cost less but have limited bandwidth, so adjust the size to get the performance you want.
Write a script that loops through the files, then for each one: download it, add it to the zip, and finally upload the finished zip back to S3. All the zip magic happens on local disk. There is no need to use streams; just use the Amazon S3 download_file() and upload_file() calls.
There is no Data Transfer charge if the EC2 instance is in the same region as Amazon S3.
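The loop this answer describes could be sketched as below. The S3 transfer calls are left out so the sketch runs standalone; in a real script each source `Path` would come from something like `s3.getObject(new GetObjectRequest(bucket, key), localFile)`, and the finished archive would go back up with `s3.putObject(bucket, key, zipFile)` (bucket and key names being whatever your layout dictates):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.zip.*;

public class DiskZip {
    // Builds the archive on local disk, one source file at a time.
    static Path zipOnDisk(List<Path> sources) throws IOException {
        Path zipFile = Files.createTempFile("archive", ".zip");
        try (ZipOutputStream zip = new ZipOutputStream(Files.newOutputStream(zipFile))) {
            for (Path src : sources) {
                zip.putNextEntry(new ZipEntry(src.getFileName().toString()));
                Files.copy(src, zip);          // stream the file into the current entry
                zip.closeEntry();
                Files.delete(src);             // free disk space as soon as each file is archived
            }
        }
        return zipFile;
    }

    public static void main(String[] args) throws IOException {
        Path a = Files.createTempFile("a", ".txt");
        Files.write(a, "hello".getBytes());
        Path zip = zipOnDisk(List.of(a));
        try (ZipInputStream zin = new ZipInputStream(Files.newInputStream(zip))) {
            ZipEntry e = zin.getNextEntry();
            System.out.println(e.getName().endsWith(".txt") + ":" + new String(zin.readAllBytes()));
        }
    }
}
```

Deleting each source file right after it is archived keeps the peak disk usage near the zip size plus one source file, which is what makes the large-disk EC2 approach workable.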
https://stackoverflow.com/questions/56846856
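As a closing note, the piped approach from the first answer generalizes to the asker's list of uploads: a writer thread feeds a single ZipOutputStream over a pipe while the main thread hands the read end to putObject. The sketch below is an assumption, not code from the answers; the `fetch` function stands in for a call like `getStreamFromS3(upload)`, and here it is fed from in-memory byte arrays so the piping logic can be exercised without S3:

```java
import java.io.*;
import java.util.*;
import java.util.function.Function;
import java.util.zip.*;

public class PipedZip {
    // Zips the named sources into a single stream without touching disk.
    // `fetch` is a placeholder for fetching one S3 object's content.
    static InputStream zipToStream(List<String> names,
                                   Function<String, InputStream> fetch) throws IOException {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out, 64 * 1024);
        new Thread(() -> {
            try (ZipOutputStream zip = new ZipOutputStream(out)) {
                byte[] buffer = new byte[8192];
                for (String name : names) {
                    zip.putNextEntry(new ZipEntry(name));
                    try (InputStream src = fetch.apply(name)) {
                        int len;
                        while ((len = src.read(buffer)) > 0) {
                            zip.write(buffer, 0, len);
                        }
                    }
                    zip.closeEntry();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }).start();
        return in; // hand this read end to the S3 upload
    }

    public static void main(String[] args) throws IOException {
        Map<String, byte[]> fake = new LinkedHashMap<>();
        fake.put("a.txt", "hello".getBytes());
        fake.put("b.txt", "world".getBytes());
        InputStream zipped = zipToStream(new ArrayList<>(fake.keySet()),
                name -> new ByteArrayInputStream(fake.get(name)));
        try (ZipInputStream zin = new ZipInputStream(zipped)) {
            ZipEntry entry;
            while ((entry = zin.getNextEntry()) != null) {
                System.out.println(entry.getName() + ":" + new String(zin.readAllBytes()));
            }
        }
    }
}
```

The remaining catch is that PutObjectRequest wants a Content-Length up front, and without one the SDK buffers the stream. In practice that means pairing this with a multipart upload (e.g. the v1 SDK's TransferManager) rather than a single putObject call.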