I want to transfer a large archive file (roughly 10 GB) to my Amazon S3 bucket using a PHP script (it is a backup script).
I am currently using the following code:
$uploader = new \Aws\S3\MultipartUploader($s3Client, $tmpFilesBackupDirectory, [
'Bucket' => 'MyBucketName',
'Key' => 'backup'.date('Y-m-d').'.tar.gz',
'StorageClass' => $storageClass,
'Tagging' => 'expiration='.$tagging,
'ServerSideEncryption' => 'AES256',
]);
try
{
$result = $uploader->upload();
echo "Upload complete: {$result['ObjectURL']}\n";
}
catch (Aws\Exception\MultipartUploadException $e)
{
echo $e->getMessage() . "\n";
}
My problem is that after a few minutes (let's say 10), I get an error message from my Apache server: 504 Gateway Timeout. I know this error is related to my Apache server's configuration, but I don't want to increase the server's timeout.
My idea is to use the PHP low-level API and perform the following steps:
I think it would work, but before using this method I would like to know whether there is an easier way to achieve my goal, for example with the high-level API.
Any suggestions?
Note: I am not looking for an existing script: my main goal is to learn how to solve this issue :)
Best regards,
Lionel
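As a side note on the high-level API: the SDK's MultipartUploader exposes the state of a failed upload through the exception it throws, so an upload can be resumed rather than restarted. A minimal sketch of that error-recovery pattern (the bucket name, key and file path here are placeholders, and $s3Client is assumed to be an already-configured Aws\S3\S3Client):

```php
<?php
require 'vendor/autoload.php'; // AWS SDK for PHP v3

use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$source = '/tmp/backup.tar.gz'; // placeholder path
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'MyBucketName',
    'key'    => 'backup.tar.gz',
]);

do {
    try {
        $result = $uploader->upload();
    } catch (MultipartUploadException $e) {
        // Rebuild the uploader from the state of the failed attempt:
        // only the parts that are still missing are sent again.
        $uploader = new MultipartUploader($s3Client, $source, [
            'state' => $e->getState(),
        ]);
    }
} while (!isset($result));
```

This does not solve the 504 by itself (the PHP process still has to outlive the upload), but it is a building block that can be combined with a reload/retry strategy.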
Posted on 2018-03-20 19:36:08
To answer my own question (I hope it will help someone one day!), here is how I solved my problem, step by step:
1/ When the page loads, I check whether the archive file already exists. If not, I create my .tar.gz file and reload the page with header(). I noticed that this step is very slow, because there is a lot of data to archive. That is why I reload the page: to avoid any timeout during the next steps!
2/ If the backup file exists, I use an AWS multipart upload to send 10 chunks of 100 MB each. Every time a chunk is sent successfully, I update a session variable ($_SESSION['backup']) so I know which chunk must be uploaded next. Once my 10 chunks have been sent, I reload the page again to avoid any timeout.
3/ I repeat step 2 until all the parts have been uploaded, using my session variable to know which part needs to be sent next.
4/ Finally, I complete the multipart upload and delete the archive stored locally.
Of course, you can send more than 10 chunks of 100 MB before reloading the page. I chose this value to be sure I would not hit the timeout even with a slow upload speed, but I guess I could easily send about 5 GB at a time without any problem. Note: you cannot redirect your script to itself too many times; there is a limit (I believe it is around 20 redirects for Chrome and Firefox before getting an error, and more for IE). In my case (the archive is about 10 GB), transferring 1 GB per reload is fine (the page is reloaded about 10 times), but as the archive grows I will have to send more data per reload.
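A quick sanity check of the numbers in the steps above (the archive size, chunk size and parts-per-reload values are the ones assumed in this answer):

```php
<?php
// Rough arithmetic for the reload strategy described above.
$archiveSize    = 10 * 1024 ** 3;   // ~10 GB archive
$chunkSize      = 100 * 1024 ** 2;  // 100 MB per part
$partsPerReload = 10;               // parts sent between two redirects

$totalParts  = (int) ceil($archiveSize / $chunkSize);     // 103 parts
$reloadCount = (int) ceil($totalParts / $partsPerReload); // 11 reloads

// The session only needs to remember the next part number; the byte
// offset to resume from in the archive follows directly from it:
$nextPart     = 41; // example: 40 parts already uploaded
$resumeOffset = ($nextPart - 1) * $chunkSize;
```

Eleven redirects stays comfortably under the ~20-redirect browser limit mentioned above.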
Here is my full script. It can certainly be improved, but it works well for now, and it may help someone facing a similar problem!
public function backup()
{
ini_set('max_execution_time', '1800');
ini_set('memory_limit', '1024M');
require ROOT.'/Public/scripts/aws/aws-autoloader.php';
$s3Client = new \Aws\S3\S3Client([
'version' => 'latest',
'region' => 'eu-west-1',
'credentials' => [
'key' => '',
'secret' => '',
],
]);
$tmpDBBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.sql.gz';
if(!file_exists($tmpDBBackupDirectory))
{
$this->cleanInterruptedMultipartUploads($s3Client);
$this->createSQLBackupFile();
$this->uploadSQLBackup($s3Client, $tmpDBBackupDirectory);
}
$tmpFilesBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.tar.gz';
if(!isset($_SESSION['backup']['archiveReady']))
{
$this->createFTPBackupFile();
header('Location: '.CURRENT_URL);
}
$this->uploadFTPBackup($s3Client, $tmpFilesBackupDirectory);
unlink($tmpDBBackupDirectory);
unlink($tmpFilesBackupDirectory);
}
public function createSQLBackupFile()
{
// Backup DB
$tmpDBBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.sql.gz';
if(!file_exists($tmpDBBackupDirectory))
{
$return_var = NULL;
$output = NULL;
$dbLogin = '';
$dbPassword = '';
$dbName = '';
$command = 'mysqldump -u '.$dbLogin.' -p'.$dbPassword.' '.$dbName.' --single-transaction --quick | gzip > '.$tmpDBBackupDirectory;
exec($command, $output, $return_var);
}
return $tmpDBBackupDirectory;
}
public function createFTPBackupFile()
{
// Compacting all files
$tmpFilesBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.tar.gz';
$command = 'tar -cf '.$tmpFilesBackupDirectory.' '.ROOT;
exec($command);
$_SESSION['backup']['archiveReady'] = true;
return $tmpFilesBackupDirectory;
}
public function uploadSQLBackup($s3Client, $tmpDBBackupDirectory)
{
$result = $s3Client->putObject([
'Bucket' => '',
'Key' => 'backup'.date('Y-m-d').'.sql.gz',
'SourceFile' => $tmpDBBackupDirectory,
'StorageClass' => '',
'Tagging' => '',
'ServerSideEncryption' => 'AES256',
]);
}
public function uploadFTPBackup($s3Client, $tmpFilesBackupDirectory)
{
$storageClass = 'STANDARD_IA';
$bucket = '';
$key = 'backup'.date('Y-m-d').'.tar.gz';
$chunkSize = 100 * 1024 * 1024; // 100MB
$reloadFrequency = 10;
if(!isset($_SESSION['backup']['uploadId']))
{
$response = $s3Client->createMultipartUpload([
'Bucket' => $bucket,
'Key' => $key,
'StorageClass' => $storageClass,
'Tagging' => '',
'ServerSideEncryption' => 'AES256',
]);
$_SESSION['backup']['uploadId'] = $response['UploadId'];
$_SESSION['backup']['partNumber'] = 1;
}
$file = fopen($tmpFilesBackupDirectory, 'r');
$parts = array();
//Reading parts already uploaded
for($i = 1; $i < $_SESSION['backup']['partNumber']; $i++)
{
if(!feof($file))
{
fread($file, $chunkSize);
}
}
// Uploading next parts
// Uploading next parts
while(!feof($file))
{
$body = fread($file, $chunkSize);
unset($result);
do
{
try
{
$result = $s3Client->uploadPart(array(
'Bucket' => $bucket,
'Key' => $key,
'UploadId' => $_SESSION['backup']['uploadId'],
'PartNumber' => $_SESSION['backup']['partNumber'],
'Body' => $body,
));
}
catch (\Aws\S3\Exception\S3Exception $e)
{
// The request failed: loop and retry this part
}
}
while (!isset($result));
$_SESSION['backup']['parts'][] = array(
'PartNumber' => $_SESSION['backup']['partNumber'],
'ETag' => $result['ETag'],
);
$_SESSION['backup']['partNumber']++;
if($_SESSION['backup']['partNumber'] % $reloadFrequency == 1)
{
header('Location: '.CURRENT_URL);
die;
}
}
fclose($file);
$result = $s3Client->completeMultipartUpload(array(
'Bucket' => $bucket,
'Key' => $key,
'UploadId' => $_SESSION['backup']['uploadId'],
'MultipartUpload' => Array(
'Parts' => $_SESSION['backup']['parts'],
),
));
$url = $result['Location'];
}
public function cleanInterruptedMultipartUploads($s3Client)
{
$tResults = $s3Client->listMultipartUploads(array('Bucket' => ''));
$tResults = $tResults->toArray();
if(isset($tResults['Uploads']))
{
foreach($tResults['Uploads'] AS $result)
{
$s3Client->abortMultipartUpload(array(
'Bucket' => '',
'Key' => $result['Key'],
'UploadId' => $result['UploadId']));
}
}
if(isset($_SESSION['backup']))
{
unset($_SESSION['backup']);
}
}
If anyone has any questions, do not hesitate to contact me :)
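One possible improvement to the script above: instead of fread()-ing through all the parts that were already uploaded just to advance the file pointer, a single fseek() jumps straight to the resume offset. A self-contained sketch of the idea, using a tiny made-up chunk size and a temporary file for the demo:

```php
<?php
// Skip already-uploaded parts with one seek instead of reading them all.
$chunkSize = 8;                                    // tiny chunk for the demo
$file = tmpfile();
fwrite($file, str_repeat('x', 3 * $chunkSize) . 'RESUME'); // parts 1-3, then the tail
$nextPart = 4;                                     // parts 1-3 already uploaded
fseek($file, ($nextPart - 1) * $chunkSize);        // jump directly to part 4
$body = fread($file, $chunkSize);                  // $body is now 'RESUME'
fclose($file);
```

With a 10 GB archive this avoids reading gigabytes of data on every reload just to find the resume position.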
Posted on 2018-03-20 14:05:36
https://stackoverflow.com/questions/47832871