I am trying to upload a file to an Azure file share using the Azure.Storage.Files.Shares library.
It works fine if I don't chunk the file (by making a single UploadRange call), but for files over 4 MB I cannot get chunking to work. The downloaded file is the same size, but it won't open in a viewer.
I can't set smaller HttpRanges on one large stream because I get a 'request body too large' error, so I split the file stream into multiple mini streams and upload the entire HttpRange of each one.
ShareClient share = new ShareClient(Common.Settings.AppSettings.AzureStorageConnectionString, ShareName());
ShareDirectoryClient directory = share.GetDirectoryClient(directoryName);
ShareFileClient file = directory.GetFileClient(fileKey);

using (FileStream stream = fileInfo.OpenRead())
{
    file.Create(stream.Length);
    //file.UploadRange(new HttpRange(0, stream.Length), stream);

    int blockSize = 128 * 1024;
    BinaryReader reader = new BinaryReader(stream);
    while (true)
    {
        byte[] buffer = reader.ReadBytes(blockSize);
        if (buffer.Length == 0)
            break;

        MemoryStream uploadChunk = new MemoryStream();
        uploadChunk.Write(buffer, 0, buffer.Length);
        uploadChunk.Position = 0;
        file.UploadRange(new HttpRange(0, uploadChunk.Length), uploadChunk);
    }
    reader.Close();
}

The code above uploads without errors, but the image is corrupted when downloaded from Azure.
Does anyone have any ideas? Thanks for any help you can provide.
Cheers
Steve
Posted on 2020-04-03 07:50:31
I was able to reproduce the issue. Essentially, the problem is with the following line of code:

new HttpRange(0, uploadChunk.Length)

You are always writing the content to the same range (starting at offset 0), which is why the file gets corrupted.

Please try the code below; it should work. What I have done here is define an HTTP range offset and keep shifting it by the number of bytes already written to the file.
using (FileStream stream = fileInfo.OpenRead())
{
    file.Create(stream.Length);
    //file.UploadRange(new HttpRange(0, stream.Length), stream);

    int blockSize = 1 * 1024;
    long offset = 0; // Define the HTTP range offset
    BinaryReader reader = new BinaryReader(stream);
    while (true)
    {
        byte[] buffer = reader.ReadBytes(blockSize);
        if (buffer.Length == 0)
            break;

        MemoryStream uploadChunk = new MemoryStream();
        uploadChunk.Write(buffer, 0, buffer.Length);
        uploadChunk.Position = 0;

        HttpRange httpRange = new HttpRange(offset, buffer.Length);
        var resp = file.UploadRange(httpRange, uploadChunk);
        offset += buffer.Length; // Shift the offset by the number of bytes already written
    }
    reader.Close();
}

https://stackoverflow.com/questions/61001985
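As background, the 'request body too large' error comes from a service limit: a single UploadRange (Put Range) call accepts at most 4 MiB of data, which is why the whole-file upload fails above that size. The range arithmetic the fix relies on can be sketched in isolation. The helper below (`ChunkRanges` is a hypothetical name, not part of the Azure SDK) simply enumerates the (offset, length) pairs that a chunked upload loop would pass to `new HttpRange(offset, length)`, including the shorter final chunk:

```csharp
using System;
using System.Collections.Generic;

class ChunkDemo
{
    // Hypothetical helper: yields the (offset, length) pairs a chunked
    // upload would use, one pair per UploadRange call.
    static IEnumerable<(long Offset, long Length)> ChunkRanges(long fileLength, long blockSize)
    {
        for (long offset = 0; offset < fileLength; offset += blockSize)
        {
            // The final chunk may be shorter than blockSize.
            long length = Math.Min(blockSize, fileLength - offset);
            yield return (offset, length);
        }
    }

    static void Main()
    {
        // A 10 KiB file split into 4 KiB blocks:
        // ranges (0,4096), (4096,4096), (8192,2048).
        foreach (var (offset, length) in ChunkRanges(10 * 1024, 4 * 1024))
            Console.WriteLine($"{offset} {length}");
    }
}
```

Note that consecutive offsets must be contiguous (each offset equals the previous offset plus the previous length); any gap or overlap, like the constant offset 0 in the original question, corrupts the reassembled file.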