
How to download a file in parallel using HttpWebRequest

Stack Overflow user
Asked 2014-02-12 19:26:15
2 answers · 2.1K views · 0 followers · Score 11

I am trying to make a program like IDM that downloads different parts of a file simultaneously.

The tool I am using to accomplish this is the TPL in C# .NET 4.5.

However, I am having a problem when using Tasks to make the operation parallel.

The sequential function works well and downloads the file correctly.

The parallel function using Tasks works, until something strange happens:

I created 4 tasks with Factory.StartNew(), giving each one a start position and an end position. Each task downloads its part of the file and returns it as a byte[]. Everything goes well and the tasks work fine, but at some point execution freezes: the program just stops and nothing else happens.

The implementation of the parallel function:

static void DownloadPartsParallel()
    {

        string uriPath = "http://mschnlnine.vo.llnwd.net/d1/pdc08/PPTX/BB01.pptx";
        Uri uri = new Uri(uriPath);
        long l = GetFileSize(uri);
        Console.WriteLine("Size={0}", l);
        int granularity = 4;
        byte[][] arr = new byte[granularity][];
        Task<byte[]>[] tasks = new Task<byte[]>[granularity];
        tasks[0] = Task<byte[]>.Factory.StartNew(() => DownloadPartOfFile(uri, 0, l / granularity));
        tasks[1] = Task<byte[]>.Factory.StartNew(() => DownloadPartOfFile(uri, l / granularity + 1, l / granularity + l / granularity));
        tasks[2] = Task<byte[]>.Factory.StartNew(() => DownloadPartOfFile(uri, l / granularity + l / granularity + 1, l / granularity + l / granularity + l / granularity));
        tasks[3] = Task<byte[]>.Factory.StartNew(() => DownloadPartOfFile(uri, l / granularity + l / granularity + l / granularity + 1, l));//(l / granularity) + (l / granularity) + (l / granularity) + (l / granularity)


        arr[0] = tasks[0].Result;
        arr[1] = tasks[1].Result;
        arr[2] = tasks[2].Result;
        arr[3] = tasks[3].Result;
        Stream localStream;
        localStream = File.Create("E:\\a\\" + Path.GetFileName(uri.LocalPath));
        for (int i = 0; i < granularity; i++)
        {

            if (i == granularity - 1)
            {
                for (int j = 0; j < arr[i].Length - 1; j++)
                {
                    localStream.WriteByte(arr[i][j]);
                }
            }
            else
                for (int j = 0; j < arr[i].Length; j++)
                {
                    localStream.WriteByte(arr[i][j]);
                }
        }
    }

The implementation of the DownloadPartOfFile function:

public static byte[] DownloadPartOfFile(Uri fileUrl, long from, long to)
    {
        int bytesProcessed = 0;
        BinaryReader reader = null;
        WebResponse response = null;
        byte[] bytes = new byte[(to - from) + 1];

        try
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(fileUrl);
            request.AddRange(from, to);
            request.ReadWriteTimeout = int.MaxValue;
            request.Timeout = int.MaxValue;
            if (request != null)
            {
                response = request.GetResponse();
                if (response != null)
                {
                    reader = new BinaryReader(response.GetResponseStream());
                    int bytesRead;
                    do
                    {
                        byte[] buffer = new byte[1024];
                        bytesRead = reader.Read(buffer, 0, buffer.Length);
                        if (bytesRead == 0)
                        {
                            break;
                        }
                        Array.Resize<byte>(ref buffer, bytesRead);
                        buffer.CopyTo(bytes, bytesProcessed);
                        bytesProcessed += bytesRead;
                        Console.WriteLine(Thread.CurrentThread.ManagedThreadId + ",Downloading" + bytesProcessed);
                    } while (bytesRead > 0);
                }
            }
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message);
        }
        finally
        {
            if (response != null) response.Close();
            if (reader != null) reader.Close();
        }

        return bytes;
    }

I tried to work around this by setting the read timeout, the read/write timeout, and the timeout all to int.MaxValue, which is why the program freezes instead of failing; if I don't do that, a timeout exception is thrown in DownloadPartsParallel.

So, is there a solution, or any other advice that might help? Thanks.
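As an aside on the arithmetic in the question: building the boundaries with repeated `l / granularity` additions can leave up to `granularity - 1` trailing bytes uncovered when the size isn't evenly divisible, because integer division discards the remainder. A sketch of one gap-free way to compute non-overlapping inclusive ranges (illustrative code, not from the question; the last range absorbs the remainder):

```csharp
using System;
using System.Collections.Generic;

class RangeSplit
{
    // Split a file of `length` bytes into `parts` inclusive [from, to] ranges
    // with no gaps or overlaps; the last range absorbs the remainder.
    public static List<(long From, long To)> Split(long length, int parts)
    {
        var ranges = new List<(long, long)>();
        long chunk = length / parts;
        for (int i = 0; i < parts; i++)
        {
            long from = i * chunk;
            long to = (i == parts - 1) ? length - 1 : from + chunk - 1;
            ranges.Add((from, to));
        }
        return ranges;
    }

    static void Main()
    {
        foreach (var (from, to) in Split(1003, 4))
            Console.WriteLine($"{from}-{to}");
        // 0-249, 250-499, 500-749, 750-1002
    }
}
```

For 1003 bytes and 4 parts this yields 0-249, 250-499, 500-749, 750-1002, so every byte is requested exactly once.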


2 Answers

Stack Overflow user

Answered 2014-02-13 06:56:41

I would use HttpClient.SendAsync rather than WebRequest (see "HttpClient is Here!").

I would not use any extra threads. The HttpClient.SendAsync API is naturally asynchronous and returns an awaitable Task<>, so there is no need to offload it to a pool thread with Task.Run/Task.TaskFactory.StartNew (see for a detailed discussion).

I would also limit the number of parallel downloads with SemaphoreSlim.WaitAsync(). Below is my take at this as a console app (not extensively tested):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

namespace Console_21737681
{
    class Program
    {
        const int MAX_PARALLEL = 4; // max parallel downloads
        const int CHUNK_SIZE = 2048; // size of a single chunk

        // a chunk of downloaded data
        class Chunk
        {
            public long Start { get; set; }
            public int Length { get; set; }
            public byte[] Data { get; set; }
        };

        // throttle downloads
        SemaphoreSlim _throttleSemaphore = new SemaphoreSlim(MAX_PARALLEL);

        // get a chunk
        async Task<Chunk> GetChunk(HttpClient client, long start, int length, string url)
        {
            await _throttleSemaphore.WaitAsync();
            try
            {
                using (var request = new HttpRequestMessage(HttpMethod.Get, url))
                {
                    request.Headers.Range = new System.Net.Http.Headers.RangeHeaderValue(start, start + length - 1);
                    using (var response = await client.SendAsync(request))
                    {
                        var data = await response.Content.ReadAsByteArrayAsync();
                        return new Chunk { Start = start, Length = length/*, Data = data*/ };
                    }
                }
            }
            finally
            {
                _throttleSemaphore.Release();
            }
        }

        // download the URL in parallel by chunks
        async Task<Chunk[]> DownloadAsync(string url)
        {
            using (var client = new HttpClient())
            {
                var request = new HttpRequestMessage(HttpMethod.Head, url);
                var response = await client.SendAsync(request);
                var contentLength = response.Content.Headers.ContentLength;

                if (!contentLength.HasValue)
                    throw new InvalidOperationException("ContentLength");

                var numOfChunks = (int)((contentLength.Value + CHUNK_SIZE - 1) / CHUNK_SIZE);

                var tasks = Enumerable.Range(0, numOfChunks).Select(i =>
                {
                    // start a new chunk
                    long start = i * CHUNK_SIZE;
                    var length = (int)Math.Min(CHUNK_SIZE, contentLength.Value - start);
                    return GetChunk(client, start, length, url);
                }).ToList();

                await Task.WhenAll(tasks);

                // the order of chunks is random
                return tasks.Select(task => task.Result).ToArray();
            }
        }

        static void Main(string[] args)
        {
            var program = new Program();
            var chunks = program.DownloadAsync("http://flaglane.com/download/australian-flag/australian-flag-large.png").Result;

            Console.WriteLine("Chunks: " + chunks.Count());
            Console.ReadLine();
        }
    }
}
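One practical note on this answer: GetChunk leaves `Data` commented out, and as the comment in DownloadAsync says, the chunks come back in arbitrary order. To actually reconstruct the file you would populate `Data` and sort by offset before concatenating. A minimal sketch of that last step (`Chunk` here is a simplified stand-in for the class above):

```csharp
using System;
using System.Linq;

class ChunkAssembly
{
    // Simplified stand-in for the Chunk class in the answer above.
    record Chunk(long Start, byte[] Data);

    // Chunks may complete in any order; sort by offset before concatenating.
    static byte[] Reassemble(Chunk[] chunks) =>
        chunks.OrderBy(c => c.Start).SelectMany(c => c.Data).ToArray();

    static void Main()
    {
        var chunks = new[]
        {
            new Chunk(3, new byte[] { 3, 4 }),
            new Chunk(0, new byte[] { 0, 1, 2 }),
        };
        Console.WriteLine(string.Join(",", Reassemble(chunks)));
        // 0,1,2,3,4
    }
}
```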
Score: 4

Stack Overflow user

Answered 2014-02-12 20:26:26

OK, here's how I would do what you're trying to do. It's basically the same idea, just implemented differently.

public static void DownloadFileInPiecesAndSave()
{
    //test
    var uri = new Uri("http://www.w3.org/");

    var bytes = DownloadInPieces(uri, 4);
    File.WriteAllBytes(@"c:\temp\RangeDownloadSample.html", bytes);
}

/// <summary>
/// Download a file via HTTP in multiple pieces using a Range request.
/// </summary>
public static byte[] DownloadInPieces(Uri uri, uint numberOfPieces)
{
    //I'm just fudging this for expository purposes. In reality you would probably want to do a HEAD request to get total file size.
    ulong totalFileSize = 1003; 

    var pieceSize = totalFileSize / numberOfPieces;

    List<Task<byte[]>> tasks = new List<Task<byte[]>>();
    for (uint i = 0; i < numberOfPieces; i++)
    {
        var start = i * pieceSize;
        var end = start + (i == numberOfPieces - 1 ? pieceSize + totalFileSize % numberOfPieces : pieceSize);
        tasks.Add(DownloadFilePiece(uri, start, end));
    }

    Task.WaitAll(tasks.ToArray());

    //This is probably not the single most efficient way to combine byte arrays, but it is succinct...
    return tasks.SelectMany(t => t.Result).ToArray();
}

private static async Task<byte[]> DownloadFilePiece(Uri uri, ulong rangeStart, ulong rangeEnd)
{
    try
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.AddRange((long)rangeStart, (long)rangeEnd);
        request.Proxy = WebProxy.GetDefaultProxy();

        using (var response = await request.GetResponseAsync())
        using (var responseStream = response.GetResponseStream())
        using (var memoryStream = new MemoryStream((int)(rangeEnd - rangeStart)))
        {
            await responseStream.CopyToAsync(memoryStream);
            return memoryStream.ToArray();
        }
    }
    catch (WebException wex)
    {
        //Do lots of error handling here, lots of things can go wrong
        //In particular watch for 416 Requested Range Not Satisfiable
        return null;
    }
    catch (Exception ex)
    {
        //handle the unexpected here...
        return null;
    }
}

Note that I'm glossing over a lot of things here, such as:

  • Detecting whether the server supports range requests. If it doesn't, the server will return the entire content for every request, and we'll get several copies of it.
  • Handling HTTP errors of any kind. What if the third request fails?
  • Retry logic
  • Timeouts
  • Figuring out how big the file actually is
  • Checking whether the file is big enough to warrant multiple requests, and if so, how many? It's probably not worth doing this in parallel for files under 1 or 2 MB, but you'd have to test
  • Most likely a bunch of other things.
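The size check in the second-to-last bullet can be made concrete. A sketch of choosing a piece count, where the 1 MB floor and the cap of 4 are illustrative guesses rather than measured values:

```csharp
using System;

class PieceCountDemo
{
    // Pick how many parallel pieces to use for a file of `size` bytes.
    // Files smaller than `minPieceSize` get a single request; otherwise use
    // as many pieces as keep each at least `minPieceSize` long, capped at
    // `maxPieces`. Both thresholds are illustrative, not measured.
    public static int ChoosePieceCount(long size, int maxPieces = 4, long minPieceSize = 1 << 20)
    {
        long byBudget = size / minPieceSize; // pieces that stay >= minPieceSize
        return (int)Math.Max(1, Math.Min(maxPieces, byBudget));
    }

    static void Main()
    {
        Console.WriteLine(ChoosePieceCount(512 * 1024));         // 1 (too small to split)
        Console.WriteLine(ChoosePieceCount(3L * 1024 * 1024));   // 3
        Console.WriteLine(ChoosePieceCount(100L * 1024 * 1024)); // 4 (capped)
    }
}
```

The only way to pick the real thresholds is to benchmark against your target servers, as the bullet says.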

So you've got a long way to go before I'd use this in production. But it should give you an idea of where to start.

Score: 2
The original content of this page was provided by Stack Overflow; translation was supported by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/21737681
