
crawledPage.HttpWebResponse is null in Abot

Stack Overflow user
Asked on 2014-01-24 22:53:32
1 answer · 833 views · 1 vote

I am trying to create a C# web crawler using Abot.

I followed the QuickStart tutorial, but I can't seem to get it to work.

It throws an unhandled exception in the crawler_ProcessPageCrawlCompleted method, on exactly this line:

if (crawledPage.WebException != null || crawledPage.HttpWebResponse.StatusCode != HttpStatusCode.OK) 
{
   Console.WriteLine("Crawl of page failed {0}", crawledPage.Uri.AbsoluteUri);
}

because crawledPage.HttpWebResponse is null.

I am probably missing something, but what?

Notes and full code:

I edited my app.config file as the tutorial suggests; below is my class (referencing Abot.dll):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;


using Abot.Crawler;
using Abot.Poco;
using System.Net;
using System.Windows.Forms; // for MessageBox (HttpStatusCode comes from System.Net above)

namespace WebCrawler
{
    public class MyCrawler
    {
        public MyCrawler()
        {

        }
        public PoliteWebCrawler crawler;
        public void initialize()
        {
            // 3. Create an instance of Abot.Crawler.PoliteWebCrawler
            // 3.2 Will use app.config for configuration
            // because I chose option 2.1 === edited app.config
             crawler = new PoliteWebCrawler();

            // 4. Register for events and create processing methods (both synchronous and asynchronous versions available)
            crawler.PageCrawlStartingAsync += crawler_ProcessPageCrawlStarting;
            crawler.PageCrawlCompletedAsync += crawler_ProcessPageCrawlCompleted;
            crawler.PageCrawlDisallowedAsync += crawler_PageCrawlDisallowed;
            crawler.PageLinksCrawlDisallowedAsync += crawler_PageLinksCrawlDisallowed;
            #region(Step 5. Add custom objects to crawl bag ?)
            //5. Add any number of custom objects to the dynamic crawl bag. These objects will be available in the CrawlContext.CrawlBag object.
            // ???
            /*
            PoliteWebCrawler crawler = new PoliteWebCrawler();
            crawler.CrawlBag.MyFoo1 = new Foo();
            crawler.CrawlBag.MyFoo2 = new Foo();
            crawler.PageCrawlStartingAsync += crawler_ProcessPageCrawlStarting;

            void crawler_ProcessPageCrawlStarting(object sender, PageCrawlStartingArgs e)
            {
                    //Get your Foo instances from the CrawlContext object
                    CrawlContext context = e.CrawlContext;
                    context.CrawlBag.MyFoo1.Bar();
                    context.CrawlBag.MyFoo2.Bar();
            }
            */
            #endregion

        }// initialize()

        public void doCrawl()
        {            
            CrawlResult result = crawler.Crawl(new Uri("http://yahoo.com"));

            if (result.ErrorOccurred)
            {
               /* line 60 : */  // Console.WriteLine("Crawl of {0} completed with error: {1}", result.RootUri.AbsoluteUri, result.ErrorMessage);
                // Commented out because it produces the compile error: 'Abot.Poco.CrawlResult' does not contain a definition for 'ErrorMessage'
            }
            else
            {
                Console.WriteLine("Crawl of {0} completed without error.", result.RootUri.AbsoluteUri);
            }
        }

        void crawler_ProcessPageCrawlStarting(object sender, PageCrawlStartingArgs e)
        {
            PageToCrawl pageToCrawl = e.PageToCrawl;
            Console.WriteLine("About to crawl link {0} which was found on page {1}", pageToCrawl.Uri.AbsoluteUri, pageToCrawl.ParentUri.AbsoluteUri);
        }

        void crawler_ProcessPageCrawlCompleted(object sender, PageCrawlCompletedArgs e)
        {
            CrawledPage crawledPage = e.CrawledPage;            

            if (crawledPage.HttpWebResponse == null)
            {
                MessageBox.Show("HttpWebResponse null");
            }

            /* line 84 : */ if (crawledPage.WebException != null || crawledPage.HttpWebResponse.StatusCode != HttpStatusCode.OK)
                Console.WriteLine("Crawl of page failed {0}", crawledPage.Uri.AbsoluteUri);
            else
                Console.WriteLine("Crawl of page succeeded {0}", crawledPage.Uri.AbsoluteUri);

            if (string.IsNullOrEmpty(crawledPage.RawContent))
                Console.WriteLine("Page had no content {0}", crawledPage.Uri.AbsoluteUri);
        }

        void crawler_PageLinksCrawlDisallowed(object sender, PageLinksCrawlDisallowedArgs e)
        {
            CrawledPage crawledPage = e.CrawledPage;
            Console.WriteLine("Did not crawl the links on page {0} due to {1}", crawledPage.Uri.AbsoluteUri, e.DisallowedReason);
        }

        void crawler_PageCrawlDisallowed(object sender, PageCrawlDisallowedArgs e)
        {
            PageToCrawl pageToCrawl = e.PageToCrawl;
            Console.WriteLine("Did not crawl page {0} due to {1}", pageToCrawl.Uri.AbsoluteUri, e.DisallowedReason);
        }
    }// end of public class MyCrawler
}

The error occurs at line 84.

Additionally, there is a detail at line 60 (which may indicate what I am missing); the error there is:

'Abot.Poco.CrawlResult' does not contain a definition for 'ErrorMessage' and no extension method 'ErrorMessage' accepting a first argument of type 'Abot.Poco.CrawlResult' could be found (are you missing a using directive or an assembly reference?)
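For what it's worth, in the Abot versions I have seen, CrawlResult exposes an ErrorException property (an Exception) rather than an ErrorMessage string, which would explain the compile error. An assumption worth verifying against the Abot.dll version actually referenced, but the commented-out line could then be written as:

```csharp
// Sketch: assumes this Abot version's CrawlResult has ErrorException instead of ErrorMessage.
if (result.ErrorOccurred)
{
    Console.WriteLine("Crawl of {0} completed with error: {1}",
        result.RootUri.AbsoluteUri, result.ErrorException.Message);
}
```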

Thanks for your help!


1 Answer

Stack Overflow user

Answered on 2014-05-25 02:39:56

This means you hit a URL that does not respond to an HTTP request (i.e., it does not exist, like http://shhdggdhshshhsjsjj.com). That can cause both the HttpWebResponse and WebException properties to be null.
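Given that, the check in crawler_ProcessPageCrawlCompleted needs to tolerate both properties being null. A minimal null-safe rewrite of the handler from the question might look like this (a sketch, not part of the original answer):

```csharp
void crawler_ProcessPageCrawlCompleted(object sender, PageCrawlCompletedArgs e)
{
    CrawledPage crawledPage = e.CrawledPage;

    // Treat a missing HttpWebResponse (no HTTP response received at all)
    // the same as a WebException or a non-OK status code. Checking for null
    // before dereferencing StatusCode avoids the NullReferenceException.
    if (crawledPage.WebException != null
        || crawledPage.HttpWebResponse == null
        || crawledPage.HttpWebResponse.StatusCode != HttpStatusCode.OK)
        Console.WriteLine("Crawl of page failed {0}", crawledPage.Uri.AbsoluteUri);
    else
        Console.WriteLine("Crawl of page succeeded {0}", crawledPage.Uri.AbsoluteUri);
}
```

Because `||` short-circuits in C#, the StatusCode comparison is only reached when HttpWebResponse is known to be non-null.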

0 votes
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/21335662
