As I understand it, this basic example should be able to crawl and fetch pages.
I followed the example at http://stormcrawler.net/getting-started/, but the crawler only seems to fetch a couple of pages and then does nothing else.
I want to crawl http://books.toscrape.com/. When I run the crawl, I see in the logs that only the first page is fetched; a number of other pages are discovered but never fetched:
8010 [Thread-34-parse-executor[5 5]] INFO c.d.s.b.JSoupParserBolt - Parsing : starting http://books.toscrape.com/
8214 [Thread-34-parse-executor[5 5]] INFO c.d.s.b.JSoupParserBolt - Parsed http://books.toscrape.com/ in 182 msec
content 1435 chars
url http://books.toscrape.com/
domain toscrape.com
description
title All products | Books to Scrape - Sandbox
http://books.toscrape.com/catalogue/category/books/new-adult_20/index.html DISCOVERED Thu Apr 05 13:46:01 CEST 2018
url.path: http://books.toscrape.com/
depth: 1
http://books.toscrape.com/catalogue/the-dirty-little-secrets-of-getting-your-dream-job_994/index.html DISCOVERED Thu Apr 05 13:46:01 CEST 2018
url.path: http://books.toscrape.com/
depth: 1
http://books.toscrape.com/catalogue/category/books/thriller_37/index.html DISCOVERED Thu Apr 05 13:46:01 CEST 2018
url.path: http://books.toscrape.com/
depth: 1
http://books.toscrape.com/catalogue/category/books/academic_40/index.html DISCOVERED Thu Apr 05 13:46:01 CEST 2018
url.path: http://books.toscrape.com/
depth: 1
http://books.toscrape.com/catalogue/category/books/classics_6/index.html DISCOVERED Thu Apr 05 13:46:01 CEST 2018
url.path: http://books.toscrape.com/
depth: 1
http://books.toscrape.com/catalogue/category/books/paranormal_24/index.html DISCOVERED Thu Apr 05 13:46:01 CEST 2018
url.path: http://books.toscrape.com/
depth: 1
....
17131 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928770 172.18.25.22:1024 6:partitioner URLPartitioner {}
17164 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928770 172.18.25.22:1024 8:spout queue_size 0
17403 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928770 172.18.25.22:1024 5:parse JSoupParserBolt {tuple_success=1, outlink_kept=73}
17693 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928770 172.18.25.22:1024 3:fetcher num_queues 0
17693 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928770 172.18.25.22:1024 3:fetcher fetcher_average_perdoc {time_in_queues=265.0, bytes_fetched=51294.0, fetch_time=52.0}
17693 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928770 172.18.25.22:1024 3:fetcher fetcher_counter {robots.fetched=1, bytes_fetched=51294, fetched=1}
17693 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928770 172.18.25.22:1024 3:fetcher activethreads 0
17693 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928770 172.18.25.22:1024 3:fetcher fetcher_average_persec {bytes_fetched_perSec=5295.137813564571, fetched_perSec=0.10323113451016827}
17693 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928770 172.18.25.22:1024 3:fetcher in_queues 0
27127 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928780 172.18.25.22:1024 6:partitioner URLPartitioner {}
27168 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928780 172.18.25.22:1024 8:spout queue_size 0
27405 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928780 172.18.25.22:1024 5:parse JSoupParserBolt {tuple_success=0, outlink_kept=0}
27695 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928780 172.18.25.22:1024 3:fetcher num_queues 0
27695 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928780 172.18.25.22:1024 3:fetcher fetcher_average_perdoc {}
27695 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928780 172.18.25.22:1024 3:fetcher fetcher_counter {robots.fetched=0, bytes_fetched=0, fetched=0}
27695 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928780 172.18.25.22:1024 3:fetcher activethreads 0
27696 [Thread-39] INFO o.a.s.m.LoggingMetricsConsumer - 1522928780 172.18.25.22:1024 3:fetcher fetcher_average_persec {bytes_fetched_perSec=0.0, fetched_perSec=0.0}
I did not modify the configuration files, including the crawler configuration. Also, the flag parser.emitOutlinks should be true, since that is the default in crawler-default.yaml.
In another project I also followed the YouTube tutorial on Elasticsearch. There I ran into the same problem: no pages get fetched or indexed at all.
Where is the error that keeps the crawler from fetching any pages?
Posted on 2018-04-05 12:31:54
The topology generated by the artefact is just an example and uses StdOutStatusUpdater, which simply dumps the discovered URLs to the console. If you run in local mode or with a single worker, you can use MemoryStatusUpdater instead: it adds the discovered URLs to the MemorySpout, which then processes them in turn.
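As a rough sketch, the swap happens where the CrawlTopology class generated by the archetype declares the status bolt (class and package names below are quoted from memory and may differ slightly between StormCrawler versions; keep whatever other stream groupings your generated topology already wires into that bolt):

    // import com.digitalpebble.stormcrawler.Constants;
    // import com.digitalpebble.stormcrawler.persistence.MemoryStatusUpdater;

    // before: the example topology only prints DISCOVERED URLs to the console
    // builder.setBolt("status", new StdOutStatusUpdater())
    //        .localOrShuffleGrouping("parse", Constants.StatusStreamName);

    // after: discovered URLs are fed back into the MemorySpout's queue
    // and fetched in turn
    builder.setBolt("status", new MemoryStatusUpdater())
           .localOrShuffleGrouping("parse", Constants.StatusStreamName);

With that change, re-running the topology in local mode should follow the discovered links instead of stopping after the seed page.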
Note that this does not persist any information about the URLs when you kill the topology or it crashes. It is meant purely for debugging and as a first step towards using StormCrawler.
If you want the URLs to be persisted, you can use any of the persistence backends (SOLR/Elasticsearch, SQL). Could you describe your problem with ES as a separate question?
https://stackoverflow.com/questions/49672154