I am trying to crawl a web page whose URL is http://def.com/xyz/ (say). It has more than 2000 outlinks, but when I query Solr it shows fewer than 50 documents, whereas I expect around 2000. I am using the following command:

./crawl urls TestCrawl http://localhost:8983/solr/ -depth 2 -topN 3000

The console output is:
Injector: starting at 2014-12-08 21:36:15
Injector: crawlDb: TestCrawl/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: Total number of urls rejected by filters: 0
Injector: Total number of urls after normalization: 1
Injector: Merging injected urls into crawl db.
Injector: overwrite: false
Injector: update: false
Injector: URLs merged: 1
Injector: Total new urls injected: 0
Injector: finished at 2014-12-08 21:36:18, elapsed: 00:00:02

I assume that somehow Nutch is not picking up the topN value from the crawl script.
Posted on 2014-12-10 01:57:25
Please check the property db.max.outlinks.per.page in your Nutch configuration. Change this value to a larger number, or set it to -1 to crawl and index all of the URLs.

Hope this helps.
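A minimal sketch of the suggested change, assuming a standard Nutch installation where configuration overrides go in conf/nutch-site.xml (the property itself is a real Nutch setting; its default of 100 outlinks per page would explain why most of the 2000+ outlinks are dropped):

```xml
<!-- conf/nutch-site.xml: override the per-page outlink cap -->
<configuration>
  <property>
    <name>db.max.outlinks.per.page</name>
    <!-- -1 keeps all outlinks found on a page; any positive number caps them -->
    <value>-1</value>
  </property>
</configuration>
```

After editing this file, re-run the crawl so the new limit takes effect during the parse/update phases.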
Le Quoc Do
https://stackoverflow.com/questions/27362491