I'm still getting the hang of Nutch. I ran a test crawl of nutch.apache.org with bin/nutch crawl urls -dir crawl -depth 6 -topN 10, and indexed it into Solr with: bin/nutch crawl urls -solr http://<domain>:<port>/solr/core1/ -depth 4 -topN 7.
Never mind that it timed out on my own site: I can't seem to get it to crawl again, or to crawl any other site (such as wiki.apache.org). I've deleted all the crawl directories in the Nutch home directory, but I still get the following output (claiming there are no more URLs to crawl):
<user>@<domain>:/usr/share/nutch$ sudo sh nutch-test.sh
solrUrl is not set, indexing will be skipped...
crawl started in: crawl
rootUrlDir = urls
threads = 10
depth = 6
solrUrl=null
topN = 10
Injector: starting at 2013-07-03 15:56:47
Injector: crawlDb: crawl/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: total number of urls rejected by filters: 1
Injector: total number of urls injected after normalization and filtering: 0
Injector: Merging injected urls into crawl db.
Injector: finished at 2013-07-03 15:56:50, elapsed: 00:00:03
Generator: starting at 2013-07-03 15:56:50
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 10
Generator: jobtracker is 'local', generating exactly one partition.
Generator: 0 records selected for fetching, exiting ...
Stopping at depth=0 - no more URLs to fetch.
No URLs to fetch - check your seed list and URL filters.
crawl finished: crawl
My urls/seed.txt file contains http://nutch.apache.org/.
My regex-urlfilter.txt contains +^http://([a-z0-9\-A-Z]*\.)*nutch.apache.org//([a-z0-9\-A-Z]*\/)*.
I've also increased -depth and -topN to indicate there is more to index, but it always errors out after that first crawl. How do I reset things so it will crawl again? Is there some URL cache somewhere in Nutch that needs to be cleared?
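As far as I know, there is no hidden URL cache: in Nutch 1.x all crawl state (crawldb, linkdb, segments) lives under the directory passed to -dir. A minimal reset sketch, assuming the crawl was started with -dir crawl as in the command above:

```shell
# Sketch of a full reset, assuming all crawl state lives under ./crawl
# (the directory passed to -dir). Removing it discards the crawldb,
# linkdb, and segments, so the next run starts from the seed list again.
rm -rf crawl

# Confirm the state is gone before re-crawling.
test -d crawl && echo "crawl state still present" || echo "crawl state cleared"

# Then re-run the original command (commented out here, since it needs
# a local Nutch installation):
# bin/nutch crawl urls -dir crawl -depth 6 -topN 10
```

If the crawl still stops at depth 0 after this, the problem is not leftover state but the URL filters rejecting the seed.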
UPDATE: The problem with our own site seems to be that I wasn't using www; without www the hostname doesn't resolve. www.ourdomain.org does resolve via ping.
But I've put that into the necessary files and still have the problem. Also, Injector: total number of urls rejected by filters: 1 seems to be a recurring issue, though it didn't happen on the first crawl. Why, and by which filter, is a URL being rejected when it shouldn't be?
Posted on 2013-07-09 15:32:16
Guys, this is embarrassing, but the old "Nutch isn't crawling because it's rejecting URLs" advice, check your *-urlfilter.txt files, applies here.
In my case, I had an extra / in the URL pattern:
+^http://([a-z0-9\-A-Z]*\.)*nutch.apache.org//([a-z0-9\-A-Z]*\/)*
It should be +^http://([a-z0-9\-A-Z]*\.)*nutch.apache.org/([a-z0-9\-A-Z]*\/)*
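The difference is easy to check from the shell. grep -E is only an approximation of the Java regex engine Nutch actually uses, and the character class is simplified here, but the double-slash bug shows up the same way: the broken pattern requires a literal // after the hostname, which the seed URL does not contain, so the Injector rejects it.

```shell
seed="http://nutch.apache.org/"

# Broken pattern: demands "nutch.apache.org//" (two slashes) in the URL.
echo "$seed" | grep -Eq '^http://([a-zA-Z0-9-]*\.)*nutch\.apache\.org//' \
  && echo "broken pattern: match" || echo "broken pattern: no match"

# Fixed pattern: a single slash after the hostname, as in the seed URL.
echo "$seed" | grep -Eq '^http://([a-zA-Z0-9-]*\.)*nutch\.apache\.org/' \
  && echo "fixed pattern: match" || echo "fixed pattern: no match"
```

The first check prints "no match" (which matches the "urls rejected by filters: 1" line in the Injector output), and the second prints "match".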
https://stackoverflow.com/questions/17458155