I am trying to parse only the Item and Skill Cap columns from the HTML table at http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html

While parsing I run into an alignment problem: my script ends up pulling data from other columns.
import scrapy

class parser(scrapy.Spider):
    name = "recipe_table"
    start_urls = ['http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html']

    def parse(self, response):
        for row in response.xpath('//*[@class="datatable sortable"]//tr'):
            data = row.xpath('td//text()').extract()
            if not data:  # skip empty rows
                continue
            yield {
                'name': data[0],
                'cap': data[1],
                # 'misc': data[2]
            }

Result: when I run scrapy runspider cap.py -t json, by row 3 data from an unintended column gets parsed. I am not sure what is going wrong with the selection.
2019-05-09 19:41:28 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html> (referer: None)
2019-05-09 19:41:28 [scrapy.core.scraper] DEBUG: Scraped from <200 http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html>
{'item_name': u'Banquet Set', 'cap': u'0'}
2019-05-09 19:41:28 [scrapy.core.scraper] DEBUG: Scraped from <200 http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html>
{'item_name': u'Banquet Table', 'cap': u'0'}
2019-05-09 19:41:28 [scrapy.core.scraper] DEBUG: Scraped from <200 http://ffxi.allakhazam.com/dyn/guilds/Alchemy.html>
{'item_name': u'Cermet Kilij', 'cap': u'Cermet Kilij +1'}

Posted on 2019-05-10 09:02:13
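The misalignment can be reproduced outside Scrapy. The sketch below uses lxml on a hypothetical row modeled on the Alchemy table (not the actual page markup), where the first cell holds two item links; td//text() then flattens both link texts into the list, so index 1 is the second link rather than the second column:

```python
from lxml import html

# Hypothetical row: the first cell contains two links (normal and HQ item),
# i.e. two text nodes in a single column.
row_html = """<table><tr>
  <td><a href="#">Cermet Kilij</a><br/><a href="#">Cermet Kilij +1</a></td>
  <td>Smithing</td>
  <td>53</td>
</tr></table>"""

row = html.fromstring(row_html).xpath('//tr')[0]

# td//text() collects every text node under every td, in document order,
# so list indices no longer correspond to columns:
flat = row.xpath('td//text()')
print(flat)  # ['Cermet Kilij', 'Cermet Kilij +1', 'Smithing', '53']

# Positional td selectors stay aligned with the columns:
print(row.xpath('./td[1]//text()')[0])  # 'Cermet Kilij'
print(row.xpath('./td[3]/text()')[0])   # '53'
```

This matches the log above: data[1] is the second text node of the name cell ('Cermet Kilij +1'), not the Skill Cap column.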
How about selecting the source column explicitly with XPath:
for row in response.xpath('//*[@class="datatable sortable"]//tr'):
    yield {
        'name': row.xpath('./td[1]/text()').extract_first(),
        'cap': row.xpath('./td[3]/text()').extract_first(),
        # 'misc': etc.
    }

https://stackoverflow.com/questions/56068945