I'm trying to modify my web crawler so that it can pick up JavaScript-rendered content on websites. I'd like to do this with Selenium rather than Splash. Here is an example:
from scrapy import Spider
from scrapy_selenium import SeleniumRequest

class TestSpider(Spider):
    name = "test"
    start_urls = ["http://crawler-test.com/mobile/dynamic"]

    # Build an XPath selecting every text node not inside an excluded tag
    my_excludes = ['style', 'link', 'meta', 'script', 'noscript', 'base']
    my_str = '//text()['
    for my_exclude in my_excludes:
        my_str = my_str + "not(ancestor::" + my_exclude + ") and "
    my_str = my_str[:-5] + "]"  # drop the trailing " and "

    def start_requests(self):
        for url in self.start_urls:
            yield SeleniumRequest(url=url, callback=self.parse)

    def parse(self, response):
        body = response.xpath(self.my_str).re(".*")
        with open("TestResult.txt", "w") as file:
            file.writelines(body)
        print(body)

I also made the changes to the settings that the documentation suggests:
from shutil import which

BOT_NAME = 'TestSpider'
SPIDER_MODULES = ['TestSpider.spiders']
NEWSPIDER_MODULE = 'TestSpider.spiders'

SELENIUM_DRIVER_NAME = 'firefox'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('geckodriver')
SELENIUM_DRIVER_ARGUMENTS = ['-headless']
DOWNLOADER_MIDDLEWARES = {'scrapy_selenium.SeleniumMiddleware': 800}

What I get is the static content of the site, not the dynamic (JavaScript-rendered) content. Any help would be greatly appreciated. Thanks!
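For reference, the class-level loop in the spider builds one XPath expression. A standalone sketch of the same construction (same exclusion list as the spider) makes it easy to see what the spider actually queries:

```python
# Reproduce the spider's XPath construction outside the class to inspect it.
excludes = ['style', 'link', 'meta', 'script', 'noscript', 'base']

xpath = '//text()['
for tag in excludes:
    xpath += "not(ancestor::" + tag + ") and "
xpath = xpath[:-5] + "]"  # " and " is 5 characters, so [:-5] strips it

# One predicate per excluded tag, joined with "and":
# //text()[not(ancestor::style) and ... and not(ancestor::base)]
print(xpath)
```

Note this expression only filters text nodes that are already in the response body; if Selenium hands back the page before the JavaScript runs, the dynamic text simply is not there to select.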
Posted on 2021-02-11 22:16:20
I just read your post; I'm working on the same problem.
chk_seller_xpath = '//*/input[@id="e1-13"]'
js = "document.evaluate('%s', document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue.click()" % chk_seller_xpath
driver.execute_script(js)

By doing the above, I am able to run JavaScript inside the page.
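The same `execute_script` trick can also read content back, since `document.evaluate` results can be returned to Python with a `return` statement in the JS snippet. A minimal sketch (the helper name `build_xpath_text_js` is my own; the XPath reuses the `e1-13` id from the snippet above):

```python
# Sketch: build a JS snippet that evaluates an XPath in the live page
# and returns the matched node's text via execute_script's return value.
def build_xpath_text_js(xpath):
    template = (
        "return document.evaluate('%s', document, null, "
        "XPathResult.FIRST_ORDERED_NODE_TYPE, null)"
        ".singleNodeValue.textContent;"
    )
    return template % xpath

js = build_xpath_text_js('//*/input[@id="e1-13"]')
# With a live driver this would be run as:
#   text = driver.execute_script(js)
print(js)
```

Because the snippet runs inside the browser, it sees the DOM after JavaScript has executed, which is exactly the content the static Scrapy response is missing.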
https://stackoverflow.com/questions/66156341