I'm trying to access the data on this site: http://surge.srcc.lsu.edu/s1.html. So far my code loops through the dropdown menus, and I want to loop through the pages at the top of Table 1, etc. I tried using Select, but I get an error that it can't be used with a span: "UnexpectedTagNameException: Select only works on <select> elements, not on <span>".
# importing libraries
from selenium import webdriver
import time
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup
import re

driver = webdriver.Firefox()
driver.get("http://surge.srcc.lsu.edu/s1.html")

# definition for switching frames
def frame_switch(css_selector):
    driver.switch_to.frame(driver.find_element_by_css_selector(css_selector))

# data is in an iframe
frame_switch("iframe")
html_source = driver.page_source

nameSelect = Select(driver.find_element_by_xpath('//select[@id="storm_name"]'))
stormCount = len(nameSelect.options)
data = []
for i in range(1, stormCount):
    print("starting loop on option storm " + nameSelect.options[i].text)
    nameSelect.select_by_index(i)
    time.sleep(3)
    yearSelect = Select(driver.find_element_by_xpath('//select[@id="year"]'))
    yearCount = len(yearSelect.options)
    for j in range(1, yearCount):
        print("starting loop on option year " + yearSelect.options[j].text)
        yearSelect.select_by_index(j)
        time.sleep(2)
This is the part where selecting the pages fails:
change_page = Select(driver.find_element_by_class_name("yui-pg-pages"))
page_count = len(change_page.options)
for k in range(1, page_count):
    # select page & run the following code
    change_page.select_by_index(k)
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    print(soup.find_all("tbody", {"class": re.compile(".*")})[1])
    # get the needed table body
    table = soup.find_all("tbody", {"class": re.compile(".*")})[1]
    rows = table.find_all('tr')
    for row in rows:
        cols = row.find_all('td')
        cols = [ele.text.strip() for ele in cols]
        data.append(cols)
Posted on 2016-04-07 16:42:10
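The BeautifulSoup table-extraction step in the question can be exercised against a static HTML snippet, which is handy for debugging the parsing separately from the browser automation. The class name and rows below are made up for illustration only:

```python
from bs4 import BeautifulSoup

html = """
<table><tbody class="yui-dt-data">
  <tr><td>KATRINA</td><td>2005</td></tr>
  <tr><td>RITA</td><td>2005</td></tr>
</tbody></table>
"""
soup = BeautifulSoup(html, "html.parser")
# grab the table body, then one list of cell texts per row
tbody = soup.find("tbody", {"class": "yui-dt-data"})
data = [[td.text.strip() for td in tr.find_all("td")]
        for tr in tbody.find_all("tr")]
# data == [['KATRINA', '2005'], ['RITA', '2005']]
```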
Use an xpath selector instead.

driver.find_element_by_xpath('//a[@class="yui-pg-next"]')

Once you can interact with the next button, just click it in a loop. I prefer this approach in case the number of pages changes while I'm looping through them. You don't need Select here; in fact, I don't think Select works on anything other than dropdown menus.
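The next-button loop can be sketched as follows. This is a sketch, not tested against the site: it assumes the "next" link disappears from the DOM on the last page (on some YUI paginators it is only disabled, in which case you would also check its state). Using find_elements (plural) avoids a try/except, since it returns an empty list instead of raising:

```python
def click_through_pages(driver, scrape_page):
    """Scrape the current page, then keep clicking 'next' until it is gone."""
    while True:
        scrape_page(driver)   # collect whatever you need from the current page
        next_links = driver.find_elements_by_xpath('//a[@class="yui-pg-next"]')
        if not next_links:
            break             # no next link left: last page reached
        next_links[0].click()
```

You would call it as click_through_pages(driver, my_scrape_function), where the second argument does the BeautifulSoup work from the question.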
Alternatively, if you need to do this with the page links because the pages don't change often, you can try something like this:
# Use find_elements_by_xpath to select multiple elements.
pages = driver.find_elements_by_xpath('//a[@class="yui-pg-page"]')
# loop through results
for page_link in pages:
    page_link.click()
    # do stuff.

https://stackoverflow.com/questions/36481856