I've never used Python before, so please forgive my lack of knowledge, but I'm trying to fetch all of the threads from a XenForo forum. So far so good, except that it picks up multiple URLs for each page of the same thread. I've included some of the output below to show what I mean.
forums/my-first-forum/: threads/my-gap-year-uni-story.13846/
forums/my-first-forum/: threads/my-gap-year-uni-story.13846/page-9
forums/my-first-forum/: threads/my-gap-year-uni-story.13846/page-10
forums/my-first-forum/: threads/my-gap-year-uni-story.13846/page-11

Really, all I ideally want is one of these:
forums/my-first-forum/: threads/my-gap-year-uni-story.13846/

Here is my script:
from bs4 import BeautifulSoup
import requests

def get_source(url):
    return requests.get(url).content

def is_forum_link(self):
    return self.find('special string') != -1

def fetch_all_links_with_word(url, word):
    source = get_source(url)
    soup = BeautifulSoup(source, 'lxml')
    return soup.select("a[href*=" + word + "]")

main_url = "http://example.com/forum/"

forumLinks = fetch_all_links_with_word(main_url, "forums")
forums = []
for link in forumLinks:
    if link.has_attr('href') and link.attrs['href'].find('.rss') == -1:
        forums.append(link.attrs['href'])
print('Fetched ' + str(len(forums)) + ' forums')

threads = {}
for link in forums:
    threadLinks = fetch_all_links_with_word(main_url + link, "threads")
    for threadLink in threadLinks:
        print(link + ': ' + threadLink.attrs['href'])
        threads[link] = threadLink
print('Fetched ' + str(len(threads)) + ' threads')

Posted 2019-05-15 22:50:59
This solution assumes that what should be stripped from a URL when checking for uniqueness is always of the form "/page-#...". If that isn't the case, it won't work.

Instead of using a list to store the URLs, you can use a set, which only keeps unique values. Find the last occurrence of "/page-" in the URL and, if it is of the form "/page-#" where # is any digit, drop it and everything after it before adding the URL to the set. The same trimming applies wherever URLs repeat per page, including the thread URLs shown in your output.
forums = set()
for link in forumLinks:
    if link.has_attr('href') and link.attrs['href'].find('.rss') == -1:
        url = link.attrs['href']
        position = url.rfind('/page-')
        # Only trim when a digit follows "/page-", so slugs that merely
        # contain the word "page" are left untouched.
        if position > 0 and url[position + 6:position + 7].isdigit():
            url = url[:position + 1]
        forums.add(url)

https://stackoverflow.com/questions/56148760
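The trimming step can also be pulled out into a small helper so it is easy to test in isolation. Here is a minimal sketch of that idea, assuming the same "/page-#" suffix convention; the function name `strip_page_suffix` and the sample URLs are mine, not from the original code:

```python
def strip_page_suffix(url):
    """Collapse paginated thread URLs to one canonical form.

    'threads/foo.123/page-9' -> 'threads/foo.123/'
    URLs without a '/page-<digit>' suffix are returned unchanged.
    """
    position = url.rfind('/page-')
    # Require a digit right after '/page-' so slugs that merely
    # contain the word 'page' are not truncated.
    if position > 0 and url[position + 6:position + 7].isdigit():
        return url[:position + 1]
    return url

# Hypothetical sample of the duplicate hrefs from the question's output:
threads = set()
for href in ['threads/my-gap-year-uni-story.13846/',
             'threads/my-gap-year-uni-story.13846/page-9',
             'threads/my-gap-year-uni-story.13846/page-10']:
    threads.add(strip_page_suffix(href))

print(threads)  # only the one canonical URL remains
```

Because the set deduplicates after trimming, all three paginated variants collapse into the single base URL.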