Python multiprocessing pool freezes for no reason

Stack Overflow user
Asked on 2016-09-01 16:47:37
1 answer · 784 views · 0 followers · 0 votes

I'm new to Python, and I hope someone here can help me.

A few weeks ago I started learning Python and set out to build a web crawler.

The idea is this: the first part fetches the domain names from a website (letter by letter). The second part checks whether each domain is valid (reachable and not parked) and persists it to a database.

Everything ran fine until the crawler reached 'r'. After a few minutes the program froze, without any error message or anything. The letters after 'r' don't cause any problems either... and the domain at which the program freezes is not the same each time.

Here is my code:

import requests
import re
import logging
import time

from bs4 import BeautifulSoup

from multiprocessing.pool import Pool

import mysqlDB  # project-local database helper used below (not shown in the question)

""" Extract only the plain text of element
"""
def visible(element):
    if element.parent.name in ['style', 'script', '[document]', 'head', 'title']:
        return False
    elif re.match('.*<!--.*-->.*', str(element), re.DOTALL):
        return False
    elif re.fullmatch(r"[\s\r\n]", str(element)):
        return False
    return True


logging.basicConfig(format='%(asctime)s %(name)s - %(levelname)s: %(message)s', level=logging.ERROR)
logger = logging.getLogger('crawler')
hdlr = logging.FileHandler('crawler.log')
formatter = logging.Formatter('%(asctime)s %(name)s - %(levelname)s: %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)

""" Checks if a domain is parked.
    Returns true if a domain is not parked, otherwise false
    """
def check_if_valid(website):
    try:
        resp = requests.get("http://www." + website, timeout=10, verify=False)

        soup = BeautifulSoup(resp.text, 'html.parser')

        if len(soup.find_all('script')) == 0:
            # check for very small web pages
            if len(resp.text) < 700:
                return None
            # check for 'park' pattern
            text = filter(visible, soup.find_all(text=True))
            for elem in text:
                if 'park' in elem:
                    return None

        return "http://www." + website + "/"

    except requests.exceptions.RequestException as e:
        # no logging -> too many exceptions
        return None
    except Exception as ex:
        logger.exception("Error during domain validation")
        return None


def persist_domains(nonParkedDomains):
    logger.info("Inserting domains into database")
    dbConn = mysqlDB.connect()

    for d in nonParkedDomains:
        mysqlDB.insert_company_domain(dbConn, d)

    mysqlDB.close_connection(dbConn)


if __name__ == "__main__":
    dryrun = True

    if dryrun:
        logger.warning("Testrun! Data does not get persisted!")

    url = "http://www.safedomain.at/"

#    chars = ['0-9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't','u', 'v', 'w', 'x', 'y', 'z']
    chars = ['r','s', 't','u', 'v', 'w', 'x', 'y', 'z']
    payload = {'sub': 'domains', 'char': '', 'page': '1'}

    domains = list()
    cntValidDomains = 0


    logger.info("Start collecting domains from \"http://www.safedomain.at\"....")
    try:
        for c in chars:
            payload['char'] = c
            payload['page'] = '1'

            response = requests.get(url, params=payload, verify=False)
            soup = BeautifulSoup(response.text, 'html.parser')

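            # poll until the pagination links show up in the response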
            while not soup.find_all('a', {'data-pagenumber': True}):
                time.sleep(5)
                response = requests.get(url, params=payload, verify=False)
                soup = BeautifulSoup(response.text, 'html.parser')

            maxPage = int(soup.find_all('a', {'data-pagenumber': True})[-1].getText())

            domains = list()
            for page in range(1, maxPage + 1):
                payload['page'] = page

                logger.debug("Start crawling with following payload: char=%s page=%s", payload['char'], payload['page'])

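                # note: no timeout is set here, so one stalled response can block the whole run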
                response = requests.get(url, params=payload)
                soup = BeautifulSoup(response.text, 'html.parser')

                for elem in soup.find_all('ul', {'class': 'arrow-list'}):
                    for link in elem.find_all('a'):
                        domains.append(link.getText())

            logger.info("Finished! Collected domains for %s: %s",c, len(domains))
            logger.info("Checking if domains are valid...")

            # the with-statement terminates the pool on exit,
            # so no separate close()/join() calls are needed
            with Pool(48) as p:
                nonParkedDomains = p.map(check_if_valid, domains)

            # drop the None entries of parked/unreachable domains
            nonParkedDomains = list(filter(None.__ne__, nonParkedDomains))

            cntValidDomains += len(nonParkedDomains)

            # check if domains should get persisted

            if dryrun:
                logger.info("Valid domains for %s in domains", c)
                for elem in nonParkedDomains:
                    logger.info(elem)
            else:
                persist_domains(nonParkedDomains)

            logger.info("Finished domain validation for %s!", c)

        logger.info("Valid domains: %s", cntValidDomains)
        logger.info("Program finished!")

    except Exception as e:
        logger.exception("Domain collection stopped unexpectedly")

Edit: After some hours of debugging and testing, I have an idea. Could it be that the requests module, which is used in the threads, is causing the trouble?
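One way to investigate such a hang (a sketch, not from the original post; the enable_hang_dump helper and the 60-second interval are illustrative assumptions) is to make every worker process periodically dump its stack with faulthandler, slotted into the existing __main__ block, so a frozen worker reveals where it is blocked:

import faulthandler

from multiprocessing.pool import Pool

def enable_hang_dump():
    # dump every thread's traceback to stderr every 60 seconds,
    # e.g. to see whether workers are stuck inside requests/socket code
    faulthandler.dump_traceback_later(60, repeat=True, exit=False)

with Pool(48, initializer=enable_hang_dump) as p:
    nonParkedDomains = p.map(check_if_valid, domains)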


1 Answer

Stack Overflow user

Answered on 2016-09-02 19:25:37

After several hours of debugging and testing, I was able to solve the problem.

Instead of a multiprocessing Pool, I am now using a ThreadPoolExecutor (which is better suited to network applications).

I had already figured out that the requests.get() call in the threaded function was causing some trouble, so I changed the timeout to 1.

After these changes, the program worked.
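A minimal sketch of that change, reusing the check_if_valid worker and the domains list from the question (max_workers=48 simply mirrors the original pool size, and the shorter timeout mentioned above would be applied inside check_if_valid):

from concurrent.futures import ThreadPoolExecutor

# threads share memory and suit I/O-bound work such as HTTP requests,
# so nothing has to be pickled between processes; inside check_if_valid
# the request would use the reduced timeout, e.g.
# requests.get("http://www." + website, timeout=1, verify=False)
with ThreadPoolExecutor(max_workers=48) as executor:
    results = executor.map(check_if_valid, domains)

nonParkedDomains = [d for d in results if d is not None]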

I don't know the exact reason, but I would be very interested in it. If anyone knows, I would appreciate it if they could post it.

1 vote
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/39266816
