
Patching or mocking Google Cloud Logging in Python unit tests

Asked by a Stack Overflow user on 2021-09-01 18:25:31
1 answer · 207 views · 0 followers · 0 votes

I have a program that is ultimately deployed on Google Cloud Platform, and I use the Logs Explorer on GCP to monitor my logs. For that I use the google.cloud.logging library, which is required for logs to show up in the GCP Logs Explorer with the correct severity levels.

In my unit tests I am trying to patch the calls to the google.cloud.logging library; however, locally they all end in a 403 error, which suggests the library has not actually been patched.

cloud_logger.py

import google.cloud.logging
import logging
def get_logger():
    client = google.cloud.logging.Client()
    client.get_default_handler()
    client.setup_logging()
    logger = logging.getLogger(__name__)
    return logger
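Since get_logger() constructs a real Client(), one way to keep tests fully offline is to plant a stub module in sys.modules before cloud_logger (and anything that imports it) is loaded. Below is a minimal stdlib-only sketch; the google.cloud.logging module path follows the question, and everything else is illustrative:

```python
import sys
import types
from unittest import mock

# Build fake modules for the whole google.cloud.logging chain and
# register them in sys.modules, so any later `import google.cloud.logging`
# resolves to the stubs instead of the real library.
fake_logging = types.ModuleType("google.cloud.logging")
fake_logging.Client = mock.MagicMock()

fake_cloud = types.ModuleType("google.cloud")
fake_cloud.logging = fake_logging
fake_google = types.ModuleType("google")
fake_google.cloud = fake_cloud

sys.modules["google"] = fake_google
sys.modules["google.cloud"] = fake_cloud
sys.modules["google.cloud.logging"] = fake_logging

import google.cloud.logging  # resolves to the stub above

client = google.cloud.logging.Client()  # a MagicMock: no credentials, no network
```

Because the stub is installed before the import, Client() never creates the background transport that produces the 403 errors.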

rss_crawler.py, which uses cloud_logger.get_logger

from cloud_logger import get_logger
logger = get_logger()
def crawl_rss_source(source_crawling_details):
    brand_name = source_crawling_details[constants.BRAND_NAME]
    source_name = source_crawling_details[constants.SOURCE_NAME]
    initial_agent_settings = source_crawling_details[constants.INITIAL_AGENT_SETTINGS]
    logger.info(f"Started crawling {brand_name}-{source_name}")
    source = source_crawling_details["source"]
    entry_points_list = source[constants.ENTRY]
    source_crawling_details.update({constants.ENTRY: entry_points_list})
    source_crawling_details.update({constants.AGENT: initial_agent_settings})
    content = get_content(source_crawling_details)
    logger.info("Getting links present in rss feed entry")
    entry_points = rss_entry_points(content)
    source_crawling_details.update({constants.ENTRY_POINTS: entry_points})
    candidate_urls = start_content_tasks(source_crawling_details)
    if not candidate_urls:
        raise CustomException("There are no links to scrape")
    # filtered urls found with crawl rules, next step get scrape candidates based on scrape rules
    scrape_rules = source[constants.SCRAPE]
    scrape_candidates = get_scrape_candidates(scrape_rules, candidate_urls)
    if not scrape_candidates:
        raise CustomException(
            f"Could not find any links for scraping, please check scrape,crawl rules, or possibly depth level for brand {brand_name} , source {source_name}"
        )
    return scrape_candidates
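Note that `logger = get_logger()` in rss_crawler.py runs at import time, which matters for patching. Here is a small stdlib-only demonstration (no Google libraries involved, all names illustrative) of why a patch applied after a module has been imported cannot undo work that already happened during the import:

```python
import sys
import types
from unittest import mock

# Build an in-memory module whose top level creates a "client" at import
# time, mimicking rss_crawler.py's module-level `logger = get_logger()`.
source = """
calls = []
def make_client():
    calls.append("client created")
    return "real"
client = make_client()  # executes when the module is imported
"""
mod = types.ModuleType("demo_rss")
exec(source, mod.__dict__)
sys.modules["demo_rss"] = mod  # the module now counts as imported

# Patching make_client afterwards cannot undo the import-time call.
with mock.patch("demo_rss.make_client", return_value="fake"):
    import demo_rss
    assert demo_rss.client == "real"            # created before the patch
    assert demo_rss.calls == ["client created"]  # the real call already ran
```

The same applies to the real code: by the time the test's decorators run, importing rss_crawler has already executed get_logger() and created a real Client with its background transport.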

test_rss_crawler.py

    @patch("start_crawl.fetch_source_crawling_fields")
    @patch("rss_crawler.logger")
    def test_crawl_rss_source_raises_exception(
        self, mocked_logger, mocked_source_fetcher
    ):
        mocked_logger.logger.return_value = logging.getLogger(__name__)
        self.test_source[constants.SCRAPE] = {
            "white_list": ["https://buffer.com/blog/(\\w|\\d|\\-)+/$"]
        }
        details = set_content_source_details(
            self.brand_name,
            self.source_name,
            self.agent_args,
            self.source,
            **self.key_word_argument,
        )
        # test to see if exception is raised if scrape rule is not matching
        self.assertRaises(CustomException, crawl_rss_source, details)
        self.test_source[constants.CRAWL] = {"white_list": ["https://buffer.com/blog/"]}
        details = set_content_source_details(
            self.brand_name,
            self.source_name,
            self.agent_args,
            self.source,
            **self.key_word_argument,
        )
        # test to see if exception is raised if crawl rule is not matching
        self.assertRaises(CustomException, crawl_rss_source, details)

However, when I run these tests, even after patching, I still get these warning messages:

Traceback (most recent call last):
  File "/Users/reydon227/Concured/crawler/env/lib/python3.8/site-packages/google/cloud/logging_v2/handlers/transports/background_thread.py", line 115, in _safely_commit_batch
    batch.commit()
  File "/Users/reydon227/Concured/crawler/env/lib/python3.8/site-packages/google/cloud/logging_v2/logger.py", line 385, in commit
    client.logging_api.write_entries(entries, **kwargs)
  File "/Users/reydon227/Concured/crawler/env/lib/python3.8/site-packages/google/cloud/logging_v2/_gapic.py", line 149, in write_entries
    self._gapic_api.write_log_entries(request=request)
  File "/Users/reydon227/Concured/crawler/env/lib/python3.8/site-packages/google/cloud/logging_v2/services/logging_service_v2/client.py", line 592, in write_log_entries
    response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
  File "/Users/reydon227/Concured/crawler/env/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
    return wrapped_func(*args, **kwargs)
  File "/Users/reydon227/Concured/crawler/env/lib/python3.8/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
    return retry_target(
  File "/Users/reydon227/Concured/crawler/env/lib/python3.8/site-packages/google/api_core/retry.py", line 189, in retry_target
    return target()
  File "/Users/reydon227/Concured/crawler/env/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 69, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.PermissionDenied: 403 The caller does not have permission

1 Answer

Answered by a Stack Overflow user on 2021-11-25 18:54:40

I am not sure how the author eventually solved it, but I had similar symptoms in a test that used mocked Google objects, and I notice there is no import section shown in the test_rss_crawler.py code. In my code I had to import the module under test inside the test method: @mock.patch(...) runs before the test method, but after the test file's own imports, and code in an imported module may already have initialized the class that should be patched, so the patch has no effect. It may help to import the module whose attributes are being patched inside the test method itself:

@mock.patch('some_module.SomeClass')
def test_smth(self, mocked_class):
    import some_module  # imported only after the patch is active
    mocked_class.return_value = ....
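For illustration, here is a runnable stdlib-only version of that pattern, with json.dumps standing in for the Google client (names are illustrative, not from the question):

```python
from unittest import mock

# Patch first, import inside the test body: the import then sees the
# patched attribute rather than the original one.
@mock.patch("json.dumps", return_value="patched")
def test_smth(mocked_dumps):
    import json  # imported after the patch is active
    assert json.dumps({"a": 1}) == "patched"
    mocked_dumps.assert_called_once()

test_smth()
```

Once the decorated function returns, the patch is undone and json.dumps behaves normally again.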
Votes: 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/69018894
