This week my integration tests stopped working. I traced it to a django-rq job that simply hangs indefinitely. My output:
$: RQ worker 'rq:worker:47e0aaf280be.13' started, version 0.12.0
$: *** Listening on default...
$: Cleaning registries for queue: default
$: default: myapp.engine.rules.process_event(<myapp.engine.event.Event object at 0x7f34f1ce50f0>) (a1e66a46-1a9d-4f52-be6f-6f4529dd2480)

That is where it freezes; I have to stop it with a keyboard interrupt.
The code has not changed. To be sure, I went back to the master branch, checked it out, and re-ran the integration tests; they failed there too.
How can I start debugging Redis or RQ from a test case in Python to understand what might be going on? Is there a way to inspect the actual queue records from Python? The Redis queue only exists while the test is running, and since the test is frozen in place, I can inspect the queue via redis-cli from the Docker container that runs the Redis service.
What I have for debugging so far:
from rq import Queue
from redis import Redis
from django_rq import get_worker
...

def test_motion_alarm(self):
    motion_sensor_data = {"motion_detected": 1}

    post_alarm(
        self.live_server_url,
        self.location,
        self.sensor_device_id,
        "ALARM_MOTIONDETECTED",
        motion_sensor_data
    )

    redis_conn = Redis('my_queue')
    q = Queue(connection=redis_conn)
    print(len(q))
    queued_job_ids = q.job_ids
    queued_jobs = q.jobs
    logger.debug('RQ info: \njob IDs: {}, \njobs: {}'.format(queued_job_ids, queued_jobs))
    get_worker().work(burst=True)

    time.sleep(1)

    self.assertTrue(db.event_exists_at_location(
        db.get_location_by_motion_detected(self.location_id),
        "ALARM_MOTIONDETECTED"))

which produces this debug output:
$ DEBUG [myapi.tests.integration.test_rules:436] RQ info:
job IDs: ['bef879c4-832d-431d-97e7-9eec9f4bf5d7']
jobs: [Job('bef879c4-832d-431d-97e7-9eec9f4bf5d7', enqueued_at=datetime.datetime(2018, 12, 6, 0, 10, 14, 829488))]
$ RQ worker 'rq:worker:54f6054e7aa5.7' started, version 0.12.0
$ *** Listening on default...
$ Cleaning registries for queue: default
$ default: myapi.engine.rules.process_event(<myapi.engine.event.Event object at 0x7fbf204e8c50>) (bef879c4-832d-431d-97e7-9eec9f4bf5d7)

In the queue container, running a monitor process against the queue, I occasionally see a new batch of monitor responses come through:
1544110882.343826 [0 172.19.0.4:38905] "EXPIRE" "rq:worker:ac50518f1c5e.7" "35"
1544110882.344304 [0 172.19.0.4:38905] "HSET" "rq:worker:ac50518f1c5e.7" "last_heartbeat" "2018-12-06T15:41:22.344170Z"
1544110882.968846 [0 172.19.0.4:38910] "EXPIRE" "rq:worker:ac50518f1c5e.12" "35"
1544110882.969651 [0 172.19.0.4:38910] "HSET" "rq:worker:ac50518f1c5e.12" "last_heartbeat" "2018-12-06T15:41:22.969181Z"
1544110884.122917 [0 172.19.0.4:38919] "EXPIRE" "rq:worker:ac50518f1c5e.13" "35"
1544110884.124966 [0 172.19.0.4:38919] "HSET" "rq:worker:ac50518f1c5e.13" "last_heartbeat" "2018-12-06T15:41:24.124809Z"
1544110884.708910 [0 172.19.0.4:38925] "EXPIRE" "rq:worker:ac50518f1c5e.14" "35"
1544110884.710736 [0 172.19.0.4:38925] "HSET" "rq:worker:ac50518f1c5e.14" "last_heartbeat" "2018-12-06T15:41:24.710599Z"
1544110885.415111 [0 172.19.0.4:38930] "EXPIRE" "rq:worker:ac50518f1c5e.15" "35"
1544110885.417279 [0 172.19.0.4:38930] "HSET" "rq:worker:ac50518f1c5e.15" "last_heartbeat" "2018-12-06T15:41:25.417155Z"
1544110886.028965 [0 172.19.0.4:38935] "EXPIRE" "rq:worker:ac50518f1c5e.16" "35"
1544110886.030002 [0 172.19.0.4:38935] "HSET" "rq:worker:ac50518f1c5e.16" "last_heartbeat" "2018-12-06T15:41:26.029817Z"
1544110886.700132 [0 172.19.0.4:38940] "EXPIRE" "rq:worker:ac50518f1c5e.17" "35"
1544110886.701861 [0 172.19.0.4:38940] "HSET" "rq:worker:ac50518f1c5e.17" "last_heartbeat" "2018-12-06T15:41:26.701716Z"
1544110887.359702 [0 172.19.0.4:38945] "EXPIRE" "rq:worker:ac50518f1c5e.18" "35"
1544110887.361642 [0 172.19.0.4:38945] "HSET" "rq:worker:ac50518f1c5e.18" "last_heartbeat" "2018-12-06T15:41:27.361481Z"
1544110887.966641 [0 172.19.0.4:38950] "EXPIRE" "rq:worker:ac50518f1c5e.19" "35"
1544110887.967931 [0 172.19.0.4:38950] "HSET" "rq:worker:ac50518f1c5e.19" "last_heartbeat" "2018-12-06T15:41:27.967760Z"
1544110888.595785 [0 172.19.0.4:38955] "EXPIRE" "rq:worker:ac50518f1c5e.20" "35"
1544110888.596962 [0 172.19.0.4:38955] "HSET" "rq:worker:ac50518f1c5e.20" "last_heartbeat" "2018-12-06T15:41:28.596799Z"
1544110889.199269 [0 172.19.0.4:38960] "EXPIRE" "rq:worker:ac50518f1c5e.21" "35"
1544110889.200416 [0 172.19.0.4:38960] "HSET" "rq:worker:ac50518f1c5e.21" "last_heartbeat" "2018-12-06T15:41:29.200265Z"
1544110889.783128 [0 172.19.0.4:38965] "EXPIRE" "rq:worker:ac50518f1c5e.22" "35"
1544110889.785444 [0 172.19.0.4:38965] "HSET" "rq:worker:ac50518f1c5e.22" "last_heartbeat" "2018-12-06T15:41:29.785158Z"
1544110890.422338 [0 172.19.0.4:38970] "EXPIRE" "rq:worker:ac50518f1c5e.23" "35"
1544110890.423470 [0 172.19.0.4:38970] "HSET" "rq:worker:ac50518f1c5e.23" "last_heartbeat" "2018-12-06T15:41:30.423314Z"

And, strangely (perhaps by design), whenever I catch a batch going by, it ends on a :30 or :00 second mark.
So I can confirm that, yes, the item really is in the queue and a job is being picked up, so why doesn't each job go on to start and run?
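When a worker hangs like this, one way to see exactly where it is stuck is the standard library's faulthandler module. This is my own suggestion, not an RQ feature; a sketch:

```python
# Sketch: make a hung worker dump its Python stack on demand.
# Uses only the standard library; nothing here is RQ-specific.
import faulthandler
import signal
import sys

# Install once at worker startup (e.g. in the worker's entry point).
# Afterwards, `kill -USR1 <worker-pid>` prints a traceback for every
# thread to stderr, showing the exact line the process is blocked on.
faulthandler.register(signal.SIGUSR1)

# Or dump immediately from code:
faulthandler.dump_traceback(file=sys.stderr)
```

The traceback usually makes it obvious whether the job is blocked inside its own code or inside the library's dequeue loop.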
Posted on 2018-12-06 23:47:12
This appears to be a recently reported defect in the rq-scheduler library, described here: https://github.com/rq/rq-scheduler/issues/197
There is a fix in progress for it. However, I noticed that we were allowing the redis library to float up to 3.0.0 without explicitly pinning a version, and that is what ultimately broke the system.
In the build script, I set the Dockerfile to run: RUN pip install redis=="2.10.6", which mitigates the problem for now.
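A slightly more durable place for that pin than an inline Dockerfile command is the project's dependency file, so it survives Dockerfile refactors. A sketch, assuming a pip-based build:

```
# requirements.txt -- keep the redis client below 3.0.0 until
# rq / rq-scheduler are updated for the redis-py 3.x API changes
redis==2.10.6
```

Once the rq-scheduler fix ships, the pin can be relaxed to a version range and re-tested.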
https://stackoverflow.com/questions/53640418