So, for some of my tasks on Celery 3.0.19, Celery is apparently ignoring the queue attribute and sending the tasks to the default celery queue instead.
# This is a stupid test with the proprietary code ripped out.
import subprocess

from celery import task


@task()
def run_chef_task(task_name, **env):
    if env is None:
        env = {}
    if task_name is not None:
        env['CHEF'] = task_name
    print env
    cmd = []
    if len(env):
        cmd = ['env']
        for key, value in env.items():
            if not isinstance(key, str) or not isinstance(value, str):
                raise TypeError(
                    "Environment values must be strings ({0}, {1})"
                    .format(key, value))
            # prefix and upper-case the variable name
            key = "ND" + key.upper()
            cmd.append('%s=%s' % (key, value))
    cmd.extend(['/root/chef/run_chef', 'noudata_default'])
    print cmd
    ret = subprocess.check_call(cmd)
    print 'CHECK'
    return ret, cmd

r = run_chef_task.apply_async(
    args=['mongo_backup'],
    queue='my_special_queue_with_only_one_worker')
r.get()  # returns immediately
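One thing worth checking (an assumption on my part, not something from the question) is whether the queue is actually declared in the configuration, so that apply_async(queue=...) has a concrete binding to route to. A minimal sketch of declaring the special queue explicitly with kombu:

# Sketch only: an explicit queue declaration in celeryconfig. The exchange
# and routing_key names simply mirror the queue name; they are assumptions,
# not taken from the question.
from kombu import Exchange, Queue

CELERY_QUEUES = (
    Queue('my_special_queue_with_only_one_worker',
          Exchange('my_special_queue_with_only_one_worker', type='direct'),
          routing_key='my_special_queue_with_only_one_worker'),
)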
Go to Flower. Find the task. Find the worker that ran it. Verify that the workers are different, and that the worker which ran the task is not the special worker. Confirm that Flower says 'special_worker' consumes only from 'my_special_queue', and that only 'special_worker' is on 'my_special_queue'.
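The same check can be made without Flower; a small sketch using Celery's inspect API to list the queues each worker is actually consuming from:

# Hedged sketch: ask the running workers directly which queues they consume
# from, mirroring the Flower check above (Python 2 / Celery 3.0 style).
from celery import current_app

active = current_app.control.inspect().active_queues() or {}
for worker, queues in active.items():
    print worker, [q['name'] for q in queues]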
Now for the really interesting part:
Fire up rabbitmq-management on the broker (and verify the broker really is the broker).
One message goes through the broker on the correct queue to the correct worker (verified). Immediately afterwards, another message is sent on the celery queue.
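If the management UI is not handy, roughly the same observation can be made from Python. A sketch using a passive queue declaration to read message counts; the broker URL here is a placeholder, not the real one:

# Sketch only: passively declare each queue to read its message count.
# 'amqp://guest@broker-host//' is a placeholder broker URL.
from kombu import Connection

with Connection('amqp://guest@broker-host//') as conn:
    channel = conn.channel()
    for name in ('celery', 'my_special_queue_with_only_one_worker'):
        ok = channel.queue_declare(queue=name, passive=True)
        print name, ok.message_count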
In the worker's log file, it says it accepted and completed the task:
[2013-05-16 02:24:15,455: INFO/MainProcess] Got task from broker: noto.tasks.chef_tasks.run_chef_task[0dba1107-2bb5-4c19-8df3-8a74d8e1234c]
[2013-05-16 02:24:15,456: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x2479c08> (args:('noto.tasks.chef_tasks.run_chef_task', '0dba1107-2bb5-4c19-8df3-8a74d8e1234c', ['mongo_backup'], {}, {'utc': True, 'is_eager': False, 'chord': None, 'group': None, 'args': ['mongo_backup'], 'retries': 0, 'delivery_info': {'priority': None, 'routing_key': u'', 'exchange': u'celery'}, 'expires': None, 'task': 'noto.tasks.chef_tasks.run_chef_task', 'callbacks': None, 'errbacks': None, 'hostname': 'manager1.i-6e958f0f', 'taskset': None, 'kwargs': {}, 'eta': None, 'id': '0dba1107-2bb5-4c19-8df3-8a74d8e1234c'}) kwargs:{})
// This is output from the task
[2013-05-16 02:24:15,459: WARNING/PoolWorker-1] {'CHEF': 'mongo_backup'}
[2013-05-16 02:24:15,463: WARNING/PoolWorker-1] ['env', 'NDCHEF=mongo_backup', '/root/chef/run_chef', 'default']
[2013-05-16 02:24:15,477: DEBUG/MainProcess] Task accepted: noto.tasks.chef_tasks.run_chef_task[0dba1107-2bb5-4c19-8df3-8a74d8e1234c] pid:17210
...A bunch of boring debug logs repeating the registered tasks
[2013-05-16 02:31:45,061: INFO/MainProcess] Task noto.tasks.chef_tasks.run_chef_task[0dba1107-2bb5-4c19-8df3-8a74d8e1234c] succeeded in 88.438395977s: (0, ['env', 'NDCHEF=mongo_backup',...
So it accepts the task, runs it, and at the exact same time triggers another worker on another queue to run the same task, instead of just returning properly. The only thing I can think of is that this worker is the only one with the correct source. All the other workers have old source with the subprocess call commented out, so they return more or less immediately.
Does anyone know what could be causing this? This isn't the only task we've seen it happen with; it seems to pick 3 machines at random from the celery queue to run it. Is there something weird we've done in our celeryconfig that could cause this?
Answered on 2013-05-17 08:57:53
Your exchange output shows that there is no explicit routing; see the default routing_key in the TaskPool entry:
'delivery_info': {'priority': None, 'routing_key': u'', 'exchange': u'celery'}
My guess is that the problem is with the out-of-the-box automatic routing defaults. Consider testing explicit manual routing in your Celery configuration:
http://docs.celeryproject.org/en/latest/userguide/routing.html#manual-routing
For example:
CELERY_ROUTES = {
    "work-queue": {
        "queue": "work_queue",
        "binding_key": "work_queue"
    },
    "new-feeds": {
        "queue": "new_feeds",
        "binding_key": "new_feeds"
    },
}

CELERY_QUEUES = {
    "work_queue": {
        "exchange": "work_queue",
        "exchange_type": "direct",
        "binding_key": "work_queue",
    },
    "new_feeds": {
        "exchange": "new_feeds",
        "exchange_type": "direct",
        "binding_key": "new_feeds"
    },
}

https://stackoverflow.com/questions/16593855
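With explicit routes like these in place, it may also help to pass the queue and routing key explicitly at call time. A short usage sketch, assuming the example config above is loaded ('work_queue' is just the answer's example name; substitute your own):

# Usage sketch, assuming the CELERY_ROUTES/CELERY_QUEUES above are active.
# Routing explicitly should make delivery_info show a non-empty routing_key.
r = run_chef_task.apply_async(
    args=['mongo_backup'],
    queue='work_queue',
    routing_key='work_queue')
print r.get()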