I'm a beginner, and I have added the following to my scrapy.cfg file:
[settings]
default = uk.settings
[deploy:scrapyd]
url = http://localhost:6800/
project=ukmall
[deploy:scrapyd2]
url = http://scrapyd.mydomain.com/api/scrapyd/
username = john
password = secret

If I run the following command:
$ scrapyd-deploy -l

I get:
scrapyd2 http://scrapyd.mydomain.com/api/scrapyd/
scrapyd              http://localhost:6800/

To see all available projects:
scrapyd-deploy -L scrapyd

But on my machine it shows nothing. Why?
Reference: http://scrapyd.readthedocs.org/en/latest/deploy.html#deploying-a-project
And when I try to deploy:
$ scrapy deploy scrapyd2
anandhakumar@MMTPC104:~/ScrapyProject/mall_uk$ scrapy deploy scrapyd2
Packing version 1412322816
Traceback (most recent call last):
File "/usr/bin/scrapy", line 4, in <module>
execute()
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/usr/lib/pymodules/python2.7/scrapy/commands/deploy.py", line 103, in run
egg, tmpdir = _build_egg()
File "/usr/lib/pymodules/python2.7/scrapy/commands/deploy.py", line 228, in _build_egg
retry_on_eintr(check_call, [sys.executable, 'setup.py', 'clean', '-a', 'bdist_egg', '-d', d], stdout=o, stderr=e)
File "/usr/lib/pymodules/python2.7/scrapy/utils/python.py", line 276, in retry_on_eintr
return function(*args, **kw)
File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', 'setup.py', 'clean', '-a', 'bdist_egg', '-d', '/tmp/scrapydeploy-VLM6W7']' returned non-zero exit status 1
anandhakumar@MMTPC104:~/ScrapyProject/mall_uk$

When I do the same for another project, I get this instead:
$ scrapy deploy scrapyd
Packing version 1412325181
Deploying to project "project2" in http://localhost:6800/addversion.json
Server response (200):
{"status": "error", "message": "[Errno 13] Permission denied: 'eggs'"}

Answer (posted 2014-10-03 15:48:56):
You will only be able to list spiders that have already been deployed. If you haven't deployed anything yet, then to deploy your spider you just use scrapy deploy:
scrapy deploy [ <target:project> | -l <target> | -L ]
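For reference, `scrapy deploy <target>` resolves the target name against the matching `[deploy:<target>]` section of scrapy.cfg, then packs the project into an egg and uploads it to that server's addversion.json endpoint. A minimal config with one named target might look like this (URL and project name are placeholders, not values from this question):

```ini
[settings]
default = myproject.settings

# Target name after the colon is what you pass to `scrapy deploy <target>`
[deploy:scrapyd2]
url = http://scrapyd.example.com:6800/
project = myproject
```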
vagrant@portia:~/takeovertheworld$ scrapy deploy scrapyd2
Packing version 1410145736
Deploying to project "takeovertheworld" in http://ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:6800/addversion.json
Server response (200):
{"status": "ok", "project": "takeovertheworld", "version": "1410145736", "spiders": 1}

Verify that the project was installed correctly by querying the scrapyd API:
vagrant@portia:~/takeovertheworld$ curl http://ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:6800/listprojects.json
{"status": "ok", "projects": ["takeovertheworld"]}

Answer (posted 2015-12-25 11:01:11):
I had the same error. As @hugsbrugs said, it was because a folder inside the scrapy project had root rights, so I ran the deploy with sudo:
sudo scrapy deploy scrapyd2
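Rather than running the whole deploy as root, you can also fix the underlying cause: the error names an `eggs` directory the current user cannot write to. A minimal sketch of checking and reclaiming ownership, using a throwaway directory under /tmp to stand in for the project's `eggs` folder (the real path is whatever directory the "Permission denied" message names):

```shell
# Stand-in for the project's eggs/ directory.
demo=$(mktemp -d)
mkdir "$demo/eggs"

# Step 1: inspect owner and mode -- a root-owned entry here is the usual
# cause of "[Errno 13] Permission denied: 'eggs'".
stat -c '%U %a %n' "$demo/eggs"

# Step 2: reclaim ownership for the current user. On the real project this
# is typically: sudo chown -R "$USER":"$USER" eggs
chown -R "$(id -un)" "$demo/eggs"

rm -rf "$demo"
```

After ownership is back with your user, `scrapy deploy scrapyd2` should succeed without sudo.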
https://stackoverflow.com/questions/26174934