In my single-container Elastic Beanstalk deployment I am using the new Elastic File System provided by Amazon. I cannot figure out why the mounted EFS is not being mapped into the container.
The EFS mount succeeds on the host at the mount point /efs-mount-point.
The Dockerrun.aws.json provided is
{
  "AWSEBDockerrunVersion": "1",
  "Volumes": [
    {
      "HostDirectory": "/efs-mount-point",
      "ContainerDirectory": "/efs-mount-point"
    }
  ]
}

The volume is then created in the container once it starts running. However, it has mapped the host directory /efs-mount-point, not the actual EFS mount point. I cannot figure out how to get Docker to map in the EFS volume mounted at /efs-mount-point instead of the underlying host directory.
Do NFS volumes play nicely with Docker at all?
Posted on 2016-07-11 14:59:18
You need to restart the Docker daemon on the EC2 host after mounting the EFS volume.
Here is an example, .ebextensions/efs.config:
commands:
  01mkdir:
    command: "mkdir -p /efs-mount-point"
  02mount:
    command: "mountpoint -q /efs-mount-point || mount -t nfs4 -o nfsvers=4.1 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).fs-fa35c253.efs.us-west-2.amazonaws.com:/ /efs-mount-point"
  03restart:
    command: "service docker restart"

Posted on 2017-01-27 08:19:19
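For reference, the mount commands in these configs address EFS through a per-availability-zone DNS name assembled from the AZ, the filesystem ID, and the region. A sketch using the values from the example config (on a real instance the AZ comes from the instance metadata service, as the `curl` in the mount command shows):

```shell
# Placeholder values matching the example config above.
AZ="us-west-2a"
FS_ID="fs-fa35c253"
REGION="us-west-2"
# Assemble the per-AZ EFS endpoint that the nfs4 mount command targets.
EFS_HOST="${AZ}.${FS_ID}.efs.${REGION}.amazonaws.com"
echo "${EFS_HOST}"   # us-west-2a.fs-fa35c253.efs.us-west-2.amazonaws.com
```

Mounting through the AZ-local name keeps NFS traffic inside the instance's own availability zone.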
AWS has instructions for automatically creating and mounting EFS on Elastic Beanstalk. They can be found here.
The instructions link to two configuration files to customize and place in the .ebextensions folder of your deployment package.
The file storage-efs-mountfilesystem.config needs further modification to work with Docker containers. Add the following command:
  02_restart:
    command: "service docker restart"

For multi-container environments, the Elastic Container Service agent must also be restarted (it gets killed when Docker restarts underneath it):

  03_start_eb:
    command: |
      start ecs
      start eb-docker-events
      sleep 120
    test: sh -c "[ -f /etc/init/ecs.conf ]"

So the complete commands section of storage-efs-mountfilesystem.config is:
commands:
  01_mount:
    command: "/tmp/mount-efs.sh"
  02_restart:
    command: "service docker restart"
  03_start_eb:
    command: |
      start ecs
      start eb-docker-events
      sleep 120
    test: sh -c "[ -f /etc/init/ecs.conf ]"

The reason this does not work "out of the box" is that the EC2 instance starts the Docker daemon before it runs the commands in .ebextensions.
By the time the .ebextensions commands run, the filesystem view that the Docker daemon presents to containers is already fixed, so changes made to the host filesystem at that point are not reflected inside the containers.
One odd effect is that a container sees the mount point directory as it was before the filesystem was mounted on the host, while the host sees the mounted filesystem. Files written by the container therefore land in the host directory underneath the mount point, not in the mounted filesystem. Unmounting the filesystem on the EC2 host exposes the files the container wrote into the mount directory.
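This shadowing can be observed directly on the EC2 host. A rough sketch (requires root; `<efs-dns-name>` is a placeholder for your filesystem's DNS name):

```shell
# Unmounting EFS on the host reveals the files the container wrote into
# the underlying host directory.
umount /efs-mount-point
ls /efs-mount-point
# Remount to restore the host's view of the EFS filesystem.
mount -t nfs4 -o nfsvers=4.1 <efs-dns-name>:/ /efs-mount-point
```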
Posted on 2016-08-29 08:30:35
EFS with AWS Elastic Beanstalk Multicontainer Docker does work, but many things will break, because you have to restart Docker after mounting EFS.
Instance commands
Searching around, you may find that you need to do a "docker restart" after mounting EFS. It is not that simple: you will run into trouble when autoscaling kicks in and/or when a new application version is deployed.
Here is my script for mounting EFS onto the Docker instances:
.ebextensions/commands.config
commands:
  01stopdocker:
    command: "sudo stop ecs > /dev/null 2>&1 || /bin/true && sudo service docker stop"
  02killallnetworkbindings:
    command: 'sudo killall docker > /dev/null 2>&1 || /bin/true'
  03removenetworkinterface:
    command: "rm -f /var/lib/docker/network/files/local-kv.db"
    test: test -f /var/lib/docker/network/files/local-kv.db
  # Mount the EFS created in .ebextensions/media.config
  04mount:
    command: "/tmp/mount-efs.sh"
  # On new instances, a delay needs to be added because of the 00task enact script. It tests for start/ but it can be in various states of start...
  # Basically, "start ecs" takes some time to run, and it runs async - so we sleep for some time.
  # So let the ECS manager take its time to boot before going on to enact scripts and post-deploy scripts.
  09restart:
    command: "service docker start && sudo start ecs && sleep 120s"

Mount script and environment variables
.ebextensions/mount-config.config
# efs-mount.config
# Copy this file to the .ebextensions folder in the root of your app source folder
option_settings:
  aws:elasticbeanstalk:application:environment:
    EFS_REGION: '`{"Ref": "AWS::Region"}`'
    # Replace with the required mount directory
    EFS_MOUNT_DIR: '/efs_volume'
    # Use in conjunction with efs_volume.config or replace with the EFS volume ID of an existing EFS volume
    EFS_VOLUME_ID: '`{"Ref" : "FileSystem"}`'
packages:
  yum:
    nfs-utils: []
files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/bin/bash
      EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_REGION')
      EFS_MOUNT_DIR=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_MOUNT_DIR')
      EFS_VOLUME_ID=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_VOLUME_ID')
      echo "Mounting EFS filesystem ${EFS_VOLUME_ID} to directory ${EFS_MOUNT_DIR} ..."
      echo 'Stopping NFS ID Mapper...'
      service rpcidmapd status &> /dev/null
      if [ $? -ne 0 ] ; then
          echo 'rpc.idmapd is already stopped!'
      else
          service rpcidmapd stop
          if [ $? -ne 0 ] ; then
              echo 'ERROR: Failed to stop NFS ID Mapper!'
              exit 1
          fi
      fi
      echo 'Checking if EFS mount directory exists...'
      if [ ! -d ${EFS_MOUNT_DIR} ]; then
          echo "Creating directory ${EFS_MOUNT_DIR} ..."
          mkdir -p ${EFS_MOUNT_DIR}
          if [ $? -ne 0 ]; then
              echo 'ERROR: Directory creation failed!'
              exit 1
          fi
          chmod 777 ${EFS_MOUNT_DIR}
          if [ $? -ne 0 ]; then
              echo 'ERROR: Permission update failed!'
              exit 1
          fi
      else
          echo "Directory ${EFS_MOUNT_DIR} already exists!"
      fi
      mountpoint -q ${EFS_MOUNT_DIR}
      if [ $? -ne 0 ]; then
          AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
          echo "mount -t nfs4 -o nfsvers=4.1 ${AZ}.${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}"
          mount -t nfs4 -o nfsvers=4.1 ${AZ}.${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}
          if [ $? -ne 0 ] ; then
              echo 'ERROR: Mount command failed!'
              exit 1
          fi
      else
          echo "Directory ${EFS_MOUNT_DIR} is already a valid mountpoint!"
      fi
      echo 'EFS mount complete.'

Resources and configuration
You must change the option_settings below. To find the VPC and subnets you have to define under option_settings, look in the AWS web console -> VPC, where you will find the default VPC ID and the three default subnet IDs. If your Beanstalk uses a custom VPC, use those settings instead.
.ebextensions/efs-volume.config
# efs-volume.config
# Copy this file to the .ebextensions folder in the root of your app source folder
option_settings:
  aws:elasticbeanstalk:customoption:
    EFSVolumeName: "EB-EFS-Volume"
    VPCId: "vpc-xxxxxxxx"
    SubnetUSWest2a: "subnet-xxxxxxxx"
    SubnetUSWest2b: "subnet-xxxxxxxx"
    SubnetUSWest2c: "subnet-xxxxxxxx"
Resources:
  FileSystem:
    Type: AWS::EFS::FileSystem
    Properties:
      FileSystemTags:
        - Key: Name
          Value:
            Fn::GetOptionSetting: {OptionName: EFSVolumeName, DefaultValue: "EB_EFS_Volume"}
  MountTargetSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for mount target
      SecurityGroupIngress:
        - FromPort: '2049'
          IpProtocol: tcp
          SourceSecurityGroupId:
            Fn::GetAtt: [AWSEBSecurityGroup, GroupId]
          ToPort: '2049'
      VpcId:
        Fn::GetOptionSetting: {OptionName: VPCId}
  MountTargetUSWest2a:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: {Ref: FileSystem}
      SecurityGroups:
        - {Ref: MountTargetSecurityGroup}
      SubnetId:
        Fn::GetOptionSetting: {OptionName: SubnetUSWest2a}
  MountTargetUSWest2b:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: {Ref: FileSystem}
      SecurityGroups:
        - {Ref: MountTargetSecurityGroup}
      SubnetId:
        Fn::GetOptionSetting: {OptionName: SubnetUSWest2b}
  MountTargetUSWest2c:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: {Ref: FileSystem}
      SecurityGroups:
        - {Ref: MountTargetSecurityGroup}
      SubnetId:
        Fn::GetOptionSetting: {OptionName: SubnetUSWest2c}

Source:
https://stackoverflow.com/questions/38180665