
Reposted from: https://mp.weixin.qq.com/s/9w_PN7dLQtMmGekagHz8AA
Native NFS + NFS Subdir External Provisioner is the most popular storage solution in the Kubernetes community: a plain NFS server provides the storage capacity, while the NFS Subdir Provisioner implements dynamic PV provisioning in Kubernetes, automatically creating a dedicated NFS subdirectory for each PVC (so data from different PVCs never mixes). It is simple to deploy, cheap to maintain, and suits some 90% of small and medium-sized K8S clusters. The walkthrough below uses Ubuntu 24 as the example platform.
First, check the server's current storage layout and environment.
Run lsblk to identify the new device (here /dev/sdb): it shows up unpartitioned and unformatted.
Run sudo fdisk -l /dev/sdb to confirm that /dev/sdb is indeed a fresh, unpartitioned disk.
(Steps: create a partition → format the filesystem → create a mount point and mount.)
echo -e "n\np\n1\n\n\nw" | sudo fdisk /dev/sdb
sudo mkfs.ext4 /dev/sdb1
sudo mkdir /nfs-storage
sudo mount /dev/sdb1 /nfs-storage
sudo tee -a /etc/fstab <<EOF
/dev/sdb1 /nfs-storage ext4 defaults 0 0
EOF
df -h /nfs-storage
If /dev/sdb1 shows as mounted on /nfs-storage, the disk is ready.
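The fstab entry appended above has six whitespace-separated fields. As a quick sketch (not part of the original setup), splitting that exact line shows what each field means: the device, the mount point, the filesystem type, the mount options, the dump flag, and the fsck pass order:

```shell
# Split the fstab line added above into its six fields.
line="/dev/sdb1 /nfs-storage ext4 defaults 0 0"
set -- $line
printf 'device=%s mountpoint=%s fstype=%s options=%s dump=%s pass=%s\n' \
  "$1" "$2" "$3" "$4" "$5" "$6"
```

The trailing `0 0` means the filesystem is skipped by dump backups and by boot-time fsck ordering.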
sudo apt update && sudo apt install -y nfs-kernel-server
sudo mkdir -p /nfs-storage/k8s-pvs
sudo chown nobody:nogroup /nfs-storage/k8s-pvs && sudo chmod 777 /nfs-storage/k8s-pvs
Configure /etc/exports (the NFS share rules). (Note: editing the file directly failed at first, so tee is used instead.)
sudo tee /etc/exports <<'EOF'
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
/nfs-storage/k8s-pvs *(rw,sync,no_subtree_check,no_root_squash,fsid=0)
EOF
sudo systemctl restart nfs-kernel-server
sudo systemctl enable nfs-kernel-server
sudo exportfs -v
sudo ufw status
If the firewall is disabled, no further configuration is needed.
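The options in the export line above are worth reviewing one by one: rw allows writes, sync commits data before replying to clients, no_subtree_check disables subtree checking, no_root_squash keeps remote root as root (the provisioner needs this to create and chown subdirectories), and fsid=0 marks this export as the NFSv4 pseudo-root. A small sketch that simply lists them for inspection:

```shell
# List the options granted to clients in the /etc/exports entry above,
# one per line, so each flag is easy to review.
opts="rw,sync,no_subtree_check,no_root_squash,fsid=0"
echo "$opts" | tr ',' '\n'
```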
sudo mkdir -p /tmp/test-nfs && sudo mount -t nfs localhost:/nfs-storage/k8s-pvs /tmp/test-nfs
df -h /tmp/test-nfs && echo "NFS test OK" | sudo tee /tmp/test-nfs/test.txt
sudo umount /tmp/test-nfs && sudo rmdir /tmp/test-nfs
sudo mkdir -p /nfs-storage/k8s-pvs/.nfs-provisioner && sudo chown nobody:nogroup /nfs-storage/k8s-pvs/.nfs-provisioner
(Example: check the shared directory's permissions)
ls -ld /nfs-storage/k8s-pvs
Save the following manifest as nfs-provisioner-deployment.yaml:
# NFS Subdir External Provisioner Deployment for K8S
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.9.2 # replace with your NFS server IP
            - name: NFS_PATH
              value: /nfs-storage/k8s-pvs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.9.2 # replace with your NFS server IP
            path: /nfs-storage/k8s-pvs
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true

To recap, the setup so far consists of:
- disk /dev/sdb with an ext4 filesystem, mounted at /nfs-storage
- the nfs-kernel-server service with shared directory /nfs-storage/k8s-pvs
- /etc/exports allowing all clients (options: rw,sync,no_subtree_check,no_root_squash,fsid=0)
- NFS server 192.168.9.2, share path /nfs-storage/k8s-pvs
- the nfs-provisioner-deployment.yaml file containing the manifests above

Apply it:
kubectl apply -f nfs-provisioner-deployment.yaml

To test dynamic provisioning, create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-storage

Once created, the PVC is automatically bound to a provisioned PV, whose data lives under
/nfs-storage/k8s-pvs/default-test-nfs-pvc-xxx
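As the path above suggests, the provisioner names each subdirectory from the claim's namespace, PVC name, and generated PV name, and with archiveOnDelete: "true" the directory is renamed with an archived- prefix (rather than deleted) when the PVC is removed. A local simulation of that lifecycle; the PV name pvc-1234 is a made-up placeholder (real PV names look like pvc-<uuid>):

```shell
# Simulate the provisioner's per-PVC directory lifecycle on the export.
# "pvc-1234" is a hypothetical PV name used only for this sketch.
base=$(mktemp -d)
ns=default; pvc=test-nfs-pvc; pv=pvc-1234

# 1. On PVC creation: one dedicated subdirectory per claim.
dir="${ns}-${pvc}-${pv}"
mkdir "$base/$dir"

# 2. On PVC deletion with archiveOnDelete=true: the directory is
#    renamed with an "archived-" prefix instead of being removed.
mv "$base/$dir" "$base/archived-$dir"
ls "$base"
```

With archiveOnDelete set to "false" instead, the subdirectory and its data would be removed outright when the PVC is deleted.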
(In production, consider restricting the * in /etc/exports to trusted client addresses.)
This article is a repost. If it infringes any rights, contact cloudcommunity@tencent.com for removal.