I am trying to set up Dgraph in an HA cluster, but it will not deploy without volumes.
Applying the provided config directly to a bare-metal cluster does not work.
$ kubectl get pod --namespace dgraph
dgraph-alpha-0 0/1 Pending 0 112s
dgraph-ratel-7459974489-ggnql 1/1 Running 0 112s
dgraph-zero-0 0/1 Pending 0 112s
$ kubectl describe pod/dgraph-alpha-0 --namespace dgraph
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims

Is anyone else having this problem? I have been struggling with it for days and cannot find a way to fix it. How can I get Dgraph to use the cluster's local storage?
Thanks

Posted on 2020-05-20 13:42:48
I found the solution myself.
I had to create the PVs and PVCs manually; Dgraph can then use them during deployment.
Below is the config I used to create the required StorageClass, PVs, and PVCs:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

When Dgraph is deployed, it latches onto the PVCs.
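One caveat on multi-node clusters: hostPath PVs are not tied to a node, so a pod can be scheduled onto a node where the backing directory is empty. The `local` volume type with node affinity avoids this. A minimal sketch for one of the volumes above, assuming a node named worker-1 (the node name is a placeholder, not from the original setup):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-0
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  # "local" instead of "hostPath" lets the scheduler honor node affinity
  local:
    path: /mnt/dgraph/alpha-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1   # placeholder: the node that owns /mnt/dgraph/alpha-0
```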
$ kubectl get pvc -n dgraph -o wide
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
datadir-dgraph-dgraph-alpha-0 Bound datadir-dgraph-dgraph-zero-2 8Gi RWO local 6h40m Filesystem
datadir-dgraph-dgraph-alpha-1 Bound datadir-dgraph-dgraph-alpha-0 8Gi RWO local 6h40m Filesystem
datadir-dgraph-dgraph-alpha-2 Bound datadir-dgraph-dgraph-zero-0 8Gi RWO local 6h40m Filesystem
datadir-dgraph-dgraph-zero-0 Bound datadir-dgraph-dgraph-alpha-1 8Gi RWO local 6h40m Filesystem
datadir-dgraph-dgraph-zero-1 Bound datadir-dgraph-dgraph-alpha-2 8Gi RWO local 6h40m Filesystem
datadir-dgraph-dgraph-zero-2 Bound datadir-dgraph-dgraph-zero-1 8Gi RWO local 6h40m Filesystem

Posted on 2020-10-07 01:23:30
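Note that in the output above the claims did not bind to the identically-named volumes: with static provisioning, Kubernetes picks any available PV of matching class and size. If a claim must land on a specific hostPath, the PVC can name the volume explicitly via the standard `volumeName` field. A sketch for one claim, reusing the names from the manifests above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-0
spec:
  storageClassName: local
  volumeName: datadir-dgraph-dgraph-alpha-0  # bind to this PV only
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```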
Dgraph assumes there is a working volume plugin (provisioner) in the Kubernetes cluster. On managed Kubernetes offerings (AWS, GKE, DO, etc.) this step is already taken care of by the provider.
I think the goal should be to achieve the same as the cloud providers, i.e. provisioning must be dynamic (so the OP's own answer works, but it is static provisioning - k8s docs).
When running on bare metal, you must configure a volume plugin manually before you can use dynamic provisioning (k8s docs) and, with it, StatefulSets, PersistentVolumeClaims, etc. Thankfully, there are many provisioners available (k8s docs). Any item in that list with "Internal Provisioner" checked supports dynamic provisioning.
So while there are many solutions to the problem, I ended up going with NFS. To get dynamic provisioning I had to use an external provisioner. Thankfully, it is as simple as installing a Helm chart.
SSH into the master node via a terminal and run:

sudo apt update
sudo apt install nfs-kernel-server nfs-common
sudo mkdir /var/nfs/kubernetes -p
sudo chown nobody:nogroup /var/nfs/kubernetes

Open the file /etc/exports:

sudo nano /etc/exports

Add the following line at the bottom:

/var/nfs/kubernetes client_ip(rw,sync,no_subtree_check)

Replace client_ip with the master node's IP. In my case, this was the DHCP lease my router gave to the machine running the master node (192.168.1.7).

Restart the NFS server:

sudo systemctl restart nfs-kernel-server

Install the provisioner:

helm install nfs-provisioner --set nfs.server=XXX.XXX.XXX.XXX --set nfs.path=/var/nfs/kubernetes --set storageClass.defaultClass=true stable/nfs-client-provisioner

Replace the nfs.server flag with the appropriate IP/hostname of the master node / NFS server.
Note that storageClass.defaultClass must be true for Kubernetes to use this plugin (provisioner) to create volumes by default.
The nfs.path flag is the same path as the directory created earlier.
If Helm complains that it cannot find the chart, run helm repo add stable https://kubernetes-charts.storage.googleapis.com/
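To check that dynamic provisioning works before deploying Dgraph, a throwaway claim can be created. Since the chart's StorageClass was marked as default, no storageClassName is needed; the claim name test-nfs-claim below is made up for illustration:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim  # hypothetical name, delete the claim after testing
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

If the provisioner is working, kubectl get pvc should show the claim as Bound shortly after creation, with a dynamically created PV behind it.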
Single server:

kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single/dgraph-single.yaml

HA cluster:

kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml

https://stackoverflow.com/questions/61893322