
How to run Dgraph on a bare-metal Kubernetes cluster

Stack Overflow user
Asked on 2020-05-19 14:17:29
2 answers · 555 views · 0 followers · Score 0

I'm trying to set up Dgraph as an HA cluster, but it won't deploy.

Applying the provided config directly to a bare-metal cluster does not work.

$ kubectl get pod --namespace dgraph
dgraph-alpha-0                      0/1     Pending     0          112s
dgraph-ratel-7459974489-ggnql       1/1     Running     0          112s
dgraph-zero-0                       0/1     Pending     0          112s


$ kubectl describe pod/dgraph-alpha-0 --namespace dgraph
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "dgraph-alpha-0": pod has unbound immediate PersistentVolumeClaims
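The "unbound immediate PersistentVolumeClaims" warning means the claims have nothing to bind to. On a bare-metal cluster it is worth confirming that no provisioner, default StorageClass, or matching PersistentVolumes exist; a quick check (using the same dgraph namespace as above):

```shell
# The claims stay Pending because nothing can provision a volume for them
kubectl get pvc --namespace dgraph

# On a plain bare-metal cluster this list is typically empty:
# no volume plugin, no default StorageClass
kubectl get storageclass

# And no pre-created PersistentVolumes to bind against
kubectl get pv
```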

Has anyone else run into this? I've been stuck on it for days and can't find a solution. How can I get Dgraph to use the cluster's local storage?

Thanks

2 Answers

Stack Overflow user

Accepted answer

Posted on 2020-05-20 13:42:48

I found the solution myself.

I had to manually create the PVs and PVCs; Dgraph could then use them during deployment.

Below is the config I used to create the required StorageClass, PVs, and PVCs:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-0
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-1
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-zero-2
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/zero-2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-alpha-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-0
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-1
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-zero-2
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
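The six PersistentVolume and six PersistentVolumeClaim stanzas above differ only in member name and host path, so they can also be generated with a short script. This is just a convenience sketch that reproduces the manifests in this answer:

```python
# Sketch: generate the PV/PVC manifests above for the 3 alpha and 3 zero members.
PV_TEMPLATE = """\
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-{member}
  labels:
    type: local
spec:
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/{member}"
"""

PVC_TEMPLATE = """\
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-dgraph-dgraph-{member}
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
"""

# Same member names the Dgraph StatefulSets will ask for
members = [f"{role}-{i}" for role in ("alpha", "zero") for i in range(3)]
manifest = "".join(PV_TEMPLATE.format(member=m) for m in members)
manifest += "".join(PVC_TEMPLATE.format(member=m) for m in members)
print(manifest)
```

Pipe the output to `kubectl apply -f -` to create all twelve objects at once.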

When Dgraph is deployed, it latches onto these PVCs.

$ kubectl get pvc -n dgraph -o wide
NAME                            STATUS   VOLUME                          CAPACITY   ACCESS MODES   STORAGECLASS   AGE     VOLUMEMODE
datadir-dgraph-dgraph-alpha-0   Bound    datadir-dgraph-dgraph-zero-2    8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-alpha-1   Bound    datadir-dgraph-dgraph-alpha-0   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-alpha-2   Bound    datadir-dgraph-dgraph-zero-0    8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-0    Bound    datadir-dgraph-dgraph-alpha-1   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-1    Bound    datadir-dgraph-dgraph-alpha-2   8Gi        RWO            local          6h40m   Filesystem
datadir-dgraph-dgraph-zero-2    Bound    datadir-dgraph-dgraph-zero-1    8Gi        RWO            local          6h40m   Filesystem
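Note that in the listing above the claim and volume names do not line up (for example, `datadir-dgraph-dgraph-alpha-0` ended up bound to the volume `datadir-dgraph-dgraph-zero-2`): with static provisioning, any available PV of matching class, size, and access mode can satisfy a claim. If each claim should be pinned to one specific volume (and thus one host path), a `claimRef` on the PV can do that; a sketch for one member, assuming the dgraph namespace from the question:

```yaml
# Sketch: pre-bind one PV to one specific claim via claimRef
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-dgraph-dgraph-alpha-0
  labels:
    type: local
spec:
  claimRef:
    namespace: dgraph
    name: datadir-dgraph-dgraph-alpha-0
  storageClassName: local
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/dgraph/alpha-0"
```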
Score 1

Stack Overflow user

Posted on 2020-10-07 01:23:30

Dgraph assumes there is a working volume plugin (provisioner) in the Kubernetes cluster. On managed Kubernetes offerings (AWS, GKE, DO, etc.) this step is already taken care of by the provider.

I think the goal should be to achieve the same as the cloud providers, i.e. provisioning should be dynamic (the OP's own answer works, but that is static provisioning - k8s docs).

When running on bare metal you have to configure a volume plugin yourself before dynamic provisioning (k8s docs) - and with it StatefulSets, PersistentVolumeClaims, etc. - can be used. Thankfully there are many provisioners available (k8s docs). Every entry in that list with "Internal Provisioner" checked supports dynamic provisioning.

So while the problem has many solutions, I ended up using NFS. To achieve dynamic provisioning I had to use an external provisioner. Thankfully it was as simple as installing a Helm chart.

  1. Install NFS on the master node (original guide).

SSH into the node, then run:

sudo apt update
sudo apt install nfs-kernel-server nfs-common
  2. Create the directory that Kubernetes will use and change its ownership
sudo mkdir /var/nfs/kubernetes -p
sudo chown nobody:nogroup /var/nfs/kubernetes
  3. Configure NFS

Open the file /etc/exports:

sudo nano /etc/exports

Add the following line at the bottom:

/var/nfs/kubernetes  client_ip(rw,sync,no_subtree_check)

Replace client_ip with the master node's IP. In my case this was the DHCP lease my router assigned to the machine running the master node (192.168.1.7).

  4. Restart NFS to apply the changes.
sudo systemctl restart nfs-kernel-server
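Before moving on, it may be worth verifying that the export is active; exportfs and showmount ship with the packages installed in step 1:

```shell
# List the active exports on the NFS server itself
sudo exportfs -v

# Query the export list the way a client would see it
showmount -e localhost
```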
  5. With NFS set up on the master node, and assuming Helm is installed, installing the provisioner is as simple as running:
helm install  nfs-provisioner --set nfs.server=XXX.XXX.XXX.XXX --set nfs.path=/var/nfs/kubernetes --set storageClass.defaultClass=true stable/nfs-client-provisioner

Replace the nfs.server flag with the appropriate IP/hostname of the master node / NFS server.

Note that storageClass.defaultClass must be true for Kubernetes to use this plugin (provisioner) to create volumes by default.

The nfs.path flag is the same path as the one created in step 2.

If Helm complains that it cannot find the chart, run helm repo add stable https://kubernetes-charts.storage.googleapis.com/
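Once the chart is installed, the new StorageClass should show up marked as the default (the class name nfs-client is the chart's default and may differ if overridden):

```shell
# The provisioner's StorageClass should appear with "(default)" next to it
kubectl get storageclass

# The provisioner pod itself should be Running
# (app=nfs-client-provisioner is the label the stable chart applies)
kubectl get pods -l app=nfs-client-provisioner
```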

  6. After completing the previous steps, continue with the installation as described in the Dgraph documentation and deploy a dynamically provisioned cluster using the out-of-the-box manifests.

Single server

kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single/dgraph-single.yaml

HA cluster

kubectl create --filename https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml
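With dynamic provisioning in place, the claims created by the StatefulSets should bind on their own; a quick way to confirm (assuming the same dgraph namespace as the question; the out-of-the-box manifests deploy into the current namespace):

```shell
# All pods should eventually reach Running instead of Pending
kubectl get pods --namespace dgraph --watch

# Each claim should be Bound to a dynamically provisioned volume
kubectl get pvc --namespace dgraph
```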
Score 0

Page content originally provided by Stack Overflow. Original link:

https://stackoverflow.com/questions/61893322