
Error using local persistent volumes with a StatefulSet pod

Stack Overflow user
Asked on 2018-09-05 11:13:20
2 answers · 7.3K views · 0 following · 8 votes

I am trying to create StatefulSet pods using the local persistent volumes described at https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/. But when my pod tries to claim the volume, I get the following error:

Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  4s (x243 over 20m)  default-scheduler  0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.

Below are the storage classes and persistent volumes I created:

storageclass-kafka-broker.yml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kafka-broker
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

storageclass-kafka-zookeeper.yml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kafka-zookeeper
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

pv-zookeeper.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv-zookeeper
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kafka-zookeeper
  local:
    path: /D/kubernetes-mount-path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node

pv-kafka.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kafka-broker
  local:
    path: /D/kubernetes-mount-path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node

Below is the StatefulSet that uses this volume, 50pzoo.yml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pzoo
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: zookeeper
      storage: persistent
  serviceName: "pzoo"
  replicas: 1
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: zookeeper
        storage: persistent
      annotations:
    spec:
      terminationGracePeriodSeconds: 10
      initContainers:
      - name: init-config
        image: solsson/kafka-initutils@sha256:18bf01c2c756b550103a99b3c14f741acccea106072cd37155c6d24be4edd6e2
        command: ['/bin/bash', '/etc/kafka-configmap/init.sh']
        volumeMounts:
        - name: configmap
          mountPath: /etc/kafka-configmap
        - name: config
          mountPath: /etc/kafka
        - name: data
          mountPath: /var/lib/zookeeper/data
      containers:
      - name: zookeeper
        image: solsson/kafka:2.0.0@sha256:8bc5ccb5a63fdfb977c1e207292b72b34370d2c9fe023bdc0f8ce0d8e0da1670
        env:
        - name: KAFKA_LOG4J_OPTS
          value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
        command:
        - ./bin/zookeeper-server-start.sh
        - /etc/kafka/zookeeper.properties
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: peer
        - containerPort: 3888
          name: leader-election
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - '[ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]'
        volumeMounts:
        - name: config
          mountPath: /etc/kafka
        - name: data
          mountPath: /var/lib/zookeeper/data
      volumes:
      - name: configmap
        configMap:
          name: zookeeper-config
      - name: config
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: kafka-zookeeper
      resources:
        requests:
          storage: 1Gi

Below is the output of kubectl get events:

[root@quagga kafka-kubernetes-testing-single-node]# kubectl get events --namespace kafka
LAST SEEN   FIRST SEEN   COUNT     NAME                           KIND                    SUBOBJECT   TYPE      REASON                 SOURCE                        MESSAGE
1m          1m           1         pzoo.15517ca82c7a4675          StatefulSet                         Normal    SuccessfulCreate       statefulset-controller        create Claim data-pzoo-0 Pod pzoo-0 in StatefulSet pzoo success
1m          1m           1         pzoo.15517ca82caed9bc          StatefulSet                         Normal    SuccessfulCreate       statefulset-controller        create Pod pzoo-0 in StatefulSet pzoo successful
13s         1m           9         data-pzoo-0.15517ca82c726833   PersistentVolumeClaim               Normal    WaitForFirstConsumer   persistentvolume-controller   waiting for first consumer to be created before binding
9s          1m           22        pzoo-0.15517ca82cb90238        Pod                                 Warning   FailedScheduling       default-scheduler             0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.

The output of kubectl get pv is:

[root@quagga kafka-kubernetes-testing-single-node]# kubectl get pv
NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS      REASON    AGE
example-local-pv             200Gi      RWO            Retain           Available             kafka-broker                4m
example-local-pv-zookeeper   2Gi        RWO            Retain           Available             kafka-zookeeper             4m

2 Answers

Stack Overflow user

Accepted answer

Posted on 2018-09-06 17:37:23

It was a silly mistake. I had specified an incorrect node name ('my-node') in the PV files. Changing it to the correct node name solved my issue.
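In other words, the hostname under the PV's `nodeAffinity` must exactly match a node name as reported by `kubectl get nodes`; otherwise the scheduler finds no bindable volume on any node. A minimal sketch of the fix (the node name `worker-1` here is a hypothetical placeholder):

```yaml
# pv-zookeeper.yml (fragment) -- the hostname value must be a real node
# name from `kubectl get nodes`, not the placeholder "my-node"
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-1   # hypothetical: replace with your actual node name
```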

Votes: 3

Stack Overflow user

Posted on 2019-01-30 10:13:20

Thanks for sharing! I made the same mistake. I think the k8s docs could state this more clearly (even though it is fairly obvious), since it is a copy-paste trap.

To be explicit: if you have a cluster with 3 nodes, you need to create three PVs with different names, each carrying the correct node name in place of 'my-node' (see `kubectl get nodes`). The only link between the volumeClaimTemplate and the PV is the name of the storage class.

I used PV names like "local-pv-node-X", so when I look at the PV section of the Kubernetes dashboard I can see directly on which node a volume lives.

Update your manifests with the hint about 'my-node' ;-)
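Put together, the per-node PV setup described above might look like this for a hypothetical three-node cluster (node names `worker-1`..`worker-3` are assumptions, not from the original question):

```yaml
# One PV per node, named after the node it lives on. Only the
# storageClassName ties these PVs back to the volumeClaimTemplate.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node-1
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kafka-zookeeper
  local:
    path: /D/kubernetes-mount-path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1   # hypothetical node name from `kubectl get nodes`
---
# local-pv-node-2 and local-pv-node-3 follow the same pattern,
# with values worker-2 and worker-3 respectively.
```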

Votes: 2
Original content provided by Stack Overflow: https://stackoverflow.com/questions/52183750
