
0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports when starting the Prometheus exporter

Stack Overflow user
Asked on 2022-08-15 14:35:06
1 answer · 1.8K views · 0 followers · 0 votes

After installing Prometheus with Helm in my Kubernetes cluster, the pod shows the following error:

0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

Here is the deployment YAML:

apiVersion: v1
kind: Pod
metadata:
  name: kube-prometheus-1660560589-node-exporter-n7rzg
  generateName: kube-prometheus-1660560589-node-exporter-
  namespace: reddwarf-monitor
  uid: 73986565-ccd8-421c-bcbb-33879437c4f3
  resourceVersion: '71494023'
  creationTimestamp: '2022-08-15T10:51:07Z'
  labels:
    app.kubernetes.io/instance: kube-prometheus-1660560589
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: node-exporter
    controller-revision-hash: 65c69f9b58
    helm.sh/chart: node-exporter-3.0.8
    pod-template-generation: '1'
  ownerReferences:
    - apiVersion: apps/v1
      kind: DaemonSet
      name: kube-prometheus-1660560589-node-exporter
      uid: 921f98b9-ccc9-4e84-b092-585865bca024
      controller: true
      blockOwnerDeletion: true
status:
  phase: Pending
  conditions:
    - type: PodScheduled
      status: 'False'
      lastProbeTime: null
      lastTransitionTime: '2022-08-15T10:51:07Z'
      reason: Unschedulable
      message: >-
        0/1 nodes are available: 1 node(s) didn't have free ports for the
        requested pod ports.
  qosClass: BestEffort
spec:
  volumes:
    - name: proc
      hostPath:
        path: /proc
        type: ''
    - name: sys
      hostPath:
        path: /sys
        type: ''
    - name: kube-api-access-9fj8v
      projected:
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
        defaultMode: 420
  containers:
    - name: node-exporter
      image: docker.io/bitnami/node-exporter:1.3.1-debian-11-r23
      args:
        - '--path.procfs=/host/proc'
        - '--path.sysfs=/host/sys'
        - '--web.listen-address=0.0.0.0:9100'
        - >-
          --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
        - >-
          --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
      ports:
        - name: metrics
          hostPort: 9100
          containerPort: 9100
          protocol: TCP
      resources: {}
      volumeMounts:
        - name: proc
          readOnly: true
          mountPath: /host/proc
        - name: sys
          readOnly: true
          mountPath: /host/sys
        - name: kube-api-access-9fj8v
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      livenessProbe:
        httpGet:
          path: /
          port: metrics
          scheme: HTTP
        initialDelaySeconds: 120
        timeoutSeconds: 5
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 6
      readinessProbe:
        httpGet:
          path: /
          port: metrics
          scheme: HTTP
        initialDelaySeconds: 30
        timeoutSeconds: 5
        periodSeconds: 10
        successThreshold: 1
        failureThreshold: 6
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
      securityContext:
        runAsUser: 1001
        runAsNonRoot: true
  restartPolicy: Always
  terminationGracePeriodSeconds: 30
  dnsPolicy: ClusterFirst
  serviceAccountName: kube-prometheus-1660560589-node-exporter
  serviceAccount: kube-prometheus-1660560589-node-exporter
  hostNetwork: true
  hostPID: true
  securityContext:
    fsGroup: 1001
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchFields:
              - key: metadata.name
                operator: In
                values:
                  - k8smasterone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app.kubernetes.io/instance: kube-prometheus-1660560589
                app.kubernetes.io/name: node-exporter
            namespaces:
              - reddwarf-monitor
            topologyKey: kubernetes.io/hostname
  schedulerName: default-scheduler
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
    - key: node.kubernetes.io/disk-pressure
      operator: Exists
      effect: NoSchedule
    - key: node.kubernetes.io/memory-pressure
      operator: Exists
      effect: NoSchedule
    - key: node.kubernetes.io/pid-pressure
      operator: Exists
      effect: NoSchedule
    - key: node.kubernetes.io/unschedulable
      operator: Exists
      effect: NoSchedule
    - key: node.kubernetes.io/network-unavailable
      operator: Exists
      effect: NoSchedule
  priority: 0
  enableServiceLinks: true
  preemptionPolicy: PreemptLowerPriority
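
The scheduling-relevant lines in the spec above are the hostPort request and hostNetwork. With hostNetwork: true the container binds directly on the node, and the scheduler will refuse to place two pods on the same node if both reserve the same hostPort. Condensed as an excerpt:

```yaml
# The two settings that drive the scheduler's port check (excerpt from above):
ports:
  - name: metrics
    hostPort: 9100      # reserved per node by the scheduler
    containerPort: 9100
hostNetwork: true        # container binds directly on the node's interfaces
```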

I have checked the host and found that port 9100 is free, so why does it still say this pod has no free port? What should I do to avoid this problem? Here is the check of host port 9100:

[root@k8smasterone grafana]# lsof -i:9100
[root@k8smasterone grafana]#
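
Note that lsof and netstat only show actual listening sockets, while the scheduler's "didn't have free ports" check is made against hostPort declarations in the specs of other pods on the node (for example, a leftover node-exporter pod from an earlier Helm release). A minimal sketch of that spec-level check, using hypothetical pod data:

```python
# Mimic the scheduler's hostPort-conflict check: it scans pod specs for
# hostPort reservations rather than probing live sockets on the node.
def pods_reserving_hostport(pods, port):
    hits = []
    for pod in pods:
        for container in pod["spec"].get("containers", []):
            for p in container.get("ports", []):
                if p.get("hostPort") == port:
                    hits.append(pod["metadata"]["name"])
    return hits

# Hypothetical data shaped like `kubectl get pods -A -o json` output
pods = [
    {"metadata": {"name": "old-node-exporter-abc12"},
     "spec": {"containers": [{"ports": [{"containerPort": 9100, "hostPort": 9100}]}]}},
    {"metadata": {"name": "web-58d9f"},
     "spec": {"containers": [{"ports": [{"containerPort": 8080}]}]}},
]
print(pods_reserving_hostport(pods, 9100))  # → ['old-node-exporter-abc12']
```

In practice, `kubectl get pods --all-namespaces -o yaml | grep -B5 'hostPort: 9100'` is a quick way to find which pod holds the reservation.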

Here is the pod's describe output:

➜  ~ kubectl describe pod kube-prometheus-1660560589-node-exporter-n7rzg -n reddwarf-monitor
Name:           kube-prometheus-1660560589-node-exporter-n7rzg
Namespace:      reddwarf-monitor
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=kube-prometheus-1660560589
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=node-exporter
                controller-revision-hash=65c69f9b58
                helm.sh/chart=node-exporter-3.0.8
                pod-template-generation=1
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  DaemonSet/kube-prometheus-1660560589-node-exporter
Containers:
  node-exporter:
    Image:      docker.io/bitnami/node-exporter:1.3.1-debian-11-r23
    Port:       9100/TCP
    Host Port:  9100/TCP
    Args:
      --path.procfs=/host/proc
      --path.sysfs=/host/sys
      --web.listen-address=0.0.0.0:9100
      --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
      --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
    Liveness:     http-get http://:metrics/ delay=120s timeout=5s period=10s #success=1 #failure=6
    Readiness:    http-get http://:metrics/ delay=30s timeout=5s period=10s #success=1 #failure=6
    Environment:  <none>
    Mounts:
      /host/proc from proc (ro)
      /host/sys from sys (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9fj8v (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  proc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:
  kube-api-access-9fj8v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age                      From               Message
  ----     ------            ----                     ----               -------
  Warning  FailedScheduling  2m54s (x233 over 3h53m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

Here is the netstat output:

[root@k8smasterone ~]# netstat -plant |grep 9100
[root@k8smasterone ~]#

I also tried to allow the pod to run on the master node by adding the following toleration:

tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master

It still did not solve the problem.
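
As an aside, the toleration shown targets only the legacy master taint; since Kubernetes 1.24 control-plane nodes are tainted with node-role.kubernetes.io/control-plane instead. A sketch covering both keys:

```yaml
tolerations:
  - key: node-role.kubernetes.io/master        # pre-1.24 taint key
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane # 1.24+ taint key
    operator: Exists
    effect: NoSchedule
```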


1 Answer

Stack Overflow user

Answered on 2022-08-15 16:09:01

When you configure a pod with hostNetwork: true, the containers running in that pod can directly see the network interfaces of the host machine where the pod was started.

The container port is exposed on the external network at <hostIP>:<hostPort>, where hostPort is the port requested by the user in the hostPort configuration.

To work around your problem, you have two options:

  • set hostNetwork: false
  • choose a different hostPort (preferably in the range 49152 to 65535)
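
For the second option, a sketch of the relevant part of the node-exporter pod template with a port from that range (52100 here is an arbitrary example value):

```yaml
# Excerpt with the hostPort moved into the dynamic/private range (49152-65535).
containers:
  - name: node-exporter
    args:
      - '--web.listen-address=0.0.0.0:52100'
    ports:
      - name: metrics
        hostPort: 52100
        containerPort: 52100
        protocol: TCP
```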
Votes: 1
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/73362483