I frequently get NodeUnderDiskPressure on my pods, which are all running in Minikube. Checking df -h via minikube ssh, the highest usage on any mount is 50%. In fact, one is at 50% and the other five are under 10%.
$ df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 7.3G 503M 6.8G 7% /
devtmpfs 7.3G 0 7.3G 0% /dev
tmpfs 7.4G 0 7.4G 0% /dev/shm
tmpfs 7.4G 9.2M 7.4G 1% /run
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
/dev/sda1 17G 7.5G 7.8G 50% /mnt/sda1
$ df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
rootfs 1.9M 4.1K 1.9M 1% /
devtmpfs 1.9M 324 1.9M 1% /dev
tmpfs 1.9M 1 1.9M 1% /dev/shm
tmpfs 1.9M 657 1.9M 1% /run
tmpfs 1.9M 14 1.9M 1% /sys/fs/cgroup
/dev/sda1 9.3M 757K 8.6M 8% /mnt/sda1

The disk pressure usually goes away after 1 to 5 minutes. Strangely, restarting Minikube doesn't seem to speed this up. I've tried removing all evicted pods, but, again, disk usage doesn't look high.
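For reference, the kubelet's default hard-eviction thresholds are percentage-based (nodefs.available<10%, imagefs.available<15%), so a mount only trips disk pressure when it gets close to full. A quick sketch for flagging any mount near the stricter 15% limit, assuming a POSIX df and awk (the 85% cutoff is just the complement of imagefs.available<15%):

```shell
# Print every mount whose Use% is at or above 85%.
# df -P forces POSIX single-line output; column 5 is Use%, column 6 the mount point.
df -hP | awk 'NR > 1 && $5+0 >= 85 { print $6 " is at " $5 " usage" }'
```

On the df output above, this would print nothing, which is exactly the puzzle: no mount is anywhere near the default thresholds.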
The Docker images I'm using are each under 2GB, and I'm only trying to spin up a few of them, so that should still leave me plenty of space.
Here is some kubectl describe output:
$ kubectl describe po/consumer-lag-reporter-3832025036-wlfnt
Name: consumer-lag-reporter-3832025036-wlfnt
Namespace: default
Node: <none>
Labels: app=consumer-lag-reporter
pod-template-hash=3832025036
tier=monitor
type=monitor
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"consumer-lag-reporter-3832025036","uid":"342b0f72-9d12-11e8-a735...
Status: Pending
IP:
Created By: ReplicaSet/consumer-lag-reporter-3832025036
Controlled By: ReplicaSet/consumer-lag-reporter-3832025036
Containers:
consumer-lag-reporter:
Image: avery-image:latest
Port: <none>
Command:
/bin/bash
-c
Args:
newrelic-admin run-program python manage.py lag_reporter_runner --settings-module project.settings
Environment Variables from:
local-config ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sjprm (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-sjprm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sjprm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 15s (x7 over 46s) default-scheduler No nodes are available that match all of the following predicates:: NodeUnderDiskPressure (1).

Is this a bug? What else can I do to debug this?
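One useful debugging step for this kind of failure is to look at the node's own DiskPressure condition, since the scheduler predicate mirrors that node condition. A sketch using standard kubectl commands; the node name "minikube" is the single-node default, and the whole thing is guarded so it is a no-op where kubectl is not on the PATH:

```shell
# Inspect the node's condition table, including DiskPressure and its
# reason/message fields, which say why the kubelet set the condition.
if command -v kubectl >/dev/null 2>&1; then
  kubectl describe node minikube | grep -A 8 'Conditions:'

  # Pull out just the DiskPressure condition via a JSONPath filter.
  kubectl get node minikube \
    -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")]}'
fi
```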
Posted on 2018-08-23 21:55:23
What I tried:

1) kubectl get pods -a
2) minikube ssh + docker images
3) minikube ssh + docker ps -a

As shown in my question, disk usage still looked low. I simply recreated the minikube cluster using the --disk-size flag, and that solved my problem. The key takeaway is that even though df showed I was barely using any disk, making the disk bigger helped.
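The recreate-with-a-bigger-disk fix can be sketched as follows. --disk-size is a documented flag of minikube start (the default is 20000mb); 50g here is an arbitrary example size, and the snippet is guarded so it is a no-op where minikube is not installed:

```shell
# Tear down the existing cluster and start a fresh one with a larger VM disk.
# Note: minikube delete destroys the current cluster and its state.
if command -v minikube >/dev/null 2>&1; then
  minikube delete
  minikube start --disk-size=50g
else
  echo "minikube not on PATH; skipping"
fi
```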
https://stackoverflow.com/questions/51795510