I installed a clean K8s cluster in virtual machines (Debian 10). After the installation and integration into my landscape, I checked the connectivity from a test alpine image. The result: outgoing connections from the pod did not work, and there was no information in the coreDNS logs. As a workaround I built an image that overrides /etc/resolv.conf and replaces the DNS entries (e.g. setting 1.1.1.1 as nameserver). After that little "hack", the connection to the internet worked perfectly. But this is not a long-term solution and I want to use the official way. In the K8s coreDNS documentation I found the forward section, which explains the flag as an option to forward queries to a predefined local resolver. I think the forwarding to the local resolv.conf, and with it the resolution process, is not working correctly. Can someone help me with this issue?
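(For context: instead of baking nameservers into the image, Kubernetes can override a pod's resolv.conf declaratively via `dnsPolicy: "None"` plus `dnsConfig`. A minimal sketch only; the pod name and the 1.1.1.1 nameserver mirror the hack described above and are illustrative:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test            # illustrative name
spec:
  containers:
  - name: alpine
    image: alpine
    command: ["sleep", "3600"]
  dnsPolicy: "None"         # skip the cluster DNS entirely
  dnsConfig:
    nameservers:
    - 1.1.1.1               # written into the pod's /etc/resolv.conf
    options:
    - name: ndots
      value: "2"
```

Note the trade-off: with `dnsPolicy: "None"` the pod bypasses the cluster DNS, so in-cluster service names will no longer resolve, which is why it is only a diagnostic workaround, not a fix for coreDNS itself.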
Basic setup:
Status of the CoreDNS pods:
kube-system coredns-xxxx 1/1 Running 1 26h
kube-system coredns-yyyy 1/1 Running 1 26h
CoreDNS logs:
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
CoreDNS config:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: ""
  name: coredns
  namespace: kube-system
  resourceVersion: "219"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: xxx
Output from the alpine image:
/ # nslookup -debug google.de
;; connection timed out; no servers could be reached
Output of resolv.conf in the pod:
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search development.svc.cluster.local svc.cluster.local cluster.local invalid
options ndots:5
Output of resolv.conf on the host:
cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 213.136.95.11
nameserver 213.136.95.10
search invalid
Output of /run/flannel/subnet.env on the host:
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Output of kubectl get pods -n kube-system -o wide:
coredns-54694b8f47-4sm4t 1/1 Running 0 14d 10.244.1.48 xxx3-node-1 <none> <none>
coredns-54694b8f47-6c7zh 1/1 Running 0 14d 10.244.0.43 xxx2-master <none> <none>
coredns-54694b8f47-lcthf 1/1 Running 0 14d 10.244.2.88 xxx4-node-2 <none> <none>
etcd-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-apiserver-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-controller-manager-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-flannel-ds-amd64-4w8zl 1/1 Running 8 28d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-flannel-ds-amd64-w7m44 1/1 Running 7 28d xxx.xx.xx.xxx xxx3-node-1 <none> <none>
kube-flannel-ds-amd64-xztqm 1/1 Running 6 28d xxx.xx.xx.xxx xxx4-node-2 <none> <none>
kube-proxy-dfs85 1/1 Running 4 28d xxx.xx.xx.xxx xxx4-node-2 <none> <none>
kube-proxy-m4hl2 1/1 Running 4 28d xxx.xx.xx.xxx xxx3-node-1 <none> <none>
kube-proxy-s7p4s 1/1 Running 8 28d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-scheduler-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
Posted on 2020-09-12 09:37:24
Problem:
The (two) coreDNS pods were deployed only on the master node. You can check your setup with this command:
kubectl get pods -n kube-system -o wide | grep coredns
Solution:
I was able to solve the problem by scaling up the coreDNS pods and editing the deployment config. The following commands have to be executed:
kubectl edit deployment coredns -n kube-system
Set the replicas value to the number of nodes, e.g. 3, then force a rollout and verify:
kubectl patch deployment coredns -n kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"force-update/updated-at\":\"$(date +%s)\"}}}}}"
kubectl get pods -n kube-system -o wide | grep coredns
Source
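The interactive edit above can also be scripted; a sketch with kubectl, assuming a three-node cluster (the replica count of 3 is illustrative, and `rollout restart` is used here as an alternative to patching an annotation by hand):

```shell
# Scale CoreDNS to one replica per node (3 here is illustrative).
kubectl scale deployment coredns -n kube-system --replicas=3

# Trigger a fresh rollout so the pods get rescheduled.
kubectl rollout restart deployment coredns -n kube-system

# Verify the pods are now spread across the nodes.
kubectl get pods -n kube-system -o wide | grep coredns
```

Scaling alone does not guarantee one pod per node; if all replicas still land on the master, a pod anti-affinity rule or a check of node taints would be the next step.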
Tip
If you still have problems with your coreDNS and your DNS resolution only works sporadically, take a look at this post:
https://stackoverflow.com/questions/63659388
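One more knob worth knowing: if forwarding to the node's /etc/resolv.conf keeps misbehaving, the Corefile's forward line can name explicit upstreams instead. A sketch only, based on the ConfigMap shown above; 1.1.1.1 and 8.8.8.8 are illustrative public resolvers, not a recommendation:

```
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    # Forward everything else to explicit upstreams instead of /etc/resolv.conf.
    forward . 1.1.1.1 8.8.8.8
    cache 30
    loop
    reload
    loadbalance
}
```

After editing the coredns ConfigMap this way, the pods pick the change up via the reload plugin (or after a rollout restart).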