apiserver & etcd errors on a new control plane node joined to an existing Kubernetes cluster

Stack Overflow user
Asked on 2022-10-26 11:17:48
1 answer · 58 views · 0 followers · 0 votes

We joined a new control plane node to an existing Kubernetes cluster. When I checked the pods on the new control plane node, kube-controller-manager and kube-scheduler were running fine, but etcd and kube-apiserver were in CrashLoopBackOff.

Please find the apiserver logs below:

kubectl logs kube-apiserver -n kube-system
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I1026 06:23:27.991788       1 server.go:625] external host was not specified, using serverIP
I1026 06:23:27.992596       1 server.go:163] Version: v1.19.16
I1026 06:23:28.279281       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1026 06:23:28.279305       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1026 06:23:28.280325       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1026 06:23:28.280343       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1026 06:23:28.282885       1 client.go:360] parsed scheme: "endpoint"
I1026 06:23:28.282948       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
W1026 06:23:28.283539       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1026 06:23:29.277962       1 client.go:360] parsed scheme: "endpoint"
I1026 06:23:29.278012       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
W1026 06:23:29.278309       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1026 06:23:29.283863       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1026 06:23:30.278671       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1026 06:23:30.708000       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1026 06:23:31.925477       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1026 06:23:33.481906       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1026 06:23:34.349865       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1026 06:23:37.895359       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1026 06:23:39.056593       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1026 06:23:45.305200       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1026 06:23:46.744018       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
Error: context deadline exceeded

Please find the etcd logs below:

# kubectl logs etcd -n kube-system
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-10-26 06:25:32.919285 I | etcdmain: etcd Version: 3.4.9
2022-10-26 06:25:32.919332 I | etcdmain: Git SHA: 54ba674376
2022-10-26 06:25:32.919336 I | etcdmain: Go Version: go1.12.17
2022-10-26 06:25:32.919340 I | etcdmain: Go OS/Arch: linux/amd64
2022-10-26 06:25:32.919346 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
2022-10-26 06:25:32.919445 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-10-26 06:25:32.919500 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2022-10-26 06:25:32.920218 I | embed: name = servername
2022-10-26 06:25:32.920229 I | embed: data dir = /var/lib/etcd
2022-10-26 06:25:32.920233 I | embed: member dir = /var/lib/etcd/member
2022-10-26 06:25:32.920236 I | embed: heartbeat = 100ms
2022-10-26 06:25:32.920240 I | embed: election = 1000ms
2022-10-26 06:25:32.920254 I | embed: snapshot count = 10000
2022-10-26 06:25:32.920286 I | embed: advertise client URLs = https://serverIP:2379
2022-10-26 06:25:32.921797 W | etcdserver: could not get cluster response from http://localhost:2380: Get http://localhost:2380/members: dial tcp 127.0.0.1:2380: connect: connection refused
2022-10-26 06:25:32.922815 C | etcdmain: cannot fetch cluster info from peer urls: could not retrieve cluster information from the given URLs
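The last two etcd lines show the new member trying to fetch cluster information from http://localhost:2380 rather than from the existing peers, after noting that its data directory is already initialized as a member. One way to inspect this on a kubeadm-provisioned node (a sketch; the manifest and certificate paths below are the kubeadm defaults and may differ in your setup):

```shell
# On the failing node: check the peer list configured in the etcd static pod manifest
grep -E 'initial-cluster|advertise|listen-peer' /etc/kubernetes/manifests/etcd.yaml

# On a healthy control plane node: list the members the existing cluster knows about
ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```

A stale /var/lib/etcd directory left over from an earlier join attempt can also produce the "already initialized as member before" notice followed by a failed cluster fetch.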

Can anyone point me to how to resolve this error?


1 Answer

Stack Overflow user

Answered on 2022-10-30 14:24:20

Which container runtime are you using? You can try resetting the kubeadm configuration by running kubeadm reset on all control plane and worker nodes. You can also check the container logs in /var/log/containers for errors.
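The suggested reset can be sketched as follows (destructive: it undoes the join on the node where it runs; paths and log-file naming assume a kubeadm-provisioned node with default kubelet settings):

```shell
# On the failing control plane node: undo the kubeadm join and clean local state
sudo kubeadm reset -f

# A stale etcd data directory from the failed join can block a clean re-join
sudo rm -rf /var/lib/etcd

# Inspect the kubelet-written container logs for the crashing pods
sudo tail -n 50 /var/log/containers/etcd-*_kube-system_*.log
sudo tail -n 50 /var/log/containers/kube-apiserver-*_kube-system_*.log
```

After the reset, the node can be re-joined with a fresh `kubeadm join ... --control-plane` command; the certificate key it needs can be regenerated on an existing control plane node with `kubeadm init phase upload-certs --upload-certs`.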

Votes: 0
Original content provided by Stack Overflow.
Source: https://stackoverflow.com/questions/74206876