
kubectl get nodes - connection refused

Stack Overflow user
Asked 2018-10-23 11:29:54
1 answer · viewed 4.8K times · 0 following · score 3
  • Running Ubuntu 18.04.1 LTS in a VM
  • I seem to have the same problem as reported here on SO

I installed everything a few days ago and it was all working fine; I could connect via kubectl without a problem. However, now when I do the following:

$ kubectl get nodes
The connection to the server 192.168.40.101:6443 was refused - did you specify the right host or port?
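One quick way to narrow this down (not part of the original post) is to probe the endpoint from the error message directly; a refused connection means nothing is listening on that port at all, rather than a TLS or kubeconfig problem:

```shell
# Probe the API server health endpoint directly; host and port are
# taken from the error message above. "Connection refused" here
# confirms the apiserver process itself is down.
curl -k --connect-timeout 5 https://192.168.40.101:6443/healthz \
  || echo "connection failed: apiserver is not listening"
```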

Update: added environment settings.

$ echo $KUBECONFIG

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.40.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get pods

Even when I explicitly point the variable at the config file in my home directory:

$ ls -l .kube/config
-rw------- 1 someuser someuser 5450 Oct 15 21:58 .kube/config

it makes no difference. `kubectl config view` still returns the same data (by default, with no KUBECONFIG variable set, it looks for the config file in the location above).
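For context on why setting the variable changes nothing: kubectl resolves its config in a fixed order — the `--kubeconfig` flag, then the `$KUBECONFIG` environment variable, then `~/.kube/config`. A minimal sketch of the effective lookup (the fallback expansion here is illustrative shell, not kubectl's actual code):

```shell
# kubectl's lookup order: --kubeconfig flag > $KUBECONFIG > ~/.kube/config.
# With $KUBECONFIG unset or empty, the default path is what gets used,
# which is why exporting it to the same file changes nothing:
KUBECONFIG="${KUBECONFIG:-$HOME/.kube/config}"
echo "effective kubeconfig: $KUBECONFIG"
```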

The firewall is also off:

$ sudo ufw status
Status: inactive

I can see that kubelet is fine:

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2018-10-15 21:46:55 AEDT; 1 weeks 1 days ago

Nothing shows the apiserver running:

$ ps aux | grep kube
root      10304  9.4  1.5 1380412 136776 ?      Ssl  Oct15 1093:57 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
root      11104  0.7  0.3  43168 32476 ?        Ssl  Oct15  92:07 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
donovan   11757  0.0  0.0  14428  1044 pts/1    S+   22:39   0:00 grep --color=auto kube
root     159921  0.0  0.1  16252  8824 ?        Ssl  Oct19   5:02 /chart-repo sync --mongo-url=kubeapps-mongodb --mongo-user=root stable https://kubernetes-charts.storage.googleapis.com

~$ sudo lsof -i
COMMAND     PID            USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
systemd-r   516 systemd-resolve   12u  IPv4    28394      0t0  UDP localhost:domain
systemd-r   516 systemd-resolve   13u  IPv4    28395      0t0  TCP localhost:domain (LISTEN)
avahi-dae   627           avahi   12u  IPv4    31555      0t0  UDP *:mdns
avahi-dae   627           avahi   13u  IPv6    31556      0t0  UDP *:mdns
avahi-dae   627           avahi   14u  IPv4    31557      0t0  UDP *:47611
avahi-dae   627           avahi   15u  IPv6    31558      0t0  UDP *:35014
xrdp-sesm   750            root    7u  IPv6    33682      0t0  TCP ip6-localhost:3350 (LISTEN)
sshd       2018            root    3u  IPv4  8211858      0t0  TCP *:ssh (LISTEN)
sshd       2018            root    4u  IPv6  8211860      0t0  TCP *:ssh (LISTEN)
sshd       2161            root    3u  IPv4    44589      0t0  TCP KUBE-01:ssh->192.168.40.50:43835 (ESTABLISHED)
sshd       2254         donovan    3u  IPv4    44589      0t0  TCP KUBE-01:ssh->192.168.40.50:43835 (ESTABLISHED)
sshd       6348            root    3u  IPv4    57332      0t0  TCP KUBE-01:ssh->192.168.40.50:46583 (ESTABLISHED)
sshd       6429         donovan    3u  IPv4    57332      0t0  TCP KUBE-01:ssh->192.168.40.50:46583 (ESTABLISHED)
kubelet   10304            root    9u  IPv4    98081      0t0  TCP localhost:38077 (LISTEN)
kubelet   10304            root   19u  IPv4   118188      0t0  TCP localhost:10248 (LISTEN)
kubelet   10304            root   20u  IPv6   117597      0t0  TCP *:10250 (LISTEN)
cupsd     19145            root    6u  IPv6 21711266      0t0  TCP ip6-localhost:ipp (LISTEN)
cupsd     19145            root    7u  IPv4 21711267      0t0  TCP localhost:ipp (LISTEN)
cups-brow 19146            root    7u  IPv4 21710056      0t0  UDP *:ipp

But for the life of me I can't work out how to check whether kube-apiserver is running (via a service check or similar), because I'm guessing that's what is causing the problem?
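To answer the "how do I check kube-apiserver" part: under kubeadm the control-plane components run as static pods managed by the kubelet, not as systemd services, so `systemctl` won't show them. A sketch, assuming Docker is the container runtime (as the log paths below suggest):

```shell
# kube-apiserver is a static pod under kubeadm, so ask the container
# runtime rather than systemd:
sudo docker ps -a --filter name=kube-apiserver \
  --format 'table {{.Names}}\t{{.Status}}'

# Also check whether anything is listening on the API port at all:
sudo ss -tlnp | grep 6443 || echo "nothing listening on 6443"
```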

Update: it looks like the API server is failing because of etcd

Digging into the Docker logs:

sudo less /var/log/containers/kube-apiserver-kube-01_kube-system_kube-apiserver-00c9e483c6f0f84520d0f6b41cfb8e6489ef030aac91c8d6ac30c88bde44e9f1.log
{"log":"Flag --insecure-port has been deprecated, This flag will be removed in a future version.\n","stream":"stderr","time":"2018-10-24T10:32:08.316846636Z"}
{"log":"I1024 10:32:08.316937       1 server.go:681] external host was not specified, using 192.168.40.101\n","stream":"stderr","time":"2018-10-24T10:32:08.317214326Z"}
{"log":"I1024 10:32:08.317252       1 server.go:152] Version: v1.12.1\n","stream":"stderr","time":"2018-10-24T10:32:08.317368622Z"}
{"log":"I1024 10:32:09.025904       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.\n","stream":"stderr","time":"2018-10-24T10:32:09.026105478Z"}
{"log":"I1024 10:32:09.025981       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.\n","stream":"stderr","time":"2018-10-24T10:32:09.026159677Z"}
{"log":"I1024 10:32:09.026595       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.\n","stream":"stderr","time":"2018-10-24T10:32:09.026704563Z"}
{"log":"I1024 10:32:09.026625       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.\n","stream":"stderr","time":"2018-10-24T10:32:09.026717163Z"}
{"log":"F1024 10:32:29.031135       1 storage_decorator.go:57] Unable to create storage backend: config (\u0026{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true true 1000 0xc420ba1cb0 \u003cnil\u003e 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: connect: connection refused)\n","stream":"stderr","time":"2018-10-24T10:32:29.032482723Z"}
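The fatal `F1024` line above says the apiserver cannot reach etcd on 127.0.0.1:2379, so the next step is to check the etcd static pod in the same way (again assuming Docker as the runtime; the log filename shown is illustrative):

```shell
# etcd is also a static pod on a kubeadm control plane; check whether
# its container is up, restarting, or exited:
sudo docker ps -a --filter name=etcd \
  --format 'table {{.Names}}\t{{.Status}}'

# Its log file lives alongside the apiserver log inspected above:
ls /var/log/containers/ | grep etcd
```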

So:

  1. Why is it failing inside Docker?
  2. On an Ubuntu machine, how do I determine whether all the k8s bits are running?
  3. How do I troubleshoot this further (so I can get kubectl talking to the cluster again)?

1 Answer

Stack Overflow user

Answered 2020-01-16 09:19:11

I had the same problem and was able to resolve it.

Disable swap temporarily with the commands below; note that if your system reboots, the problem will occur again.

sudo -i
swapoff -a
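After running `swapoff -a`, you can confirm swap is really disabled (SwapTotal should read 0 kB):

```shell
# Verify that no swap is active; a value of "0 kB" confirms
# swapoff took effect:
grep SwapTotal /proc/meminfo
```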

The permanent fix is to remove the swap entry from /etc/fstab (e.g. edit it with vim /etc/fstab).
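Rather than deleting the line, one common approach (a sketch, not from the original answer) is to comment it out with `sed` so it can be restored later; `-i.bak` keeps a backup copy:

```shell
# Comment out any /etc/fstab line containing a " swap " field;
# a backup is written to /etc/fstab.bak before editing in place:
sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
```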

Score: 0

Original page content provided by Stack Overflow; translation supported by Tencent Cloud.
Original link:

https://stackoverflow.com/questions/52947938
