I'm trying to bootstrap a Kubernetes cluster, but pull the Kubernetes images from a different URL. AFAIK, this can only be done through a configuration file.
I'm not familiar with the configuration file, so I started with a simple one:
apiVersion: kubeadm.k8s.io/v1alpha2
imageRepository: my.internal.repo:8082
kind: MasterConfiguration
kubernetesVersion: v1.11.3

After running the command kubeadm init --config file.yaml, it fails after a while with the following error:
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I1015 12:05:54.066140 27275 kernel_validator.go:81] Validating kernel version
I1015 12:05:54.066324 27275 kernel_validator.go:96] Validating kernel config
[WARNING Hostname]: hostname "kube-master-0" could not be reached
[WARNING Hostname]: hostname "kube-master-0" lookup kube-master-0 on 10.11.12.246:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.5.189]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kube-master-0 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kube-master-0 localhost] and IPs [10.10.5.189 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- my.internal.repo:8082/kube-apiserver-amd64:v1.11.3
- my.internal.repo:8082/kube-controller-manager-amd64:v1.11.3
- my.internal.repo:8082/kube-scheduler-amd64:v1.11.3
- my.internal.repo:8082/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

I checked the kubelet status with systemctl status kubelet, and it is running.
I successfully pulled the images manually, for example with:

docker pull my.internal.repo:8082/kube-apiserver-amd64:v1.11.3

However, 'docker ps -a' returns no containers.
journalctl -xeu kubelet shows a lot of connection refused errors and requests going to k8s.io, and I'm having a hard time understanding the root error.
Any ideas?
Thanks in advance!
Edit 1: I tried opening the ports manually, but nothing changed.

centos@kube-master-0 ~$ sudo firewall-cmd --zone=public --list-ports
6443/tcp 5000/tcp 2379-2380/tcp 10250-10252/tcp
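For reference, the kubeadm-related ports listed above (6443, 2379-2380, 10250-10252) could have been opened with firewalld like this. This is a dry-run sketch: it only prints the commands, so remove the leading echo (and follow with firewall-cmd --reload) to actually apply them on the node:

```shell
#!/bin/sh
# Dry run: print the firewall-cmd invocations for each kubeadm port range.
# Remove 'echo' to execute for real, then run: firewall-cmd --reload
for PORT in 6443/tcp 2379-2380/tcp 10250-10252/tcp; do
    echo "firewall-cmd --zone=public --permanent --add-port=${PORT}"
done
```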
I also changed the kube version from 1.11.3 to 1.12.1, but nothing changed.
Edit 2: I realized the kubelet is trying to pull from the k8s.io repo, which means I only changed the internal repo for kubeadm. I need to do the same for the kubelet.
Oct 22 11:10:06 kube-master-1-120 kubelet[24795]: E1022 11:10:06.108764 24795 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to...on refused
Oct 22 11:10:06 kube-master-1-120 kubelet[24795]: E1022 11:10:06.110539 24795 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v...on refused

Any ideas?
Posted on 2018-10-22 19:25:38
You solved half of the problem. The final fix is likely to edit the kubelet init file (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf). You need to set the --pod-infra-container-image flag so that it references the pause container image pulled from your internal repository. The image name will look like: my.internal.repo:8082/pause:[version].
The reason is that the kubelet otherwise cannot resolve the new image tag to reference it.
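A sketch of what that drop-in change might look like. The variable name KUBELET_EXTRA_ARGS and the pause tag 3.1 are assumptions here — the exact environment variable used in 10-kubeadm.conf differs between kubeadm releases, and the pause version should match what your Kubernetes release expects (3.1 for the v1.11 line):

```ini
# Excerpt of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch).
# Point the kubelet at the internal repo for the pause (pod infra) image:
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=my.internal.repo:8082/pause:3.1"
```

After editing, run systemctl daemon-reload followed by systemctl restart kubelet so the kubelet picks up the new flag.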
Posted on 2018-10-15 14:33:00
Since proper text formatting is not available in comments, I'll post my comment as an answer:
What happens if you try to download the images before the cluster init? Example:
master-config.yaml:
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3

Command:
root@kube-master-01:~# kubeadm config images pull --config="/root/master-config.yaml"
输出:
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.11.3
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.18
[config/images] Pulled k8s.gcr.io/coredns:1.2.2

P.S.: add imageRepository: my.internal.repo:8082 before trying it.
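For the internal-repo setup in the question, these default images would also need to exist in my.internal.repo:8082 in the first place. One hedged way to get them there is to mirror them from k8s.gcr.io: pull, retag, push. The sketch below is a dry run that only prints the docker commands (the image list is abbreviated); remove the echo to execute it on a host that can reach both registries:

```shell
#!/bin/sh
# Dry run: print the mirror commands for a few of the images listed above.
# Remove 'echo' to actually pull from k8s.gcr.io, retag, and push to the
# internal repo. Extend IMAGE list to cover all images kubeadm printed.
SRC="k8s.gcr.io"
DST="my.internal.repo:8082"
for IMAGE in kube-apiserver:v1.11.3 pause:3.1 etcd:3.2.18; do
    echo "docker pull ${SRC}/${IMAGE}"
    echo "docker tag ${SRC}/${IMAGE} ${DST}/${IMAGE}"
    echo "docker push ${DST}/${IMAGE}"
done
```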
https://stackoverflow.com/questions/52817266