
Installing Kubernetes on Ubuntu 18.04 LTS (with Docker) - fails on init

Server Fault user
Asked on 2019-08-22 12:41:48
1 answer · 1.6K views · 0 followers · 0 votes

I am trying to install Kubernetes on a VM running Ubuntu 18.04 LTS, and I am having problems when trying to initialize the system: the kubeadm init command fails (full log below).

VM: 2 CPUs, 512 MB of RAM, 100 GB disk, running under VMware ESXi 6.
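As a side note, the kubeadm installation docs list minimums of 2 CPUs and 2 GB of RAM for a control-plane node, so a quick resource check before running init can rule that out (a sketch; the ~1700 MB threshold is the one kubeadm's own preflight uses):

```shell
# Check the node against kubeadm's documented minimums
# (2 CPUs and 2 GB of RAM for a control-plane node).
cpus=$(nproc)
mem_mb=$(free -m | awk '/^Mem:/{print $2}')
echo "CPUs: $cpus, RAM: ${mem_mb} MB"
[ "$cpus" -ge 2 ]    || echo "WARNING: fewer than 2 CPUs"
[ "$mem_mb" -ge 1700 ] || echo "WARNING: under ~2 GB RAM; kubeadm preflight may complain"
```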

OS: Ubuntu 18.04 LTS server install, fully updated via apt update and apt upgrade before starting the Docker and Kubernetes installation.

Docker was installed following the instructions here, and completed without errors: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

Kubernetes was installed following the instructions here, except for the Docker part (following those instructions produces a preflight error about cgroupfs): https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/
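That cgroupfs preflight warning usually means Docker's cgroup driver differs from the systemd driver the kubelet expects. The kubernetes.io container-runtime page linked above suggests switching Docker to the systemd driver; a sketch of that configuration (the daemon.json contents are the ones recommended in those docs):

```shell
# Switch Docker to the systemd cgroup driver, which kubeadm expects.
# /etc/docker/daemon.json is Docker's default daemon config path.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
docker info -f '{{.CgroupDriver}}'   # should now report: systemd
```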

The whole installation appeared to go smoothly, with no errors reported, but attempting to start Kubernetes fails as shown in the log below.

I am completely new to both Docker and Kubernetes; although I understand the main concepts and have tried the online tutorials on kubernetes.io, I cannot make further progress until I can get a working system installed. Everything hangs for 4 minutes while kubeadm tries to start the cluster, then it exits with the timeout shown below.

root@k8s-master-dev:~# sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-dev kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.24.0.100]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-dev localhost] and IPs [10.24.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-dev localhost] and IPs [10.24.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

I have looked through the journalctl data and the Docker logs, but apart from a mass of timeouts I cannot see anything that explains the actual error. Can anyone tell me where I should be looking, and what the most likely cause of the problem is?

Already tried: deleting all iptables rules and setting the default policies to ACCEPT; running with Docker installed per the vitux.com instructions (which gives a preflight warning but no errors, and the same timeout when trying to init Kubernetes).
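For reference, the iptables flush described above can be sketched as follows; note that Docker and kube-proxy re-create their own rules when they start, so this on its own is unlikely to cure an init timeout:

```shell
# Set default policies to ACCEPT, then flush rules and chains in each table.
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F                 # delete all rules in the filter table
sudo iptables -X                 # delete user-defined chains
sudo iptables -t nat -F    && sudo iptables -t nat -X
sudo iptables -t mangle -F && sudo iptables -t mangle -X
```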

Update: following @Crou's comment, here is what now happens if I try kubeadm init as root:

root@k8s-master-dev:~# uptime
 16:34:49 up  7:23,  3 users,  load average: 10.55, 16.77, 19.31
root@k8s-master-dev:~# kubeadm init
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10251]: Port 10251 is in use
        [ERROR Port-10252]: Port 10252 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Re the very high load shown by uptime: that is left over from the first init attempt, and the load stays very high unless a kubeadm reset is done to clear everything down.
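The "port in use" and "already exists" errors in the second log are the leftovers of the first, partially-started control plane, which kubeadm refuses to init over. Clearing that state down before retrying can be sketched as (flags as in kubeadm v1.15):

```shell
# Tear down the partial control plane left behind by the failed init:
# kubeadm reset stops the static pods and removes /etc/kubernetes/manifests,
# the generated PKI, and /var/lib/etcd (-f skips the confirmation prompt).
sudo kubeadm reset -f
sudo systemctl restart kubelet docker
# Then retry with the original options:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```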


1 Answer

Server Fault user
Answered on 2019-08-29 13:37:24

Further experimentation: in addition to the VMware ESXi hypervisor, we also run the XenServer hypervisor on the same hardware platform. I tried building the same VM on one of the Xen blades, but it turned out to be impossible to install Ubuntu at all: the installation failed at the "installing kernel" stage. I tried two different installation images, and both failed at the same place. The VM spec was the same as under ESXi: 2 cores, 512 MB of RAM, 100 GB hard disk.

Solution: we eventually got around the problem by not using a VM at all. We installed Ubuntu 18.04 directly on the hardware, with no hypervisor or VM involved, and then added Docker and Kubernetes; this time the kubeadm init command completed correctly with the expected messages. The blade we installed on has 2x Xeon processors, 48 GB of RAM, and a 1 TB hard disk.

Votes: 0
Original page content provided by Server Fault; translation supported by Tencent Cloud's IT-domain translation engine.
Original link: https://serverfault.com/questions/980313
