
Liveness probe and readiness probe not called within the expected period

Stack Overflow user
Asked on 2021-02-21 05:38:21
1 answer · 10.2K views · 0 followers · 2 votes

On GKE, I am trying to use readiness/liveness probes and publish alerts with Cloud Monitoring (https://cloud.google.com/monitoring/alerts/using-alerting-ui).

As a test, I created a pod with readiness/liveness probes that, as I expected, fail on every check.

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 0
      periodSeconds: 10      
      timeoutSeconds: 10
      successThreshold: 1
      failureThreshold: 3
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 20
      periodSeconds: 60
      timeoutSeconds: 30      
      successThreshold: 1
      failureThreshold: 3 
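To reproduce this, the manifest above can be applied and the pod's events watched; a quick sketch, assuming the manifest is saved as `liveness-http.yaml` (a hypothetical filename):

```shell
# Create the pod and stream only the events that belong to it.
kubectl apply -f liveness-http.yaml
kubectl get events --watch --field-selector involvedObject.name=liveness-http
```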

When checking the GCP logs, both kinds of error logs initially appear at the interval set by each probe's periodSeconds.

Readiness probe: every 10 seconds

2021-02-21 13:26:30.000 JST Readiness probe failed: HTTP probe failed with statuscode: 500
2021-02-21 13:26:40.000 JST Readiness probe failed: HTTP probe failed with statuscode: 500

Liveness probe: every 1 minute

2021-02-21 13:25:40.000 JST Liveness probe failed: HTTP probe failed with statuscode: 500
2021-02-21 13:26:40.000 JST Liveness probe failed: HTTP probe failed with statuscode: 500

However, after running this pod for a few minutes:

  • the liveness probe checks are no longer called
  • the readiness probe checks are still called, but the interval becomes longer (up to about 10 minutes)
$ kubectl get event
LAST SEEN   TYPE      REASON      OBJECT              MESSAGE
30m         Normal    Pulling     pod/liveness-http   Pulling image "k8s.gcr.io/liveness"
25m         Warning   Unhealthy   pod/liveness-http   Readiness probe failed: HTTP probe failed with statuscode: 500
20m         Warning   BackOff     pod/liveness-http   Back-off restarting failed container
20m         Normal    Scheduled   pod/liveness-http   Successfully assigned default/liveness-http to gke-cluster-default-pool-8bc9c75c-rfgc
17m         Normal    Pulling     pod/liveness-http   Pulling image "k8s.gcr.io/liveness"
17m         Normal    Pulled      pod/liveness-http   Successfully pulled image "k8s.gcr.io/liveness"
17m         Normal    Created     pod/liveness-http   Created container liveness
20m         Normal    Started     pod/liveness-http   Started container liveness
4m59s       Warning   Unhealthy   pod/liveness-http   Readiness probe failed: HTTP probe failed with statuscode: 500
17m         Warning   Unhealthy   pod/liveness-http   Liveness probe failed: HTTP probe failed with statuscode: 500
17m         Normal    Killing     pod/liveness-http   Container liveness failed liveness probe, will be restarted

In my plan, I will create alerting policies with conditions like:

  • if a liveness probe error occurs 3 times within 3 minutes

However, if the probe checks are not called as I expect, these policies do not work; the alert gets resolved even though the pod is not running.

Why is the liveness probe not running, and why has the readiness probe interval changed?

Note: if there is another good alerting policy for checking pod liveness, I don't mind this behavior. I would appreciate it if someone could suggest an ideal alerting policy for checking pods.


1 answer

Stack Overflow user

Answered on 2021-03-09 17:47:09

Background

In the Configure Liveness, Readiness and Startup Probes documentation, you can find the following information:

The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs. The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.

As the GKE master is managed by Google, you won't find kubelet logs using the CLI (you can try to use Stackdriver). I tested it on a Kubeadm cluster with the verbosity level set to 8.
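On a self-managed node such as the Kubeadm one, the kubelet usually runs as a systemd unit, so its probe results can be read directly from the journal; a minimal sketch, assuming a systemd-based node with the unit named `kubelet`:

```shell
# On the node itself: show the last hour of kubelet logs,
# keeping only readiness/liveness probe results.
journalctl -u kubelet --since "1 hour ago" | grep -E 'Readiness probe|Liveness probe'
```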

When you use $ kubectl get events, you only get events from the last hour (this can be changed in the Kubernetes setup -- with Kubeadm -- but I don't think it can be changed in GKE, as the master is managed by Google).

$ kubectl get events
LAST SEEN   TYPE      REASON                    OBJECT              MESSAGE
37m         Normal    Starting                  node/kubeadm        Starting kubelet.
...
33m         Normal    Scheduled                 pod/liveness-http   Successfully assigned default/liveness-http to kubeadm
33m         Normal    Pulling                   pod/liveness-http   Pulling image "k8s.gcr.io/liveness"
33m         Normal    Pulled                    pod/liveness-http   Successfully pulled image "k8s.gcr.io/liveness" in 893.953679ms
33m         Normal    Created                   pod/liveness-http   Created container liveness
33m         Normal    Started                   pod/liveness-http   Started container liveness
3m12s       Warning   Unhealthy                 pod/liveness-http   Readiness probe failed: HTTP probe failed with statuscode: 500
30m         Warning   Unhealthy                 pod/liveness-http   Liveness probe failed: HTTP probe failed with statuscode: 500
8m17s       Warning   BackOff                   pod/liveness-http   Back-off restarting failed container

The same command again after ~1 hour:

$ kubectl get events
LAST SEEN   TYPE      REASON      OBJECT              MESSAGE
33s         Normal    Pulling     pod/liveness-http   Pulling image "k8s.gcr.io/liveness"
5m40s       Warning   Unhealthy   pod/liveness-http   Readiness probe failed: HTTP probe failed with statuscode: 500
15m         Warning   BackOff     pod/liveness-http   Back-off restarting failed container

Tests

The Readiness Probe check was executed every 10 seconds, for more than an hour.

Mar 09 14:48:34 kubeadm kubelet[3855]: I0309 14:48:34.222085    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 14:48:44 kubeadm kubelet[3855]: I0309 14:48:44.221782    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 14:48:54 kubeadm kubelet[3855]: I0309 14:48:54.221828    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
...
Mar 09 15:01:34 kubeadm kubelet[3855]: I0309 15:01:34.222491    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 15:01:44 kubeadm kubelet[3855]: I0309 15:01:44.221877    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 15:01:54 kubeadm kubelet[3855]: I0309 15:01:54.221976    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
...
Mar 09 15:10:14 kubeadm kubelet[3855]: I0309 15:10:14.222163    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 15:10:24 kubeadm kubelet[3855]: I0309 15:10:24.221744    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 15:10:34 kubeadm kubelet[3855]: I0309 15:10:34.223877    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
...
Mar 09 16:04:14 kubeadm kubelet[3855]: I0309 16:04:14.222853    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 16:04:24 kubeadm kubelet[3855]: I0309 16:04:24.222531    3855 prober.go:117] Readiness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500

There are also Liveness probe entries:

Mar 09 16:12:58 kubeadm kubelet[3855]: I0309 16:12:58.462878    3855 prober.go:117] Liveness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 16:13:58 kubeadm kubelet[3855]: I0309 16:13:58.462906    3855 prober.go:117] Liveness probe for "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a):liveness" failed (failure): HTTP probe failed with statuscode: 500
Mar 09 16:14:58 kubeadm kubelet[3855]: I0309 16:14:58.465470    3855 kuberuntime_manager.go:656] Container "liveness" ({"docker" "95567f85708ffac8b34b6c6f2bdb49d8eb57e7704b7b416083c7f296dd40cd0b"}) of pod liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a): Container liveness failed liveness probe, will be restarted
Mar 09 16:14:58 kubeadm kubelet[3855]: I0309 16:14:58.465587    3855 kuberuntime_manager.go:712] Killing unwanted container "liveness"(id={"docker" "95567f85708ffac8b34b6c6f2bdb49d8eb57e7704b7b416083c7f296dd40cd0b"}) for pod "liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a)"

Total test time:

$ kubectl get po -w
NAME            READY   STATUS    RESTARTS   AGE
liveness-http   0/1     Running   21         99m
liveness-http   0/1     CrashLoopBackOff   21         101m
liveness-http   0/1     Running            22         106m
liveness-http   1/1     Running            22         106m
liveness-http   0/1     Running            22         106m
liveness-http   0/1     Running            23         109m
liveness-http   1/1     Running            23         109m
liveness-http   0/1     Running            23         109m
liveness-http   0/1     CrashLoopBackOff   23         112m
liveness-http   0/1     Running            24         117m
liveness-http   1/1     Running            24         117m
liveness-http   0/1     Running            24         117m

Conclusion

Liveness probe checks are no longer called

The liveness check is created when Kubernetes creates the pod, and is recreated every time the Pod is restarted. In your configuration you set initialDelaySeconds: 20, so after the pod is created, Kubernetes will wait 20 seconds and then call the liveness probe 3 times (as you set failureThreshold: 3). After 3 failures, Kubernetes will restart the pod according to the RestartPolicy. In the logs you can also find:

代码语言:javascript
复制
Mar 09 16:14:58 kubeadm kubelet[3855]: I0309 16:14:58.465470    3855 kuberuntime_manager.go:656] Container "liveness" ({"docker" "95567f85708ffac8b34b6c6f2bdb4
9d8eb57e7704b7b416083c7f296dd40cd0b"}) of pod liveness-http_default(8c87a08e-34aa-4bb1-be9b-fdca39a4562a): Container liveness failed liveness probe, will be restarted
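Given those settings, the time from container start to the first liveness-driven restart can be roughly estimated as initialDelaySeconds + failureThreshold × periodSeconds (an upper-bound sketch; the kubelet adds jitter, so real timings vary slightly):

```shell
# Rough upper bound on seconds from container start to the first
# liveness-triggered restart: wait initialDelaySeconds, then allow
# failureThreshold failed probes spaced periodSeconds apart.
INITIAL_DELAY=20
PERIOD=60
FAILURE_THRESHOLD=3
echo $((INITIAL_DELAY + FAILURE_THRESHOLD * PERIOD))  # prints 200
```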

Why is it restarted? The answer can be found in Container probes:

livenessProbe: Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy.

The default restart policy in GKE is Always, so your pod will be restarted over and over again.
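The policy can also be set explicitly at the pod level; a minimal fragment (valid values are Always, OnFailure and Never):

```yaml
spec:
  restartPolicy: Always   # the default; OnFailure or Never would change this behavior
```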

Readiness probe checks are still called, but the interval becomes longer (up to about 10 minutes)

I think you came to this conclusion based on $ kubectl get events and $ kubectl describe po. In both cases, events are removed after 1 hour by default. In my Tests section you can see that the Readiness probe entries run from 14:48:34 to 16:04:24, so Kubernetes called the Readiness Probe every 10 seconds.

Why is the liveness probe not running, and why has the readiness probe interval changed?

As I showed in the Tests section, the Readiness probe did not change. What was misleading in this case was the $ kubectl get events output. As for the Liveness Probe, it is still being called, but only 3 times, after which the pod is restarted/recreated. I have also included the output of $ kubectl get po -w. When the pod is recreated, you can find those liveness probes in the kubelet logs.

In my plan, I will create alerting policies with conditions like:

  • if a liveness probe error occurs 3 times within 3 minutes

With your current setup, if the liveness probe fails 3 times, the pod will be restarted. In that case you can create an alert on each restart, using:

Metric: kubernetes.io/container/restart_count
Resource type: k8s_container
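A Cloud Monitoring alerting policy on that metric can be described in a policy file (e.g. for `gcloud alpha monitoring policies create --policy-from-file=...`); a hedged sketch, in which the display names, threshold, and 3-minute window are illustrative assumptions rather than values from the original answer:

```yaml
# Hypothetical policy fragment: fire when the container restart count
# increases within a 3-minute window.
displayName: liveness-http restarts
combiner: OR
conditions:
  - displayName: restart_count increased
    conditionThreshold:
      filter: >-
        metric.type = "kubernetes.io/container/restart_count"
        AND resource.type = "k8s_container"
        AND resource.labels.pod_name = "liveness-http"
      aggregations:
        - alignmentPeriod: 180s
          perSeriesAligner: ALIGN_DELTA
      comparison: COMPARISON_GT
      thresholdValue: 0
      duration: 0s
```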

Some useful information can be found in Stack Overflow cases regarding Monitoring alert, like:

2 votes
Original content provided by Stack Overflow: https://stackoverflow.com/questions/66299478