
K8s doesn't kill my airflow webserver pod

Asked by a Stack Overflow user on 2017-12-02 08:13:24 · 0 answers · 912 views

I have Airflow running in a k8s container.

The webserver hit a DNS error (it could not resolve my database's URL to an IP address) and the webserver's worker processes were killed.

What bothers me is that k8s made no attempt to kill the pod and start a new one in its place.

Pod log output:

OperationalError: (psycopg2.OperationalError) could not translate host name "my.dbs.url" to address: Temporary failure in name resolution
[2017-12-01 06:06:05 +0000] [2202] [INFO] Worker exiting (pid: 2202)
[2017-12-01 06:06:05 +0000] [2186] [INFO] Worker exiting (pid: 2186)
[2017-12-01 06:06:05 +0000] [2190] [INFO] Worker exiting (pid: 2190)
[2017-12-01 06:06:05 +0000] [2194] [INFO] Worker exiting (pid: 2194)
[2017-12-01 06:06:05 +0000] [2198] [INFO] Worker exiting (pid: 2198)
[2017-12-01 06:06:06 +0000] [13] [INFO] Shutting down: Master
[2017-12-01 06:06:06 +0000] [13] [INFO] Reason: Worker failed to boot.

The k8s status is RUNNING, but when I open an exec shell in the k8s UI I get the following output (gunicorn seems to realize it is dead):

root@webserver-373771664-3h4v9:/# ps -Al
F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
4 S     0     1     0  0  80   0 - 107153 -     ?        00:06:42 /usr/local/bin/
4 Z     0    13     1  0  80   0 -     0 -      ?        00:01:24 gunicorn: maste <defunct>
4 S     0  2206     0  0  80   0 -  4987 -      ?        00:00:00 bash
0 R     0  2224  2206  0  80   0 -  7486 -      ?        00:00:00 ps
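The `<defunct>` entry explains the "RUNNING" status: the kubelet restarts a container only when its PID 1 exits (or a probe fails), and here PID 1 is still alive while the gunicorn master (PID 13) has died as an unreaped child. A minimal stand-alone sketch of that situation (Linux-only, using `fork` to mimic an entrypoint whose child dies):

```python
import os
import time

def child_state_after_exit():
    """Fork a child that exits immediately and report its /proc state
    before the parent reaps it. The parent stands in for a container's
    PID 1; the dead child stands in for the gunicorn master."""
    pid = os.fork()
    if pid == 0:
        os._exit(1)              # child dies, like the gunicorn master
    time.sleep(0.5)              # parent lives on without reaping
    with open(f"/proc/{pid}/stat") as f:
        state = f.read().split()[2]   # third field is the process state
    os.waitpid(pid, 0)           # clean up the zombie
    return state

if __name__ == "__main__":
    print(child_state_after_exit())   # 'Z': zombie, parent still running
```

As long as the parent keeps running, the kernel shows the child as `Z` (zombie), and Kubernetes, which only watches PID 1, keeps reporting the container as healthy.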

Here is the YAML for my deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
  namespace: airflow
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: airflow-webserver
    spec:
      volumes:
      - name: webserver-dags
        emptyDir: {}
      containers:
      - name: airflow-webserver
        image: my.custom.image:latest
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 100m
          limits:
            cpu: 500m
        ports:
        - containerPort: 80
          protocol: TCP
        env:
        - name: AIRFLOW_HOME
          value: /var/lib/airflow
        - name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
          valueFrom:
            secretKeyRef:
              name: db1
              key: sqlalchemy_conn
        volumeMounts:
        - mountPath: /var/lib/airflow/dags/
          name: webserver-dags
        command: ["airflow"]
        args: ["webserver"]
      - name: docker-s3-to-backup
        image: my.custom.image:latest
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 50m
          limits:
            cpu: 500m
        env:
        - name: ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws
              key: access_key_id
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: aws
              key: secret_access_key
        - name: S3_PATH
          value: s3://my-s3-bucket/dags/
        - name: DATA_PATH
          value: /dags/
        - name: CRON_SCHEDULE
          value: "*/5 * * * *"
        volumeMounts:
        - mountPath: /dags/
          name: webserver-dags
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webserver
  namespace: airflow
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: webserver
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 75
---
apiVersion: v1
kind: Service
metadata:
  name: webserver
  namespace: airflow
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: airflow-webserver
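For reference, the usual way to get Kubernetes to restart a container whose PID 1 is still alive but whose server has died is a liveness probe on that container. This is only a sketch, not part of the original question: it assumes the webserver answers HTTP on port 80, and the `/health` path is an assumption that should be adjusted to whatever the image actually serves.

```yaml
        # Hypothetical addition to the airflow-webserver container spec:
        # the kubelet restarts the container when the probe fails, even
        # though the entrypoint process (PID 1) never exits.
        livenessProbe:
          httpGet:
            path: /health   # assumption: adjust to your webserver's route
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 30
          failureThreshold: 3
```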


Original content provided by Stack Overflow; translated from a Tencent Cloud mirror. Original link: https://stackoverflow.com/questions/47603238
