Summary
In our Kubernetes cluster, we introduced an HPA with memory and CPU limits. Now we don't understand why one of our services is running two replicas.
The service in question uses 57% / 85% of memory and has two replicas instead of one. We think this is because when you sum the memory of the two pods it exceeds 85%, but it wouldn't with only one pod. So is this what prevents it from scaling down? What can we do here?
We also observe a spike in memory usage while the service is being deployed. We are running Spring Boot services on AKS (Azure) and suspect the memory ramps up there and never comes back down. Are we missing something, or does anyone have suggestions?
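(Aside: on container-aware JVMs, 8u191+ or 10+, the heap ceiling can be capped as a fraction of the container's memory limit with -XX:MaxRAMPercentage, which keeps a startup spike from ratcheting usage toward the limit for good. A minimal sketch; the container name, image, and the 75% figure are illustrative and not taken from the chart below:)
# Sketch: cap the JVM heap relative to the container memory limit.
# The JVM picks up JAVA_TOOL_OPTIONS automatically at startup.
containers:
  - name: registration-service                          # illustrative name
    image: example.azurecr.io/registration-service:1.0  # illustrative image
    env:
      - name: JAVA_TOOL_OPTIONS
        value: "-XX:MaxRAMPercentage=75.0"  # heap may use at most 75% of the limit
    resources:
      requests:
        memory: 500Mi
        cpu: 300m
      limits:
        memory: 1000Mi
        cpu: 999m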
Helm
hpa:
{{- $fullName := include "app.fullname" . -}}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ $fullName }}-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "app.name" . }}
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 50
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 85
In the deployment:
# Horizontal-Pod-Auto-Scaler
resources:
  requests:
    memory: {{ $requestedMemory }}
    cpu: {{ $requestedCpu }}
  limits:
    memory: {{ $limitMemory }}
    cpu: {{ $limitCpu }}
Using the service defaults:
hpa:
  resources:
    requests:
      memory: 500Mi
      cpu: 300m
    limits:
      memory: 1000Mi
      cpu: 999m
kubectl get hpa -n dev
NAME                            REFERENCE                               TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
xxxxxxxx-load-for-cluster-hpa   Deployment/xxxxxxxx-load-for-cluster    34%/85%, 0%/50%   1         10        1          4d7h
xxx5-ccg-hpa                    Deployment/xxx5-ccg                     58%/85%, 0%/50%   1         10        1          4d12h
iotbootstrapping-service-hpa    Deployment/iotbootstrapping-service     54%/85%, 0%/50%   1         10        1          4d12h
mocks-hpa                       Deployment/mocks                        41%/85%, 0%/50%   1         10        1          4d12h
user-pairing-service-hpa        Deployment/user-pairing-service         41%/85%, 0%/50%   1         10        1          4d12h
aaa-registration-service-hpa    Deployment/aaa-registration-service     57%/85%, 0%/50%   1         10        2          4d12h
webshop-purchase-service-hpa    Deployment/webshop-purchase-service     41%/85%, 0%/50%   1         10        1          4d12h
kubectl describe hpa -n dev
Name:               xxx-registration-service-hpa
Namespace:          dev
Labels:             app.kubernetes.io/managed-by=Helm
Annotations:        meta.helm.sh/release-name: vwg-registration-service
                    meta.helm.sh/release-namespace: dev
CreationTimestamp:  Thu, 18 Jun 2020 22:50:27 +0200
Reference:          Deployment/xxx-registration-service
Metrics:            ( current / target )
  resource memory on pods (as a percentage of request):  57% (303589376) / 85%
  resource cpu on pods (as a percentage of request):     0% (1m) / 50%
Min replicas:       1
Max replicas:       10
Deployment pods:    2 current / 2 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    recommended size matches current size
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from memory resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events: <none>
If you need any further information, please feel free to ask!
Thank you very much for taking the time!
Cheers, Robin
Posted on 2020-06-23 10:53:47
desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
The most important part for your question is the ceil[...] wrapper: it always rounds up to the next whole replica. If currentReplicas is 2 and desiredMetricValue is 85%, then currentMetricValue would have to drop to 42.5% or lower to trigger a scale-down.
In your case, currentMetricValue is 57%, so you get:
desiredReplicas = ceil[2 * (57 / 85)]
                = ceil[2 * 0.671]
                = ceil[1.341]
                = 2
You're right that if currentReplicas were 1, the HPA wouldn't see any need to scale up either; actual utilization would have to rise above 85% to trigger it.
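As for what you can do: memory is often a poor autoscaling signal for JVM services, since the heap tends not to shrink once it has grown, so the utilization that triggered a scale-up may never fall back under the scale-down threshold. One possible mitigation, sketched below under the assumption that CPU tracks your actual load (the names are illustrative), is to let the HPA scale on CPU alone:
# Sketch: the same HPA shape with the memory metric dropped, so only CPU
# (which does fall when load falls) drives scaling decisions.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: registration-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: registration-service
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 50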
https://stackoverflow.com/questions/62531735