Topic 1 Question 44
Your team has recently deployed an NGINX-based application into Google Kubernetes Engine (GKE) and has exposed it to the public via an HTTP Google Cloud Load Balancer (GCLB) ingress. You want to scale the deployment of the application's frontend using an appropriate Service Level Indicator (SLI). What should you do?
A. Configure the horizontal pod autoscaler to use the average response time from the Liveness and Readiness probes.
B. Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster as pods expand.
C. Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the number of requests provided by the GCLB.
D. Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request metrics exposed by the NGINX deployment.
Comments (14)
Option C
👍 13 · Charun · 2021/06/28 — C is correct.
A. Configure the horizontal pod autoscaler to use the average response time from the Liveness and Readiness probes. → Using a health check as a scaling trigger is odd: if the health-check response time is delayed, the cause is likely a resource issue (CPU, memory, and so on), so those underlying values are what you should use as SLIs.
B. Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster as pods expand. → This doesn't address horizontal pod autoscaling at all; vertical autoscaling resizes pods rather than scaling the frontend out.
D. Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request metrics exposed by the NGINX deployment. → If you want request metrics as the SLI, the GCLB already provides them via custom metrics, so exposing the NGINX stats endpoint yourself is somewhat redundant.
👍 10 · kubosuke · 2021/07/26 — C. https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics — you want to scale horizontally.
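For context on answer C, here is a minimal sketch of what such a horizontal pod autoscaler might look like, assuming the Custom Metrics Stackdriver Adapter is already installed in the cluster. The deployment name, HPA name, forwarding-rule label, and target value below are all hypothetical placeholders, not values from the question:

```yaml
# Hedged sketch: HPA scaling an NGINX frontend on GCLB request count.
# Assumes the Custom Metrics Stackdriver Adapter is deployed so external
# Cloud Monitoring metrics are visible to the HPA.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-frontend-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-frontend          # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        # Load-balancer request count exposed through Cloud Monitoring;
        # '|' replaces '/' in the external metric name.
        name: loadbalancing.googleapis.com|https|request_count
      target:
        type: AverageValue
        averageValue: "100"       # example threshold: ~100 requests per pod
```

The key point the comments make is captured here: the SLI (request rate at the load balancer) drives horizontal scaling directly, rather than inferring load from probe latency or pod resource size.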