For the complete table of contents of these Kubernetes study notes, see the series index.
Kubernetes HPA (Horizontal Pod Autoscaler) is a built-in autoscaling feature: based on CPU utilization or custom metrics, it automatically scales the number of Pods up or down, keeping the application stable and highly available.
## Dynamic Scaling of Applications with HPA

### The HPA Controller

When resource usage runs high, we can scale Pods manually with a command such as:
```bash
$ kubectl -n test scale deployment myblog --replicas=2
```
But that is a manual operation. In real projects we want the cluster to sense load and scale automatically. Kubernetes provides a resource object for exactly this: the Horizontal Pod Autoscaler (HPA).

Basic principle: the HPA controller watches the load of all Pods managed by a target controller and decides whether the replica count needs to be adjusted.
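The replica count it converges on follows the scaling algorithm documented for the HPA controller; for example, with a target CPU utilization of 50% and a current utilization of 100%, 2 replicas become `ceil(2 * 100/50) = 4`:

```
desiredReplicas = ceil[ currentReplicas * ( currentMetricValue / desiredMetricValue ) ]
```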
### Metrics Server

From the official description:

> ... Metrics Server collects metrics from the Summary API, exposed by Kubelet on each node. Metrics Server registered in the main API server through Kubernetes aggregator, which was introduced in Kubernetes 1.7. ...
In other words, Metrics Server scrapes each node's kubelet Summary API, and registers itself with the main API server through the Kubernetes aggregator, which was introduced in Kubernetes 1.7.
### Installation

The official repository is https://github.com/kubernetes-sigs/metrics-server. Its README notes:
> Depending on your cluster setup, you may also need to change flags passed to the Metrics Server container. Most useful flags:
>
> - `--kubelet-preferred-address-types`: The priority of node address types used when determining an address for connecting to a particular node (default `[Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]`)
> - `--kubelet-insecure-tls`: Do not verify the CA of serving certificates presented by Kubelets. For testing purposes only.
> - `--requestheader-client-ca-file`: Specify a root certificate bundle for verifying client certificates on incoming requests.
#### Preparing the Configuration File

First, download the manifest:

```bash
$ wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.4/components.yaml
```
Then edit the `args` of the metrics-server container in the manifest:
```yaml
...
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: bitnami/metrics-server:0.6.4
        imagePullPolicy: IfNotPresent
...
```
We added `--kubelet-insecure-tls` because the cluster's CA did not sign the node IPs into the kubelet certificates, so when metrics-server connects to a kubelet by IP the TLS check fails with: `x509: cannot validate certificate for xxx.xxx.xxx.xxx because it doesn't contain any IP SANs`. As the official docs say, this flag is for testing only; in a production environment you should deploy proper certificates instead.
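To see the problem for yourself, you can dump the SANs of a kubelet's serving certificate; a sketch, where the node IP is the one used in this cluster (adjust to yours):

```bash
$ # print the Subject Alternative Names of the kubelet serving certificate on a node
$ echo | openssl s_client -connect 192.168.100.2:10250 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
```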
#### Deployment

With the manifest ready, deploy it with kubectl and verify:

```bash
$ kubectl apply -f components.yaml
$ kubectl -n kube-system get pods
$ kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master   456m         5%     1829Mi          47%
k8s-node1    170m         2%     1435Mi          37%
k8s-node2    100m         1%     1649Mi          43%
```
### HPA in Practice

The HPA API exists in two versions:

- `autoscaling/v1`: the stable version, supports scaling on CPU utilization only
- `autoscaling/v2`: supports scaling on CPU, memory, or user-defined metrics
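You can confirm which versions a given cluster serves; the output below is what a recent cluster (v1.23 or later) would typically show:

```bash
$ kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2
```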
#### Scaling on CPU and Memory

To use HPA we must first create an HPA resource object. There are two ways to do that:

- Declaratively, from a YAML manifest
- Imperatively, with a kubectl command

Method 1, declarative:
```bash
$ cat hpa-myblog.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-myblog
  namespace: test
spec:
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myblog
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80

$ kubectl -n test apply -f hpa-myblog.yaml
```
Method 2, imperative (this only sets a CPU target):

```bash
$ kubectl -n test autoscale deployment myblog --cpu-percent=80 --min=1 --max=3
```
Note: the target Deployment must declare resource `requests`; otherwise the HPA cannot compute utilization from the monitoring data and dynamic scaling will not work.
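A minimal sketch of adding such requests to the myblog Deployment, assuming its container declared none yet (the values are illustrative, not tuned):

```bash
$ kubectl -n test patch deployment myblog --patch '
spec:
  template:
    spec:
      containers:
      - name: myblog
        resources:
          requests:        # HPA computes Utilization as actual usage / requests
            cpu: 100m
            memory: 128Mi
'
```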
#### Verification

Look at the HPA object that was created:

```bash
$ kubectl -n test get hpa
NAME         REFERENCE           TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
hpa-myblog   Deployment/myblog   27%/80%, 1%/80%   1         3         1          23s
```
The HPA named `hpa-myblog` now exists. It targets the `myblog` Deployment with goals of 80% memory utilization and 80% CPU utilization; current usage sits at 27% and 1% against those targets (kubectl lists them in the order the metrics are declared, memory first), the current replica count is 1, and the range is 1 to 3 replicas.
Now install ab (Apache Bench), find the service address, and generate some load to watch the HPA scale up:

```bash
$ sudo yum install httpd-tools

$ kubectl -n test get po -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
myblog-84985b5b66-smpwb    1/1     Running   0          70d   10.244.1.5      k8s-node1   <none>           <none>
mysql-7f97cb6cc9-vzxpd     1/1     Running   0          71d   192.168.100.2   k8s-node1   <none>           <none>
testpod-865855cfc5-m2f99   1/1     Running   0          58d   10.244.2.13     k8s-node2   <none>           <none>

$ kubectl -n test get svc -o wide
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
myblog      ClusterIP   10.104.58.9     <none>        80/TCP         70d   app=myblog
myblog-np   NodePort    10.98.222.213   <none>        80:31174/TCP   70d   app=myblog
mysql       ClusterIP   10.110.89.44    <none>        3306/TCP       70d   app=mysql

$ ab -n 100000 -c 1000 http://10.104.58.9:80/
$ kubectl -n test get hpa
$ kubectl -n test get pods
```
After the load drops, scale-down kicks in only after a default 5-minute stabilization window: Kubernetes waits 5 minutes, and if there is still no pressure it removes one replica, waits another 5 minutes, removes another, and so on until the replica count is back at 1. So unlike scale-up, which takes effect immediately, scale-down is gradual; going from 3 replicas back to 1 in this example takes roughly 2 × 5 = 10 minutes.
The scale-down delay can be tuned with the following kube-controller-manager flag:

```
--horizontal-pod-autoscaler-downscale-stabilization
    The value for this option is a duration that specifies how long the autoscaler
    has to wait before another downscale operation can be performed after the
    current one has completed. The default value is 5 minutes (5m0s).
```
You can also control `scaleDown` and `scaleUp` per HPA by setting its `behavior` field.
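A sketch of such a `behavior` block, making the default 5-minute scale-down window explicit and limiting removal to one Pod per minute (this fragment goes under the HPA's `spec`; the policy values are illustrative):

```yaml
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes of low load before scaling down
      policies:
      - type: Pods
        value: 1                        # remove at most 1 Pod ...
        periodSeconds: 60               # ... per 60-second period
    scaleUp:
      stabilizationWindowSeconds: 0     # scale up immediately
      policies:
      - type: Percent
        value: 100                      # at most double the replica count per period
        periodSeconds: 60
```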
#### Scaling on Custom Metrics

Besides CPU and memory, we can also autoscale on custom monitoring metrics. This requires the Prometheus Adapter: Prometheus monitors the application's load and the cluster's own metrics, and the Prometheus Adapter takes the metrics Prometheus has collected and makes them available for scaling policies. The metrics are exposed through the APIServer, so HPA resource objects can consume them directly.
(Architecture diagram omitted.)
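As a sketch of the HPA side, assuming the Prometheus Adapter is installed and exposes a per-Pod `http_requests_per_second` metric (the metric name and target value here are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-myblog-custom      # hypothetical HPA name
  namespace: test
spec:
  minReplicas: 1
  maxReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myblog
  metrics:
  - type: Pods                 # served by the adapter via the custom.metrics.k8s.io API
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "10"     # scale so each Pod averages at most 10 req/s
```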
### How It Works

How do we get Pod monitoring data?

- k8s < 1.8: heapster (fully deprecated in 1.11)
- k8s >= 1.8: metrics-server

The Metrics API concept was introduced upstream in version 1.8, and metrics-server is the official implementation of that concept: it fetches metrics from the kubelet, replacing the earlier heapster.
Metrics Server exposes the monitoring data through a standard Kubernetes API, for example to fetch the metrics of one Pod:

```
https://192.168.100.1:6443/apis/metrics.k8s.io/v1beta1/namespaces/<namespace-name>/pods/<pod-name>
```
Once metrics-server is installed in the cluster, this endpoint returns the Pod's basic monitoring data, e.g.:

```bash
$ kubectl -n test get po
NAME                       READY   STATUS    RESTARTS   AGE
myblog-84985b5b66-smpwb    1/1     Running   0          71d
mysql-7f97cb6cc9-vzxpd     1/1     Running   0          71d
testpod-865855cfc5-m2f99   1/1     Running   0          58d

$ URL="https://192.168.100.1:6443/apis/metrics.k8s.io/v1beta1/namespaces/test/pods/myblog-84985b5b66-smpwb"
$ TOKEN=$(kubectl -n test create token test-pods-admin)
$ curl -k -H "Authorization: Bearer $TOKEN" $URL
{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "myblog-84985b5b66-smpwb",
    "namespace": "test",
    "creationTimestamp": "2023-10-24T03:46:47Z",
    "labels": {
      "app": "myblog",
      "pod-template-hash": "6b5d9664d8"
    }
  },
  "timestamp": "2023-10-24T03:46:00Z",
  "window": "30s",
  "containers": [
    {
      "name": "myblog",
      "usage": {
        "cpu": "2082398n",
        "memory": "3795872Ki"
      }
    }
  ]
}
```
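If you have a kubeconfig at hand, `kubectl get --raw` reaches the same endpoint through the API server with your existing credentials, so no manual token handling is needed:

```bash
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/test/pods/myblog-84985b5b66-smpwb" \
    | python3 -m json.tool   # pretty-printing is optional; any JSON formatter works
```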
(Diagram of the current collection pipeline omitted.)
#### Metric Collection in the Kubelet

Whether heapster or metrics-server, both are only relays and aggregators of the data; both call the kubelet's API to fetch it. Inside the kubelet, the code that actually collects the metrics is the embedded cAdvisor module. You can hit port 10250 on a node to get the monitoring data:

- Kubelet Summary metrics: `https://127.0.0.1:10250/metrics`, node- and pod-level aggregates
- cAdvisor metrics: `https://127.0.0.1:10250/metrics/cadvisor`, container-level data
Example call:

```bash
$ TOKEN=$(kubectl -n kube-system create token metrics-server)
$ curl -k -H "Authorization: Bearer $TOKEN" https://localhost:10250/metrics
```
Although the kubelet serves the metrics endpoints, the actual collection logic lives in the embedded cAdvisor module. cAdvisor used to be a standalone component; starting with k8s 1.12 its own listening port was removed, and all monitoring data is served exclusively through the kubelet's API.

When cAdvisor gathers metrics it actually calls the runc/libcontainer library, and libcontainer is a wrapper around cgroup files. That means cAdvisor too is just a forwarder: its data comes from cgroup files, so the values in the cgroup files are the ultimate source of the monitoring data.
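You can read those files directly on a node; a sketch assuming a cgroup-v1 layout (paths vary with the cgroup driver, container runtime, and the Pod's QoS class; `<pod-uid>` and `<container-id>` are placeholders):

```bash
$ # cumulative CPU time consumed by one container, in nanoseconds
$ cat /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod<pod-uid>/<container-id>/cpuacct.usage
$ # current memory usage of the same container, in bytes
$ cat /sys/fs/cgroup/memory/kubepods/burstable/pod<pod-uid>/<container-id>/memory.usage_in_bytes
```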
(Diagram of the metrics data flow omitted.)
Metrics Server is a standalone service that can only serve its own API internally, so how does it manage to expose that API in the standard Kubernetes API format? The answer is that it uses the Kubernetes aggregator, kube-aggregator.
#### kube-aggregator and the Metrics Server Implementation

kube-aggregator is an extension mechanism for the apiserver's API: it lets developers write their own service and register it into the Kubernetes API, i.e. an extension API.

Let's look at how metrics-server is registered:
```bash
$ kubectl get apiservice
NAME                     SERVICE                      AVAILABLE
v1beta1.metrics.k8s.io   kube-system/metrics-server   True

$ kubectl get apiservice v1beta1.metrics.k8s.io -oyaml
...
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
...

$ kubectl -n kube-system get svc metrics-server
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
metrics-server   ClusterIP   10.105.155.213   <none>        443/TCP   10h

$ TOKEN=$(kubectl -n test create token test-pods-admin)
$ URL="https://10.105.155.213/apis/metrics.k8s.io/v1beta1/namespaces/test/pods/myblog-84985b5b66-smpwb"
$ curl -k -H "Authorization: Bearer $TOKEN" $URL
{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "myblog-84985b5b66-smpwb",
    "namespace": "test",
    "creationTimestamp": "2023-10-24T03:46:47Z",
    "labels": {
      "app": "myblog",
      "pod-template-hash": "6b5d9664d8"
    }
  },
  "timestamp": "2023-10-24T03:46:00Z",
  "window": "30s",
  "containers": [
    {
      "name": "myblog",
      "usage": {
        "cpu": "1569859n",
        "memory": "3800032Ki"
      }
    }
  ]
}
```