k8s helm


The full table of contents for this K8s study-notes series is available in the series index.

Helm is the package manager for Kubernetes, analogous to yum, apt, or Homebrew.

Installing and Using Helm

Getting to Know Helm

Helm has a few key concepts:

  • Chart: the full description of an application, including configuration templates for its objects, parameter definitions, dependencies, documentation, and so on.
  • Repository: a chart repository where charts are stored; it also serves an index file listing the charts it holds so they can be searched. Helm can manage multiple repositories at the same time.
  • Release: created when a chart is installed into a Kubernetes cluster; it is a running instance of the chart and represents a deployed application.

Helm is a package manager whose packages are charts. With Helm you can:

  • create a chart from scratch
  • interact with repositories to pull, store, and update charts
  • install and uninstall releases in a Kubernetes cluster
  • upgrade, roll back, and test releases

Installation

Download a stable release, for example https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz; the commands below use the newer v3.13.3.

Helm's version compatibility policy can be found in the official documentation (also available in Chinese).

The latest installation instructions are likewise in the official documentation (Chinese translation available), and release binaries can be found on the GitHub releases page.

# On the k8s-master node
$ wget https://get.helm.sh/helm-v3.13.3-linux-amd64.tar.gz
$ tar -zxf helm-v3.13.3-linux-amd64.tar.gz
$ sudo cp linux-amd64/helm /usr/sbin/

# Verify the installation
$ helm version
version.BuildInfo{Version:"v3.13.3", GitCommit:"c8b948945e52abba22ff885446a1486cb5fd3474", GitTreeState:"clean", GoVersion:"go1.20.11"}
$ helm env

# List configured repositories
$ helm repo ls
# Add a repository
$ helm repo add stable https://charts.bitnami.com/bitnami
# Sync the latest chart index to the local cache
$ helm repo update

Getting Started, Part 1: Installing WordPress with Helm

# Search repositories for a chart
$ helm search repo wordpress
NAME CHART VERSION APP VERSION DESCRIPTION
stable/wordpress 18.1.15 6.4.1 WordPress is the world's most popular blogging ...
stable/wordpress-intel 2.1.31 6.1.1 DEPRECATED WordPress for Intel is the most popu...

$ kubectl create namespace wordpress
# Install from the repository
$ helm -n wordpress install wordpress stable/wordpress --set mariadb.primary.persistence.enabled=false --set service.type=ClusterIP --set ingress.enabled=true --set persistence.enabled=false --set ingress.hostname=wordpress.test.com
NAME: wordpress
LAST DEPLOYED: Sat Nov 25 00:39:08 2023
NAMESPACE: wordpress
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: wordpress
CHART VERSION: 18.1.15
APP VERSION: 6.4.1

** Please be patient while the chart is being deployed **

Your WordPress site can be accessed through the following DNS name from within your cluster:

wordpress.wordpress.svc.cluster.local (port 80)

To access your WordPress site from outside the cluster follow the steps below:

1. Get the WordPress URL and associate WordPress hostname to your cluster external IP:

export CLUSTER_IP=$(minikube ip) # On Minikube. Use: `kubectl cluster-info` on others K8s clusters
echo "WordPress URL: http://wordpress.test.com/"
echo "$CLUSTER_IP wordpress.test.com" | sudo tee -a /etc/hosts

2. Open a browser and access WordPress using the obtained URL.

3. Login with the following credentials below to see your blog:

echo Username: user
echo Password: $(kubectl get secret --namespace wordpress wordpress -o jsonpath="{.data.wordpress-password}" | base64 -d)


# List releases
$ helm -n wordpress ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
wordpress wordpress 1 2023-11-25 00:39:08.988234268 +0800 CST deployed wordpress-18.1.15 6.4.1

# Inspect the Kubernetes resources that were created
$ kubectl -n wordpress get all
NAME READY STATUS RESTARTS AGE
pod/wordpress-6c64d789df-hvpfl 1/1 Running 0 8m2s
pod/wordpress-mariadb-0 1/1 Running 0 8m2s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/wordpress ClusterIP 10.98.202.134 <none> 80/TCP,443/TCP 8m3s
service/wordpress-mariadb ClusterIP 10.96.212.179 <none> 3306/TCP 8m3s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/wordpress 1/1 1 1 8m2s

NAME DESIRED CURRENT READY AGE
replicaset.apps/wordpress-6c64d789df 1 1 1 8m2s

NAME READY AGE
statefulset.apps/wordpress-mariadb 1/1 8m2s

# The rendered Ingress is missing the ingress class for this cluster; edit it to add ingressClassName: nginx
$ kubectl -n wordpress edit ing wordpress
...
spec:
  ingressClassName: nginx
  rules:
  - host: wordpress.test.com
...

# Download the chart package from the repository to the local machine
$ helm pull stable/wordpress

# Uninstall the release
$ helm -n wordpress uninstall wordpress
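Before uninstalling, the same release can also be upgraded and rolled back; a minimal sketch (the changed value is only illustrative):

# upgrade the release with a changed value, keeping the values set at install time
$ helm -n wordpress upgrade wordpress stable/wordpress --reuse-values --set replicaCount=2
# review the revision history
$ helm -n wordpress history wordpress
# roll back to revision 1
$ helm -n wordpress rollback wordpress 1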

Getting Started, Part 2: Creating and Installing an nginx Chart

$ helm create nginx

# Install from the local chart into a separate namespace, demo
$ kubectl create namespace demo
$ helm -n demo install nginx ./nginx --set replicaCount=2 --set image.tag=alpine
NAME: nginx
LAST DEPLOYED: Sat Nov 25 00:53:52 2023
NAMESPACE: demo
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace demo -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=nginx" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace demo port-forward $POD_NAME 8080:80

# Check the result
$ helm -n demo ls
$ kubectl -n demo get all

Chart Template Syntax and Development

Next, let's analyze how the nginx chart is implemented and distill the conventions for developing a chart.

Chart Directory Structure

$ tree nginx/
nginx/
├── charts # holds subcharts (dependencies)
├── Chart.yaml # global metadata for this chart
├── templates # resource manifest templates, rendered together with the values
│   ├── deployment.yaml
│   ├── _helpers.tpl # named templates shared by the other templates
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt # message printed in the terminal after installation
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml # default values consumed by the templates

Clearly, the resource manifests live under templates and their data comes from values.yaml. Installing a chart means rendering the templates with the values into manifests that Kubernetes understands, and then applying them to the cluster.

# Inspect the rendered manifests
# --dry-run: do not actually install, only render the templates
# --debug: print the rendered manifests
$ helm install debug-nginx ./nginx --dry-run --set replicaCount=2 --debug

Analyzing the Template Files

Referencing a Named Template and Passing a Scope

{{ include "nginx.fullname" . }}

include references a named template defined in _helpers.tpl and passes it the top-level scope, the dot (.).

Built-in Objects

.Values
.Release.Name
.Chart
  • Release: describes the release itself and contains several fields:
    • Release.Name: the release name
    • Release.Namespace: the namespace the release is installed into
    • Release.IsUpgrade: true if the current operation is an upgrade or rollback
    • Release.IsInstall: true if the current operation is an install
    • Release.Revision: the revision number of the release; it starts at 1 on install and is incremented by every upgrade or rollback
    • Release.Service: the service rendering the current template; in Helm this is always "Helm"
  • Values: the values passed into the templates from values.yaml and from any user-supplied values files
  • Chart: the contents of Chart.yaml; any field in that file is accessible, e.g. {{ .Chart.Name }}-{{ .Chart.Version }} renders as mychart-0.1.0

Defining a Template

{{- define "nginx.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

Example

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{ if eq .Values.favorite.drink "coffee" }}
  mug: true
  {{ end }}

After rendering, this becomes:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mychart-1575971172-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"

  mug: true

Trimming Whitespace

  • {{- trims the whitespace and the newline to its left
  • -}} trims the whitespace and the newline to its right (see the sketch below)
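A minimal before-and-after sketch, reusing the ConfigMap example above (values assumed: favorite.drink = coffee):

# without whitespace control, the if/end lines leave blank lines behind
  {{ if eq .Values.favorite.drink "coffee" }}
  mug: true
  {{ end }}

# with the dashes, the surrounding newlines are chomped away
  {{- if eq .Values.favorite.drink "coffee" }}
  mug: true
  {{- end }}

# the second form renders just:
#   mug: true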

Pipelines and Functions

  1. trunc truncates a string, with 63 passed as its length argument; trimSuffix "-" strips a trailing "-"

    {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
  2. nindent indents every line of its input by the given number of spaces, prepending a newline first

    selector:
      matchLabels:
        {{- include "nginx.selectorLabels" . | nindent 6 }}
  3. lower converts the content to lowercase, and quote wraps it in double quotes

    value: {{ include "mytpl" . | lower | quote }}
  4. In conditionals, every if must be closed by a matching end

    {{- if .Values.fullnameOverride }}
    ...
    {{- else }}
    ...
    {{- end }}

    This is typically used with switches defined in values.yaml to control what gets rendered:

    {{- if not .Values.autoscaling.enabled }}
    replicas: {{ .Values.replicaCount }}
    {{- end }}
  5. Variables can be defined and then referenced by name elsewhere in the template

    {{- $name := default .Chart.Name .Values.nameOverride }}
  6. Pulling in a block of data from values (with re-scopes the dot to .Values.nodeSelector and skips the block entirely when it is empty)

    {{- with .Values.nodeSelector }}
    nodeSelector:
      {{- toYaml . | nindent 8 }}
    {{- end }}

    toYaml takes care of escaping and special characters in the values, e.g. cases like "kubernetes.io/role"=master or name="value1,value2"

  7. default supplies a fallback value

    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"

For more on the syntax, see:

https://helm.sh/docs/topics/charts/

Using Helm

Helm template

hpa.yaml

{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "nginx.fullname" . }}
  labels:
    {{- include "nginx.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "nginx.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}

Ways to Set Values

Setting values when creating a release

  • With --set

    # Change the replica count and the resource limits
    $ helm install nginx-2 ./nginx --set replicaCount=2 --set resources.limits.cpu=200m --set resources.limits.memory=256Mi
  • With a values file

    $ cat nginx-values.yaml
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 80
    ingress:
      enabled: true
      hosts:
      - host: chart-example.test.com
        paths:
        - /

    $ helm install -f nginx-values.yaml nginx-3 ./nginx
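Note that when both mechanisms are combined, values passed with --set take precedence over those from -f files, which in turn override the chart's built-in values.yaml. The values in effect for a release can be inspected with helm get values:

# only the user-supplied values
$ helm get values nginx-3
# all computed values, including the chart defaults
$ helm get values nginx-3 --all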

Inspecting the Rendered Manifests

Use helm template to view the rendered templates:

$ helm -n test template nginx ./nginx --set replicaCount=2 --set image.tag=alpine --set autoscaling.enabled=true
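helm template renders entirely on the client side, so the output can also be fed to kubectl for a quick validation pass; a minimal sketch:

# render and validate against the Kubernetes API types without creating anything
$ helm -n test template nginx ./nginx --set replicaCount=2 | kubectl apply --dry-run=client -f -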

Hands-on: Deploying Harbor as an Image and Chart Registry with Helm

Harbor Architecture

Architecture overview: https://github.com/goharbor/harbor/wiki/Architecture-Overview-of-Harbor

(Figure: harbor-architecture)

  • Core, the core component
    • API Server: receives and handles user requests
    • Config Manager: all system configuration, e.g. authentication, email, and certificate settings
    • Project Manager: project management
    • Quota Manager: quota management
    • Chart Controller: chart management
    • Replication Controller: image replication controller, able to synchronize images with different kinds of registries
      • Distribution (docker registry)
      • Docker Hub
    • Scan Manager: scan management, delegating image security scanning to third-party components
    • Registry Driver: the image registry driver, currently docker registry (Distribution)
  • Job Service: runs asynchronous jobs, such as replicating image data
  • Log Collector: the unified log collector that gathers logs from each module
  • GC Controller
  • ChartMuseum: the chart repository service, a third-party component
  • Docker Registry: the image registry service
  • kv-storage: a Redis cache used by the job service to store job metadata
  • local/remote storage: the storage backend, e.g. for image layers
  • SQL Database: PostgreSQL, storing metadata such as users and projects

Harbor is usually used as an enterprise-grade image registry, but its actual feature set goes well beyond that.

Since it has so many components, we deploy it with Helm.

Prepare the repo

# Add the Harbor chart repository
$ helm repo add harbor https://helm.goharbor.io

# Search for the Harbor chart
$ helm search repo harbor

# Pull the chart locally so we can inspect how it is deployed
$ helm pull harbor/harbor

Create the PVC

$ kubectl create namespace harbor
$ cat harbor-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: harbor-data
  namespace: harbor
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi

# Create the PVC
kubectl create -f harbor-pvc.yaml
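This PVC does not set a storageClassName, so it relies on the cluster having a default StorageClass (the NFS provisioner installed later in these notes can play that role). It is worth confirming the claim binds before installing Harbor:

$ kubectl -n harbor get pvc harbor-data
# STATUS should be Bound; Pending usually means no default StorageClass is available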

Adjust the Helm values

Modify the Harbor chart's values.yaml:

  • Ingress access settings (lines 36 and 46): the core hostname and the ingress className

    ingress:
      hosts:
        core: harbor.test.com
        notary: notary.harbor.domain
      # set to the type of ingress controller if it has specific requirements.
      # leave as `default` for most ingress controllers.
      # set to `gce` if using the GCE ingress controller
      # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
      # set to `alb` if using the ALB ingress controller
      controller: default
      ## Allow .Capabilities.KubeVersion.Version to be overridden while creating ingress
      kubeVersionOverride: ""
      className: "nginx"
  • externalURL, the web entry point, same as the ingress domain (line 126)

    126 externalURL: https://harbor.test.com
  • Persistence, backed by the NFS-based PVC created above (lines 215, 220, 225, 227, 249, 251, 258, 260)

    204 persistence:
    205   enabled: true
    206   # Setting it to "keep" to avoid removing PVCs during a helm delete
    207   # operation. Leaving it empty will delete PVCs after the chart deleted
    208   # (this does not apply for PVCs that are created for internal database
    209   # and redis components, i.e. they are never deleted automatically)
    210   resourcePolicy: "keep"
    211   persistentVolumeClaim:
    212     registry:
    213       # Use the existing PVC which must be created manually before bound,
    214       # and specify the "subPath" if the PVC is shared with other components
    215       existingClaim: "harbor-data"
    216       # Specify the "storageClass" used to provision the volume. Or the default
    217       # StorageClass will be used (the default).
    218       # Set it to "-" to disable dynamic provisioning
    219       storageClass: ""
    220       subPath: "registry"
    221       accessMode: ReadWriteOnce
    222       size: 5Gi
    223       annotations: {}
    224     chartmuseum:
    225       existingClaim: "harbor-data"
    226       storageClass: ""
    227       subPath: "chartmuseum"
    228       accessMode: ReadWriteOnce
    229       size: 5Gi
    230       annotations: {}

    246     # If external database is used, the following settings for database will
    247     # be ignored
    248     database:
    249       existingClaim: "harbor-data"
    250       storageClass: ""
    251       subPath: "database"
    252       accessMode: ReadWriteOnce
    253       size: 1Gi
    254       annotations: {}
    255     # If external Redis is used, the following settings for Redis will
    256     # be ignored
    257     redis:
    258       existingClaim: "harbor-data"
    259       storageClass: ""
    260       subPath: "redis"
    261       accessMode: ReadWriteOnce
    262       size: 1Gi
    263       annotations: {}
    264     trivy:
    265       existingClaim: "harbor-data"
    266       storageClass: ""
    267       subPath: "trivy"
    268       accessMode: ReadWriteOnce
    269       size: 5Gi
    270       annotations: {}
  • Administrator password (line 382)

    382 harborAdminPassword: "Harbor12345!"
  • Trivy and Notary (vulnerability scanning and signing components), left disabled for now (lines 639 and 711)

    637 trivy:
    638   # enabled the flag to enable Trivy scanner
    639   enabled: false
    640   image:
    641     # repository the repository for Trivy adapter image
    642     repository: goharbor/trivy-adapter-photon
    643     # tag the tag for Trivy adapter image
    644     tag: v2.6.2

    710 notary:
    711   enabled: false
    712   server:
    713     # set the service account to be used, default if left empty
    714     serviceAccountName: ""
    715     # mount the service account token
    716     automountServiceAccountToken: false

Install with Helm

# Install from the local chart
$ helm -n harbor install harbor ./harbor
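It can take a few minutes for all Harbor components to come up; a quick way to check the release and its workloads (output omitted):

$ helm -n harbor ls
$ kubectl -n harbor get pods
$ kubectl -n harbor get ingress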

Push an Image to the Harbor Registry

Configure /etc/hosts and mark the registry as insecure for Docker:

$ cat /etc/hosts
...
172.21.65.226 k8s-master harbor.test.com
...

$ cat /etc/docker/daemon.json
{
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.test.com"],
  "registry-mirrors": ["https://hub-mirror.c.163.com"]
}

# Restart docker for the configuration to take effect
$ systemctl restart docker

# Log in with the admin account and the password configured above
$ docker login harbor.test.com

$ docker tag nginx:alpine harbor.test.com/library/nginx:alpine
$ docker push harbor.test.com/library/nginx:alpine

# Configure containerd to use harbor.test.com
$ mkdir -p /etc/containerd/certs.d/harbor.test.com
$ cat /etc/containerd/certs.d/harbor.test.com/hosts.toml
server = "https://harbor.test.com"
[host."https://harbor.test.com"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true

# Quickly start a workload to verify that pulls from the registry work
$ kubectl create deployment test --image=harbor.test.com/library/nginx:alpine
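Note: for containerd (1.6 and later) to honor the certs.d directory, the CRI registry section of its config generally needs config_path pointing at it; a minimal sketch of the relevant excerpt:

# /etc/containerd/config.toml (excerpt)
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"

# restart containerd afterwards
$ systemctl restart containerd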

Push a Chart to the Harbor Repository

Helm 3 does not ship with the helm push plugin, so it has to be installed manually. Plugin: https://github.com/chartmuseum/helm-push

Install the plugin

Online installation

$ helm plugin install https://github.com/chartmuseum/helm-push

Offline installation

$ mkdir helm-push
$ wget https://github.com/chartmuseum/helm-push/releases/download/v0.8.1/helm-push_0.8.1_linux_amd64.tar.gz
$ tar zxf helm-push_0.8.1_linux_amd64.tar.gz -C helm-push
$ helm plugin install ./helm-push

Add the repo

$ helm repo add myharbor https://harbor.test.com/chartrepo/test
# fails with an x509 certificate error

# Trust the certificate; the CA is the one configured for the ingress
$ kubectl get secret harbor-ingress -n harbor -o jsonpath="{.data.ca\.crt}" | base64 -d >harbor.ca.crt

$ cp harbor.ca.crt /etc/pki/ca-trust/source/anchors
$ update-ca-trust enable; update-ca-trust extract

# Add the repo again
$ helm repo add test https://harbor.test.com/chartrepo/test --ca-file=harbor.ca.crt --username admin --password Harbor12345!

$ helm repo ls

Push a chart to the repository:

$ helm push harbor test --ca-file=harbor.ca.crt -u admin -p Harbor12345!
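With recent versions of the chartmuseum plugin (v0.10.0 and later), the subcommand was renamed to cm-push so it does not clash with Helm's built-in OCI push; the equivalent invocation would be:

$ helm cm-push harbor test --ca-file=harbor.ca.crt -u admin -p Harbor12345!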

Hands-on: Deploying an NFS StorageClass with Helm

Prepare the repo

# Add the nfs-subdir-external-provisioner chart repository
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

# Search for the nfs-subdir-external-provisioner chart
$ helm search repo nfs-subdir-external-provisioner

# Pull it locally; the latest version at the time of writing is 4.0.18, so install that specific version
$ helm pull nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --version 4.0.18

Adjust the Helm values

An NFS server was set up earlier at 192.168.100.1 with / as the exported directory, so we only need to point the chart at it.

$ tar xf nfs-subdir-external-provisioner-4.0.18.tgz
$ cd nfs-subdir-external-provisioner

$ vim values.yaml
...
image:
  # If the default registry is unreliable, the image repository can be changed here
  repository: willdockerhub/nfs-subdir-external-provisioner
...
nfs:
  server: 192.168.100.1
  path: /
...
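If this class should also serve PVCs that do not name a StorageClass (such as the harbor-data PVC earlier), the chart exposes a defaultClass switch in the same values.yaml; a sketch, assuming chart version 4.x:

...
storageClass:
  # mark the created StorageClass as the cluster default
  defaultClass: true
...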

Install with Helm

# Create the namespace
$ kubectl create namespace nfs-subdir-external-provisioner
# Install from the local chart; since we are inside the nfs-subdir-external-provisioner directory, "." refers to the current directory
$ helm -n nfs-subdir-external-provisioner install nfs-subdir-external-provisioner .

Verify the StorageClass

# Check that the pod is running
$ kubectl -n nfs-subdir-external-provisioner get pod

# Check the StorageClass
$ kubectl get sc
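To confirm dynamic provisioning end to end, a throwaway PVC can be created against the new class (nfs-client is the chart's default StorageClass name; adjust if yours differs):

$ cat test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

$ kubectl apply -f test-pvc.yaml
$ kubectl get pvc test-nfs-pvc    # STATUS should become Bound
$ kubectl delete -f test-pvc.yaml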

Hands-on: Deploying an Ingress Controller with Helm

Prepare the repo

# Add the ingress-nginx chart repository
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Search for the ingress-nginx chart
$ helm search repo ingress-nginx

# Pull it locally; the latest version at the time of writing is 4.8.3
$ helm pull ingress-nginx/ingress-nginx

Adjust the Helm values

$ tar xf ingress-nginx-4.8.3.tgz
$ vim ingress-nginx/values.yaml

...
controller:
  # Previously we used hostNetwork; here we use hostPort instead
  hostPort:
    enabled: true
  ...
  # For cluster-wide load balancing every selected node should run the ingress controller,
  # so deploy it as a DaemonSet
  kind: DaemonSet
  ...
  nodeSelector:
    kubernetes.io/os: linux
    # Add an extra label here to control which nodes run the ingress controller
    ingress: "true"
...

Install with Helm

# Label the nodes; every node that should run the ingress controller needs this label
$ kubectl label node k8s-master ingress=true
$ kubectl label node k8s-node1 ingress=true
$ kubectl label node k8s-node2 ingress=true

# Create the namespace and install from the local chart
$ helm upgrade -n ingress-nginx --create-namespace --install ingress-nginx ./ingress-nginx

Verify the Ingress Controller

# Check that the pods are running
$ kubectl -n ingress-nginx get pod

# Check the ingress controller's service
$ kubectl -n ingress-nginx get svc

# Deploy a test application, expose it through an Ingress, and access it to confirm everything works; a minimal sketch follows.
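A minimal end-to-end check, assuming the nginx:alpine image and an illustrative hostname test.ingress.test.com:

$ kubectl create deployment web --image=nginx:alpine
$ kubectl expose deployment web --port=80
$ kubectl create ingress web --class=nginx --rule="test.ingress.test.com/*=web:80"
# point the hostname at any node running the controller (e.g. via /etc/hosts), then:
$ curl -H "Host: test.ingress.test.com" http://<node-ip>/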