Kubernetes Operations in Practice

Common Operational Commands

General commands

# Set the hostname
hostnamectl --static set-hostname xxxx

# Print the cluster join command
kubeadm token create --print-join-command

# Create a Docker registry pull secret
kubectl create secret docker-registry registry-secret --docker-server=registry.cn-shanghai.aliyuncs.com --docker-username=xupeng@patsnap --docker-password=patsnap2019! --docker-email=xupeng@patsnap -n ningbo

# Check etcd 3.3 endpoint health
ETCDCTL_API=3 etcdctl endpoint health --endpoints "https://10.40.20.41:2379,https://10.40.20.46:2379,https://10.40.20.233:2379" --cacert=/etc/etcd/ssl/etcd-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem

# Inspect an etcd key
ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
get /registry/minions/ningbo-db --prefix
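To browse what is stored under a prefix without dumping the values, etcdctl also accepts --keys-only; a minimal sketch using the same endpoint and certificate flags as above:

# List all keys under /registry (values omitted)
ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
get /registry --prefix --keys-only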

Common Kubernetes node operations

# Mark a node as unschedulable
kubectl cordon k8s-node-1

# Evict the pods on the node
kubectl drain k8s-node-1 --delete-local-data --force --ignore-daemonsets

# Bring the node back into scheduling
kubectl uncordon k8s-node-1

# Delete a node
kubectl delete node k8s-node-1

# Taint the master so regular pods are not scheduled on it
kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule

# Add a label
kubectl label node k8s-master project=ipms-app

# Set a role label
kubectl label node k8s-node-01 node-role.kubernetes.io/node=

# Remove a label
kubectl label nodes k8s-master mtype-
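To confirm the effect of the commands above, a node's labels and taints can be checked directly (k8s-master is just the example node name used above):

kubectl get nodes --show-labels
kubectl describe node k8s-master | grep -i taint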

How to Adjust the Network Plugin

Adjusting the Calico network plugin: IPIP was not enabled originally, and it now needs to be switched to CrossSubnet mode.

# First delete the existing Calico manifest
kubectl delete -f calico.yaml

# Edit calico.yaml and change CALICO_IPV4POOL_IPIP to CrossSubnet

# Re-apply the Calico manifest
kubectl apply -f calico.yaml
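Before re-applying, it helps to verify the edit; a minimal check, assuming the stock calico.yaml layout where CALICO_IPV4POOL_IPIP is an environment variable on the calico-node container:

grep -A1 CALICO_IPV4POOL_IPIP calico.yaml
# expected after the edit:
#   - name: CALICO_IPV4POOL_IPIP
#     value: "CrossSubnet"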

How to Switch kube-proxy from iptables to IPVS

1. Load the kernel modules

Check whether the IPVS kernel modules are loaded:

lsmod|grep ip_vs

If they are not loaded, load the IPVS-related modules with the following commands:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
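These modprobe calls only last until the next reboot; to load the modules automatically at boot, one option (assuming a systemd-based host, which reads /etc/modules-load.d at startup) is:

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF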

2. Edit the kube-proxy configuration

kubectl edit configmap kube-proxy -n kube-system

Find and modify the following section:

enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: "" #=========================> empty means the default algorithm, round robin; options: rr, wrr, lc, wlc, sh, dh, lblc...
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: iptables #=========================> change this to ipvs
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms

Save and exit when done.

3. Delete all kube-proxy pods

kubectl delete pod $(kubectl get pod -n kube-system | grep kube-proxy | awk -F ' ' '{print $1}') -n kube-system

4. Check the kube-proxy pod logs

kubectl logs kube-proxy-xxx -n kube-system

#I0308 02:16:02.980965 1 server_others.go:183] Using ipvs Proxier.
#W0308 02:16:02.991188 1 proxier.go:356] IPVS scheduler not specified, use rr by default
#I0308 02:16:02.991338 1 server_others.go:210] Tearing down inactive rules.
#I0308 02:16:03.022123 1 server.go:448] Version: v1.11.6
#I0308 02:16:03.028801 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
#I0308 02:16:03.029030 1 conntrack.go:52] Setting nf_conntrack_max to 131072
#I0308 02:16:03.029208 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
#I0308 02:16:03.029296 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
#I0308 02:16:03.029639 1 config.go:102] Starting endpoints config controller
#I0308 02:16:03.029682 1 controller_utils.go:1025] Waiting for caches to sync for endpoints config controller
#I0308 02:16:03.029723 1 config.go:202] Starting service config controller
#I0308 02:16:03.029777 1 controller_utils.go:1025] Waiting for caches to sync for service config controller
#I0308 02:16:03.129930 1 controller_utils.go:1032] Caches are synced for endpoints config controller
#I0308 02:16:03.129931 1 controller_utils.go:1032] Caches are synced for service config controller

Seeing "Using ipvs Proxier" in the log confirms the switch succeeded.

5. Install ipvsadm

Use ipvsadm to inspect the IPVS rules; if the command is missing, install it with yum:

yum install ipvsadm
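Once installed, the virtual servers that kube-proxy programs can be listed directly; ipvsadm -Ln prints the IPVS table in numeric form:

# List the IPVS virtual servers and their backends
ipvsadm -Ln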

kubelet GC Mechanism and Configuration

kubelet's garbage collection can automatically clean up unused containers and images to free disk space. kubelet runs two GC loops, one for containers and one for images; container GC runs once a minute, image GC once every two minutes.

Container GC

Exited containers still consume system resources: they keep data on the filesystem, and the Docker daemon spends CPU and memory tracking them. Docker does not delete exited containers on its own, so kubelet takes on that responsibility. Container GC removes exited containers to reclaim space on the node and improve performance. Deleting containers, however, also destroys the evidence needed for debugging and troubleshooting, so it is not advisable to remove every exited container. Cleanup therefore needs a policy that tells kubelet how many exited containers to keep. The container-GC-related kubelet parameters (/var/lib/kubelet/config.yaml) are:

  • MinimumGCAge: how long after a container exits before it becomes eligible for collection; default one minute
  • MaxPerPodContainerCount: how many exited instances each container may keep; default 1, a negative value means no limit
  • MaxContainerCount: the maximum number of exited containers kept on the node; default -1, meaning no limit

The GC steps are:

  1. Collect the containers that can be cleaned up: inactive containers created earlier than gcPolicy.MinAge
  2. Enforce gcPolicy.MaxPerPodContainer by removing the oldest dead containers for each pod
  3. Enforce gcPolicy.MaxContainers by removing the oldest dead containers overall
  4. Collect the evictable sandboxes that are not ready and contain no containers
  5. Remove those sandboxes

Image GC

Images mainly consume disk space. Although Docker's layered images let multiple images share storage, a long-running node that has pulled many images can still use a lot of disk, and if images fill the disk, applications stop working properly. Docker does not clean up images by default either; once pulled, an image stays on the node until it is removed manually. Many of these images are never actually used, which wastes space and is a real risk, so kubelet also cleans up images periodically. Unlike container GC, image GC is driven by disk usage: you configure at what usage ratio cleanup should start. The least recently used images are removed first; pulling an image or starting a container from it refreshes its last-used time. The image GC policy is controlled by the following kubelet settings (/var/lib/kubelet/config.yaml), with a quick check sketched after this list:

  • imageMinimumGCAge: the minimum time an image must have been unused before it can be cleaned up
  • imageGCHighThresholdPercent: the disk usage upper bound; reaching it triggers image cleanup. Default 90%
  • imageGCLowThresholdPercent: the disk usage lower bound; each cleanup continues until usage drops below this value or there is nothing left to clean. Default 80%

In other words, by default kubelet starts cleaning up when images fill 90% of their disk and keeps going until usage falls below 80%.
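A minimal way to see and adjust these thresholds on a node; the field names match the KubeletConfiguration example later in this article, and the values in the comment are only illustrative:

# Show the image GC settings currently in /var/lib/kubelet/config.yaml (if set)
grep -E 'imageMinimumGCAge|imageGCHighThresholdPercent|imageGCLowThresholdPercent' /var/lib/kubelet/config.yaml
# e.g. imageGCHighThresholdPercent: 85 / imageGCLowThresholdPercent: 80
# After editing the file, restart kubelet for the change to take effect
systemctl daemon-reload && systemctl restart kubelet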

Upgrading the Cluster with kubeadm

After installing Kubernetes with kubeadm, how do you upgrade it later? The cluster can only be upgraded one minor version at a time: you can go from 1.12 to 1.13, but not jump directly from 1.11 to 1.13. Upgrade path: 1.11 -> 1.12 -> 1.13 -> 1.14.

Note that Kubernetes changed considerably starting with 1.11: CoreDNS became the default DNS.
The environment variables that used to live in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf are now split across three files: /var/lib/kubelet/config.yaml (cgroup driver defaults to cgroupfs), /var/lib/kubelet/kubeadm-flags.env (cgroup driver defaults to systemd and takes precedence), and /etc/sysconfig/kubelet.
A freshly installed cluster has the CNI network configuration (in /var/lib/kubelet/kubeadm-flags.env), while an upgraded cluster does not. The image tag prefix changed from gcr.io/google_containers to k8s.gcr.io; upgrades largely keep using gcr.io/google_containers, while fresh installs use k8s.gcr.io.

IPVS mode for kube-proxy was introduced in 1.8 and became officially supported in 1.11. It is not turned on by default; from 1.12 onwards the IPVS support is enabled by default, and kube-proxy falls back to iptables mode when IPVS is not selected. A quick way to check which mode a running cluster actually uses is shown below.
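A minimal check, assuming the kubeadm defaults where the kube-proxy pods carry the label k8s-app=kube-proxy and log which proxier they start with:

# Which proxier is kube-proxy actually using?
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50 | grep -i proxier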

Prepare the Upgrade Images

Pre-pull the images required for the upgrade and tag them with the official names.
Master nodes need all of the images; worker nodes only need the kube-proxy and pause images.

Download the images for each version on all master nodes (a pull-and-retag sketch follows the image list):

#1.12.7
#k8s.gcr.io/kube-proxy:v1.12.7
#k8s.gcr.io/kube-scheduler:v1.12.7
#k8s.gcr.io/kube-controller-manager:v1.12.7
#k8s.gcr.io/kube-apiserver:v1.12.7
#k8s.gcr.io/etcd:3.2.24
#k8s.gcr.io/pause:3.1
#k8s.gcr.io/coredns:1.2.2

#1.13.5
#k8s.gcr.io/kube-proxy:v1.13.5
#k8s.gcr.io/kube-scheduler:v1.13.5
#k8s.gcr.io/kube-controller-manager:v1.13.5
#k8s.gcr.io/kube-apiserver:v1.13.5
#k8s.gcr.io/etcd:3.2.24
#k8s.gcr.io/pause:3.1
#k8s.gcr.io/coredns:1.2.6
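If the nodes cannot reach k8s.gcr.io directly, one common approach is to pull from a mirror and re-tag; a minimal sketch for the 1.12.7 set, where MIRROR is an assumption to be replaced with whatever registry mirror your environment can reach:

MIRROR=registry.aliyuncs.com/google_containers   # assumption: substitute your own mirror
for img in kube-apiserver:v1.12.7 kube-controller-manager:v1.12.7 kube-scheduler:v1.12.7 \
           kube-proxy:v1.12.7 etcd:3.2.24 pause:3.1 coredns:1.2.2; do
  docker pull ${MIRROR}/${img}
  docker tag ${MIRROR}/${img} k8s.gcr.io/${img}
done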

Download the images for each version on the worker nodes:

#k8s.gcr.io/kube-proxy:v1.12.7
#k8s.gcr.io/kube-proxy:v1.13.5
#k8s.gcr.io/pause:3.1

Upgrading from 1.11.6 to 1.12.7

Master nodes (internal etcd)

Preparation
# Run the following on every master node
kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml

# On the first master, edit kubeadm-config-cm.yaml and modify the following parameters
api.advertiseAddress #---> the current master's IP address
etcd.local.extraArgs.advertise-client-urls #---> the current master's IP address
etcd.local.extraArgs.initial-advertise-peer-urls #---> the current master's IP address
etcd.local.extraArgs.listen-client-urls #---> the current master's IP address
etcd.local.extraArgs.listen-peer-urls #---> the current master's IP address
etcd.local.extraArgs.initial-cluster #---> the current masters' IP addresses and hostnames, e.g.: "ip-172-31-92-42=https://172.31.92.42:2380,ip-172-31-89-186=https://172.31.89.186:2380,ip-172-31-90-42=https://172.31.90.42:2380"

You must also pass an additional argument (initial-cluster-state: existing) to etcd.local.extraArgs.

# On the remaining masters, edit kubeadm-config-cm.yaml and modify the following ClusterConfiguration parameters:
etcd.local.extraArgs.advertise-client-urls #---> the current master's IP address
etcd.local.extraArgs.initial-advertise-peer-urls #---> the current master's IP address
etcd.local.extraArgs.listen-client-urls #---> the current master's IP address
etcd.local.extraArgs.listen-peer-urls #---> the current master's IP address

You must also modify the ClusterStatus to add a mapping for the current host under apiEndpoints.
Start the master upgrade
# Cordon the master being upgraded and evict the remaining pods; kubectl get nodes will show it marked SchedulingDisabled
kubectl cordon <master-node-name>

# Ignore DaemonSet pods and evict the rest
kubectl drain <master-node-name> --ignore-daemonsets --delete-local-data

# Upgrade kubeadm and the related packages first
yum install -y kubeadm-1.12.7 kubelet-1.12.7 kubectl-1.12.7 kubernetes-cni-0.7.5-0

# Perform the upgrade; make sure the images prepared above are in place

# Review the upgrade plan
kubeadm upgrade plan

# Upgrade the first master node
kubectl apply -f kubeadm-config-cm.yaml --force
kubeadm upgrade apply v1.12.7

# Upgrade the remaining master nodes
kubectl annotate node <nodename> kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
kubectl apply -f kubeadm-config-cm.yaml --force
kubeadm upgrade apply v1.12.7

# Output like the following indicates a successful upgrade
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.7". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn.

# The containers re-initialize; check that the system pods are running normally again
kubectl get pod -o wide --all-namespaces

# Make sure the containers' AGE is over one minute, then restart the service
systemctl daemon-reload && systemctl restart kubelet

# Confirm the master node's version has been upgraded
kubectl get nodes

# Re-enable scheduling
kubectl uncordon <master-node-name>

Master nodes (external etcd)

# Run the following on every master node
kubectl get configmap -n kube-system kubeadm-config -o jsonpath={.data.MasterConfiguration} > kubeadm-config.yaml

# Edit kubeadm-config.yaml and set api.advertiseAddress to the current master's IP address

# Cordon the master being upgraded and evict the remaining pods; kubectl get nodes will show it marked SchedulingDisabled
kubectl cordon <master-node-name>

# Ignore DaemonSet pods and evict the rest
kubectl drain <master-node-name> --ignore-daemonsets --delete-local-data

# Upgrade kubeadm and the related packages first
yum install -y kubeadm-1.12.7 kubelet-1.12.7 kubectl-1.12.7 kubernetes-cni-0.7.5-0

# Perform the upgrade; make sure the images prepared above are in place

# Review the upgrade plan
kubeadm upgrade plan

# The actual upgrade command
kubeadm upgrade apply v1.12.7 --config kubeadm-config.yaml
# Output like the following indicates a successful upgrade
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.7". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn.

# The containers re-initialize; check that the system pods are running normally again
kubectl get pod -o wide --all-namespaces

# Make sure the containers' AGE is over one minute, then restart the service
systemctl daemon-reload && systemctl restart kubelet

# Confirm the master node's version has been upgraded
kubectl get nodes

# Re-enable scheduling
kubectl uncordon <master-node-name>

Worker nodes

# Cordon the node being upgraded and evict the remaining pods; kubectl get nodes will show it marked SchedulingDisabled
kubectl cordon <node-name>

# Ignore DaemonSet pods and evict the rest
kubectl drain <node-name> --ignore-daemonsets --delete-local-data

# Run on the node: upgrade kubelet, kubeadm, kubectl
yum install -y kubeadm-1.12.7 kubelet-1.12.7 kubectl-1.12.7 kubernetes-cni-0.7.5-0

# Regenerate the node's kubelet config; cgroupDriver in /var/lib/kubelet/config.yaml must match Docker's Cgroup Driver (a quick check follows this block)
kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)

# Restart the service
systemctl daemon-reload && systemctl restart kubelet

# Re-enable scheduling
kubectl uncordon <node-name>
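A quick way to verify that the two cgroup drivers actually match before restarting kubelet:

docker info 2>/dev/null | grep -i 'cgroup driver'
grep cgroupDriver /var/lib/kubelet/config.yaml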

Upgrading from 1.12.7 to 1.13.5

Master nodes (internal etcd)

Preparation
# Modify the ClusterConfiguration value in configmap/kubeadm-config on the current node
kubectl edit configmap -n kube-system kubeadm-config

1. Remove the etcd-related section
2. Modify the apiEndpoints value: add an entry for each of the additional control plane hosts, for example:
# ip-10-40-40-14.cn-northwest-1.compute.internal:
# advertiseAddress: 10.40.40.14
# bindPort: 6443
# ip-10-40-40-190.cn-northwest-1.compute.internal:
# advertiseAddress: 10.40.40.190
# bindPort: 6443
# ip-10-40-40-195.cn-northwest-1.compute.internal:
# advertiseAddress: 10.40.40.195
# bindPort: 6443
Start the master upgrade
# Cordon the master being upgraded and evict the remaining pods; kubectl get nodes will show it marked SchedulingDisabled
kubectl cordon <master-node-name>

# Ignore DaemonSet pods and evict the rest
kubectl drain <master-node-name> --ignore-daemonsets --delete-local-data

# Upgrade kubeadm first
yum install -y kubeadm-1.13.5

# Perform the upgrade; make sure the images prepared above are in place

# Review the upgrade plan
kubeadm upgrade plan

# On the first master, run:
kubeadm upgrade apply v1.13.5

# On the remaining masters, run:
kubeadm upgrade node experimental-control-plane

# Success output on the first master
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.13.5". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn.
# Success output on the remaining masters
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!

# The containers re-initialize; check that the system pods are running normally again
kubectl get pod -o wide --all-namespaces

# Make sure the containers' AGE is over one minute, then restart the service
yum install -y kubelet-1.13.5 kubectl-1.13.5
systemctl daemon-reload && systemctl restart kubelet

# Confirm the master node's version has been upgraded
kubectl get nodes

# Re-enable scheduling
kubectl uncordon <master-node-name>

Master nodes (external etcd)

# Cordon the master being upgraded and evict the remaining pods; kubectl get nodes will show it marked SchedulingDisabled
kubectl cordon <master-node-name>

# Ignore DaemonSet pods and evict the rest
kubectl drain <master-node-name> --ignore-daemonsets --delete-local-data

# Upgrade kubeadm first
yum install -y kubeadm-1.13.5

# Perform the upgrade; make sure the images prepared above are in place

# Review the upgrade plan
kubeadm upgrade plan

# On the first master, run:
kubeadm upgrade apply v1.13.5

# On the remaining masters, run:
kubeadm upgrade node experimental-control-plane

# Success output on the first master
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.13.5". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn.
# Success output on the remaining masters
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!

# The containers re-initialize; check that the system pods are running normally again
kubectl get pod -o wide --all-namespaces

# Make sure the containers' AGE is over one minute, then restart the service
yum install -y kubelet-1.13.5 kubectl-1.13.5
systemctl daemon-reload && systemctl restart kubelet

# Confirm the master node's version has been upgraded
kubectl get nodes

# Re-enable scheduling
kubectl uncordon <master-node-name>

Worker nodes

# Cordon the node being upgraded and evict the remaining pods; kubectl get nodes will show it marked SchedulingDisabled
kubectl cordon <node-name>

# Ignore DaemonSet pods and evict the rest
kubectl drain <node-name> --ignore-daemonsets --delete-local-data

# Run on the node: upgrade kubelet, kubeadm, kubectl
yum install -y kubeadm-1.13.5 kubelet-1.13.5 kubectl-1.13.5 kubernetes-cni-0.7.5-0

# Regenerate the node's kubelet config; cgroupDriver in /var/lib/kubelet/config.yaml must match Docker's Cgroup Driver
kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)

# Restart the service
systemctl daemon-reload && systemctl restart kubelet

# Re-enable scheduling
kubectl uncordon <node-name>

Upgrading from 1.13.5 to 1.14.0

If you are using external etcd, do the following:

Modify configmap/kubeadm-config for this control plane node by removing the etcd section completely

# Modify the ClusterConfiguration value in configmap/kubeadm-config on the current node
kubectl edit configmap -n kube-system kubeadm-config
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - 10.164.178.238
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta1
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: 10.164.178.238:6443
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: k8s.gcr.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.14.0
    networking:
      dnsDomain: cluster.local
      podSubnet: ""
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
  ClusterStatus: |
    apiEndpoints:
      k8s-master1:
        advertiseAddress: 10.164.178.161
        bindPort: 6443
      k8s-master2:
        advertiseAddress: 10.164.178.162
        bindPort: 6443
      k8s-master3:
        advertiseAddress: 10.164.178.163
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2019-05-21T10:08:03Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "209870"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: 52419642-7bb0-11e9-8a89-0800270fde1d

Master nodes

Upgrade the first control plane node (master1)
# Install the corresponding packages
yum install -y kubeadm-1.14.0-0 --disableexcludes=kubernetes

# Check the version
kubeadm version

# Review the upgrade plan
kubeadm upgrade plan

# Perform the upgrade
kubeadm upgrade apply v1.14.0

# Make sure the containers' AGE is over one minute, then restart the service
yum install -y kubelet-1.14.0-0 kubectl-1.14.0-0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet
Upgrade the other control plane nodes (master2, master3)
# Install the corresponding packages
yum install -y kubeadm-1.14.0-0 --disableexcludes=kubernetes

# Perform the upgrade
kubeadm upgrade node experimental-control-plane

# Make sure the containers' AGE is over one minute, then restart the service
yum install -y kubelet-1.14.0-0 kubectl-1.14.0-0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet

# master2 upgrade log
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.14.0"...
Static pod: kube-apiserver-k8s-master2 hash: ba03afd84d454d318c2cc6e3a6e23f53
Static pod: kube-controller-manager-k8s-master2 hash: 0a9f25af4e4ad5e5427feb8295fc055a
Static pod: kube-scheduler-k8s-master2 hash: 8cea5badbe1b177ab58353a73cdedd01
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master2 hash: d990ad5b88743835159168644453f90b
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-21-23-45-09/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s-master2 hash: d990ad5b88743835159168644453f90b
Static pod: etcd-k8s-master2 hash: e56ee6ac7c0de512a17ef30c3a44e01c
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests998233672"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-21-23-45-09/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master2 hash: ba03afd84d454d318c2cc6e3a6e23f53
Static pod: kube-apiserver-k8s-master2 hash: 94e207e0d84e092ae98dc64af5b870ba
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-21-23-45-09/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master2 hash: 0a9f25af4e4ad5e5427feb8295fc055a
Static pod: kube-controller-manager-k8s-master2 hash: e45f10af1ae684722cbd74cb11807900
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-21-23-45-09/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master2 hash: 8cea5badbe1b177ab58353a73cdedd01
Static pod: kube-scheduler-k8s-master2 hash: 58272442e226c838b193bbba4c44091e
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!

# master3 upgrade log
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.14.0"...
Static pod: kube-apiserver-k8s-master3 hash: 556e7d43da7a389c6b0b116ae5a46d97
Static pod: kube-controller-manager-k8s-master3 hash: 0a9f25af4e4ad5e5427feb8295fc055a
Static pod: kube-scheduler-k8s-master3 hash: 8cea5badbe1b177ab58353a73cdedd01
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests859456185"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-21-23-48-13/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master3 hash: 556e7d43da7a389c6b0b116ae5a46d97
Static pod: kube-apiserver-k8s-master3 hash: 1a94c94ecfa9f698cfc902fc37c15be9
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-21-23-48-13/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master3 hash: 0a9f25af4e4ad5e5427feb8295fc055a
Static pod: kube-controller-manager-k8s-master3 hash: e45f10af1ae684722cbd74cb11807900
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-21-23-48-13/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master3 hash: 8cea5badbe1b177ab58353a73cdedd01
Static pod: kube-scheduler-k8s-master3 hash: 58272442e226c838b193bbba4c44091e
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!

Worker nodes (node1, node2, node3 ...)

Upgrade kubeadm
yum install -y kubeadm-1.14.x-0 --disableexcludes=kubernetes
Drain the node and mark it unschedulable
kubectl drain $WORKERNODE --ignore-daemonsets
#$WORKERNODE: node1, node2, node3 ...
Upgrade the kubelet configuration
kubeadm upgrade node config --kubelet-version v1.14.0
Upgrade kubelet and kubectl
yum install -y kubelet-1.14.x-0 kubectl-1.14.x-0 --disableexcludes=kubernetes

#Restart kubelet
systemctl daemon-reload && systemctl restart kubelet

#Bring the node back online
kubectl uncordon $WORKERNODE
Verify the cluster status
kubectl get node

Upgrading from 1.14.0 to 1.15.0

If you are using external etcd, do the following:

Modify configmap/kubeadm-config for this control plane node by removing the etcd section completely

# Modify the ClusterConfiguration value in configmap/kubeadm-config on the current node
kubectl edit configmap -n kube-system kubeadm-config
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - 10.164.178.238
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: 10.164.178.238:6443
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: k8s.gcr.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.15.0
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
  ClusterStatus: |
    apiEndpoints:
      k8s-master1:
        advertiseAddress: 10.164.178.161
        bindPort: 6443
      k8s-master2:
        advertiseAddress: 10.164.178.162
        bindPort: 6443
      k8s-master3:
        advertiseAddress: 10.164.178.163
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2019-06-25T04:12:53Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "1337635"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: 81334a15-96ff-11e9-b14f-0800270fde1d

Master nodes

Upgrade the first control plane node (master1)
# Install the corresponding packages
yum install -y kubeadm-1.15.0-0 --disableexcludes=kubernetes

# Check the version
kubeadm version

# Review the upgrade plan
kubeadm upgrade plan

# Perform the upgrade
kubeadm upgrade apply v1.15.0

# Make sure the containers' AGE is over one minute, then restart the service
yum install -y kubelet-1.15.0-0 kubectl-1.15.0-0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet
Upgrade the other control plane nodes (master2, master3)
# Install the corresponding packages
yum install -y kubeadm-1.15.0-0 --disableexcludes=kubernetes

# Perform the upgrade
kubeadm upgrade node

# Make sure the containers' AGE is over one minute, then restart the service
yum install -y kubelet-1.15.0-0 kubectl-1.15.0-0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet

Worker nodes (node1, node2, node3 ...)

Upgrade kubeadm
yum install -y kubeadm-1.15.x-0 --disableexcludes=kubernetes
Drain the node and mark it unschedulable
kubectl drain $WORKERNODE --ignore-daemonsets
#$WORKERNODE: node1, node2, node3 ...
Upgrade the kubelet configuration
kubeadm upgrade node
Upgrade kubelet and kubectl
yum install -y kubelet-1.15.x-0 kubectl-1.15.x-0 --disableexcludes=kubernetes

#Restart kubelet
systemctl daemon-reload && systemctl restart kubelet

#Bring the node back online
kubectl uncordon $WORKERNODE
Verify the cluster status
kubectl get node

Upgrading from 1.15.0 to 1.16.0

If you are using external etcd, do the following:

Modify configmap/kubeadm-config for this control plane node by removing the etcd section completely

# Modify the ClusterConfiguration value in configmap/kubeadm-config on the current node
kubectl edit configmap -n kube-system kubeadm-config

Master nodes

Upgrade the first control plane node (master1)
# Install the corresponding packages
yum install -y kubeadm-1.16.0-0 --disableexcludes=kubernetes

# Check the version
kubeadm version

# Take master1 offline
kubectl drain $MASTER --ignore-daemonsets

# Review the upgrade plan
kubeadm upgrade plan

# Perform the upgrade
kubeadm upgrade apply v1.16.0

# Bring the master back online
kubectl uncordon $MASTER

# Make sure the containers' AGE is over one minute, then restart the service
yum install -y kubelet-1.16.0-0 kubectl-1.16.0-0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet
Upgrade the other control plane nodes (master2, master3)
# Install the corresponding packages
yum install -y kubeadm-1.16.0-0 --disableexcludes=kubernetes

# Perform the upgrade
kubeadm upgrade node

# Make sure the containers' AGE is over one minute, then restart the service
yum install -y kubelet-1.16.0-0 kubectl-1.16.0-0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet

Worker nodes (node1, node2, node3 ...)

Upgrade kubeadm
yum install -y kubeadm-1.16.x-0 --disableexcludes=kubernetes
Drain the node and mark it unschedulable
kubectl drain $WORKERNODE --ignore-daemonsets
#$WORKERNODE: node1, node2, node3 ...
Upgrade the kubelet configuration
kubeadm upgrade node
Upgrade kubelet and kubectl
yum install -y kubelet-1.16.x-0 kubectl-1.16.x-0 --disableexcludes=kubernetes

#Restart kubelet
systemctl daemon-reload && systemctl restart kubelet

#Bring the node back online
kubectl uncordon $WORKERNODE
Verify the cluster status
kubectl get node

Upgrading from 1.16.0 to 1.17.0

Same procedure as the 1.15.0 to 1.16.0 upgrade.

Upgrading from 1.17.0 to 1.18.0

Same procedure as the 1.15.0 to 1.16.0 upgrade.

References:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha-1-12/
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha-1-13/
https://github.com/truongnh1992/upgrade-kubeadm-cluster

kubeadm init Configuration

kubeadm v1.12

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master

---

# https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/apis/kubeadm/v1beta1/types.go
# https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/apis/kubeadm/v1alpha3/types.go
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManagerExtraArgs:
  address: 0.0.0.0
schedulerExtraArgs:
  address: 0.0.0.0
etcd:
  local:
    dataDir: /var/lib/etcd
    image: ""
imageRepository: k8s.gcr.io
kubernetesVersion: v1.12.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
unifiedControlPlaneImage: ""

---

apiVersion: kubeadm.k8s.io/v1alpha3
kind: JoinConfiguration
apiEndpoint:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
caCertPath: /etc/kubernetes/pki/ca.crt
clusterName: kubernetes
discoveryFile: ""
discoveryTimeout: 5m0s
discoveryToken: abcdef.0123456789abcdef
discoveryTokenAPIServers:
- kube-apiserver:6443
discoveryTokenUnsafeSkipCAVerification: true
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
  name: master
tlsBootstrapToken: abcdef.0123456789abcdef
token: abcdef.0123456789abcdef

---

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 5
clusterCIDR: ""
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
metricsBindAddress: 127.0.0.1:10249
mode: ""
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms

---

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
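kubeadm can also generate and migrate configs like the one above, which is a handy way to sanity-check a hand-written file; a minimal sketch (the subcommand changed names across releases, so check `kubeadm config -h` for your version):

# Print the current default init configuration (kubeadm 1.13+)
kubeadm config print init-defaults > init.yaml
# On kubeadm 1.12 the equivalent is: kubeadm config print-default
# Convert a config written for an older API version to the current one
kubeadm config migrate --old-config init.yaml --new-config init-new.yaml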

How to Configure Traefik Routing

http://yunke.science/2018/03/28/Ingress-traefik/

etcd 3.2.x -> 3.3.x

# Take a backup first
ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert=/etc/etcd/ssl/etcd-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem snapshot save snapshot.db

# Stop etcd
systemctl stop etcd

# Upgrade
wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/etcd-3.3.11-2.el7.centos.x86_64.rpm
yum localinstall -y etcd-3.3.11-2.el7.centos.x86_64.rpm

# Start etcd again
systemctl start etcd
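The snapshot taken above can be sanity-checked before (or after) the upgrade; etcdctl prints its hash, revision count, and size:

ETCDCTL_API=3 etcdctl snapshot status snapshot.db --write-out=table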

Reference:
https://www.jianshu.com/p/aa528c57f3be

CoreDNS Operations

How to use an external DNS resolver

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            # default
            upstream
            # used to resolve external hosts (external services)
            # upstream 114.114.114.114 223.5.5.5
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        # defaults to the host's resolver
        proxy . /etc/resolv.conf
        # any query outside the cluster domain is forwarded to the predefined resolvers, default /etc/resolv.conf;
        # with dnsPolicy set to "Default" in the CoreDNS Deployment, the DNS pods inherit /etc/resolv.conf from their node,
        # so if the node's upstream resolvers match "upstream", setting either one is enough
        #proxy . 114.114.114.114 223.5.5.5
        cache 30
        loop
        reload
        loadbalance
    }
    # custom DNS records, equivalent to stubDomains in kube-dns;
    # each record gets its own zone
    patsnap.local:53 {
        errors
        cache 30
        proxy . 192.168.3.108
    }

How to create a CNAME for a Kubernetes service

CoreDNS configuration file

{
    "kind": "ConfigMap",
    "apiVersion": "v1",
    "metadata": {
        "name": "coredns",
        "namespace": "kube-system",
        "selfLink": "/api/v1/namespaces/kube-system/configmaps/coredns",
        "uid": "aa45aaab-4c79-11e9-9629-00163e022859",
        "resourceVersion": "118616",
        "creationTimestamp": "2019-03-22T08:08:24Z"
    },
    "data": {
        // the formatted content is shown below
    }
}

# The formatted content:
Corefile:
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    autopath @kubernetes
    reload
    file /etc/coredns/patsnap.io.zone nexus.patsnap.io {
        upstream
    }
    file /etc/coredns/patsnap.local.zone patsnap.local {
        upstream
    }
    file /etc/coredns/patsnap.com.zone nexus.patsnap.com {
        upstream
    }
}

patsnap.com.zone:
;@ is the current domain, nexus.patsnap.com
;900 is the TTL
;ns-1304.awsdns-35.org. is the primary name server
;icyboy.jiunile.com. is the mailbox icyboy@jiunile.com
@ 900 IN SOA ns-1304.awsdns-35.org. icyboy.jiunile.com. (
2017033001 ; serial
7200 ; refresh (2 hour)
900 ; retry (15 min)
1209600 ; expire
86400 ; min TTL (1 day)
)

;@ is equivalent to nexus.patsnap.com; the CNAME target must be a fully qualified name
@ IN CNAME s-ops-maven-nexus.ops-qa.svc.cluster.local.

patsnap.io.zone:
@ 900 IN SOA ns-196.awsdns-24.com. icyboy.jiunile.com. (
1
7200
900
1209600
86400
)

;npm is equivalent to npm.nexus.patsnap.io.
npm IN CNAME s-ops-npm-pypi-nexus.ops-qa.svc.cluster.local.
pypi IN CNAME s-ops-npm-pypi-nexus.ops-qa.svc.cluster.local.

patsnap.local.zone:
;@ is the current domain, patsnap.local
;900 is the TTL
;192.168.3.108. is the primary name server
;icyboy.jiunile.com. is the mailbox icyboy@jiunile.com
@ 900 IN SOA 192.168.3.108. icyboy.jiunile.com. (
2019092202
21600
3600
604800
86400
)

;npm.nexus is equivalent to npm.nexus.patsnap.local.
npm.nexus IN CNAME s-ops-npm-pypi-nexus.ops-qa.svc.cluster.local.
pypi.nexus IN CNAME s-ops-npm-pypi-nexus.ops-qa.svc.cluster.local.

coredns deployment

"volumes": [
{
"name": "config-volume",
"configMap": {
"name": "coredns",
"items": [
{
"key": "Corefile",
"path": "Corefile"
},
{
"key": "patsnap.io.zone",
"path": "patsnap.io.zone"
},
{
"key": "patsnap.local.zone",
"path": "patsnap.local.zone"
},
{
"key": "patsnap.com.zone",
"path": "patsnap.com.zone"
}
],
"defaultMode": 420
}
}
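Once the ConfigMap and the Deployment's volumes are updated and CoreDNS has reloaded, the records can be verified from inside the cluster; a minimal sketch using a throwaway busybox pod (the record name is just the npm example defined above):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup npm.nexus.patsnap.io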

Reference links:
https://www.cnblogs.com/netonline/p/9935228.html
https://yuerblog.cc/2018/12/29/k8s-dns/#post-4008-_Toc533670192
https://github.com/coredns/coredns.io/blob/master/content/blog/custom-dns-and-kubernetes.md
