Deploying a Multi-Master Kubernetes Cluster on Alibaba Cloud with Kubeadm 1.13.x

Preface (pitfalls)

Load balancing issues

  • Alibaba Cloud does not support LVS and provides no floating VIP, so a fixed VIP has to be obtained by requesting an SLB instance.
  • The Kubernetes apiserver speaks HTTPS, and the only SLB listener type that can balance HTTPS traffic is TCP passthrough; however, a TCP listener cannot forward traffic back to the backend host that originated the request (the apiserver needs loop-back access, i.e. a master must be able to send requests to itself through the VIP).

To make the Kubernetes apiserver highly available, the following approach is used:

  • Request an internal SLB instance (this provides the VIP), with 8443 as the listener port and 6443 as the apiserver backend port.
  • Run keepalived + haproxy on every master machine, with the VIP set to the SLB address (a quick check of this path is sketched below).
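With this layout, traffic flows SLB VIP:8443 → haproxy:8443 on a master → apiserver:6443. Once the cluster is up (see the sections below), a minimal sanity check of that path, using the VIP from the environment table, could look like this:

# Any HTTPS response here (even an authorization error) shows the VIP -> haproxy -> apiserver path works.
# -k skips certificate verification for this quick test.
curl -k https://172.19.170.100:8443/healthz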

Environment

IP             Hostname  Notes
172.19.170.183 master-1  runs keepalived + haproxy
172.19.170.184 master-2  runs keepalived + haproxy
172.19.170.185 master-3  runs keepalived + haproxy
172.19.170.186 node-1
172.19.170.187 node-2
172.19.170.100 vip       obtained from the Alibaba Cloud SLB

Environment initialization

Update /etc/hosts; run this on all master machines:

cat <<EOF >> /etc/hosts
172.19.170.183 master-1
172.19.170.184 master-2
172.19.170.185 master-3
EOF

Run the following commands on master-1 (to set up password-less SSH to the other masters):

ssh-keygen -t rsa
ssh-copy-id root@master-2
ssh-copy-id root@master-3
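A quick way to confirm the key distribution worked before going further:

# Each command should print the remote hostname without prompting for a password.
for h in master-2 master-3; do ssh root@$h hostname; done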

Tune the OS on all master & node machines

Run all of the following on every master & node machine:

#################################################################
# Install a 4.18 kernel
# The latest stable 4.19 kernel renamed nf_conntrack_ipv4 to nf_conntrack, and the current kube-proxy cannot enable IPVS on 4.19
# Details: https://github.com/kubernetes/kubernetes/issues/70304
# The fix was only merged into the main branch on October 30, so no Kubernetes release containing it has shipped yet
# Either install a 4.18 kernel or leave IPVS disabled
#################################################################
############################# start #############################
# Upgrade the kernel
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm ;yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y

# Check that the default boot kernel is newer than 4.14; otherwise adjust the default boot entry
grub2-editenv list

# Reboot to switch to the new kernel
reboot

# Enable the ip_vs kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/\`uname -r\`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed -r 's#(.*).ko.*#\1#'\`; do
/sbin/modinfo -F filename \$i &> /dev/null
if [ \$? -eq 0 ]; then
/sbin/modprobe \$i
fi
done
EOF

# Verify that the ip_vs modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

############################# end #############################

# Tune kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.netfilter.nf_conntrack_max=2310720
fs.may_detach_mounts = 1
fs.inotify.max_user_watches=1048576
fs.inotify.max_user_instances = 8192
fs.file-max=52706963
fs.nr_open=52706963
vm.swappiness = 0
vm.panic_on_oom=0
vm.overcommit_memory=1
EOF
sysctl --system

# If sysctl reports errors, see this note on CentOS 7 "No such file or directory" for bridge-nf-call-ip6tables: https://blog.csdn.net/airuozhaoyang/article/details/40534953

# Raise file-descriptor and process limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

# Disable SELinux and firewalld
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# Sync system time
yum install -y ntpdate
ntpdate -u ntp.api.bz

# Configure the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet and kubectl
yum install kubeadm-1.13.3-0 kubelet-1.13.3-0 kubectl-1.13.3-0 --disableexcludes=kubernetes -y
systemctl enable kubelet

# Configure the Docker yum repository
cat > /etc/yum.repos.d/docker.repo <<EOF
[docker]
name=Docker Repository
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker/yum/repo/centos7
enabled=1
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker/yum/gpg
EOF

# Install docker-engine
yum -y install docker-engine-1.13.1 --disableexcludes=docker
systemctl enable docker
systemctl start docker

# Configure kubelet startup options (the cgroup driver must match Docker's)
cgroupDriver=$(docker info|grep Cg)
driver=${cgroupDriver##*: }
echo "driver is ${driver}"

cat <<EOF > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=${driver}"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
Environment="KUBELET_EXTRA_ARGS=--system-reserved=cpu=200m,memory=250Mi --kube-reserved=cpu=200m,memory=250Mi --eviction-soft=memory.available<500Mi,nodefs.available<2Gi --eviction-soft-grace-period=memory.available=1m30s,nodefs.available=1m30s --eviction-max-pod-grace-period=120 --eviction-hard=memory.available<500Mi,nodefs.available<1Gi --eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=500Mi,imagefs.available=2Gi --node-status-update-frequency=10s --eviction-pressure-transition-period=30s"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_SYSTEM_PODS_ARGS \$KUBELET_NETWORK_ARGS \$KUBELET_DNS_ARGS \$KUBELET_AUTHZ_ARGS \$KUBELET_CGROUP_ARGS \$KUBELET_CERTIFICATE_ARGS \$KUBELET_EXTRA_ARGS
EOF
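The drop-in above overrides the kubelet unit configuration shipped by the package; systemd only picks up such changes after a reload, so it is worth adding (a small step not in the original script; kubeadm starts kubelet later):

# Reload systemd so the new 10-kubeadm.conf drop-in takes effect.
systemctl daemon-reload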

Setting up keepalived + haproxy

Configuration on master-1

Install and configure keepalived

# KEEPALIVED_INTERFACE is the NIC name, KEEPALIVED_VIRTUAL_IPS is the VIP, KEEPALIVED_UNICAST_PEERS is master-1's address
docker run --net=host --cap-add=NET_ADMIN \
-e KEEPALIVED_INTERFACE=eth0 \
-e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['172.19.170.100']" \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['172.19.170.183']" \
-e KEEPALIVED_PASSWORD=k8s \
--name k8s-keepalived \
--restart always \
-d osixia/keepalived:2.0.12
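A quick check that the container came up and bound the VIP (a sketch; the interface name eth0 and the VIP follow the values above):

docker logs k8s-keepalived                  # should show keepalived reaching MASTER state
ip addr show eth0 | grep 172.19.170.100     # the VIP should appear on the interface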

Install and configure haproxy

mkdir /etc/haproxy
cat >/etc/haproxy/haproxy.cfg<<EOF
global
log 127.0.0.1 local0 err
maxconn 50000
uid 99
gid 99
#daemon
nbproc 1
pidfile haproxy.pid

defaults
mode http
log 127.0.0.1 local0 err
maxconn 50000
retries 3
timeout connect 5s
timeout client 30s
timeout server 30s
timeout check 2s

listen admin_stats
mode http
bind 0.0.0.0:1080
log 127.0.0.1 local0 err
stats refresh 30s
stats uri /haproxy-status
stats realm Haproxy\ Statistics
stats auth admin:k8s
stats hide-version
stats admin if TRUE

frontend k8s-https
bind 0.0.0.0:8443
mode tcp
#maxconn 50000
default_backend k8s-https

backend k8s-https
mode tcp
balance roundrobin
server master-1 172.19.170.183:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
server master-2 172.19.170.184:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
server master-3 172.19.170.185:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF

docker run -d --name k8s-haproxy \
-v /etc/haproxy:/usr/local/etc/haproxy:ro \
-p 8443:8443 \
-p 1080:1080 \
--restart always \
haproxy:1.7.8-alpine
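To confirm haproxy is serving before moving on (a sketch; the admin:k8s credentials come from the stats section of the config above):

ss -tlnp | grep -E ':8443|:1080'                        # both published ports should be listening
curl -u admin:k8s http://127.0.0.1:1080/haproxy-status  # stats page; backends stay DOWN until the apiservers exist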

Configuration on master-2

Install and configure keepalived

# KEEPALIVED_INTERFACE is the NIC name, KEEPALIVED_VIRTUAL_IPS is the VIP, KEEPALIVED_UNICAST_PEERS is master-2's address
docker run --net=host --cap-add=NET_ADMIN \
-e KEEPALIVED_INTERFACE=eth0 \
-e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['172.19.170.100']" \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['172.19.170.184']" \
-e KEEPALIVED_PASSWORD=k8s \
--name k8s-keepalived \
--restart always \
-d osixia/keepalived:2.0.12

Install and configure haproxy

mkdir /etc/haproxy
cat >/etc/haproxy/haproxy.cfg<<EOF
global
log 127.0.0.1 local0 err
maxconn 50000
uid 99
gid 99
#daemon
nbproc 1
pidfile haproxy.pid

defaults
mode http
log 127.0.0.1 local0 err
maxconn 50000
retries 3
timeout connect 5s
timeout client 30s
timeout server 30s
timeout check 2s

listen admin_stats
mode http
bind 0.0.0.0:1080
log 127.0.0.1 local0 err
stats refresh 30s
stats uri /haproxy-status
stats realm Haproxy\ Statistics
stats auth admin:k8s
stats hide-version
stats admin if TRUE

frontend k8s-https
bind 0.0.0.0:8443
mode tcp
#maxconn 50000
default_backend k8s-https

backend k8s-https
mode tcp
balance roundrobin
server master-1 172.19.170.183:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
server master-2 172.19.170.184:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
server master-3 172.19.170.185:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF

docker run -d --name k8s-haproxy \
-v /etc/haproxy:/usr/local/etc/haproxy:ro \
-p 8443:8443 \
-p 1080:1080 \
--restart always \
haproxy:1.7.8-alpine

Configuration on master-3

Install and configure keepalived

# KEEPALIVED_INTERFACE is the NIC name, KEEPALIVED_VIRTUAL_IPS is the VIP, KEEPALIVED_UNICAST_PEERS is master-3's address
docker run --net=host --cap-add=NET_ADMIN \
-e KEEPALIVED_INTERFACE=eth0 \
-e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['172.19.170.100']" \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['172.19.170.185']" \
-e KEEPALIVED_PASSWORD=k8s \
--name k8s-keepalived \
--restart always \
-d osixia/keepalived:2.0.12

Install and configure haproxy

mkdir /etc/haproxy
cat >/etc/haproxy/haproxy.cfg<<EOF
global
log 127.0.0.1 local0 err
maxconn 50000
uid 99
gid 99
#daemon
nbproc 1
pidfile haproxy.pid

defaults
mode http
log 127.0.0.1 local0 err
maxconn 50000
retries 3
timeout connect 5s
timeout client 30s
timeout server 30s
timeout check 2s

listen admin_stats
mode http
bind 0.0.0.0:1080
log 127.0.0.1 local0 err
stats refresh 30s
stats uri /haproxy-status
stats realm Haproxy\ Statistics
stats auth admin:k8s
stats hide-version
stats admin if TRUE

frontend k8s-https
bind 0.0.0.0:8443
mode tcp
#maxconn 50000
default_backend k8s-https

backend k8s-https
mode tcp
balance roundrobin
server master-1 172.19.170.183:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
server master-2 172.19.170.184:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
server master-3 172.19.170.185:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF

docker run -d --name k8s-haproxy \
-v /etc/haproxy:/usr/local/etc/haproxy:ro \
-p 8443:8443 \
-p 1080:1080 \
--restart always \
haproxy:1.7.8-alpine

Building the Kubernetes master cluster

Configure the Alibaba Cloud SLB

In the Alibaba Cloud SLB console, add a listener on port 8443 using the TCP protocol. For the backend, select the single master machine you are about to initialize, with backend port 6443; the default health check settings are fine. Save the configuration. At this point the SLB does nothing yet, because nothing is listening on the backend's port 6443.

Initialize master-1

Run on master-1:

cat << EOF > $HOME/kubeadm-1.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.3
controlPlaneEndpoint: 172.19.170.100:8443
apiServer:
  certSANs:
  - 172.19.170.183
  - 172.19.170.184
  - 172.19.170.185
  - 172.19.170.100
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: 192.168.0.0/16
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # image registry mirror
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: iptables
EOF

kubeadm config images pull --config $HOME/kubeadm-1.yaml
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
kubeadm init --config $HOME/kubeadm-1.yaml

mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf ${HOME}/.kube/config
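At this point master-1 should be up as a single control-plane node; a quick check (the node will show NotReady until the CNI plugin is installed in the next step):

kubectl get nodes
kubectl get pods -n kube-system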

Install the Calico network plugin

Run on master-1.

Make sure the Kubernetes controller manager (/etc/kubernetes/manifests/kube-controller-manager.yaml) is running with the flags --cluster-cidr=192.168.0.0/16 and --allocate-node-cidrs=true.

Tip: with kubeadm you can pass --pod-network-cidr=192.168.0.0/16 to kubeadm to set both controller flags; the podSubnet field in the config file above has the same effect.
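A quick way to confirm the flags on master-1 (a sketch):

grep -E 'cluster-cidr|allocate-node-cidrs' /etc/kubernetes/manifests/kube-controller-manager.yaml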

In the calico-config ConfigMap, find typha_service_name, remove the value none, and replace it with calico-typha.

We recommend at least one Typha replica per 200 nodes, and no more than 20 replicas. In production we recommend at least three replicas to reduce the impact of rolling upgrades and failures.

Warning: if you set typha_service_name without raising the Typha replica count from its default of 0, Felix will try to connect to Typha, find no Typha instance to connect to, and fail to start.
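Concretely, the edits inside calico.yaml would look roughly like this (a sketch based on the upstream Calico 3.x manifest; the exact object names may differ in the file actually applied below):

kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  typha_service_name: "calico-typha"   # was "none"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-typha
  namespace: kube-system
spec:
  replicas: 1   # roughly 1 per 200 nodes, at most 20; >=3 recommended in production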

kubectl apply -f www.jiunile.com/k8s/plugin/1.12.3/calico.yaml

Initialize the other master nodes

Run on master-1:

# Prepare master-2
ssh master-2 "kubeadm reset -f
rm -rf /etc/kubernetes/pki/
mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1"


# Prepare master-3
ssh master-3 "kubeadm reset -f
rm -rf /etc/kubernetes/pki/
mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1"


# Copy certificates and kubeconfig to master-2
scp /etc/kubernetes/pki/ca.crt master-2:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/ca.key master-2:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.key master-2:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/sa.pub master-2:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt master-2:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key master-2:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt master-2:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key master-2:/etc/kubernetes/pki/etcd/ca.key
scp /etc/kubernetes/admin.conf master-2:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf master-2:~/.kube/config

# Copy certificates and kubeconfig to master-3
scp /etc/kubernetes/pki/ca.crt master-3:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/ca.key master-3:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.key master-3:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/sa.pub master-3:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt master-3:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key master-3:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt master-3:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key master-3:/etc/kubernetes/pki/etcd/ca.key
scp /etc/kubernetes/admin.conf master-3:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf master-3:~/.kube/config

# Generate the join command
JOIN_CMD=`kubeadm token create --print-join-command`

# Join master-2 as a control-plane node
ssh master-2 "${JOIN_CMD} --experimental-control-plane"

# Join master-3 as a control-plane node
ssh master-3 "${JOIN_CMD} --experimental-control-plane"

Configure the Alibaba Cloud SLB again

Go back to the Alibaba Cloud SLB console and add the remaining two master machines as backend servers. Your apiserver is now highly available.
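With all three backends registered, every kubectl request already goes through the VIP, because kubeadm wrote controlPlaneEndpoint into the admin kubeconfig; a quick check:

grep server ~/.kube/config      # should show https://172.19.170.100:8443
kubectl get componentstatuses   # scheduler, controller-manager and etcd should report Healthy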

Enable Kubernetes add-ons

Run on any master.

Install the Heapster monitoring add-on

kubectl apply -f www.jiunile.com/k8s/plugin/1.12.3/heapster.yaml

Install the Dashboard add-on

kubectl apply -f www.jiunile.com/k8s/plugin/1.12.3/dashboard.yaml

Get the cluster join command

kubeadm token create --print-join-command

Initialize all Kubernetes nodes (run the following on every node machine)

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1

# Pre-pull the images required by the Calico network plugin; pulling them can be fairly slow
docker pull quay.io/calico/typha:v3.3.2
docker pull quay.io/calico/node:v3.3.2
docker pull quay.io/calico/cni:v3.3.2

# Join the cluster (use the join command printed in the previous step)
kubeadm join 172.19.170.100:8443 --token mrvxeb.mfr1wx6upq5bbwqt --discovery-token-ca-cert-hash sha256:8ee893d8bf69f1f622f55a983f1401a9f2a236ffa9248894cb614c972de47f48

You can check the node status from a master machine:

kubectl get node
kubectl get pod -n kube-system

Testing

Test that applications and DNS work

Run on any master machine:

# Deploy a test service
cd /root && mkdir nginx && cd nginx
cat << EOF > nginx.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - port: 80
    nodePort: 31000
    name: nginx-port
    targetPort: 80
    protocol: TCP

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

kubectl apply -f nginx.yaml

# Start a pod to test DNS resolution
kubectl run curl --image=radial/busyboxplus:curl -i --tty
nslookup kubernetes
# Server: 10.96.0.10
# Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

# Name: kubernetes
# Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
curl nginx
# Any response here means DNS and service routing are working
exit
kubectl delete deployment curl
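Because the Service above is exposed as a NodePort, it can also be tested from outside the cluster (a sketch; substitute any node IP from the environment table):

curl http://172.19.170.186:31000   # the nginx welcome page is expected via the NodePort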

Test master & haproxy high availability

# Shut down one of the master machines
init 0

# Run on another master machine
kubectl get node
#NAME STATUS ROLES AGE VERSION
#master-1 NotReady master 1h v1.13.3
#master-2 Ready master 59m v1.13.3
#master-3 Ready master 59m v1.13.3
#node-1 Ready <none> 58m v1.13.3
#node-2 Ready <none> 58m v1.13.3

# Create a pod again to confirm the cluster can still schedule workloads
kubectl run curl --image=radial/busyboxplus:curl -i --tty
exit
kubectl delete deployment curl