Deploying a Multi-Master Cluster with Kubeadm 1.11.x

Environment Overview

IP              Hostname  Notes
172.19.170.183  master-1  Private cloud: install keepalived + haproxy
172.19.170.184  master-2  Private cloud: install keepalived + haproxy
172.19.170.185  master-3  Private cloud: install keepalived + haproxy
172.19.170.186  node-1
172.19.170.187  node-2
172.19.170.100  vip       Private cloud: built with keepalived + haproxy; public cloud: use an internal load balancer

Environment Initialization

Run the following commands on master-1:

ssh-keygen -t rsa
ssh-copy-id root@master-2
ssh-copy-id root@master-3
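
A quick check that passwordless SSH is working, since the scp/ssh steps later in this guide rely on it (this assumes master-2 and master-3 already resolve, e.g. via DNS or the /etc/hosts entries added below):

ssh master-2 hostname    # should print master-2 without asking for a password
ssh master-3 hostname    # should print master-3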

Update the hosts file; run this on all master machines:

cat <<EOF >> /etc/hosts
172.19.170.183 master-1
172.19.170.184 master-2
172.19.170.185 master-3
EOF

Tune the OS on all master & node machines

Run the following commands on every master & node machine:


#################################################################
# Install the 4.18 kernel
# The latest stable 4.19 kernel renamed nf_conntrack_ipv4 to nf_conntrack, and the current kube-proxy cannot enable IPVS on a 4.19 kernel
# For details see: https://github.com/kubernetes/kubernetes/issues/70304
# The fix was only merged into master on October 30, so no Kubernetes release containing it has shipped yet
# Either install a 4.18 kernel, or do not enable IPVS
#################################################################
############################# start #############################
# Upgrade the kernel
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm ;yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y

# Check that the default kernel version is greater than 4.14; otherwise adjust the default boot entry
grub2-editenv list

# Reboot to switch to the new kernel
reboot

# Enable ip_vs
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/\`uname -r\`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed -r 's#(.*).ko.*#\1#'\`; do
/sbin/modinfo -F filename \$i &> /dev/null
if [ \$? -eq 0 ]; then
/sbin/modprobe \$i
fi
done
EOF

# Verify that the ip_vs modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

############################# end #############################

# Tune kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.netfilter.nf_conntrack_max=2310720
fs.may_detach_mounts = 1
fs.inotify.max_user_watches=1048576
fs.inotify.max_user_instances = 8192
fs.file-max=52706963
fs.nr_open=52706963
vm.swappiness = 0
vm.panic_on_oom=0
vm.overcommit_memory=1
EOF
sysctl --system

# If sysctl reports "No such file or directory" for bridge-nf-call-ip6tables on CentOS 7, the br_netfilter module is typically not loaded (modprobe br_netfilter); see: https://blog.csdn.net/airuozhaoyang/article/details/40534953

# Raise open-file and process limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

# Disable SELinux and firewalld
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# Sync the system time
yum install -y ntpdate
ntpdate -u ntp.api.bz

# Configure the Kubernetes yum repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet and kubectl
yum install kubeadm-1.11.5-0 kubelet-1.11.5-0 kubectl-1.11.5-0 --disableexcludes=kubernetes -y
systemctl enable kubelet

# Configure the Docker yum repo
cat > /etc/yum.repos.d/docker.repo <<EOF
[docker]
name=Docker Repository
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker/yum/repo/centos7
enabled=1
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker/yum/gpg
EOF

# Install docker-engine
yum -y install docker-engine-1.13.1 --disableexcludes=docker
systemctl enable docker
systemctl start docker

# Configure the kubelet systemd drop-in (matching Docker's cgroup driver)
cgroupDriver=$(docker info|grep Cg)
driver=${cgroupDriver##*: }
echo "driver is ${driver}"

cat <<EOF > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=${driver}"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
Environment="KUBELET_EXTRA_ARGS=--system-reserved=cpu=200m,memory=250Mi --kube-reserved=cpu=200m,memory=250Mi --eviction-soft=memory.available<500Mi,nodefs.available<2Gi --eviction-soft-grace-period=memory.available=1m30s,nodefs.available=1m30s --eviction-max-pod-grace-period=120 --eviction-hard=memory.available<300Mi,nodefs.available<1Gi --eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=500Mi,imagefs.available=2Gi --node-status-update-frequency=10s --eviction-pressure-transition-period=30s"
ExecStart=
ExecStart=/usr/bin/kubelet \$KUBELET_KUBECONFIG_ARGS \$KUBELET_SYSTEM_PODS_ARGS \$KUBELET_NETWORK_ARGS \$KUBELET_DNS_ARGS \$KUBELET_AUTHZ_ARGS \$KUBELET_CADVISOR_ARGS \$KUBELET_CGROUP_ARGS \$KUBELET_CERTIFICATE_ARGS \$KUBELET_EXTRA_ARGS
EOF
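
After rewriting the kubelet drop-in, reload systemd so the new unit configuration is picked up; kubelet itself is started later by kubeadm init:

systemctl daemon-reload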

Set up keepalived + haproxy (not needed on public cloud)

Configure master-1

Install and configure keepalived

yum install -y keepalived

cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     root@localhost                               # notification recipient
   }
   notification_email_from keepalived@localhost   # sender address
   smtp_server 127.0.0.1                          # mail server address
   smtp_connect_timeout 30
   router_id node1                                # identifier; just needs to differ on each node
}

vrrp_instance VI_1 {
    state MASTER               # BACKUP on the other nodes
    interface eth0             # the NIC the VIP floats to
    virtual_router_id 6        # must be identical on all nodes
    priority 100               # backup nodes must use a lower value than the master
    advert_int 1               # advertisement interval, 1 second
    authentication {
        auth_type PASS         # pre-shared key authentication
        auth_pass 571f97b2     # the key
    }
    virtual_ipaddress {
        172.19.170.100         # the VIP
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived

Install and configure haproxy

yum install  -y haproxy

cat << EOF > /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon

defaults
mode tcp
log global
retries 3
timeout connect 10s
timeout client 1m
timeout server 1m

frontend kubernetes
bind *:8443
mode tcp
default_backend kubernetes-master

backend kubernetes-master
balance roundrobin
server master-1 172.19.170.183:6443 check maxconn 2000
server master-2 172.19.170.184:6443 check maxconn 2000
server master-3 172.19.170.185:6443 check maxconn 2000
EOF

systemctl enable haproxy
systemctl start haproxy
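
A quick sanity check of the load-balancer layer (a minimal sketch; it assumes the eth0 interface and VIP 172.19.170.100 from the configuration above):

# On the current keepalived MASTER (initially master-1) the VIP should be bound
ip addr show eth0 | grep 172.19.170.100

# haproxy should be listening on 8443; the 6443 backends stay DOWN until kube-apiserver is running
ss -lntp | grep 8443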

Configure master-2

Install and configure keepalived

yum install -y keepalived

cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     root@localhost                               # notification recipient
   }
   notification_email_from keepalived@localhost   # sender address
   smtp_server 127.0.0.1                          # mail server address
   smtp_connect_timeout 30
   router_id node2                                # identifier; just needs to differ on each node
}

vrrp_instance VI_1 {
    state BACKUP               # MASTER on master-1
    interface eth0             # the NIC the VIP floats to
    virtual_router_id 6        # must be identical on all nodes
    priority 80                # must be lower than the master's priority
    advert_int 1               # advertisement interval, 1 second
    authentication {
        auth_type PASS         # pre-shared key authentication
        auth_pass 571f97b2     # the key
    }
    virtual_ipaddress {
        172.19.170.100         # the VIP
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived

Install and configure haproxy

yum install  -y haproxy

cat << EOF > /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon

defaults
mode tcp
log global
retries 3
timeout connect 10s
timeout client 1m
timeout server 1m

frontend kubernetes
bind *:8443
mode tcp
default_backend kubernetes-master

backend kubernetes-master
balance roundrobin
server master-1 172.19.170.183:6443 check maxconn 2000
server master-2 172.19.170.184:6443 check maxconn 2000
server master-3 172.19.170.185:6443 check maxconn 2000
EOF

systemctl enable haproxy
systemctl start haproxy

Configure master-3

Install and configure keepalived

yum install -y keepalived

cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     root@localhost                               # notification recipient
   }
   notification_email_from keepalived@localhost   # sender address
   smtp_server 127.0.0.1                          # mail server address
   smtp_connect_timeout 30
   router_id node3                                # identifier; just needs to differ on each node
}

vrrp_instance VI_1 {
    state BACKUP               # MASTER on master-1
    interface eth0             # the NIC the VIP floats to
    virtual_router_id 6        # must be identical on all nodes
    priority 80                # must be lower than the master's priority
    advert_int 1               # advertisement interval, 1 second
    authentication {
        auth_type PASS         # pre-shared key authentication
        auth_pass 571f97b2     # the key
    }
    virtual_ipaddress {
        172.19.170.100         # the VIP
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived

Install and configure haproxy

yum install  -y haproxy

cat << EOF > /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon

defaults
mode tcp
log global
retries 3
timeout connect 10s
timeout client 1m
timeout server 1m

frontend kubernetes
bind *:8443
mode tcp
default_backend kubernetes-master

backend kubernetes-master
balance roundrobin
server master-1 172.19.170.183:6443 check maxconn 2000
server master-2 172.19.170.184:6443 check maxconn 2000
server master-3 172.19.170.185:6443 check maxconn 2000
EOF

systemctl enable haproxy
systemctl start haproxy

Build the Kubernetes master cluster

Initialize master-1

Run on master-1:

cat << EOF > $HOME/kubeadm-1.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.5
api:
  advertiseAddress: 172.19.170.183
  bindPort: 6443
  controlPlaneEndpoint: 172.19.170.100:8443
apiServerCertSANs:
- 172.19.170.183
- 172.19.170.184
- 172.19.170.185
- master-1
- master-2
- master-3
- 172.19.170.100
- 127.0.0.1
kubeProxy:
  config:
    mode: iptables
etcd:
  local:
    extraArgs:
      listen-client-urls: https://127.0.0.1:2379,https://172.19.170.183:2379
      advertise-client-urls: https://172.19.170.183:2379
      listen-peer-urls: https://172.19.170.183:2380
      initial-advertise-peer-urls: https://172.19.170.183:2380
      initial-cluster: master-1=https://172.19.170.183:2380
      initial-cluster-state: new
    serverCertSANs:
    - master-1
    - 172.19.170.183
    peerCertSANs:
    - master-1
    - 172.19.170.183
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: 192.168.0.0/16
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # image registry mirror
EOF

kubeadm config images pull --config $HOME/kubeadm-1.yaml
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
kubeadm init --config $HOME/kubeadm-1.yaml

mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf ${HOME}/.kube/config

# Check that etcd has started; wait until this returns Running
kubectl get pods -n kube-system 2>&1|grep etcd|awk '{print $3}'

# Prepare the other masters
ssh master-2 "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube"
ssh master-3 "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube"

# Copy files to master-2
scp /etc/kubernetes/pki/ca.crt master-2:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/ca.key master-2:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.key master-2:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/sa.pub master-2:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt master-2:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key master-2:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt master-2:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key master-2:/etc/kubernetes/pki/etcd/ca.key
scp /etc/kubernetes/admin.conf master-2:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf master-2:~/.kube/config

# Copy files to master-3
scp /etc/kubernetes/pki/ca.crt master-3:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/ca.key master-3:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.key master-3:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/sa.pub master-3:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt master-3:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key master-3:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt master-3:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key master-3:/etc/kubernetes/pki/etcd/ca.key
scp /etc/kubernetes/admin.conf master-3:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf master-3:~/.kube/config

Initialize master-2

Run on master-2:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1

cat << EOF > $HOME/kubeadm-2.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.5
api:
  advertiseAddress: 172.19.170.184
  bindPort: 6443
  controlPlaneEndpoint: 172.19.170.100:8443
apiServerCertSANs:
- 172.19.170.183
- 172.19.170.184
- 172.19.170.185
- master-1
- master-2
- master-3
- 172.19.170.100
- 127.0.0.1
kubeProxy:
  config:
    mode: iptables
etcd:
  local:
    extraArgs:
      listen-client-urls: https://127.0.0.1:2379,https://172.19.170.184:2379
      advertise-client-urls: https://172.19.170.184:2379
      listen-peer-urls: https://172.19.170.184:2380
      initial-advertise-peer-urls: https://172.19.170.184:2380
      initial-cluster: master-1=https://172.19.170.183:2380,master-2=https://172.19.170.184:2380
      initial-cluster-state: existing
    serverCertSANs:
    - master-2
    - 172.19.170.184
    peerCertSANs:
    - master-2
    - 172.19.170.184
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: 192.168.0.0/16
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # image registry mirror
EOF

######################################################
################## Switch to master-1 and run ##################
######################################################
kubectl exec \
-n kube-system etcd-master-1 -- etcdctl \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--endpoints=https://172.19.170.183:2379 \
member add master-2 https://172.19.170.184:2380
# After running this on master-1, etcd will appear to hang, waiting for the new etcd member to join


######################################################
################## Switch to master-2 and run ##################
######################################################
kubeadm alpha phase certs all --config $HOME/kubeadm-2.yaml
kubeadm alpha phase kubeconfig controller-manager --config $HOME/kubeadm-2.yaml
kubeadm alpha phase kubeconfig scheduler --config $HOME/kubeadm-2.yaml
kubeadm alpha phase kubelet config write-to-disk --config $HOME/kubeadm-2.yaml
kubeadm alpha phase kubelet write-env-file --config $HOME/kubeadm-2.yaml
kubeadm alpha phase kubeconfig kubelet --config $HOME/kubeadm-2.yaml
systemctl restart kubelet
kubeadm alpha phase etcd local --config $HOME/kubeadm-2.yaml
kubeadm alpha phase kubeconfig all --config $HOME/kubeadm-2.yaml
kubeadm alpha phase controlplane all --config $HOME/kubeadm-2.yaml
kubeadm alpha phase mark-master --config $HOME/kubeadm-2.yaml
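
To confirm that master-2's etcd has actually joined, the member list can be checked from master-1 (a sketch reusing the same etcdctl flags as the member add command above):

kubectl exec \
-n kube-system etcd-master-1 -- etcdctl \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--endpoints=https://172.19.170.183:2379 \
member list
# Both master-1 and master-2 should appear with their peer and client URLs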

Initialize master-3

Run on master-3:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1

cat << EOF > $HOME/kubeadm-3.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.5
api:
  advertiseAddress: 172.19.170.185
  bindPort: 6443
  controlPlaneEndpoint: 172.19.170.100:8443
apiServerCertSANs:
- 172.19.170.183
- 172.19.170.184
- 172.19.170.185
- master-1
- master-2
- master-3
- 172.19.170.100
- 127.0.0.1
kubeProxy:
  config:
    mode: iptables
etcd:
  local:
    extraArgs:
      listen-client-urls: https://127.0.0.1:2379,https://172.19.170.185:2379
      advertise-client-urls: https://172.19.170.185:2379
      listen-peer-urls: https://172.19.170.185:2380
      initial-advertise-peer-urls: https://172.19.170.185:2380
      initial-cluster: master-1=https://172.19.170.183:2380,master-2=https://172.19.170.184:2380,master-3=https://172.19.170.185:2380
      initial-cluster-state: existing
    serverCertSANs:
    - master-3
    - 172.19.170.185
    peerCertSANs:
    - master-3
    - 172.19.170.185
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: 192.168.0.0/16
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # image registry mirror
EOF

######################################################
################## Switch to master-1 and run ##################
######################################################
kubectl exec \
-n kube-system etcd-master-1 -- etcdctl \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--endpoints=https://172.19.170.183:2379 \
member add master-3 https://172.19.170.185:2380
# After running this on master-1, etcd will appear to hang, waiting for the new etcd member to join

######################################################
################## Switch to master-3 and run ##################
######################################################
kubeadm alpha phase certs all --config $HOME/kubeadm-3.yaml
kubeadm alpha phase kubeconfig controller-manager --config $HOME/kubeadm-3.yaml
kubeadm alpha phase kubeconfig scheduler --config $HOME/kubeadm-3.yaml
kubeadm alpha phase kubelet config write-to-disk --config $HOME/kubeadm-3.yaml
kubeadm alpha phase kubelet write-env-file --config $HOME/kubeadm-3.yaml
kubeadm alpha phase kubeconfig kubelet --config $HOME/kubeadm-3.yaml
systemctl restart kubelet
kubeadm alpha phase etcd local --config $HOME/kubeadm-3.yaml
kubeadm alpha phase kubeconfig all --config $HOME/kubeadm-3.yaml
kubeadm alpha phase controlplane all --config $HOME/kubeadm-3.yaml
kubeadm alpha phase mark-master --config $HOME/kubeadm-3.yaml

Adjust the configuration to route everything through the VIP

Run the corresponding commands on each master machine:

# Run on master-1
sed -i 's/etcd-servers=https:\/\/127.0.0.1:2379/etcd-servers=https:\/\/172.19.170.183:2379,https:\/\/172.19.170.184:2379,https:\/\/172.19.170.185:2379/g' /etc/kubernetes/manifests/kube-apiserver.yaml
sed -i 's/172.19.170.183:6443/172.19.170.100:8443/g' ~/.kube/config
sed -i 's/172.19.170.183:6443/172.19.170.100:8443/g' /etc/kubernetes/kubelet.conf
systemctl restart kubelet

# Run on master-2
sed -i 's/etcd-servers=https:\/\/127.0.0.1:2379/etcd-servers=https:\/\/172.19.170.183:2379,https:\/\/172.19.170.184:2379,https:\/\/172.19.170.185:2379/g' /etc/kubernetes/manifests/kube-apiserver.yaml
sed -i 's/172.19.170.184:6443/172.19.170.100:8443/g' ~/.kube/config
sed -i 's/172.19.170.184:6443/172.19.170.100:8443/g' /etc/kubernetes/kubelet.conf
systemctl restart kubelet

# Run on master-3
sed -i 's/etcd-servers=https:\/\/127.0.0.1:2379/etcd-servers=https:\/\/172.19.170.183:2379,https:\/\/172.19.170.184:2379,https:\/\/172.19.170.185:2379/g' /etc/kubernetes/manifests/kube-apiserver.yaml
sed -i 's/172.19.170.185:6443/172.19.170.100:8443/g' ~/.kube/config
sed -i 's/172.19.170.185:6443/172.19.170.100:8443/g' /etc/kubernetes/kubelet.conf
systemctl restart kubelet

######################################################
################## The cluster build is complete #####################
######################################################
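
As a final check that the control plane is reachable through the VIP (after the sed edits above, kubectl now talks to 172.19.170.100:8443):

kubectl cluster-info
kubectl get node
kubectl get pod -n kube-system -o wide
# The masters will stay NotReady until the network plugin is installed in the next step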

Enable Kubernetes add-ons

Run on any one master

Install the Calico network plugin

Make sure the Kubernetes controller manager (/etc/kubernetes/manifests/kube-controller-manager.yaml) is started with the following flags: --cluster-cidr=192.168.0.0/16 and --allocate-node-cidrs=true.

Tip: with kubeadm you can pass --pod-network-cidr=192.168.0.0/16 to kubeadm to set both controller flags.

In the ConfigMap named calico-config, find typha_service_name, delete the none value, and replace it with calico-typha.

At least one Typha replica is recommended per 200 nodes, with no more than 20 replicas. In production, at least three replicas are recommended to reduce the impact of rolling upgrades and failures.

Warning: if you set typha_service_name without raising the Typha replica count above its default of 0, Felix will try to connect to Typha, fail to find a Typha instance to connect to, and fail to start.
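
For example, the controller-manager flags can be verified and the Typha settings adjusted as follows (a hedged sketch: the calico-config ConfigMap and calico-typha Deployment names come from the standard Calico v3.2 manifest; adjust them if your calico.yaml differs):

# Verify the controller-manager flags
grep -E 'cluster-cidr|allocate-node-cidrs' /etc/kubernetes/manifests/kube-controller-manager.yaml

# If enabling Typha (either edit calico.yaml before applying, or after applying the manifest below):
kubectl -n kube-system edit configmap calico-config      # set typha_service_name: "calico-typha"
kubectl -n kube-system scale deployment calico-typha --replicas=3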

kubectl apply -f www.jiunile.com/k8s/plugin/1.11.5/calico.yaml

Install the Heapster monitoring plugin

kubectl apply -f www.jiunile.com/k8s/plugin/1.11.5/heapster.yaml

Install the Dashboard plugin

kubectl apply -f www.jiunile.com/k8s/plugin/1.11.5/dashboard.yaml

Get the cluster join command

kubeadm token create --print-join-command

Initialize all Kubernetes nodes

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1

# Pull the images Calico needs; this can be fairly slow
docker pull quay.io/calico/typha:v3.2.4
docker pull quay.io/calico/node:v3.2.4
docker pull quay.io/calico/cni:v3.2.4

# Join the cluster
kubeadm join 172.19.170.100:8443 --token mrvxeb.mfr1wx6upq5bbwqt --discovery-token-ca-cert-hash sha256:8ee893d8bf69f1f622f55a983f1401a9f2a236ffa9248894cb614c972de47f48

You can check the node status from a master machine:

kubectl get node
kubectl get pod -n kube-system

Testing

Test that applications and DNS work

Run on any master machine

# Deploy a test service
cd /root && mkdir nginx && cd nginx
cat << EOF > nginx.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - port: 80
    nodePort: 31000
    name: nginx-port
    targetPort: 80
    protocol: TCP

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

kubectl apply -f nginx.yaml

# Start a pod to test DNS resolution
kubectl run curl --image=radial/busyboxplus:curl -i --tty
nslookup kubernetes
# Server: 10.96.0.10
# Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

# Name: kubernetes
# Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
curl nginx
# A response means everything is OK
exit
kubectl delete deployment curl

Test master & haproxy high availability

# Shut down one master machine at random
init 0

# Run on another master machine
kubectl get node
#NAME STATUS ROLES AGE VERSION
#master-1 NotReady master 1h v1.11.5
#master-2 Ready master 59m v1.11.5
#master-3 Ready master 59m v1.11.5
#node-1 Ready <none> 58m v1.11.5
#node-2 Ready <none> 58m v1.11.5

# Create a new pod to confirm the cluster can still schedule workloads
kubectl run curl --image=radial/busyboxplus:curl -i --tty
exit
kubectl delete deployment curl