Deploying Kubernetes v1.12.1 with kubeadm

Posted by Sunday on 2018-10-22

https://kubernetes.io/docs/setup/independent/install-kubeadm/

Environment

OS           IP               hostname
CentOS 7.4   192.168.10.101   master
CentOS 7.4   192.168.10.102   node2
CentOS 7.4   192.168.10.103   node3


Environment Preparation

Configure host resolution (all nodes)

cat << EOF >> /etc/hosts
192.168.10.101 master
192.168.10.102 node2
192.168.10.103 node3
EOF
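
If the hostnames have not been set yet, set them on the corresponding machines first (this step is an addition to the original post; the names are the ones used in the environment table above):

hostnamectl set-hostname master   # on 192.168.10.101
hostnamectl set-hostname node2    # on 192.168.10.102
hostnamectl set-hostname node3    # on 192.168.10.103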

Disable the firewall (all nodes)

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's#^SELINUX=.*#SELINUX=disabled#g' /etc/selinux/config

Disable swap (all nodes)

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

Passwordless SSH login (run on master)

ssh-keygen
ssh-copy-id root@node2
ssh-copy-id root@node3

Kernel configuration (all nodes)

modprobe br_netfilter
cat <<EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p
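
Note that modprobe br_netfilter does not persist across a reboot. A minimal sketch to load the module automatically at boot (an addition, not part of the original steps):

cat << EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF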

cat << EOF >> /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
* soft memlock unlimited
* hard memlock unlimited
EOF

Time synchronization

yum install -y ntpdate
timedatectl set-timezone Asia/Shanghai
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp   # load the cron entry
systemctl enable ntpdate
systemctl start ntpdate

Deploy Docker

Remove old Docker packages

yum remove docker docker-client docker-client-latest docker-common \
docker-latest docker-latest-logrotate docker-logrotate docker-selinux \
docker-engine-selinux docker-engine docker-ce

Install dependencies

yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker repository
Run on all nodes

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker

yum install docker-ce -y
systemctl start docker
systemctl enable docker

#docker version

Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:23:03 2018
OS/Arch: linux/amd64
Experimental: false

Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:25:29 2018
OS/Arch: linux/amd64
Experimental: false

Add a registry mirror

mkdir -p /etc/docker 
cat << EOF > /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"live-restore": true
}
EOF
systemctl restart docker
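
As an optional check, confirm that the mirror configuration was picked up after the restart:

docker info | grep -A 1 'Registry Mirrors'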

Deploy Kubernetes

Run on all nodes
Add the Kubernetes repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF

Install Kubernetes

yum makecache fast
yum install -y kubelet kubeadm kubectl kubernetes-cni socat
systemctl enable kubelet
systemctl start kubelet
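
The command above installs whatever the repository currently marks as latest. To match the v1.12.1 version used in this guide, you can pin the packages instead (a sketch, assuming the Aliyun mirror still provides the 1.12.1 builds):

yum install -y kubelet-1.12.1 kubeadm-1.12.1 kubectl-1.12.1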

Add the proxy address and no-proxy networks

sed -i 's#StartLimitInterval=60s#StartLimitInterval=60s\
Environment="HTTP_PROXY=http://192.168.10.251:1080" "HTTPS_PROXY=http://192.168.10.251:1080" "NO_PROXY=localhost,127.0.0.1,192.168.10.0/24,docker-registry.sundayle.com"#' /usr/lib/systemd/system/docker.service

systemctl daemon-reload
systemctl restart docker
systemctl show docker --property Environment # verify the proxy settings took effect

https://docs.docker.com/config/daemon/systemd/#httphttps-proxy

Ignore swap

sed -i 's#^KUBELET_EXTRA_ARGS=.*#KUBELET_EXTRA_ARGS="--fail-swap-on=false"#' /etc/sysconfig/kubelet
systemctl restart kubelet
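
Optionally, confirm that the flag landed in the kubelet configuration:

cat /etc/sysconfig/kubelet   # should show KUBELET_EXTRA_ARGS="--fail-swap-on=false"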

Pull the images

[root@master ~]# kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.12.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.12.1
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.12.1
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.12.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.24
[config/images] Pulled k8s.gcr.io/coredns:1.2.2

[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.12.1 61afff57f010 2 weeks ago 96.6MB
k8s.gcr.io/kube-scheduler v1.12.1 d773ad20fd80 2 weeks ago 58.3MB
k8s.gcr.io/kube-apiserver v1.12.1 dcb029b5e3ad 2 weeks ago 194MB
k8s.gcr.io/kube-controller-manager v1.12.1 aa2dd57c7329 2 weeks ago 164MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 4 weeks ago 220MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 7 weeks ago 39.2MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 9 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 10 months ago 742kB

Initialize the cluster

Run on master
Initialize with kubeadm init

kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 --apiserver-advertise-address=192.168.10.101 \
--ignore-preflight-errors=Swap

kubeadm init output

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.10.101:6443 --token mtplaq.2jbbpziao4387pjg --discovery-token-ca-cert-hash sha256:27e77c5b53716b1d42940e6a74797597be55165487c6f30f4978a2ac97d8df00

Note: record the kubeadm join command shown above.
If initialization fails, run kubeadm reset and then run kubeadm init again.
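
A minimal reset sketch before re-running kubeadm init (cleaning up the CNI config and iptables rules is an assumption based on the hints kubeadm reset itself prints, not an explicit step in the original post):

kubeadm reset
rm -rf /etc/cni/net.d $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X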

To make kubectl work for a non-root user, run the following commands (they are also part of the kubeadm init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" /etc/profile
source /etc/profile
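
A quick way to confirm kubectl can reach the API server:

kubectl cluster-info
kubectl get nodes   # the master shows NotReady until the pod network is deployed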

Add the flannel pod network add-on
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Configuration in kube-flannel.yml

# The Network value here must match the --pod-network-cidr passed to kubeadm init above
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

# The default image is quay.io/coreos/flannel:v0.10.0-amd64. If you can pull it, keep it;
# otherwise change the image in the yml to the Aliyun mirror:
        image: registry.cn-shenzhen.aliyuncs.com/kubernetes-cn/flannel:v0.10.0-amd64

# If a node has more than one network interface, see flannel issue 39701:
# https://github.com/kubernetes/kubernetes/issues/39701
# You currently need to use the --iface flag in kube-flannel.yml to name the cluster hosts'
# internal NIC, otherwise DNS resolution may fail and containers may be unable to communicate.
# Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld arguments:
      containers:
      - name: kube-flannel
        image: registry.cn-shenzhen.aliyuncs.com/kubernetes-cn/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1

# --iface=eth1: use the name of your host's internal NIC
# kubectl apply -f kube-flannel.yml   # apply the modified manifest
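
Once the manifest is applied, verify the flannel pods start on every node (the app=flannel label below matches the upstream kube-flannel.yml; treat it as an assumption if your manifest differs):

kubectl get pods -n kube-system -l app=flannel -o wide
kubectl get node   # nodes should turn Ready once the network is up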

Basic commands

kubectl get node                                 # check node status
kubectl get pod -n kube-system -o wide           # list pods in kube-system
kubectl describe pod [pod_name] -n kube-system   # show details of a pod
kubectl logs [pod_name] -n kube-system
kubectl get cm coredns -oyaml -nkube-system
kubectl get svc -n kube-system                   # kube-dns
kubectl get componentstatus                      # check component status
systemctl status kubelet

Join worker nodes to the cluster

Run on the worker nodes

kubeadm join 192.168.10.101:6443 --token mtplaq.2jbbpziao4387pjg --discovery-token-ca-cert-hash sha256:27e77c5b53716b1d42940e6a74797597be55165487c6f30f4978a2ac97d8df00
#kubeadm join 192.168.10.101:6443 --token 8dtbdm.jxn9lzx6it54alcl --discovery-token-ca-cert-hash sha256:27e77c5b53716b1d42940e6a74797597be55165487c6f30f4978a2ac97d8df00 --ignore-preflight-errors=Swap

# If you forget the token, you can query it on the master:
kubeadm token create --print-join-command

Once the nodes have joined, you can list them on the master.

[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 8m24s v1.12.1
node2 Ready <none> 95s v1.12.1
node3 Ready <none> 60s v1.12.1

[root@master ~]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-576cbf47c7-5dmr4 1/1 Running 0 8m7s 10.244.0.9 master <none>
coredns-576cbf47c7-8h6l9 1/1 Running 0 8m7s 10.244.0.8 master <none>
etcd-master 1/1 Running 0 7m31s 192.168.10.101 master <none>
kube-apiserver-master 1/1 Running 0 7m34s 192.168.10.101 master <none>
kube-controller-manager-master 1/1 Running 0 7m21s 192.168.10.101 master <none>
kube-flannel-ds-amd64-6z4v6 1/1 Running 0 6m22s 192.168.10.101 master <none>
kube-flannel-ds-amd64-9tgjf 1/1 Running 0 97s 192.168.10.102 node2 <none>
kube-flannel-ds-amd64-lvgn2 1/1 Running 0 63s 192.168.10.103 node3 <none>
kube-proxy-7nhxx 1/1 Running 0 63s 192.168.10.103 node3 <none>
kube-proxy-ljt6l 1/1 Running 0 8m7s 192.168.10.101 master <none>
kube-proxy-nfcxq 1/1 Running 0 97s 192.168.10.102 node2 <none>
kube-scheduler-master 1/1 Running 0 7m28s 192.168.10.101 master <none>

dashboard

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
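
This manifest deploys the dashboard into kube-system without exposing it externally. One way to reach it from the master is via kubectl proxy (a sketch; the service name and proxy path below are the ones used by the v1.10 recommended manifest and are assumptions if your version differs):

kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/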

References

Deploying Kubernetes v1.12.1 with kubeadm
Kubernetes official docs: creating a cluster with kubeadm
Kubernetes official docs: command reference
Kubernetes official documentation