Environment
- OS: CentOS 7.6, 4 vCPU / 8 GB RAM per node
- Nodes: 1 master + 2 workers
- Kernel: 3.10.0-1127.el7.x86_64
- Network: 1 management NIC, 1 business NIC
Preparation (master + node)
Set the hostname on each node:
hostnamectl set-hostname nodeX   # node1 / node2 / node3 respectively
Configure /etc/hosts:
cat /etc/hosts
179.20.23.30 node1
179.20.23.31 node2
179.20.23.32 node3
Set up passwordless SSH login to all nodes:
ssh-keygen
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
Sync the hosts file to the other nodes:
scp /etc/hosts node2:/etc
scp /etc/hosts node3:/etc
Configure kernel parameters:
vi /etc/sysctl.conf
net.ipv4.conf.all.forwarding=1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.send_redirects = 0
sysctl -p
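Note that the net.bridge.* keys above only exist once the br_netfilter kernel module is loaded; on a fresh CentOS 7 install sysctl -p may otherwise complain that those keys are missing. A minimal sketch to load it now and on every boot (the file name under /etc/modules-load.d is just a suggestion):
modprobe br_netfilter
# make the module load persist across reboots
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf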
Disable the firewall and SELinux:
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Disable swap:
swapoff -a
vi /etc/sysctl.d/k8s.conf and add the following line:
vm.swappiness=0
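swapoff -a only disables swap for the current boot, and the sysctl file above is not applied automatically. A small follow-up sketch, assuming a default CentOS fstab layout (the sed simply comments out any line mentioning swap, so review the file afterwards):
# keep swap disabled after reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab
# apply the new swappiness setting immediately
sysctl -p /etc/sysctl.d/k8s.conf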
Install docker-ce:
yum install -y yum-utils device-mapper-persistent-data lvm2 wget
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum install docker-ce -y
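yum only installs the packages; Docker still needs to be enabled and started. The daemon.json below is optional and reflects the upstream recommendation to use the systemd cgroup driver on systemd distributions; kubeadm 1.18 also works with the default cgroupfs driver, so treat this as a sketch rather than a required step:
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl enable docker && systemctl start docker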
Install kubelet, kubeadm, and kubectl (choose your version):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
yum install kubelet-1.18.15 kubeadm-1.18.15 kubectl-1.18.15 -y
systemctl enable kubelet && sudo systemctl start kubelet
Pre-pull images (master node only):
kubeadm config images list --kubernetes-version=v1.18.15
cat images.sh
#!/bin/bash
images=(kube-proxy:v1.18.15 kube-scheduler:v1.18.15 kube-controller-manager:v1.18.15 kube-apiserver:v1.18.15 etcd:3.4.3-0 pause:3.2 coredns:1.6.7)
for imageName in ${images[@]} ; do
docker pull harbor.archeros.cn/huayun-kubernetes/amd64/$imageName
docker tag harbor.archeros.cn/huayun-kubernetes/amd64/$imageName k8s.gcr.io/$imageName
docker rmi harbor.archeros.cn/huayun-kubernetes/amd64/$imageName
done
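harbor.archeros.cn is an internal registry; if your environment cannot reach it, one common substitute (an assumption about your network, not part of the original setup) is the public mirror registry.aliyuncs.com/google_containers, which hosts the same k8s.gcr.io images. Run and verify the script on the master like this:
# optional: switch the script to the public mirror
# sed -i 's#harbor.archeros.cn/huayun-kubernetes/amd64#registry.aliyuncs.com/google_containers#' images.sh
bash images.sh
docker images | grep k8s.gcr.io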
Deploy the master node
Create the kubeadm configuration file:
vi kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "179.20.23.33"
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.15
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
This configures the management IP and port of the cluster's master node, sets the Kubernetes version, and defines the pod/service CIDRs.
Start the deployment:
kubeadm init --config=kubeadm.yaml
Post-deploy configuration:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
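As a quick sanity check at this point, kubectl should already reach the API server; the master node will usually report NotReady until the CNI is deployed in the final step:
kubectl get nodes
kubectl get pods -n kube-system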
Allow the master node to also be used as a worker node (remove the master taint):
kubectl taint node node1 node-role.kubernetes.io/master-
Deploy the worker nodes
Get the join token on the master node:
kubeadm token list |grep "system:bootstrappers:kubeadm:default-node-token" |awk '{print $1}'
Join the cluster:
kubeadm join --token <mastertoken> --discovery-token-unsafe-skip-ca-verification <k8s_master_ip>:6443
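If you would rather not assemble the token and flags by hand, kubeadm can print a complete join command on the master (including the CA cert hash, so the unsafe skip flag is unnecessary):
kubeadm token create --print-join-command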
Deploy the CNI
At this point the Kubernetes cluster is actually able to run; it just cannot create pod resources yet. Here we deploy the CNI component, taking Calico as an example:
curl https://docs.projectcalico.org/v3.18/manifests/calico.yaml -O
kubectl apply -f calico.yaml
If you need the management and business networks deployed separately, you can edit the manifest to bind Calico to the business NIC, as shown below.
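For example, the calico-node container supports an IP_AUTODETECTION_METHOD environment variable; a minimal sketch (the interface name eth1 is an assumption, substitute your business NIC) is to add the following under the calico-node container's env section in calico.yaml before applying it:
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eth1"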
Summary
Most of the preparation steps above can be baked into a template image, which greatly cuts down environment setup time in day-to-day development, validation, or learning.