Deploying Kubernetes runs into image pull failures in many places for reasons that cannot be described here, so this guide replaces the official image sources with domestic (Chinese) mirrors throughout.
IP plan

| Role | IP |
|---|---|
| master01 | 192.168.0.101 | 
| master02 | 192.168.0.102 | 
| master03 | 192.168.0.103 | 
| worker01 | 192.168.0.104 | 
| worker02 | 192.168.0.105 | 
| worker03 | 192.168.0.106 | 
Basic environment setup
Run on all servers:

```bash
# turn off swap
swapoff -a
sed -i '/swap/s/^/#/g' /etc/fstab
# disable SELinux
setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/g' /etc/selinux/config
# stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# configure the yum repositories (aliyun mirrors)
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
rpm --import https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
echo '
[kubernetes]
name=kubernetes aliyun
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
enabled=1
' > /etc/yum.repos.d/kubernetes.repo
# install docker, kubelet and kubeadm
yum repolist
yum install kubeadm kubelet docker-ce -y
# start docker and kubelet and enable them on boot
systemctl start docker
systemctl enable docker
systemctl enable kubelet
```
Deploy the first master
Export the default kubeadm configuration and modify it (the config file uses `apiVersion: kubeadm.k8s.io/v1beta1`), write the host entry pointing the control-plane domain at master01, then initialize the cluster with the modified config; a sketch of these steps is shown below.
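A minimal sketch of the first-master setup, assuming the control-plane domain `k8s.doman.io` used in the hosts entries later in this guide; the Kubernetes version, pod subnet, and image repository in the config are assumptions and should be adjusted to your environment.

```bash
# write the host entry: the control-plane domain initially points at master01
echo '192.168.0.101 k8s.doman.io' >> /etc/hosts

# export the default kubeadm config for reference, then edit it
kubeadm config print init-defaults > kubeadm-init.yaml

# the fields that matter for an HA setup (values below are assumptions)
cat > kubeadm-init.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0                                # match the installed kubeadm/kubelet version
controlPlaneEndpoint: "k8s.doman.io:6443"                 # shared endpoint for all three masters
imageRepository: registry.aliyuncs.com/google_containers  # pull control-plane images from a domestic mirror
networking:
  podSubnet: "10.244.0.0/16"                              # must match the CNI plugin you deploy
EOF

# initialize the first master
kubeadm init --config kubeadm-init.yaml

# make kubectl usable for the current user
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
```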
Configure the second and third masters
Run on master01:

```bash
# copy the certificates to the other two masters
scp /etc/kubernetes/admin.conf master02-ip:/etc/kubernetes/
scp /etc/kubernetes/pki/etcd/ca.* master02-ip:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/front-proxy-ca.* master02-ip:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* master02-ip:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.* master02-ip:/etc/kubernetes/pki/
# generate the join command, referred to below as {KUBE_JOIN_COMMAND}; it is needed when the other nodes join
kubeadm token create --print-join-command
```
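The printed command has roughly the following shape, assuming the control-plane endpoint from the sketch above; the token and hash are placeholders that come from the output of the command itself.

```bash
# illustrative shape of {KUBE_JOIN_COMMAND}; do not copy these placeholder values
kubeadm join k8s.doman.io:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```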
Run on master02 or master03:

```bash
# write the host entry (the control-plane domain points at master01 while joining)
echo '192.168.0.101 k8s.doman.io' >> /etc/hosts
# join master02 or master03 to the cluster as a control-plane node
{KUBE_JOIN_COMMAND} --experimental-control-plane
# after joining, switch the host entry to point at the local node (use 103 instead of 102 on master03)
sed -i 's/101/102/g' /etc/hosts
```
Configure the worker nodes
To configure a worker node, simply run {KUBE_JOIN_COMMAND} on it, as sketched below.
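A minimal sketch, assuming the join command targets the `k8s.doman.io` endpoint used above; in that case the worker also needs a hosts entry that resolves the domain (any of the three masters works).

```bash
# assumption: the join command targets k8s.doman.io, so the worker must be able to resolve it
echo '192.168.0.101 k8s.doman.io' >> /etc/hosts
# join the cluster as a worker node (no --experimental-control-plane flag)
{KUBE_JOIN_COMMAND}
```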
Check node status
All nodes should report a STATUS of Ready:

```bash
kubectl get nodes
```
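Illustrative output only, using the node names from the IP plan above; the roles, ages, and versions will differ in a real cluster.

```
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   15m   v1.13.x
master02   Ready    master   10m   v1.13.x
master03   Ready    master   10m   v1.13.x
worker01   Ready    <none>   5m    v1.13.x
worker02   Ready    <none>   5m    v1.13.x
worker03   Ready    <none>   5m    v1.13.x
```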
Check the kubernetes component pod status:

```bash
kubectl get pods -n kube-system
```
Check the kubernetes component status:

```
kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
Install the dashboard
First download the official dashboard config file, change the image address from k8s.gcr.io to a docker.io image, then install the dashboard; a sketch of these steps follows.
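A hedged sketch of the three steps; the dashboard version, manifest URL, and mirror image below are assumptions and should be adapted to the version you actually deploy.

```bash
# download the official dashboard manifest (v1.10.1 is an assumed version)
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

# swap the unreachable k8s.gcr.io image for a docker.io mirror (mirror name is an assumption)
sed -i 's#k8s.gcr.io/kubernetes-dashboard-amd64#mirrorgooglecontainers/kubernetes-dashboard-amd64#g' kubernetes-dashboard.yaml

# install the dashboard
kubectl apply -f kubernetes-dashboard.yaml
```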
Install ingress
The supported versions are listed in the ingress-nginx GitHub repository.
Find the deployment file deploy/mandatory.yaml and download it:

```bash
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.20.0/deploy/mandatory.yaml
```

Modify the two image references:

```yaml
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          # image: k8s.gcr.io/defaultbackend-amd64:1.5
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
    ....
    ....
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          #image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
```

Install ingress by applying the modified manifest, as sketched below.
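A minimal sketch, assuming the install step is simply applying the downloaded and modified mandatory.yaml.

```bash
# install ingress
kubectl apply -f mandatory.yaml
```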
Configure keepalived on the masters
Install keepalived on all master nodes and modify the keepalived configuration file; it has to be configured separately for each node. A sketch follows.
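A minimal sketch of a per-node keepalived configuration; the virtual IP 192.168.0.100, the interface name eth0, and the priorities are assumptions (none of them appear above). The per-node differences are the `state` and `priority` values.

```bash
# install keepalived on every master
yum install keepalived -y

# /etc/keepalived/keepalived.conf -- values below are assumptions:
# 192.168.0.100 is a hypothetical virtual IP for the API server, eth0 is a placeholder NIC name
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER          # use BACKUP on master02 and master03
    interface eth0        # adjust to the real interface name
    virtual_router_id 51
    priority 100          # use a lower value (e.g. 90, 80) on the other masters
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-ha
    }
    virtual_ipaddress {
        192.168.0.100
    }
}
EOF

systemctl enable keepalived
systemctl start keepalived
```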