Deployment Environment
IP Address | Hostname | Role |
---|---|---|
10.1.104.200 | k8s-deploy | Deployment node; performs no actual cluster role |
10.1.104.201 | k8s-master01 | Master node |
10.1.104.202 | k8s-master02 | Master node |
10.1.104.203 | k8s-master03 | Master node |
10.1.104.204 | k8s-nginx | Load-balancing node; in real production this should be an HA setup |
10.1.104.205 | k8s-node01 | Worker node |
10.1.104.206 | k8s-node02 | Worker node |
10.1.104.207 | k8s-node03 | Worker node |
Deploying the etcd Cluster
etcd stores all of the Kubernetes cluster's metadata and is an indispensable part of the cluster.
Fetch the etcd binaries and distribute them to the nodes (k8s-deploy):
Download and extract etcd:
cd /opt/k8s/work
wget http://download.wenjun1984.cn/Kubernetes/Etcd/etcd-v3.3.13-linux-amd64.tar.gz
tar -xvf etcd-v3.3.13-linux-amd64.tar.gz
Distribute etcd to each node:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
scp etcd-v3.3.13-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin
ssh root@${node_ip} "chmod +x /opt/k8s/bin/*"
done
- etcd is usually deployed on the master nodes, but it can also be deployed on dedicated nodes; in that case add the corresponding etcd-node arrays to environment.sh (see the sketch below).
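The loops in this guide rely on etcd-related variables defined in /opt/k8s/bin/environment.sh. For reference, a minimal sketch of what those entries might look like for this environment; the data/WAL paths are assumptions and should be adjusted to your own layout:
# Hypothetical etcd-related entries in /opt/k8s/bin/environment.sh
NODE_MASTER_NAMES=(k8s-master01 k8s-master02 k8s-master03)
NODE_MASTER_IPS=(10.1.104.201 10.1.104.202 10.1.104.203)
# Peer list consumed by --initial-cluster
export ETCD_NODES="k8s-master01=https://10.1.104.201:2380,k8s-master02=https://10.1.104.202:2380,k8s-master03=https://10.1.104.203:2380"
# Client endpoints consumed by etcdctl
export ETCD_ENDPOINTS="https://10.1.104.201:2379,https://10.1.104.202:2379,https://10.1.104.203:2379"
# Assumed data/WAL locations; they must exist before etcd is started
export ETCD_DATA_DIR="/data/k8s/etcd/data"
export ETCD_WAL_DIR="/data/k8s/etcd/wal"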
Confirm that etcd has been copied to each node:
for node_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "ls -l /opt/k8s/bin/etcd"
done
Copy etcdctl into the local bin directory on k8s-deploy:
cp /opt/k8s/work/etcd-v3.3.13-linux-amd64/etcdctl /opt/k8s/bin/
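Optionally, confirm that the local copy works; ETCDCTL_API=3 selects the v3 command set used throughout this guide:
ETCDCTL_API=3 /opt/k8s/bin/etcdctl version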
Create the etcd certificate and private key (k8s-deploy):
Create the etcd certificate signing request:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > etcd-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"10.1.104.201",
"10.1.104.202",
"10.1.104.203"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shanghai",
"L": "Shanghai",
"O": "dominos",
"OU": "ops"
}
]
}
EOF
- The hosts field must include the addresses of all etcd nodes.
Use the CA to generate the etcd certificate and private key:
cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
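cfssljson writes etcd.csr, etcd.pem and etcd-key.pem into the current directory. As an optional check (assuming openssl is available on k8s-deploy), verify that the generated files exist and that the certificate's SANs cover all etcd node addresses listed in hosts:
ls -l etcd.csr etcd.pem etcd-key.pem
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'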
Distribute the etcd certificate and private key to the nodes (k8s-deploy):
Copy the etcd certificate and key to each node:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p /etc/etcd/cert"
scp etcd*.pem root@${node_ip}:/etc/etcd/cert/
done
Confirm that the etcd certificate has been copied correctly to each node:
for node_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "ls -l /etc/etcd/cert/*"
done
Create the etcd service file (k8s-deploy):
Create the etcd systemd service template:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > etcd.service.template << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
--data-dir=${ETCD_DATA_DIR} \\
--wal-dir=${ETCD_WAL_DIR} \\
--name=##NODE_NAME## \\
--cert-file=/etc/etcd/cert/etcd.pem \\
--key-file=/etc/etcd/cert/etcd-key.pem \\
--trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
--peer-cert-file=/etc/etcd/cert/etcd.pem \\
--peer-key-file=/etc/etcd/cert/etcd-key.pem \\
--peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--listen-peer-urls=https://##NODE_IP##:2380 \\
--initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
--listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
--advertise-client-urls=https://##NODE_IP##:2379 \\
--initial-cluster-token=etcd-cluster-0 \\
--initial-cluster=${ETCD_NODES} \\
--initial-cluster-state=new \\
--auto-compaction-mode=periodic \\
--auto-compaction-retention=1 \\
--max-request-bytes=33554432 \\
--quota-backend-bytes=6442450944 \\
--heartbeat-interval=250 \\
--election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
- --data-dir: sets the working/data directory to ${ETCD_DATA_DIR}; this directory must be created before the service is started.
- --wal-dir: sets the WAL directory.
- --name: sets the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list.
- --cert-file: certificate the etcd server uses when communicating with clients.
- --key-file: private key the etcd server uses when communicating with clients.
- --trusted-ca-file: CA certificate that signed the client certificates, used to verify them.
- --peer-cert-file: certificate etcd uses for peer communication.
- --peer-key-file: private key etcd uses for peer communication.
- --peer-trusted-ca-file: CA certificate that signed the peer certificates, used to verify them.
Generate the etcd service file for each node:
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
do
sed -e "s/##NODE_NAME##/${NODE_MASTER_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_MASTER_IPS[i]}/" etcd.service.template > etcd-${NODE_MASTER_IPS[i]}.service
done
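A quick sanity check that three unit files were rendered and that the ##NODE_NAME##/##NODE_IP## placeholders were substituted:
ls -l etcd-*.service
grep -E -e '--name=|--listen-peer-urls=' etcd-*.service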
Distribute the etcd service files (k8s-deploy):
Copy the generated etcd service files to the nodes:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
done
Confirm that the service file exists on every node and is configured correctly:
for node_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "ls -l /etc/systemd/system/etcd.service"
done
Start the etcd service (k8s-deploy):
Start the etcd service on each node. The restarts are pushed into the background with &, because a freshly started etcd process blocks until enough peers have joined to form a quorum:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
done
Confirm that the etcd service started successfully on every node, i.e. the unit's status is active (running):
for node_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
ssh root@${node_ip} "systemctl status etcd.service | grep Active"
done
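If any node reports something other than active (running), the etcd journal on that node usually shows the cause (wrong certificate paths, ports already in use, peers not yet reachable, and so on); for example:
ssh root@10.1.104.201 "journalctl -u etcd --no-pager -n 30"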
Verify the etcd cluster status (k8s-deploy):
Print the health-check result for each node:
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${NODE_MASTER_IPS[@]}
do
echo ">>> ${node_ip}"
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
--endpoints=https://${node_ip}:2379 \
--cacert=/opt/k8s/work/ca.pem \
--cert=/opt/k8s/work/etcd.pem \
--key=/opt/k8s/work/etcd-key.pem endpoint health
done
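The same health information is also exposed over HTTPS on the client port at /health; because --client-cert-auth is enabled, the request must present the etcd client certificate (a curl-based check, assuming curl is installed on k8s-deploy):
curl -s --cacert /opt/k8s/work/ca.pem \
  --cert /opt/k8s/work/etcd.pem \
  --key /opt/k8s/work/etcd-key.pem \
  https://10.1.104.201:2379/health
# a healthy member returns something like {"health": "true"}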
Print the status of each node as a table:
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
-w table --cacert=/opt/k8s/work/ca.pem \
--cert=/opt/k8s/work/etcd.pem \
--key=/opt/k8s/work/etcd-key.pem \
--endpoints=${ETCD_ENDPOINTS} endpoint status
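The status table should show exactly one member as leader. As a further check, the member list can be printed the same way:
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  -w table --cacert=/opt/k8s/work/ca.pem \
  --cert=/opt/k8s/work/etcd.pem \
  --key=/opt/k8s/work/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} member list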