[TOC]
Prepare three servers and configure static IP addresses for them (not covered here). This document uses the following configuration:
Node name | IP |
---|---|
master | 192.168.238.20 |
node1 | 192.168.238.21 |
node2 | 192.168.238.22 |
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary
cat /etc/selinux/config
# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
free -m
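If you want to confirm that the firewall, SELinux, and swap changes took effect, an optional check on each node (a sketch, assuming the commands above were run) looks like this:
```shell
# firewalld should report "inactive" or "unknown"
systemctl is-active firewalld
# getenforce should report "Permissive" now and "Disabled" after the later reboot
getenforce
# the Swap line should show 0 total
free -m | grep -i swap
```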
# Set the hostname on each node according to the plan
hostnamectl set-hostname <hostname>
# Add the hosts entries on the master
cat >> /etc/hosts << EOF
192.168.238.20 master
192.168.238.21 node1
192.168.238.22 node2
EOF
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply the settings
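On some systems the two bridge keys only exist once the br_netfilter kernel module is loaded; a hedged sketch to load it and verify the settings:
```shell
# Load the bridge netfilter module if the keys are missing, then re-apply and verify
modprobe br_netfilter
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```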
# Set the current time to Beijing time
# Show the current system time
date
# Set the current system time
date -s "2018-2-22 19:10:30"
# Show the hardware clock
hwclock --show
# Set the hardware clock
hwclock --set --date "2018-2-22 19:10:30"
# Sync the system time and the hardware clock
hwclock --hctosys
# Save the clock
clock -w
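If the servers run systemd, setting the timezone with timedatectl is an alternative to adjusting the clock by hand (a sketch; continuous NTP sync would additionally need chrony or ntpd, which this offline setup may not include):
```shell
# Set the timezone to Beijing time and show the resulting clock state
timedatectl set-timezone Asia/Shanghai
timedatectl status
```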
After the steps above are complete, reboot all three servers:
reboot
Then continue with the next stage of the installation.
# 1. Create a directory for the installation packages
mkdir -p /opt/package/docker
# 2. Upload the docker folder from the installation package to the directory above
# 3. Enter the directory and unpack the archive
cd /opt/package/docker
unzip docker19-rpm.zip
# 4. Install docker
rpm -ivh *.rpm --force --nodeps
systemctl enable docker && systemctl start docker
# 5. Verify the installation
docker --version
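A couple of optional sanity checks after installing Docker; the cgroup driver reported here explains the `cgroupfs` warning that kubeadm prints later when joining nodes:
```shell
# Confirm the service is running and inspect the daemon's version and cgroup driver
systemctl is-active docker
docker info --format '{{.ServerVersion}} / {{.CgroupDriver}}'
```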
Upload the docker-compose-linux-x86_64 file from the installation package to /opt/package on the server, then run the following commands:
# 1. Rename the uploaded file to docker-compose
cd /opt/package
mv docker-compose-linux-x86_64 docker-compose
# 2. Make docker-compose executable
chmod u+x docker-compose
# 3. Move docker-compose to /usr/local/bin
mv docker-compose /usr/local/bin
# 4. Check the version
docker-compose --version
Upload the harbor-offline-installer-v2.3.2.tgz archive to the /opt/package/ directory and unpack it:
cd /opt/package
tar xf harbor-offline-installer-v2.3.2.tgz
Next, adjust Harbor's installation configuration. First make a copy of the configuration template:
# Copy the template to harbor.yml (the installer only reads harbor.yml)
cd /opt/package/harbor   # the tgz unpacks into a harbor/ directory
cp harbor.yml.tmpl harbor.yml
# Directory for Harbor's persistent data
mkdir -p /home/mkcloud/software/harbor/data
# Directory for Harbor's logs
mkdir -p /home/mkcloud/software/harbor/log
Then edit harbor.yml as follows:
# change to this machine's IP
hostname: 10.168.59.60
# http related config
http:
  port: 80
# the whole https section must be commented out
# https related config
# https:
#   # https port for harbor, default is 443
#   port: 443
#   # The path of cert and key files for nginx
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path
data_volume: /home/mkcloud/software/harbor/data # set to a directory of your own
log:
  local:
    location: /home/mkcloud/software/harbor/log # set to a directory of your own
Make sure you are inside the Harbor installation directory, then run install.sh to perform the installation:
./install.sh
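After install.sh finishes, the Harbor containers can be checked from the same directory (an optional check; container names may differ slightly between Harbor versions):
```shell
# All Harbor services should show Up / healthy
docker-compose ps
```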
Harbor can then be accessed via your own IP and port.
First, update the hosts file on the server:
# replace the IP below with your Harbor host's IP
echo "10.168.59.60 server.harbor.com">> /etc/hosts
Add the Harbor registry to Docker's configuration. Note: the port number must be included here, and it must match the port set in harbor.yml above.
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"insecure-registries": ["server.harbor.com:80"]
}
EOF
systemctl daemon-reload && systemctl restart docker
Log in to the registry:
docker login server.harbor.com:80
Username: admin
Password: Harbor12345
Harbor configuration is now complete.
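As a quick end-to-end test you can tag an image that is already present locally and push it to the new registry; `library` is Harbor's default public project and `busybox` is only an example, so substitute any local image:
```shell
# Tag a locally available image with the registry address and push it
docker tag busybox:latest server.harbor.com:80/library/busybox:latest
docker push server.harbor.com:80/library/busybox:latest
# The image should now be visible under the library project in the Harbor UI
```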
# 1. Create the directory on all three servers
mkdir -p /opt/package/k8s
# 2. Upload the installation archive to that directory on each node
scp -r kube1.9.0.tar.gz root@192.168.238.20:/opt/package/k8s
scp -r kube1.9.0.tar.gz root@192.168.238.21:/opt/package/k8s
scp -r kube1.9.0.tar.gz root@192.168.238.22:/opt/package/k8s
# 1. On the master, enter /opt/package/k8s, unpack the archive, and run the init script
cd /opt/package/k8s
tar -zxvf kube1.9.0.tar.gz
cd kube/shell
sh init.sh
# 2. After init.sh finishes, run
sh master.sh
# 3. Run the following on node1 and node2
cd /opt/package/k8s
tar -zxvf kube1.9.0.tar.gz
cd kube/shell
sh init.sh
# 4. From the master, copy /etc/kubernetes/admin.conf to node1 and node2
scp -r /etc/kubernetes/admin.conf root@192.168.238.21:/etc/kubernetes
scp -r /etc/kubernetes/admin.conf root@192.168.238.22:/etc/kubernetes
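Copying admin.conf only puts the file in place; for kubectl on node1 and node2 to actually use it, KUBECONFIG has to point at it (optional, a minimal sketch):
```shell
# On node1 and node2: make kubectl use the copied admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
source /etc/profile
```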
Generate a join token on the master node:
kubeadm token create --print-join-command
The output looks like this:
[root@master shell]# kubeadm token create --print-join-command
W1009 17:15:31.782757 37720 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.0.90:6443 --token ul68zs.dkkvpwfex9rpzo0d --discovery-token-ca-cert-hash sha256:3e3ee481f5603621f216e707321aa26a68834939e440be91322c62eb8540ffce
Run the printed join command on node1 and node2:
kubeadm join 192.168.0.90:6443 --token ul68zs.dkkvpwfex9rpzo0d --discovery-token-ca-cert-hash sha256:3e3ee481f5603621f216e707321aa26a68834939e440be91322c62eb8540ffce
The result looks like this:
[root@node1 shell]# kubeadm join 192.168.0.90:6443 --token ul68zs.dkkvpwfex9rpzo0d --discovery-token-ca-cert-hash sha256:3e3ee481f5603621f216e707321aa26a68834939e440be91322c62eb8540ffce
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Run `watch kubectl get pod -n kube-system -o wide`; the output looks like this:
[root@master shell]# watch kubectl get pod -n kube-system -o wide
Every 2.0s: kubectl get pod -n kube-system -o wide Fri Oct 9 17:45:03 2020
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-5d7686f694-94fcc 1/1 Running 0 48m 100.89.161.131 master <none> <none>
calico-node-42bwj 1/1 Running 0 48m 192.168.0.90 master <none> <none>
calico-node-k6k6d 1/1 Running 0 27m 192.168.0.189 node2 <none> <none>
calico-node-lgwwj 1/1 Running 0 29m 192.168.0.68 node1 <none> <none>
coredns-f9fd979d6-2ncmm 1/1 Running 0 48m 100.89.161.130 master <none> <none>
coredns-f9fd979d6-5s4nw 1/1 Running 0 48m 100.89.161.129 master <none> <none>
etcd-master 1/1 Running 0 48m 192.168.0.90 master <none> <none>
kube-apiserver-master 1/1 Running 0 48m 192.168.0.90 master <none> <none>
kube-controller-manager-master 1/1 Running 0 48m 192.168.0.90 master <none> <none>
kube-proxy-5g2ht 1/1 Running 0 29m 192.168.0.68 node1 <none> <none>
kube-proxy-wpf76 1/1 Running 0 27m 192.168.0.189 node2 <none> <none>
kube-proxy-zgcft 1/1 Running 0 48m 192.168.0.90 master <none> <none>
kube-scheduler-master 1/1 Running 0 48m 192.168.0.90 master <none> <none>
Run `kubectl get nodes`; the output looks like this:
[root@master shell]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 22m v1.19.0
node1 Ready <none> 2m17s v1.19.0
node2 Ready <none> 24s v1.19.0
The NFS roles for the three nodes are as follows:
Node | Role |
---|---|
master | NFS server + NFS client |
node1 | NFS client |
node2 | NFS client |
Upload the nfs folder from the installation package to all three servers:
scp -r nfs root@192.168.238.20:/opt/package
scp -r nfs root@192.168.238.21:/opt/package
scp -r nfs root@192.168.238.22:/opt/package
# On the master node, run the following commands in /opt/package/nfs
# Enter /opt/package/nfs
cd /opt/package/nfs
# Install nfs-utils
rpm -ivh nfs-utils-1.3.0-0.68.el7.2.x86_64.rpm
# Create the /etc/exports file with the following content:
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# Start the NFS service
# Create the shared directory
mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r
# Check that the export is active
exportfs
# Expected output:
/nfs/data *
Install the NFS client (node1 and node2; the master already has nfs-utils installed from the step above):
# On node1 and node2, run the following commands in /opt/package/nfs
cd /opt/package/nfs
rpm -ivh nfs-utils-1.3.0-0.68.el7.2.x86_64.rpm
systemctl start nfs && systemctl enable nfs
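From node1 or node2 you can optionally confirm that the master's export is visible and mountable before wiring it into Kubernetes (a sketch; /mnt is only a temporary mount point):
```shell
# List the exports offered by the master and do a throwaway mount test
showmount -e 192.168.238.20
mount -t nfs 192.168.238.20:/nfs/data /mnt
touch /mnt/nfs-test && rm -f /mnt/nfs-test
umount /mnt
```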
Install NFS support in K8S (any K8S node; the master node is used here):
```shell
cd /opt/package/nfs
docker load < nfs-client-provisioner.tar.gz
vim /root/nfs/deployment.yaml
```
The content of deployment.yaml is as follows; point NFS_SERVER / server and NFS_PATH / path at your own NFS server and export:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest  ## latest tag by default
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.238.20   ## IP address of the NFS server
            - name: NFS_PATH
              value: /nfs/data        ## shared directory exported by the NFS server (must be the innermost directory, otherwise provisioned workloads cannot create subdirectories and stay Pending)
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.238.20    ## IP address of the NFS server
            path: /nfs/data           ## shared directory exported by the NFS server
```
Apply the manifests in the nfs directory and check the result:
```shell
[root@k8s-client nfs]# kubectl apply -f .
[root@k8s-client nfs]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-78697f488-5p52r    1/1     Running   0          16h
[root@k8s-client nfs]# kubectl get storageclass
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  16h
[root@k8s-client nfs]# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/managed-nfs-storage patched
[root@k8s-client nfs]# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage (default) fuseim.pri/ifs Delete Immediate false
```
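To confirm that dynamic provisioning works end to end, a throwaway PVC can be created against the new default StorageClass (a sketch; `test-pvc` is a hypothetical name and is deleted afterwards):
```shell
# Create a small PVC, check that it becomes Bound via managed-nfs-storage, then clean up
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-pvc
kubectl delete pvc test-pvc
```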
Upload the kubesphere folder from the installation package to the /opt/package/ directory on the master node.
# Run the following commands on the master node
# Enter the directory
cd /opt/package/kubesphere/
# Push the images to your registry; change the -r value to your own Harbor address (ip + port + project name). The project used here, kubesphere, should already exist in Harbor.
./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r server.harbor.com:80/kubesphere
# Wait for the push to finish
# Then run the following commands
# 1. Edit cluster-configuration.yaml and add your private image registry
vim cluster-configuration.yaml
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: server.harbor.com:80/kubesphere   # add this line
# 2. After editing, save cluster-configuration.yaml, then use the following command to point the ks-installer image at your own registry (here: server.harbor.com:80/kubesphere)
sed -i "s#^\s*image: kubesphere.*/ks-installer:.*# image: server.harbor.com:80/kubesphere/kubesphere/ks-installer:v3.1.1#" kubesphere-installer.yaml
# 3. Apply the manifests in the following order (the order is required)
kubectl apply -f kubesphere-installer.yaml
kubectl get pods -A
# 4. Wait until the ks-installer container is running, then run
kubectl apply -f cluster-configuration.yaml
# Check the installation logs and wait for the installation to complete
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
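Besides following the installer log, a simple way to see whether anything is still starting up is to list pods that are not yet Running or Completed (optional):
```shell
# Any pods listed here are still pulling images or starting; the list should eventually be empty
kubectl get pods -A | grep -vE 'Running|Completed'
```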
A few additional components need to be enabled for this deployment: DevOps, KubeEdge, and logging.
# 1. Edit cluster-configuration.yaml
vim cluster-configuration.yaml
devops:
  enabled: true # change "false" to "true"
kubeedge:
  enabled: true # change "false" to "true"
logging:
  enabled: true # change "false" to "true"
# 2. Apply the configuration to start the installation
kubectl apply -f cluster-configuration.yaml
# 3. Monitor the installation process
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
# Expose ks-apiserver as a NodePort (30881)
kubectl -n kubesphere-system patch svc ks-apiserver -p '{"spec":{"type":"NodePort","ports":[{"port":80,"protocol":"TCP","targetPort":9090,"nodePort":30881}]}}'
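After the patch you can verify that ks-apiserver is exposed on the expected NodePort (optional):
```shell
# The service type should now be NodePort with nodePort 30881
kubectl -n kubesphere-system get svc ks-apiserver
```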
Software versions used in this document:
Software | Version |
---|---|
CentOS | 7.7 |
Docker | 19.03.7 |
docker-compose | 2.1.0 |
Harbor | 2.3.2 |
Kubernetes | 1.19.0 |
KubeSphere | 3.1.1 |