Installing KubeSphere Offline on Anolis OS
To make the offline installation easier, we first prepare a cluster that does have network access, install KubeSphere on it online, and use it to build the offline artifact. This guide assumes three Anolis OS hosts are already available:
Prerequisites
For a multi-node installation, prepare at least three hosts as in the example below.
Host IP | Hostname | OS | Role |
---|---|---|---|
192.168.0.1 | node1 | Anolis OS | Control plane |
192.168.0.2 | node2 | Anolis OS | Worker |
192.168.0.3 | node3 | Anolis OS | Worker |
Preparation
1. Make sure KubeKey is installed; if not, install it with the following steps.
First, run the following command to ensure you download KubeKey from the correct region:
export KKZONE=cn
Then run the following command to download KubeKey:
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
2. On the existing (online) cluster, run KubeKey to generate the manifest file:
./kk create manifest
If the command succeeds, it generates a manifest-sample.yaml file with the following content:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: anolis
    version: "Can't get the os version. Please edit it manually."
    osImage: Anolis OS 8.9
    repository:
      iso:
        localPath:
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.23.10
  components:
    helm:
      version: v3.9.0
    cni:
      version: v1.2.0
    etcd:
      version: v3.4.13
    containerRuntimes:
    - type: docker
      version: 24.0.6
    crictl:
      version: v1.24.0
    ##
    # docker-registry:
    #   version: "2"
    # harbor:
    #   version: v2.4.1
    # docker-compose:
    #   version: v2.2.2
  images:
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-admin-v2:latest
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-admin-v2:v5.8_0729
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-admin-v2:v5.8_0801
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-auth:latest
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-auth:v2.0_0603_casfix
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-auth:v2.1_0801_casfix
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-gateway:latest
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-h5-v2:v.0726
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-print-v3:v.0726
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-task:latest
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-task:v4.8_0729
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-third:latest
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-third:v2.0.0_0620
  - aiban-docker.pkg.coding.net/growth-portfolio/docker/tianyin-portfolio-ui-v2:v.0726-1
  - docker.io/calico/cni:v3.26.1
  - docker.io/calico/kube-controllers:v3.26.1
  - docker.io/calico/node:v3.26.1
  - docker.io/calico/pod2daemon-flexvol:v3.26.1
  - docker.io/coredns/coredns:1.8.6
  - docker.io/csiplugin/snapshot-controller:v4.0.0
  - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
  - docker.io/kubesphere/ks-apiserver:v3.4.1
  - docker.io/kubesphere/ks-console:v3.4.1
  - docker.io/kubesphere/ks-controller-manager:v3.4.1
  - docker.io/kubesphere/ks-installer:v3.4.1
  - docker.io/kubesphere/kube-apiserver:v1.23.10
  - docker.io/kubesphere/kube-controller-manager:v1.23.10
  - docker.io/kubesphere/kube-proxy:v1.23.10
  - docker.io/kubesphere/kube-rbac-proxy:v0.11.0
  - docker.io/kubesphere/kube-scheduler:v1.23.10
  - docker.io/kubesphere/kube-state-metrics:v2.6.0
  - docker.io/kubesphere/kubectl:v1.22.0
  - docker.io/kubesphere/notification-manager-operator:v2.3.0
  - docker.io/kubesphere/notification-manager:v2.3.0
  - docker.io/kubesphere/notification-tenant-sidecar:v3.2.0
  - docker.io/kubesphere/pause:3.6
  - docker.io/kubesphere/prometheus-config-reloader:v0.55.1
  - docker.io/kubesphere/prometheus-operator:v0.55.1
  - docker.io/library/haproxy:2.3
  - docker.io/prom/alertmanager:v0.23.0
  - docker.io/prom/node-exporter:v1.3.1
  - docker.io/prom/prometheus:v2.39.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.26.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.26.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.26.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.26.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  registry:
    auths: {}
Run cat /etc/*release* to get the OS information:
Anolis OS release 8.9
NAME="Anolis OS"
VERSION="8.9"
ID="anolis"
ID_LIKE="rhel fedora centos"
VERSION_ID="8.9"
PLATFORM_ID="platform:an8"
PRETTY_NAME="Anolis OS 8.9"
ANSI_COLOR="0;31"
HOME_URL="https://openanolis.cn/"
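The two values the manifest needs (id and version) can be pulled out of this output directly. A minimal sketch, using a sample string that mirrors the Anolis OS output above; on a real node, read /etc/os-release instead:

```shell
# Sample of the relevant /etc/os-release lines (stand-in for the real file).
os_release='ID="anolis"
VERSION_ID="8.9"'

# Extract the quoted values for the manifest's id and version fields.
os_id=$(printf '%s\n' "$os_release" | sed -n 's/^ID="\(.*\)"$/\1/p')
os_version=$(printf '%s\n' "$os_release" | sed -n 's/^VERSION_ID="\(.*\)"$/\1/p')

echo "id: $os_id, version: \"$os_version\""
```

On the hosts above this yields id anolis and version 8.9, which is what gets written into manifest-sample.yaml below.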
Following step 2 of the offline-installation guide, edit the manifest: replace the placeholder in the version field with the value reported by the OS ("8.9" here). Only the operatingSystems entry changes; the rest of manifest-sample.yaml stays as generated:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: anolis
    version: "8.9"
    osImage: Anolis OS 8.9
    repository:
      iso:
        localPath:
        url:
  # kubernetesDistributions, components, images, and registry are unchanged from above
Export the artifact:
export KKZONE=cn
./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
Installing the cluster offline
Copy the downloaded KubeKey binary and the artifact to an installation node in the offline environment, for example via a USB drive.
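Since the artifact is several gigabytes, it is worth guarding the copy with a checksum so a truncated or corrupted transfer is caught before installation starts. A sketch, demonstrated on a stand-in file (replace kubesphere.tar.gz.demo with the real kubesphere.tar.gz):

```shell
# On the online node: record the artifact's checksum next to it.
printf 'artifact-bytes' > kubesphere.tar.gz.demo
sha256sum kubesphere.tar.gz.demo > kubesphere.tar.gz.demo.sha256

# ...copy both the artifact and the .sha256 file to the offline node, then verify:
sha256sum -c kubesphere.tar.gz.demo.sha256
```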
Run the following command to create the offline cluster configuration file (the Kubernetes version must match the v1.23.10 packed into the artifact):
./kk create config --with-kubesphere v3.4.1 --with-kubernetes v1.23.10 -f config-sample.yaml
Edit the configuration file:
vim config-sample.yaml
- Adjust the node information to match your actual offline environment.
- You must assign a node to the registry role group; KubeKey uses it to deploy the self-hosted Harbor registry.
- In the registry section, set type to harbor; otherwise a plain Docker registry is installed by default.
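Putting the last two bullets together, the relevant parts of config-sample.yaml would look roughly like this. This is a sketch based on the standard KubeKey offline setup; the hostname dockerhub.kubekey.local is the conventional default, not something confirmed for this environment (note the sample file below leaves privateRegistry empty):

```yaml
spec:
  roleGroups:
    # ...existing etcd / control-plane / worker groups...
    registry:                  # node where KubeKey deploys the self-hosted Harbor
    - node2
  registry:
    type: harbor               # required; without it a plain docker registry is installed
    privateRegistry: "dockerhub.kubekey.local"   # conventional default domain (assumption)
    namespaceOverride: ""
    insecureRegistries: []
```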
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 10.16.8.73, internalAddress: 10.16.8.73, user: root, password: "XkE0!M%bqJ"}
  - {name: node2, address: 10.16.8.41, internalAddress: 10.16.8.41, user: root, password: "6z&HLeL8"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  #storage:
  #  openebs:
  #    basePath: /www/data/openebs/local
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: ["https://5vjocojd.mirror.aliyuncs.com"]
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    # operator:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
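The page stops at editing the configuration file; the install itself follows the standard KubeKey offline workflow. A sketch under that assumption (flag names per the KubeKey documentation; verify with ./kk create cluster --help before running, and make sure the registry settings above have been filled in first):

```shell
# 1) Deploy the self-hosted Harbor registry defined in config-sample.yaml
#    (this is why a registry node must be assigned, per the bullets above):
./kk init registry -f config-sample.yaml -a kubesphere.tar.gz

# 2) Create the cluster from the artifact; --with-packages also installs
#    the OS dependency packages bundled in the artifact:
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages
```

Once installation finishes, the KubeSphere console is reachable on NodePort 30880 (the console.port value in the ClusterConfiguration above), with the documented default account admin / P@88w0rd.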
Author: Jeebiz, created 2024-08-02 13:42
Last edited by Jeebiz, updated 2024-08-02 14:28