# For the remaining approaches, every node must have the glusterfs client installed, otherwise the volumes cannot be mounted
yum -y install openssh-server wget fuse fuse-libs openmpi libibverbs
dnf install centos-release-gluster9 -y
dnf install -y glusterfs glusterfs-api glusterfs-fuse glusterfs-rdma glusterfs-libs glusterfs-cli glusterfs-client-xlators
# Then follow the manifests on GitHub: https://github.com/rootsongjc/kubernetes-handbook/tree/master/manifests/glusterfs
# Alternatively, a Heketi cluster can manage GlusterFS automatically through its REST API. In that mode Heketi configures brand-new GlusterFS nodes itself, so no GlusterFS setup is needed on them beforehand.
# Every GlusterFS node needs iptables rules that allow Kubernetes to reach it
iptables -N HEKETI
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
iptables -A HEKETI -p tcp -m state --state NEW -m multiport --dports 49152:49251 -j ACCEPT
service iptables save
# Load the kernel modules Heketi requires ahead of time
# Check which are already loaded
lsmod | egrep 'dm_snapshot|dm_mirror|dm_thin_pool'
modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
# The equivalent ansible run against all nodes
for i in dm_snapshot dm_mirror dm_thin_pool;do ansible nodes -m command -a 'modprobe '"$i"'';done
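Note that `modprobe` does not survive a reboot. A minimal sketch (`write_modules_conf` is a hypothetical helper, not part of the runbook above) that records the module list in a systemd `modules-load.d` file so they load automatically at boot:

```shell
# write_modules_conf DEST: write the modules heketi needs to DEST,
# one name per line, as systemd's modules-load.d expects
write_modules_conf() {
  printf '%s\n' dm_snapshot dm_mirror dm_thin_pool > "$1"
}
# on a real node this would be:
# write_modules_conf /etc/modules-load.d/heketi.conf
```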
# sshd must accept ssh-rsa keys, otherwise creating the heketi cluster will fail
echo "PubkeyAcceptedKeyTypes=+ssh-rsa" >> /etc/ssh/sshd_config
echo "HostKeyAlgorithms=+ssh-rsa" >> /etc/ssh/sshd_config
systemctl restart sshd
# Or via ansible on all nodes
ansible nodes -m shell -a 'echo "PubkeyAcceptedKeyTypes=+ssh-rsa" >> /etc/ssh/sshd_config;echo "HostKeyAlgorithms=+ssh-rsa" >> /etc/ssh/sshd_config;systemctl restart sshd'
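The plain `>>` append stacks duplicate lines every time the setup is re-run. A hedged sketch of an idempotent variant (`ensure_line` is a hypothetical helper) that appends only when the line is missing:

```shell
# ensure_line FILE LINE: append LINE to FILE only if not already present,
# so re-running the setup never duplicates sshd options
ensure_line() {
  grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}
# on a real node, followed by systemctl restart sshd:
# ensure_line /etc/ssh/sshd_config 'PubkeyAcceptedKeyTypes=+ssh-rsa'
# ensure_line /etc/ssh/sshd_config 'HostKeyAlgorithms=+ssh-rsa'
```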
# Configure the Heketi cluster, co-located on the same nodes as GlusterFS
wget https://github.com/heketi/heketi/releases/download/v10.4.0/heketi-v10.4.0-release-10.linux.amd64.tar.gz
tar -zxvf heketi-v10.4.0-release-10.linux.amd64.tar.gz
cp heketi/{heketi,heketi-cli} /usr/bin/
# heketi runs as its own unprivileged user rather than root
useradd -d /var/lib/heketi -s /sbin/nologin heketi
mkdir -p /etc/heketi      # must exist before ssh-keygen writes into it
ssh-keygen -N '' -t rsa -q -f /etc/heketi/heketi_key
chown -R heketi.heketi /etc/heketi
ssh-copy-id -i /etc/heketi/heketi_key root@node1
ssh-copy-id -i /etc/heketi/heketi_key root@node2
ssh-copy-id -i /etc/heketi/heketi_key root@node3
ssh-copy-id -i /etc/heketi/heketi_key root@node6
# Heketi configuration
mkdir -p /etc/heketi
cat << EOF > /etc/heketi/heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "18080",

  "_enable_tls_comment": "Enable TLS in Heketi Server",
  "enable_tls": false,

  "_cert_file_comment": "Path to a valid certificate file",
  "cert_file": "",

  "_key_file_comment": "Path to a valid private key file",
  "key_file": "",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "_key_comment": "Set the admin key in the next line",
      "key": "admin@P@88W0rd"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "_key_comment": "Set the user key in the next line",
      "key": "user@P@88W0rd"
    }
  },

  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false,

  "_profiling": "Enable go/pprof profiling on the /debug/pprof endpoints.",
  "profiling": false,

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "      Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_refresh_time_monitor_gluster_nodes": "Refresh time in seconds to monitor Gluster nodes",
    "refresh_time_monitor_gluster_nodes": 120,

    "_start_time_monitor_gluster_nodes": "Start time in seconds to monitor Gluster nodes when the heketi comes up",
    "start_time_monitor_gluster_nodes": 10,

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "warning"
  }
}
EOF
# Run heketi as a systemd service
cat << EOF > /usr/lib/systemd/system/heketi.service
[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
User=heketi
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target
EOF
systemctl enable heketi --now
systemctl status heketi -l   # check that the heketi service is running
# Describe the heketi cluster in a topology file (note: strict JSON, no trailing commas)
cat << EOF > /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [ "node1" ],
              "storage": [ "192.168.40.239" ]
            },
            "zone": 1
          },
          "devices": [
            { "name": "/dev/vdb", "destroydata": false }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "node2" ],
              "storage": [ "192.168.50.207" ]
            },
            "zone": 1
          },
          "devices": [
            { "name": "/dev/vdb", "destroydata": false }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "node3" ],
              "storage": [ "192.168.50.208" ]
            },
            "zone": 1
          },
          "devices": [
            { "name": "/dev/vdb", "destroydata": false }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "node6" ],
              "storage": [ "192.168.50.177" ]
            },
            "zone": 1
          },
          "devices": [
            { "name": "/dev/vdb", "destroydata": false }
          ]
        }
      ]
    }
  ]
}
EOF
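Hand-written topology files are easy to break with trailing commas, which strict JSON parsers reject. A small sketch (`check_json` is a hypothetical helper) that validates the file with Python's stdlib `json.tool` before loading it:

```shell
# check_json FILE: print "valid" if FILE parses as strict JSON, else "invalid"
check_json() {
  python3 -m json.tool "$1" > /dev/null 2>&1 && echo "valid" || echo "invalid"
}
# check_json /etc/heketi/topology.json
```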
# On the heketi manager node, load the topology
heketi-cli --server http://192.168.40.239:18080 --user admin --secret admin@P@88W0rd topology load --json=/etc/heketi/topology.json
# Optional: an alias saves retyping the credentials
echo "alias heketi-cli='heketi-cli --server http://192.168.40.239:18080 --user admin --secret admin@P@88W0rd'" >> ~/.bashrc
heketi-cli cluster list   # note the cluster id; the Kubernetes StorageClass below needs it
# Using the storage from Kubernetes
# Every node still needs the base client packages, otherwise the volumes cannot be mounted
yum -y install openssh-server wget fuse fuse-libs openmpi libibverbs
dnf install centos-release-gluster9 -y
dnf install -y glusterfs glusterfs-api glusterfs-fuse glusterfs-rdma glusterfs-libs glusterfs-cli glusterfs-client-xlators
heketiSecret=$(echo -n "admin@P@88W0rd" | base64)
cat << EOF > /etc/heketi/heketi-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: kube-system
data:
  key: ${heketiSecret}
type: kubernetes.io/glusterfs
EOF
kubectl apply -f /etc/heketi/heketi-secret.yaml
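The Secret's `key` must be the base64 of the heketi admin password with no trailing newline, which is why `echo -n` matters above; plain `echo` would encode an extra `\n` and authentication against heketi would fail. A quick roundtrip check:

```shell
# encode the admin password exactly as the Secret expects (no trailing newline),
# then decode it again to confirm the roundtrip is lossless
encoded=$(echo -n "admin@P@88W0rd" | base64)
decoded=$(echo "$encoded" | base64 -d)
echo "$decoded"
```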
cat << EOF > /etc/heketi/heketi-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs           # StorageClass is cluster-scoped, so no namespace field
parameters:
  resturl: "http://192.168.40.239:18080"
  clusterid: "9ad37206ce6575b5133179ba7c6e0935"   # from heketi-cli cluster list
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "kube-system"
  volumetype: "replicate:3"   # replicated volume, 3 replicas
                              # "disperse:4:2": dispersed volume, 4 data + 2 redundancy
                              # "none": striped volume
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete         # Retain keeps the volume, Recycle scrubs it, Delete removes it
EOF
kubectl apply -f /etc/heketi/heketi-storageclass.yaml
cat << EOF > /etc/heketi/heketi-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: heketi-pvc
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
spec:
  storageClassName: "glusterfs"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
EOF
kubectl apply -f /etc/heketi/heketi-pvc.yaml