
GlusterFS as Kubernetes Storage

Environment

OS: CentOS-7.9 amd64

Platform: Kubernetes-1.20.15

GlusterFS version: 6.10

  • 192.168.90.9 (node-01)
  • 192.168.90.10 (node-02)
  • 192.168.90.11 (node-03)

GlusterFS Deployment

Configure FQDNs

<omitted>
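
The original omits this step. A minimal sketch, assuming name resolution is done via /etc/hosts rather than DNS (the hostnames node-01/node-02/node-03 are the ones used in the commands below):

# append the same entries on all three nodes (using /etc/hosts instead of DNS is an assumption)
cat >> /etc/hosts <<'EOF'
192.168.90.9   node-01
192.168.90.10  node-02
192.168.90.11  node-03
EOF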

Partitioning and Mounting

<omitted>
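
Also omitted in the original. A hedged example, assuming each node has a dedicated data disk at /dev/sdb (the device name is hypothetical), formatted as XFS and mounted at /data so that /data/brick1 can serve as the brick directory used later:

# hypothetical device name /dev/sdb; adjust to the actual data disk
mkfs.xfs -i size=512 /dev/sdb
mkdir -p /data
echo '/dev/sdb /data xfs defaults 0 0' >> /etc/fstab
mount -a
mkdir -p /data/brick1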

Install GlusterFS

yum -y install centos-release-gluster6
yum -y install glusterfs glusterfs-fuse glusterfs-server

Enable and start the glusterd service

 systemctl enable glusterd.service --now
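
A quick sanity check that the daemon is running on each node (standard systemd and gluster commands, not part of the original notes):

systemctl is-active glusterd
gluster --version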

Configuration Steps

Add nodes to the trusted storage pool

# gluster peer probe node-02
# gluster peer probe node-03

Check cluster status

# gluster peer status 
# gluster pool list

Create the cluster volume

gluster volume status all

gluster volume create gfs-data replica 3 node-01:/data/brick1 \
                                         node-02:/data/brick1 \
                                         node-03:/data/brick1 force
                                         
gluster volume list

Start the volume

# gluster volume start gfs-data

# gluster volume info

Volume Name: gfs-data
Type: Replicate
Volume ID: 617624d3-5a87-4022-b92e-3b43d99d1077
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node-01:/data/brick1
Brick2: node-02:/data/brick1
Brick3: node-03:/data/brick1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Check the volume status

# gluster volume status gfs-data
Status of volume: gfs-data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node-01:/data/brick1                49152     0          Y       39317
Brick node-02:/data/brick1                49152     0          Y       39129
Brick node-03:/data/brick1                49152     0          Y       38991
Self-heal Daemon on localhost               N/A       N/A        Y       39338
Self-heal Daemon on node-02               N/A       N/A        Y       39150
Self-heal Daemon on node-03               N/A       N/A        Y       39012

Task Status of Volume gfs-data
------------------------------------------------------------------------------
There are no active volume tasks


# gluster volume info gfs-data

Security

# set brick ownership (e.g. for a container running as uid/gid 1000)
gluster volume set gfs-data storage.owner-uid 1000
gluster volume set gfs-data storage.owner-gid 1000

# restrict which clients may mount the volume, or allow all clients:
# gluster volume set gfs-data auth.allow 10.0.0.2,10.0.0.3,10.0.0.4
# gluster volume set gfs-data auth.allow '*'

Client Mount

yum -y install centos-release-gluster6
yum --enablerepo=centos-gluster6 install glusterfs glusterfs-fuse

Mount Options

The first example shows common mount options with generic host and volume names; the second mounts the gfs-data volume created above.

# mount -t glusterfs -o backupvolfile-server=node02,use-readdirp=no,noatime,log-level=WARNING,_netdev node01:/storage_volumes /data

# mount -t glusterfs -o backup-volfile-servers=node-02:node-03,log-level=WARNING node-01:/gfs-data /mnt/

If the glusterfs-fuse package is not installed on the client, the mount fails:

# mount -t glusterfs -obackup-volfile-servers=192.168.90.10:192.168.90.11,log-level=WARNING 192.168.90.9:/gfs-data /mnt
mount: unknown filesystem type 'glusterfs'
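
For a client mount that survives reboots, the same options can go into /etc/fstab. A sketch, assuming the client resolves the node names configured earlier:

# example /etc/fstab entry
node-01:/gfs-data /mnt glusterfs defaults,_netdev,backup-volfile-servers=node-02:node-03 0 0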

K8s Backend

# https://github.com/kubernetes/examples/blob/master/volumes/glusterfs/glusterfs-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.90.9
  ports:
  - port: 49152
- addresses:
  - ip: 192.168.90.10
  ports:
  - port: 49152
- addresses:
  - ip: 192.168.90.11
  ports:
  - port: 49152
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 49152
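
The Endpoints and Service are plain objects created with kubectl. The file name and the namespace creation below are assumptions; the Pod and PV later in this post look the objects up in the gfs namespace via endpointsNamespace:

kubectl create namespace gfs
kubectl apply -n gfs -f glusterfs-endpoints.yaml
kubectl -n gfs get endpoints glusterfs-cluster
kubectl -n gfs get svc glusterfs-cluster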

Test: Pod referencing the volume directly

apiVersion: v1
kind: Pod
metadata:
  name: glusterfs
spec:
  containers:
  - name: glusterfs
    image: nginx
    volumeMounts:
    - mountPath: "/mnt/glusterfs"
      name: glusterfsvol
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: glusterfs-cluster
      endpointsNamespace: gfs
      path: gfs-data
      readOnly: false
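
To verify the direct mount works, apply the Pod and check the mount inside the container (the file name glusterfs-pod.yaml is an assumption):

kubectl apply -f glusterfs-pod.yaml
kubectl exec glusterfs -- df -h /mnt/glusterfs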

Create a PV and PVC

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea-shared-storage
  annotations:
    pv.beta.kubernetes.io/gid: "1000"
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  glusterfs:
    endpoints: glusterfs-cluster
    endpointsNamespace: gfs
    path: /gfs-data/gitea
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-shared-storage
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  volumeName: gitea-shared-storage
  resources:
    requests:
      storage: 50Gi
  storageClassName: ""

Note: this volume plugin has been deprecated since Kubernetes 1.25; see: https://docs.openshift.com/container-platform/3.11/install_config/storage_examples/gluster_example.html
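
One detail worth noting: the PV points at the gitea subdirectory inside the gfs-data volume, and GlusterFS only mounts a subdirectory that already exists. Assuming the volume is still mounted at /mnt from the client section above, the directory can be pre-created like this:

# pre-create the subdirectory used by the PV; ownership matches the pv.beta.kubernetes.io/gid annotation
mkdir -p /mnt/gitea
chown 1000:1000 /mnt/gitea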

GlusterFS Performance Tuning

Enable quota on the volume

gluster volume quota gfs-data enable

Limit / (i.e. the whole gfs-data volume) to a maximum of 80GB

gluster volume quota gfs-data limit-usage / 80GB
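
The configured limit can be checked afterwards with the standard quota listing command:

gluster volume quota gfs-data list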

Set the read cache to 4GB

gluster volume set gfs-data performance.cache-size 4GB

Make sure the client has at least 4GB of physical memory, otherwise the mount will fail.

Enable flush-behind (asynchronous, background flushes)

gluster volume set gfs-data performance.flush-behind on

Set the I/O thread count to 32

gluster volume set gfs-data performance.io-thread-count 32

Enable write-behind (writes land in cache first, then are flushed to disk)

gluster volume set gfs-data performance.write-behind on

Other Options

# set the I/O thread count; too large a value can crash the process
$ gluster volume set gfs-data performance.io-thread-count 16

# set the network ping timeout; the default is 42s
$ gluster volume set gfs-data network.ping-timeout 10

# set the write-behind buffer size; the default is 1MB
$ gluster volume set gfs-data performance.write-behind-window-size 1024MB
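
To review what has actually been applied, all current option values of the volume can be listed (standard command, not part of the original notes):

gluster volume get gfs-data all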

Summary

This approach works for Kubernetes versions earlier than 1.25 and uses an Endpoints + Service pair to give Pods highly available access to the volume. Because a replicated volume is used, performance is unremarkable; this can be mitigated by running the bricks on ZFS underneath, or balanced by switching to a dispersed volume.