
28 Dec 2019
Ceph

Block Devices, File Systems, and Object Storage with Rook-based Ceph

0 Setting up rook-ceph on Kubernetes

0.1 Deploy the Rook Operator

Here we deploy the release-1.1 version of Rook; see the deployment manifest files used for this deployment.

Download the two resource manifest files, common.yaml and operator.yaml, from the link above:

$ kubectl apply -f common.yaml
$ kubectl apply -f operator.yaml
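
Before creating the cluster, it is worth confirming that the operator Pod has reached the Running state. A minimal check, assuming the default rook-ceph namespace and the app=rook-ceph-operator label used by the Rook manifests:

$ kubectl -n rook-ceph get pod -l app=rook-ceph-operator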

0.2 Create the Rook Ceph Cluster

Now that the Rook Operator is in the Running state, we can create the Ceph cluster. To ensure the cluster is not affected by restarts, make sure the dataDirHostPath property is set to a valid host path.

$ vim cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # latest ceph image; see https://hub.docker.com/r/ceph/ceph/tags
    image: ceph/ceph:v14.2.4-20190917
  dataDirHostPath: /data/rook  # a valid directory on the host
  mon:
    count: 3
  dashboard:
    enabled: true
  storage:
    useAllNodes: true
    useAllDevices: false
    # IMPORTANT: directories should only be used in pre-production environments
    directories:
    - path: /var/lib/rook
$ kubectl apply -f cluster.yaml

The number of OSD Pods depends on the number of nodes in the cluster and on the devices and directories configured. With the manifest above, one OSD will be created per node. Whether the rook-ceph-agent and rook-discover Pods exist also depends on our configuration.
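
You can watch these Pods come up once the cluster manifest has been applied; for example (the labels below follow Rook's conventions):

$ kubectl -n rook-ceph get pod -l app=rook-ceph-osd
$ kubectl -n rook-ceph get pod -l app=rook-discover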

0.3 Ceph Dashboard

$ kubectl get service -n rook-ceph
$ kubectl apply -f dashboard-external.yaml
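
The dashboard-external.yaml applied above is not reproduced in this post; with this Rook/Ceph version's defaults (dashboard served over SSL on port 8443), it is roughly a NodePort Service like the following sketch:

apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  # Expose the mgr dashboard outside the cluster via a NodePort
  type: NodePort
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph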

Rook creates a default admin user and generates a Secret named rook-ceph-dashboard-password in the namespace where Rook is running. To retrieve the password, run the following command:

$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
xxxx (login password)

0.4 Enable Object Gateway Management

To use the Object Gateway management features in the Dashboard, you need to provide credentials for a user with the system flag. If no such user exists yet, you can create one with the following commands:

# First exec into the Rook toolbox Pod
$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
# Create the user
$ radosgw-admin user create --uid=myuser --display-name=test-user \
    --system
{
    "user_id": "myuser",
    "display_name": "test-user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "myuser",
            "access_key": "<记住ak这个值>",
            "secret_key": "<记住sk这个值>"
        }
    ],
    ......
}

An access_key and a secret_key are generated for the user at creation time; note both values, as they are needed below. Then run the following commands to configure the dashboard:

$ ceph dashboard set-rgw-api-user-id myuser
Option RGW_API_USER_ID updated
$ ceph dashboard set-rgw-api-access-key <access-key>
Option RGW_API_ACCESS_KEY updated
$ ceph dashboard set-rgw-api-secret-key <secret-key>
Option RGW_API_SECRET_KEY updated

The Object Gateway menu in the Dashboard is now accessible.

1 Block Storage

1.1 Create the CephBlockPool and StorageClass

$ vim storageclass.yaml 
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # Specify the namespace of the rook cluster from which to create volumes.
  # If not specified, it will use `rook` as the default namespace of the cluster.
  # This is also the namespace where the cluster will be
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
  # (Optional) Specify an existing Ceph user that will be used for mounting storage with this StorageClass.
  #mountUser: user1
  # (Optional) Specify an existing Kubernetes secret name containing just one key holding the Ceph user secret.
  # The secret must exist in each namespace(s) where the storage will be consumed.
  #mountSecret: ceph-user1-secret
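
After saving the manifest, apply it (the filename matches the vim command above):

$ kubectl apply -f storageclass.yaml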

1.2 Usage

# PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: busybox-pvc
  namespace: default
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1024Mi

# Pod
apiVersion: v1
kind: Pod
metadata:
  name: busy-box-test1
  namespace: default
spec:
  restartPolicy: OnFailure
  containers:
  - name: busy-box-test1
    image: busybox
    volumeMounts:
    - name: busy-box-test-pv1
      mountPath: /mnt/busy-box
    command: ["sleep", "60000"]
  volumes:
  - name: busy-box-test-pv1
    persistentVolumeClaim:
      claimName: busybox-pvc
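
Assuming the two manifests above are saved as busybox-pvc.yaml and busybox-pod.yaml (hypothetical filenames), apply them and confirm the PVC binds:

$ kubectl apply -f busybox-pvc.yaml -f busybox-pod.yaml
$ kubectl get pvc busybox-pvc
$ kubectl get pod busy-box-test1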

If other Pods also need to mount this volume, note that the accessModes above is ReadWriteOnce, so the block volume can only be mounted read-write by a single node; for sharing a volume across nodes, use the file system storage described next.

2 File System

2.1 Create a CephFilesystem

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: busy-box-fs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - failureDomain: osd
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
    resources:
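
Assuming the manifest above is saved as filesystem.yaml (a hypothetical filename), apply it before checking the MDS Pods:

$ kubectl apply -f filesystem.yaml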
    
$ kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                          READY   STATUS    RESTARTS   AGE
rook-ceph-mds-busy-box-fs-a-d8955cd94-wthsc   1/1     Running   0          98s
rook-ceph-mds-busy-box-fs-b-8f5b6cf8-b27q9    1/1     Running   0          97s

As you can see, two MDS Pods prefixed rook-ceph-mds-busy-box-fs are created by default, and two underlying Pools are created as well: busy-box-fs-metadata for metadata and busy-box-fs-data0 for data.

2.2 Create an instance that mounts the fs at the directory /mnt/busy-box/fs.

apiVersion: v1
kind: ReplicationController
metadata:
  name: busy-box-test3
  namespace: default
  labels:
    k8s-app: busy-box-test3
spec:
  replicas: 3
  selector:
    k8s-app: busy-box-test3
  template:
    metadata:
      labels:
        k8s-app: busy-box-test3
        kubernetes.io/cluster-service: "true"
    spec:
      restartPolicy: Always
      containers:
      - name: busy-box-test3
        image: busybox
        volumeMounts:
        - name: busy-box-test-fs
          mountPath: /mnt/busy-box/fs
        command: ["sleep", "60000"]
      volumes:
      - name: busy-box-test-fs
        flexVolume:
          driver: ceph.rook.io/rook
          fsType: ceph
          options:
              fsName: busy-box-fs
              clusterNamespace: rook-ceph
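
Assuming the ReplicationController above is saved as busy-box-rc.yaml (hypothetical filename), apply it and verify the CephFS mount from one of the replicas:

$ kubectl apply -f busy-box-rc.yaml
$ kubectl get pod -l k8s-app=busy-box-test3
$ kubectl exec -it $(kubectl get pod -l k8s-app=busy-box-test3 -o jsonpath='{.items[0].metadata.name}') -- df -h /mnt/busy-box/fs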

3 Object Storage

3.1 Create an Object Store

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  # The pool spec used to create the metadata pools. Must use replication.
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  # The pool spec used to create the data pool. Can use replication or erasure coding.
  dataPool:
    failureDomain: host
    replicated:
      size: 3
  # The gateway service configuration
  gateway:
    # type of the gateway (s3)
    type: s3
    # A reference to the secret in the rook namespace where the ssl certificate is stored
    sslCertificateRef:
    # The port that RGW pods will listen on (http)
    port: 80
    # The port that RGW pods will listen on (https). An ssl certificate is required.
    securePort:
    # The number of pods in the rgw deployment (ignored if allNodes=true)
    instances: 1
    # Whether the rgw pods should be deployed on all nodes as a daemonset
    allNodes: false
    # The affinity rules to apply to the rgw deployment or daemonset.
    placement:
    #  nodeAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      nodeSelectorTerms:
    #      - matchExpressions:
    #        - key: role
    #          operator: In
    #          values:
    #          - rgw-node
    #  tolerations:
    #  - key: rgw-node
    #    operator: Exists
    #  podAffinity:
    #  podAntiAffinity:
    # A key/value list of annotations
    annotations:
    #  key: value
    resources:
    # The requests and limits set here, allow the object store gateway Pod(s) to use half of one CPU core and 1 gigabyte of memory
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
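
Assuming the manifest above is saved as object.yaml (hypothetical filename), apply it and wait for the RGW Pod (labelled app=rook-ceph-rgw by Rook) to start:

$ kubectl apply -f object.yaml
$ kubectl -n rook-ceph get pod -l app=rook-ceph-rgw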

3.2 Create a User

Next, create a CephObjectStoreUser account to generate an AccessKey and SecretKey, which this user will later use to access the S3 store.

apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "my display name"

By default, Rook stores the generated user information, including the authentication keys, in a Secret; let's check which keys that Secret contains.

$ kubectl -n rook-ceph get cephobjectstoreuser
NAME      AGE
my-user   211d
$ kubectl -n rook-ceph get secret | grep my-user
rook-ceph-object-user-my-store-my-user   kubernetes.io/rook
$ kubectl -n rook-ceph get secret/rook-ceph-object-user-my-store-my-user -o jsonpath='{.data.AccessKey}' | base64 --decode
NZWKPRBWDIIQTJLA
$ kubectl -n rook-ceph get secret/rook-ceph-object-user-my-store-my-user -o jsonpath='{.data.SecretKey}' | base64 --decode
FvLpBbAfdiqyYe9gqcuKosBybnI41WAq34

At this point, if you look at the Pools page in the Rook Dashboard, you will find six new rgw-related underlying pools.

You can then access this S3 store from inside the cluster with s3cmd (after configuring the environment variables); to access it from outside the cluster, create a Service that exposes the gateway via NodePort.
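
A minimal sketch of in-cluster access with s3cmd, assuming the rook-ceph-rgw-my-store Service that Rook creates for this store, with the AccessKey/SecretKey read from the Secret above:

# Create a test bucket through the internal RGW service (port 80, per the gateway spec)
$ s3cmd mb s3://test-bucket \
    --no-ssl \
    --host=rook-ceph-rgw-my-store.rook-ceph \
    --host-bucket= \
    --access_key=<AccessKey> \
    --secret_key=<SecretKey>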

