Ceph
prepare
- k8s is ready
- argocd is ready and logged in
- storage devices for Ceph are prepared on the cluster nodes (they are referenced in ceph-cluster.yaml; see the preparation sketch below)
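The ceph-cluster.yaml used later expects an LVM logical volume ubuntu-vg/ceph-cluster on each storage node, and pins all Ceph pods to nodes carrying the node-type/rook-ceph label and taint. A minimal preparation sketch, assuming the ubuntu-vg volume group already exists and using an example size of 100G:

```bash
# On each storage node (k3s2, k3s3, k3s4): create the logical volume the OSDs will use.
# Assumes an existing volume group named "ubuntu-vg"; 100G is an example size.
sudo lvcreate --name ceph-cluster --size 100G ubuntu-vg

# From a machine with kubectl access: label (and optionally taint) the storage nodes
# so the CephCluster placement rules on node-type/rook-ceph match them.
for node in k3s2 k3s3 k3s4; do
  kubectl label node "$node" node-type/rook-ceph=
  kubectl taint node "$node" node-type/rook-ceph=:NoSchedule
done
```

The taint is optional, but it matches the tolerations in the manifests below and keeps non-Ceph workloads off the storage nodes.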
installation
install rook-ceph-operator
- prepare rook-ceph-operator.app.yaml

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rook-ceph-operator
spec:
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
  project: default
  source:
    repoURL: https://charts.rook.io/release
    chart: rook-ceph
    targetRevision: v1.17.4
    helm:
      releaseName: rook-ceph-operator
      valuesObject:
        image:
          repository: m.daocloud.io/docker.io/rook/ceph
          pullPolicy: IfNotPresent
        csi:
          imagePullPolicy: IfNotPresent
          cephcsi:
            repository: m.daocloud.io/quay.io/cephcsi/cephcsi
          registrar:
            repository: m.daocloud.io/registry.k8s.io/sig-storage/csi-node-driver-registrar
          provisioner:
            repository: m.daocloud.io/registry.k8s.io/sig-storage/csi-provisioner
          snapshotter:
            repository: m.daocloud.io/registry.k8s.io/sig-storage/csi-snapshotter
          attacher:
            repository: m.daocloud.io/registry.k8s.io/sig-storage/csi-attacher
          resizer:
            repository: m.daocloud.io/registry.k8s.io/sig-storage/csi-resizer
  destination:
    server: https://kubernetes.default.svc
    namespace: rook-ceph
```
- apply rook-ceph-operator to k8s
kubectl -n argocd apply -f rook-ceph-operator.app.yaml
- sync the app via argocd
argocd app sync rook-ceph-operator
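Before moving on, it's worth confirming the operator actually came up. A quick check, assuming the chart's default deployment name rook-ceph-operator:

```bash
argocd app get rook-ceph-operator
kubectl -n rook-ceph rollout status deployment/rook-ceph-operator --timeout=5m
```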
create a ceph-cluster and a filesystem
- create a ceph-cluster
- prepare ceph-cluster.yaml

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: m.daocloud.io/quay.io/ceph/ceph:v19.2.2
    allowUnsupported: false
    imagePullPolicy: IfNotPresent
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  waitTimeoutForHealthyOSDInMinutes: 10
  upgradeOSDRequiresHealthyPGs: false
  mon:
    count: 3
    allowMultiplePerNode: false
    volumeClaimTemplate:
      spec:
        storageClassName: local-path
        resources:
          requests:
            storage: 10Gi
  mgr:
    count: 2
    allowMultiplePerNode: false
    modules:
      - name: rook
        enabled: true
      - name: pg_autoscaler
        enabled: true
  dashboard:
    enabled: true
    ssl: false
  monitoring:
    enabled: false
  crashCollector:
    disable: false
  logCollector:
    enabled: true
    periodicity: daily
    maxLogSize: 1G
  storage:
    useAllNodes: false
    useAllDevices: false
    allowDeviceClassUpdate: false
    allowOsdCrushWeightUpdate: true
    nodes:
      - name: k3s2
        devices:
          - name: /dev/ubuntu-vg/ceph-cluster
      - name: k3s3
        devices:
          - name: /dev/ubuntu-vg/ceph-cluster
      - name: k3s4
        devices:
          - name: /dev/ubuntu-vg/ceph-cluster
  resources:
    mgr:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "1000m"
        memory: "2Gi"
    mon:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "1000m"
        memory: "2Gi"
    osd:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "1000m"
        memory: "2Gi"
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: node-type/rook-ceph
                  operator: Exists
      tolerations:
        - effect: NoSchedule
          key: node-type/rook-ceph
          operator: Exists
  disruptionManagement:
    managePodBudgets: true
    osdMaintenanceTimeout: 30
    pgHealthCheckTimeout: 0
    manageMachineDisruptionBudgets: false
```
- apply ceph-cluster to k8s
kubectl -n rook-ceph apply -f ceph-cluster.yaml
- wait for all pods to be running:
kubectl -n rook-ceph get pods -w
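Watching pods works, but the CephCluster's own status is the authoritative signal. A sketch that blocks until the cluster reports phase Ready (jsonpath waits need kubectl v1.23+):

```bash
# OSD provisioning can take several minutes on the first run.
kubectl -n rook-ceph wait cephcluster/rook-ceph \
  --for=jsonpath='{.status.phase}'=Ready --timeout=30m
```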
- create a filesystem
- prepare filesystem.yaml

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: filesystem
  namespace: rook-ceph
spec:
  metadataPool:
    name: filesystem-metadata-pool
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
  dataPools:
    - name: filesystem-data-pool
      failureDomain: host
      replicated:
        size: 3
        requireSafeReplicaSize: true
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 2
    activeStandby: true
    placement:
      tolerations:
        - effect: NoSchedule
          key: node-type/rook-ceph
          operator: Exists
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "1000m"
        memory: "4Gi"
```
- apply filesystem to k8s
kubectl -n rook-ceph apply -f filesystem.yaml
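With activeCount: 2 and activeStandby: true, Rook should run four MDS pods (two active, two standby-replay). A quick check, assuming Rook's usual app=rook-ceph-mds pod label:

```bash
kubectl -n rook-ceph get pods -l app=rook-ceph-mds
```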
- verify ceph status
- prepare toolbox.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-tools
  namespace: rook-ceph
  labels:
    app: rook-ceph-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rook-ceph-tools
  template:
    metadata:
      labels:
        app: rook-ceph-tools
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: rook-ceph-tools
          image: m.daocloud.io/docker.io/rook/ceph:v1.17.4
          command: ["/usr/local/bin/toolbox.sh"]
          imagePullPolicy: IfNotPresent
          env:
            - name: ROOK_CEPH_USERNAME
              valueFrom:
                secretKeyRef:
                  name: rook-ceph-mon
                  key: ceph-username
            - name: ROOK_CEPH_SECRET
              valueFrom:
                secretKeyRef:
                  name: rook-ceph-mon
                  key: ceph-secret
          volumeMounts:
            - mountPath: /etc/ceph
              name: ceph-config
            - name: mon-endpoint-volume
              mountPath: /etc/rook
      volumes:
        - name: mon-endpoint-volume
          configMap:
            name: rook-ceph-mon-endpoints
            items:
              - key: data
                path: mon-endpoints
        - name: ceph-config
          emptyDir: {}
```
- apply toolbox to k8s
kubectl -n rook-ceph apply -f toolbox.yaml
- check Ceph cluster status:
kubectl -n rook-ceph exec -it deployment/rook-ceph-tools -- ceph status
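Beyond ceph status, a few more toolbox commands help confirm that the OSDs, capacity, and the new filesystem are healthy:

```bash
kubectl -n rook-ceph exec -it deployment/rook-ceph-tools -- ceph osd status
kubectl -n rook-ceph exec -it deployment/rook-ceph-tools -- ceph df
kubectl -n rook-ceph exec -it deployment/rook-ceph-tools -- ceph fs status filesystem
```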
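Since ceph-cluster.yaml enables the dashboard with ssl: false, the cluster can also be inspected in a browser. A sketch using the service and secret names Rook creates by default (the non-SSL dashboard listens on port 7000):

```bash
# Print the admin password generated by the operator.
kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
  -o jsonpath='{.data.password}' | base64 --decode; echo
# Forward the dashboard locally, then open http://localhost:7000 and log in as admin.
kubectl -n rook-ceph port-forward service/rook-ceph-mgr-dashboard 7000:7000
```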
create a storage class to use the filesystem
- prepare storage-class.yaml

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-filesystem
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: filesystem
  pool: filesystem-data-pool
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true
```
- apply storage-class to k8s
kubectl apply -f storage-class.yaml
- check storage class
kubectl get storageclass ceph-filesystem
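To exercise the storage class end to end, a throwaway PVC is enough; test-pvc and the 1Gi request are placeholder values:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc           # hypothetical name, used only for this smoke test
spec:
  accessModes:
    - ReadWriteMany        # CephFS volumes can be mounted by multiple nodes
  storageClassName: ceph-filesystem
  resources:
    requests:
      storage: 1Gi
```

After kubectl apply -f test-pvc.yaml the claim should reach Bound within a few seconds; deleting it afterwards also removes the backing volume, since the storage class uses reclaimPolicy: Delete.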