# Ceph Operator Helm Chart

Installs Rook to create, configure, and manage Ceph clusters on Kubernetes.
## Introduction
This chart bootstraps a rook-ceph-operator deployment on a Kubernetes cluster using the Helm package manager.
## Prerequisites
- Helm 3.13+
See the Helm support matrix for more details.
## Installing
The Ceph Operator helm chart will install the basic components necessary to create a storage platform for your Kubernetes cluster.
- Install the Helm chart
- Install the Ceph-CSI drivers chart so CSI can provision and mount volumes
- Create a Rook cluster
The `helm install` command deploys Rook on the Kubernetes cluster in the default configuration. The Configuration section below lists the parameters that can be configured during installation. It is recommended to install the Rook operator into the `rook-ceph` namespace (you will install your clusters into separate namespaces).
## Release
The release channel is the most recent release of Rook that is considered stable for the community.
For example settings, see the next section or `values.yaml`.
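A typical install from the release channel looks like the following (the chart repository URL is the one published by the Rook project; the release name and namespace follow the recommendation above):

```console
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml
```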
## Configuration
The following table lists the configurable parameters of the rook-operator chart and their default values.
| Parameter | Description | Default |
|---|---|---|
| `allowLoopDevices` | If true, loop devices are allowed to be used for OSDs in test clusters | `false` |
| `annotations` | Pod annotations | `{}` |
| `ceph-csi-operator.controllerManager.manager.env.csiServiceAccountPrefix` |  | `"ceph-csi-"` |
| `ceph-csi-operator.fullnameOverride` |  | `"ceph-csi"` |
| `ceph-csi-operator.nameOverride` |  | `"ceph-csi"` |
| `cephCommandsTimeoutSeconds` | The timeout for ceph commands in seconds | `"15"` |
| `containerSecurityContext` | Set the container security context for the operator | `{"capabilities":{"drop":["ALL"]},"runAsGroup":2016,"runAsNonRoot":true,"runAsUser":2016}` |
| `crds.enabled` | Whether the helm chart should create and update the CRDs. If false, the CRDs must be managed independently with `deploy/examples/crds.yaml`. **WARNING:** only set this during the first deployment. If disabled later, the cluster may be DESTROYED. If the CRDs are deleted in this case, see the disaster recovery guide to restore them. | `true` |
| `csi.attacher.repository` | Kubernetes CSI Attacher image repository | `"registry.k8s.io/sig-storage/csi-attacher"` |
| `csi.attacher.tag` | Attacher image tag | `"v4.11.0"` |
| `csi.cephcsi.repository` | Ceph CSI image repository | `"quay.io/cephcsi/cephcsi"` |
| `csi.cephcsi.tag` | Ceph CSI image tag | `"v3.16.2"` |
| `csi.csiAddons.repository` | CSIAddons sidecar image repository | `"quay.io/csiaddons/k8s-sidecar"` |
| `csi.csiAddons.tag` | CSIAddons sidecar image tag | `"v0.14.0"` |
| `csi.installCsiOperator` | When true, install the ceph-csi-operator subchart (see the condition in `Chart.yaml`) | `true` |
| `csi.provisioner.repository` | Kubernetes CSI provisioner image repository | `"registry.k8s.io/sig-storage/csi-provisioner"` |
| `csi.provisioner.tag` | Provisioner image tag | `"v6.1.1"` |
| `csi.registrar.repository` | Kubernetes CSI registrar image repository | `"registry.k8s.io/sig-storage/csi-node-driver-registrar"` |
| `csi.registrar.tag` | Registrar image tag | `"v2.16.0"` |
| `csi.resizer.repository` | Kubernetes CSI resizer image repository | `"registry.k8s.io/sig-storage/csi-resizer"` |
| `csi.resizer.tag` | Resizer image tag | `"v2.1.0"` |
| `csi.serviceMonitor.enabled` | Enable ServiceMonitor for CSI metrics | `false` |
| `csi.serviceMonitor.interval` | Interval at which metrics should be scraped | `"5s"` |
| `csi.serviceMonitor.labels` | Additional labels for the ServiceMonitor | `{}` |
| `csi.serviceMonitor.namespace` | Namespace in which to deploy the ServiceMonitor | the release namespace |
| `csi.snapshotter.repository` | Kubernetes CSI snapshotter image repository | `"registry.k8s.io/sig-storage/csi-snapshotter"` |
| `csi.snapshotter.tag` | Snapshotter image tag | `"v8.5.0"` |
| `currentNamespaceOnly` | Whether the operator should watch cluster CRDs only in its own namespace | `false` |
| `customHostnameLabel` | Custom label to identify the node hostname. If not set, `kubernetes.io/hostname` will be used | `nil` |
| `deleteUnusedCrushRules` | If true, delete unused generated CRUSH rules after the mgr starts | `true` |
| `disableDeviceHotplug` | Disable automatic orchestration when new devices are discovered | `false` |
| `discover.nodeAffinity` | The node labels for affinity of discover-agent (see note 1 below) | `nil` |
| `discover.podLabels` | Labels to add to the discover pods | `nil` |
| `discover.resources` | Add resources to discover daemon pods | `nil` |
| `discover.toleration` | Toleration for the discover pods. Options: `NoSchedule`, `PreferNoSchedule` or `NoExecute` | `nil` |
| `discover.tolerationKey` | The specific key of the taint to tolerate | `nil` |
| `discover.tolerations` | Array of tolerations in YAML format which will be added to the discover deployment | `nil` |
| `discoverDaemonUdev` | Blacklist certain disks according to the regex provided | `nil` |
| `discoveryDaemonInterval` | Set the discovery daemon device discovery interval | `"60m"` |
| `enableDiscoveryDaemon` | Enable the discovery daemon | `false` |
| `enableOBCWatchOperatorNamespace` | Whether the OBC provisioner should watch the operator namespace; if not, the namespace of the cluster will be used | `true` |
| `enforceHostNetwork` | Whether to run all Rook pods on the host network, for example in environments where a CNI is not enabled | `false` |
| `hostpathRequiresPrivileged` | Run Ceph pods as privileged to be able to write to hostPaths in OpenShift with SELinux restrictions | `false` |
| `image.pullPolicy` | Image pull policy | `"IfNotPresent"` |
| `image.repository` | Image | `"docker.io/rook/ceph"` |
| `image.tag` | Image tag | `master` |
| `imagePullSecrets` | The imagePullSecrets option allows pulling docker images from a private docker registry. The option will be passed to all service accounts | `nil` |
| `logLevel` | Global log level for the operator. Options: `ERROR`, `WARNING`, `INFO`, `DEBUG` | `"INFO"` |
| `monitoring.enabled` | Enable monitoring. Requires Prometheus to be pre-installed. Enabling will also create RBAC rules to allow the operator to create ServiceMonitors | `false` |
| `nodeSelector` | Kubernetes `nodeSelector` to add to the Deployment | `{}` |
| `obcAllowAdditionalConfigFields` | Many OBC additional config fields may be risky for administrators to allow users control over. The safe and default-allowed fields are `maxObjects` and `maxSize`. Other fields should be considered risky. To allow all additional configs, use this value: `"maxObjects,maxSize,bucketMaxObjects,bucketMaxSize,bucketPolicy,bucketLifecycle,bucketOwner"` | `"maxObjects,maxSize"` |
| `obcProvisionerNamePrefix` | Specify the prefix for the OBC provisioner in place of the cluster namespace | ceph cluster namespace |
| `operatorPodLabels` | Custom pod labels for the operator | `{}` |
| `priorityClassName` | Set the priority class for the rook operator deployment if desired | `nil` |
| `rbacAggregate.enableOBCs` | If true, create a ClusterRole aggregated to user-facing roles for objectbucketclaims | `false` |
| `rbacEnable` | If true, create & use RBAC resources | `true` |
| `reconcileConcurrentClusters` | Number of clusters the operator reconciles concurrently | `1` |
| `resources` | Pod resource requests & limits | `{"limits":{"memory":"512Mi"},"requests":{"cpu":"200m","memory":"128Mi"}}` |
| `revisionHistoryLimit` | The revision history limit for all pods created by Rook. If blank, the K8s default is 10 | `nil` |
| `scaleDownOperator` | If true, scale down the rook operator. This is useful for administrative actions where the rook operator must be scaled down while using gitops-style tooling to deploy your helm charts | `false` |
| `tolerations` | List of Kubernetes tolerations to add to the Deployment | `[]` |
| `unreachableNodeTolerationSeconds` | Delay to use for the `node.kubernetes.io/unreachable` pod failure toleration, overriding the Kubernetes default of 5 minutes | `5` |
| `useOperatorHostNetwork` | If true, run the rook operator on the host network | `nil` |
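As a sketch, a small `values.yaml` overriding a few of the parameters above might look like this (all keys are taken from the table; the chosen values are purely illustrative):

```yaml
# values.yaml -- illustrative overrides for the rook-ceph operator chart
logLevel: DEBUG                # more verbose operator logs than the default "INFO"
enableDiscoveryDaemon: true    # run the device discovery daemon
csi:
  serviceMonitor:
    enabled: true              # scrape CSI metrics (requires Prometheus)
resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    memory: 512Mi
```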
## Development Build
To deploy from a local build from your development environment:
- Build the Rook docker image: `make`
- Copy the image to your K8s cluster, such as with the `docker save` then the `docker load` commands
- Install the helm chart:
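From the Rook source tree, the steps above might look like the following sketch (the chart path `deploy/charts/rook-ceph` is an assumption based on the repository layout, and the image-copy step is a hypothetical example for a single remote node):

```console
make
# hypothetical copy to a cluster node; adjust for your environment
docker save rook/ceph:master | ssh <node> docker load
helm install --create-namespace --namespace rook-ceph rook-ceph deploy/charts/rook-ceph
```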
## Uninstalling the Chart
To see the currently installed Rook chart:
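Assuming the default release name `rook-ceph` in the `rook-ceph` namespace:

```console
helm ls --namespace rook-ceph
```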
To uninstall/delete the rook-ceph deployment:
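Assuming the same release name and namespace:

```console
helm uninstall --namespace rook-ceph rook-ceph
```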
The command removes all the Kubernetes components associated with the chart and deletes the release.
After uninstalling you may want to clean up the CRDs as described in the teardown documentation.
---

1. `nodeAffinity` and `*NodeAffinity` options should have the format `"role=storage,rook; storage=ceph"`, `storage;role=rook-example`, or `storage;` (checks only for presence of key)