NFS Storage Overview
NFS storage can be mounted with read/write permission from multiple pods. NFS storage is especially useful for leveraging an existing Rook cluster to serve legacy applications that assume an NFS client connection. Such applications may not have been migrated to Kubernetes or might not yet support PVCs. Rook NFS storage can provide access to the same network filesystem from within the Kubernetes cluster via PVC while simultaneously providing access via direct client connection from within or outside of the Kubernetes cluster.
Warning
Simultaneous access to NFS storage from Pods and from external clients complicates NFS user ID mapping significantly. User IDs mapped for external clients will not match the IDs used by the NFS CSI driver, which mounts exports for Kubernetes pods.
Warning
Due to a number of Ceph issues and changes, Rook officially only supports Ceph v16.2.7 or higher for CephNFS. If you are using an earlier version, upgrade your Ceph version following the advice given in Rook's v1.9 NFS docs.
Note
CephNFS clusters support NFSv4.1+ access only. Serving earlier protocols can inhibit responsiveness after a server restart.
Prerequisites
This guide assumes you have created a Rook cluster, as explained in the main quickstart guide, as well as a Ceph filesystem that will act as the backing storage for NFS.
Many samples reference the CephNFS and CephFilesystem example manifests (deploy/examples/nfs.yaml and deploy/examples/filesystem.yaml in the Rook repository).
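As a sketch of how the backing filesystem might be created, assuming a checkout of the Rook repository and the example file name above:

```console
# Create the CephFilesystem that will back the NFS exports
kubectl create -f deploy/examples/filesystem.yaml
```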
Creating an NFS cluster
Create the NFS cluster by specifying the desired settings documented for the NFS CRD.
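As a minimal sketch, a CephNFS resource like the following starts a single active NFS server. The name my-nfs and the rook-ceph namespace follow the examples in this guide; see the NFS CRD documentation for all available settings.

```console
kubectl create -f - <<EOF
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs
  namespace: rook-ceph
spec:
  server:
    # Number of active NFS servers to run for this cluster
    active: 1
EOF
```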
Creating Exports
When a CephNFS is first created, all NFS daemons within the CephNFS cluster will share a configuration with no exports defined. When creating an export, it is necessary to specify the CephFilesystem which will act as the backing storage for the NFS export.
RADOS Gateways (RGWs), provided by CephObjectStores, can also be used as backing storage for NFS exports if desired.
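For example, once the Ceph CLI prerequisites described below are in place, an RGW-backed export might be created with a command along these lines. The bucket name my-bucket is hypothetical, and flag names may vary by Ceph release; check `ceph nfs export create rgw --help` for your release.

```console
# Export an object store bucket over NFS (Quincy-era syntax)
ceph nfs export create rgw --cluster-id my-nfs --pseudo-path /bucket --bucket my-bucket
```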
Using the Ceph Dashboard
Exports can be created via the Ceph dashboard as well. To enable and use the Ceph dashboard in Rook, see here.
Using the Ceph CLI
The Ceph CLI can be used from the Rook toolbox pod to create and manage NFS exports. To do so, first ensure that the necessary Ceph mgr modules are enabled and that the Ceph orchestrator backend is set to Rook, as shown below.
Enable the Ceph orchestrator if necessary
- Required for Ceph v16.2.7 and below
- Optional for Ceph v16.2.8 and above
- Must be disabled for Ceph v17.2.1 due to a Ceph regression
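From the toolbox pod, the mgr modules and orchestrator backend can be set up with the following commands:

```console
# Enable the Rook and NFS mgr modules, then point the orchestrator at Rook
ceph mgr module enable rook
ceph mgr module enable nfs
ceph orch set backend rook
```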
Ceph's NFS CLI can create NFS exports that are backed by CephFS (a CephFilesystem) or Ceph Object Gateway (a CephObjectStore). `cluster_id` or `cluster-name` in the Ceph NFS docs normally refers to the name of the NFS cluster, which is the CephNFS name in the Rook context.
To create an NFS export for the CephNFS and CephFilesystem example manifests, use the command below. This creates an export for the `/test` pseudo path.
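A sketch of that command, assuming the example names my-nfs (the CephNFS) and myfs (the CephFilesystem). The positional argument order shown here matches recent Ceph releases; consult `ceph nfs export create cephfs --help` for your release.

```console
# Export the myfs filesystem at the /test pseudo path on the my-nfs cluster
ceph nfs export create cephfs my-nfs /test myfs
```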
The below command lists the current NFS exports for the example CephNFS cluster, giving the output shown for the current example.
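```console
ceph nfs export ls my-nfs
# Illustrative output for the example export:
# [
#   "/test"
# ]
```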
The simple `/test` export's info can be listed as well. Notice from the example that only NFS protocol v4 via TCP is supported.
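A sketch of the info command; in its JSON output, look for the `protocols` and `transports` fields (e.g., `"protocols": [4]` and `"transports": ["TCP"]`).

```console
ceph nfs export info my-nfs /test
```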
If you are done managing NFS exports and don't need the Ceph orchestrator module enabled for anything else, it may be preferable to disable the Rook and NFS mgr modules to free up a small amount of RAM in the Ceph mgr Pod.
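A sketch of unwinding that configuration from the toolbox pod. Disabling the nfs module is only appropriate if nothing else in the cluster uses it.

```console
# Unset the orchestrator backend before disabling the modules
ceph orch set backend ""
ceph mgr module disable rook
ceph mgr module disable nfs
```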
Mounting exports
Each CephNFS server has a unique Kubernetes Service. This is because NFS clients can't readily handle NFS failover. CephNFS services are named with the pattern `rook-ceph-nfs-<cephnfs-name>-<id>`, where `<id>` is a unique letter ID (e.g., a, b, c) for a given NFS server. For example, `rook-ceph-nfs-my-nfs-a`.
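The services can be listed to find the address to mount from; the output below is illustrative, assuming the example CephNFS named my-nfs in the rook-ceph namespace.

```console
kubectl -n rook-ceph get service | grep rook-ceph-nfs
# rook-ceph-nfs-my-nfs-a   ClusterIP   10.107.1.2   <none>   2049/TCP   5m
```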
For each NFS client, choose an NFS service to use for the connection. With NFS v4, you can mount an export by its path using a mount command like the one below. You can mount all exports at once by omitting the export path and leaving the directory as just `/`.
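A sketch of both forms, with `<nfs-service-address>` standing in for the chosen Service's address:

```console
# Mount only the /test export
mount -t nfs4 -o proto=tcp <nfs-service-address>:/test /mnt/test

# Mount all exports at once
mount -t nfs4 -o proto=tcp <nfs-service-address>:/ /mnt
```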
Exposing the NFS server outside of the Kubernetes cluster
Use a LoadBalancer Service to expose an NFS server (and its exports) outside of the Kubernetes cluster. The Service's endpoint can be used as the NFS service address when mounting the export manually. We provide an example Service here: `deploy/examples/nfs-load-balancer.yaml`.
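A sketch of applying the example and finding the external address to mount from, assuming a checkout of the Rook repository:

```console
kubectl create -f deploy/examples/nfs-load-balancer.yaml
kubectl -n rook-ceph get service
# Use the EXTERNAL-IP of the load balancer Service as the NFS service address
```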
NFS Security
Security options for NFS are documented here.
Ceph CSI NFS provisioner and NFS CSI driver
The NFS CSI provisioner and driver are documented here.
Advanced configuration
Advanced NFS configuration is documented here.
Known issues
Known issues are documented on the NFS CRD page.