NFSoRDMA (NFS over Remote Direct Memory Access) is NFS run over an RDMA transport: the client moves data directly between memory buffers and avoids much of the TCP/kernel copy overhead, which typically improves throughput, reduces latency, and lowers CPU usage.
On Dell EMC PowerScale OneFS, NFSoRDMA has been supported since OneFS 9.2 and uses RoCEv2 (Routable RoCE) as the RDMA transport over Ethernet.
For OpenShift, NFSoRDMA can be a strong fit when you want shared POSIX storage (RWX) but also care about performance consistency and CPU efficiency, which is common in AI/ML pipelines, media workloads, and HPC-style data processing.
NFSoRDMA is usually most valuable when:

- you need shared POSIX (RWX) storage but can't accept the CPU and latency cost of TCP-based NFS,
- throughput consistency matters (AI/ML pipelines, media workloads, HPC-style data processing), and
- the client nodes have RDMA-capable (RoCEv2) NICs on a suitably configured network.

Dell highlights that OneFS's NFS over RDMA implementation can deliver dramatic performance increases and reduced workstation CPU utilization compared to TCP-based NFS in appropriate workflows.
A few OneFS specifics matter when designing this for OpenShift:
OneFS runs RDMA traffic using RoCEv2 (RRoCE).
Hardware-wise, Dell's guidance calls out the Mellanox ConnectX family of NICs for RDMA-capable front-end networking.
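The same requirement applies on the client side: the OpenShift worker nodes need RDMA-capable NICs, which rdma-core's tools can confirm (a sketch, assuming the tools are installed on the node):

# List the RDMA devices the kernel has registered:
ibv_devices
# Show per-device details; for RoCE the link layer reads "Ethernet":
ibv_devinfo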
Even when the data path uses RDMA, OneFS documentation notes that NFSv3 auxiliary protocols (mount, lock manager, rpcbind, etc.) still run over TCP/UDP.
So your firewall/network rules can't be "RDMA-only": you still need the usual NFS control-plane ports for the protocol flavor you use.
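A quick reachability sketch from a client illustrates this (the hostname is a placeholder; the port set assumes NFSv3):

# rpcbind on 111 lists the mountd/NLM/NFS services the cluster has registered:
rpcinfo -p powerscale.example.com
# NFS itself answers on 2049/TCP:
nc -zv powerscale.example.com 2049
# Note: the RDMA data path (service 20049) rides RoCEv2 over UDP 4791, so a
# plain TCP probe of 20049 proves nothing; validate it with an actual mount.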
OneFS expects you to place RoCEv2-capable front-end interfaces into an IP pool before clients can mount over RDMA.
Dell's best-practices doc notes that when mounting NFS over RDMA, you may need to specify port 20049 depending on the client kernel version (it relates to RPC binding for RDMA).
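On a plain Linux client, the resulting mount command typically looks like this (server and export path are placeholders; vers=3 matches the auxiliary-protocol note above):

# proto=rdma selects the RDMA transport; port=20049 is the RPC-over-RDMA port:
mount -t nfs -o vers=3,proto=rdma,port=20049 powerscale.example.com:/ifs/data /mnt/rdma-test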
Dell provides both WebUI and CLI paths. From the OneFS CLI guide, enabling RDMA for NFS is done by toggling the global NFS RDMA setting, for example:
isi nfs settings global modify --nfsv3-rdma-enabled=true
isi nfs settings global view

From there, ensure the client-facing IP pool used by the OpenShift nodes is one that contains RoCEv2-capable interfaces. Dell's best-practices doc describes filtering/validating RDMA-capable interfaces at the IP pool level. (Dell)
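As a sketch of that pool-level step (the pool ID is a placeholder, and the --nfsv3-rroce-only flag is taken from Dell's OneFS 9.2-era documentation, so confirm it against your OneFS release):

# Restrict the pool to RRoCE-capable interfaces, then confirm its members:
isi network pools modify groupnet0.subnet0.pool0 --nfsv3-rroce-only=true
isi network pools view groupnet0.subnet0.pool0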
For most OpenShift deployments, the practical way to consume PowerScale NFS storage is via the CSI Driver for Dell PowerScale (part of Dell's Container Storage Modules ecosystem). (GitHub)
Kubernetes supports specifying NFS mount flags via mountOptions in the StorageClass, and dynamically provisioned PVs will be mounted with those options. (Kubernetes)
That means you can express "use RDMA transport" at the StorageClass level, for example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerscale-nfs-rdma
provisioner: csi-isilon.dellemc.com
parameters:
  csi.storage.k8s.io/fstype: nfs
  # ... your PowerScale CSI parameters (AccessZone, IsiPath, etc.)
mountOptions:
  - nfsvers=4.1
  - rdma
  - port=20049

Notes:
- The rdma mount option (plus port=20049 where needed) aligns with Dell's OneFS NFS-over-RDMA guidance. (Dell)
- Dell's OneFS protocol guidelines recommend a maximum of 32 NFS-over-RDMA connections per node. This isn't a hard limit, but it's a useful sizing signal when many OpenShift nodes/pods mount through the same service IP pool. (Dell)
- Dell recommends enabling flow control to optimize performance under packet loss for NFS over RDMA. Treat this as a storage-network design topic (switch configuration, congestion control, etc.). (Dell)
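To consume the class from OpenShift, pods then bind a shared (RWX) claim against it; a minimal sketch (names and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany           # RWX: the shared-POSIX case NFSoRDMA targets
  storageClassName: powerscale-nfs-rdma
  resources:
    requests:
      storage: 100Gi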
Because NFS can silently fall back to TCP if RDMA transport fails, validate end-to-end (OneFS settings + IP pool + client mount flags + network reachability).
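One quick way to confirm the transport from an OpenShift node (the node name is a placeholder; oc debug gives a host shell via chroot /host):

# Run against the node that mounted the volume:
oc debug node/worker-0 -- chroot /host sh -c 'grep nfs /proc/mounts'
# The option string for the mount should include proto=rdma; seeing
# proto=tcp instead means the client silently fell back to TCP.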
- Mount works but performance looks like TCP NFS: confirm the mount options include rdma (and port=20049 if needed). (Dell)
- RDMA mounts fail intermittently: check that the client is using the correct IP pool (RDMA-capable interfaces), and validate RoCEv2 fabric health. (Dell)
- You're using VLANs or LACP/aggregated interfaces on OneFS: Dell documents that NFSoRDMA is not supported on VLAN or aggregated interfaces; use supported interface designs/IP pools. (Dell)