Luca Berton
howto

NFSoRDMA on Dell EMC PowerScale OneFS for OpenShift

#openshift#kubernetes#dell#powerscale#onefs#nfs#rdma#nfsordma#roce


NFSoRDMA (NFS over Remote Direct Memory Access) is a way to run NFS with RDMA transport semantics: the client can move data more directly and avoid much of the TCP/kernel overhead, which typically improves throughput, reduces latency, and lowers CPU usage.

On Dell EMC PowerScale OneFS, NFSoRDMA is supported on modern OneFS versions (introduced with OneFS 9.2) and uses RoCEv2 (Routable RoCE) as the RDMA transport over Ethernet.

For OpenShift, NFSoRDMA can be a strong fit when you want shared POSIX storage (RWX) but also care about performance consistency and CPU efficiency—common in AI/ML pipelines, media workloads, and HPC-style data processing.


When NFSoRDMA is worth it in OpenShift

NFSoRDMA is usually most valuable when:

- Your workloads are throughput- or latency-sensitive (AI/ML data pipelines, media workloads, HPC-style processing).
- You need shared POSIX (RWX) storage but want to reclaim the CPU cycles that TCP-based NFS burns on the client.
- Your front-end network and NICs already support, or can be upgraded to support, RoCEv2.

Dell highlights that OneFS’s NFS over RDMA implementation can deliver dramatic performance increases and reduced workstation CPU utilization compared to TCP-based NFS in appropriate workflows.


How NFSoRDMA works on OneFS (important details)

A few OneFS specifics matter when designing this for OpenShift:

OneFS uses RoCEv2 on supported NICs

OneFS runs RDMA traffic using RoCEv2 (RRoCE).
Hardware-wise, Dell’s guidance calls out Mellanox ConnectX family NIC support for RDMA-capable front-end networking.

Some NFS “helper” protocols still use TCP/UDP

Even when the data path uses RDMA, OneFS documentation notes that NFSv3 auxiliary protocols (mount, lock manager, rpcbind, etc.) still run over TCP/UDP.
So your firewall/network rules can’t be “RDMA-only”—you still need the usual NFS control-plane ports for the protocol flavor you use.
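As a rough sketch, the firewall policy for an NFSv3-over-RDMA client might need to allow something like the following. This is illustrative only: mountd/statd/lockd use dynamic ports unless pinned on the server, and the exact tooling (here `firewall-cmd`) is an assumption about your environment.

```shell
# NFS control plane and TCP fallback path:
firewall-cmd --permanent --add-port=2049/tcp    # NFS
firewall-cmd --permanent --add-port=111/tcp     # rpcbind
firewall-cmd --permanent --add-port=111/udp
# RDMA data path: RoCEv2 rides UDP/4791 on the wire;
# 20049 is the conventional NFS-over-RDMA service port.
firewall-cmd --permanent --add-port=20049/tcp
firewall-cmd --permanent --add-port=4791/udp
firewall-cmd --reload
```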

IP pools must include RDMA-capable interfaces

OneFS expects you to place RoCEv2-capable front-end interfaces into an IP pool before clients can access over RDMA.
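A hedged CLI sketch of that pool work follows. The pool name `groupnet0.subnet0.pool0` is only an example, and the RDMA-related pool flag has varied across OneFS releases, so check `isi network pools modify --help` on your cluster before relying on it.

```shell
# Inspect the pool that OpenShift nodes resolve to:
isi network pools view groupnet0.subnet0.pool0
# List front-end interfaces and identify the RoCEv2/RDMA-capable ones:
isi network interfaces list
# On OneFS 9.2-era releases, restrict the pool to RDMA-capable interfaces
# (flag name is release-dependent -- verify with --help):
isi network pools modify groupnet0.subnet0.pool0 --nfsv3-rroce-only=true
```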

Port 20049 can matter for some clients

Dell’s best-practices doc notes that when mounting NFS over RDMA, you may need to specify port 20049 depending on the client kernel version (it relates to RPC binding for RDMA).
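To sanity-check this outside of Kubernetes, you can mount by hand from an RDMA-capable Linux host. The hostname, export path, and NFS version below are placeholders; adjust them to your environment.

```shell
# Load the kernel NFS/RDMA transport module, then mount with RDMA on port 20049:
modprobe rpcrdma
mount -t nfs -o vers=3,proto=rdma,port=20049 onefs.example.com:/ifs/data /mnt/test
# Confirm the negotiated transport and options:
nfsstat -m
```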


Prerequisites checklist

Storage side (PowerScale / OneFS)

- OneFS 9.2 or later (check your release notes for which NFS versions are supported over RDMA).
- RoCEv2-capable front-end NICs (Dell’s guidance calls out the Mellanox ConnectX family).
- NFS over RDMA enabled globally on the cluster.
- An IP pool for RDMA clients that contains RoCEv2-capable interfaces.

OpenShift side (clients)

- Worker nodes with RoCEv2-capable NICs and a flow-controlled Ethernet path to the cluster.
- A kernel NFS client with RDMA transport support (the rpcrdma module).
- The Dell PowerScale CSI driver installed and configured.


Enabling NFSoRDMA on OneFS (high-level)

Dell provides both WebUI and CLI paths. From the OneFS CLI guide, enabling RDMA for NFS is done by toggling the global NFS RDMA setting. On OneFS 9.2-era releases the flag is NFSv3-specific, for example:

isi nfs settings global modify --nfsv3-rdma-enabled=true
isi nfs settings global view

Later releases that extend RDMA to NFSv4.x rename the flag (check isi nfs settings global modify --help on your cluster).

From there, ensure the client-facing IP pool used by OpenShift nodes is the one that contains RoCEv2-capable interfaces. Dell’s best-practices doc describes filtering/validating RDMA-capable interfaces at the IP pool level. (Dell)


Integrating with OpenShift using the PowerScale CSI driver

For most OpenShift deployments, the practical way to consume PowerScale NFS storage is via the CSI Driver for Dell PowerScale (part of Dell’s Container Storage Modules ecosystem). (GitHub)

The key OpenShift/Kubernetes mechanism: mountOptions

Kubernetes supports specifying NFS mount flags via mountOptions in the StorageClass, and dynamically provisioned PVs will be mounted with those options. (Kubernetes)

That means you can express “use RDMA transport” at the StorageClass level, for example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerscale-nfs-rdma
provisioner: csi-isilon.dellemc.com
parameters:
  csi.storage.k8s.io/fstype: nfs
  # ... your PowerScale CSI parameters (AccessZone, IsiPath, etc.)
mountOptions:
  - nfsvers=4.1
  - rdma
  - port=20049
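Once the StorageClass exists, workloads consume it through an ordinary RWX claim. A minimal sketch, with illustrative names and sizes:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-rdma
spec:
  accessModes:
    - ReadWriteMany          # shared POSIX access across pods
  storageClassName: powerscale-nfs-rdma
  resources:
    requests:
      storage: 500Gi
```

The dynamically provisioned PV inherits the StorageClass mountOptions, so every pod that binds this claim mounts over RDMA.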

Notes:

- Kubernetes does not validate mountOptions; a bad option surfaces as a mount failure when a pod first attaches the PV, not when the StorageClass is created.
- StorageClass mountOptions apply at provisioning time; existing PVs keep whatever options they were created with.
- Match nfsvers to what your OneFS release supports over RDMA (NFSv3 initially; NFSv4.x in later releases).


Operational guidance and guardrails

Don’t ignore “connections per node” guidance

Dell’s OneFS protocol guidelines include a recommended maximum of 32 NFS-over-RDMA connections per node. This isn’t a hard limit, but it’s a useful sizing signal when many OpenShift nodes/pods mount the same service IP pool. (Dell)

RoCEv2 needs a healthy fabric

Dell recommends enabling flow control to optimize performance under packet loss scenarios for NFS over RDMA. Treat this as a “storage network design” topic (switch config, congestion control, etc.). (Dell)

Verify what transport you’re actually using

Because NFS can silently fall back to TCP if RDMA transport fails, validate end-to-end (OneFS settings + IP pool + client mount flags + network reachability).
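A quick client-side check, run on the node where the volume is attached, is to read the negotiated options rather than trusting the StorageClass:

```shell
# Look for proto=rdma (not proto=tcp) in the negotiated mount options:
grep ' nfs' /proc/mounts
# Per-mount RPC statistics; the "xprt:" line names the transport in use:
awk '/^device/ {d=$2} /xprt:/ {print d, $2}' /proc/self/mountstats
```

If these report tcp where you expected rdma, work backwards through the OneFS global setting, the IP pool membership, and the client's rpcrdma module.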


Quick troubleshooting checklist

- Confirm RDMA is enabled globally on OneFS (isi nfs settings global view).
- Confirm the IP pool serving OpenShift contains RoCEv2-capable interfaces.
- Confirm the effective mount options include rdma (and port=20049 if your client kernel needs it).
- Check the active transport on the client; NFS can silently fall back to TCP.
- Verify the NFS control-plane TCP/UDP ports are reachable; an "RDMA-only" firewall rule will break mounts.
- Check fabric health: flow control and congestion configuration on the switches carrying RoCEv2 traffic.

