A fire in 2023 destroyed most of the technical equipment of the Slovak Film Institute (SFÚ) in Bratislava, bringing the Institute's film-heritage preservation work to a halt. SUNTEQ won the competitive tender with a proposal to deliver a central storage system, computing equipment, infrastructure components, and full integration of the new technologies.
The requirements for the central storage system were specified with regard to the future development of workstations and the technological demands that may arise from the transition to new workflows and formats:
- Capability to support at least 20 clients (macOS, Windows OS, Linux OS)
- Hybrid SSD + HDD storage system with a minimum capacity of 300 TB, easily expandable up to 1 PB
- Internal data throughput of at least 40 GB/s for the SSD tier, with NVMe-oF RDMA support, and at least 20 GB/s for the HDD tier
- Minimum data throughput of 8 GB/s per individual client, optimized for the 4K DPX format
- Connectivity via optical fiber
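As a sanity check on the 8 GB/s per-client figure, a rough back-of-the-envelope calculation can be sketched (the frame geometry and pixel packing below are typical assumptions for scanned film, not the tender's exact numbers):

```python
# Rough check of the 8 GB/s per-client requirement for 4K DPX playback.
# Assumed: 4096x3112 full-aperture scans, 10-bit RGB packed into
# 32-bit words -> 4 bytes per pixel, real-time playback at 24 fps.
WIDTH, HEIGHT = 4096, 3112
BYTES_PER_PIXEL = 4          # 10-bit RGB, packed and padded to 32 bits
FPS = 24

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
stream_gbyte_s = frame_bytes * FPS / 1e9   # GB/s for one real-time stream

print(f"one 4K DPX frame: {frame_bytes / 1e6:.1f} MB")
print(f"one real-time stream: {stream_gbyte_s:.2f} GB/s")
print(f"real-time streams within 8 GB/s: {int(8 / stream_gbyte_s)}")
```

Under these assumptions, a single real-time stream needs roughly 1.2 GB/s, so the 8 GB/s per-client target leaves room for several concurrent streams plus seek and scrub overhead.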

We collaborated with Ardis Technologies BV, a manufacturer of high-performance scalable storage systems, on the delivery of the central storage solution. The SSD portion is built on the DDP10EF system with NVMe SSD drives and support for the NVMe-oF RDMA protocol, which enabled us to achieve real-world read speeds of over 8 GB/s per client. The HDD portion is based on the EXR78 chassis. Both parts form a single logical volume with a combined capacity of 300 TB + 1008 TB.
Thanks to the AVFS file system and a graphical management interface, file workflows can be easily configured for any number of users. The remaining active 100 Gb network components are provided by Mellanox (Nvidia) and ATTO. An important component is the NVMe-oF initiator developed by StarWind, used on clients running the Windows operating system. One of the clients is a macOS workstation that accesses the storage via the ATTO iSCSI protocol, achieving read speeds of approximately 5 GB/s.
The designed system supports 13 restoration workstations (PFClean, DIAMANT), 2 color grading suites (Resolve, BaseLight), DCI mastering (Clipster), audio restoration and post-production (CEDAR, Pro Tools), and a scanner workstation (DFT).
All workstations are connected to the storage system via a 100 Gb optical link.
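A quick conversion shows the headroom a 100 Gb link leaves over the 8 GB/s per-client requirement (the ~10% overhead figure is an assumption for illustration, not a measured value):

```python
# 100 Gb Ethernet link rate vs. the 8 GB/s per-client requirement.
line_rate_gbit = 100
raw_gbyte_s = line_rate_gbit / 8        # 12.5 GB/s raw line rate
overhead = 0.10                         # assumed protocol/framing overhead
usable_gbyte_s = raw_gbyte_s * (1 - overhead)

print(f"raw: {raw_gbyte_s} GB/s, usable (approx.): {usable_gbyte_s} GB/s")
```

Even with conservative overhead assumptions, a single 100 Gb link comfortably covers the 8 GB/s per-client target.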
Additionally, AVFS exports SMB/NFS protocols, which are used to connect to the archival tape library.



Why NVMe-oF RDMA?
NVMe over Fabrics (NVMe-oF) is a network protocol that extends the benefits of the local NVMe (Non-Volatile Memory Express) interface across high-speed networks. RDMA (Remote Direct Memory Access) is a specific data transfer method that allows direct access to the memory of a remote system without burdening the CPU, significantly reducing latency and increasing throughput.
NVMe-oF over RDMA therefore enables access to remote NVMe storage with performance nearly equivalent to that of local PCIe-connected storage, without imposing data transfer overhead on the client-side CPU.
Key Components:
- NVMe SSD: High-speed storage with low latency and high throughput.
- RDMA protocols:
  - RoCE (RDMA over Converged Ethernet): most commonly used in video production.
  - InfiniBand: extremely low latency, typically used in high-performance computing (HPC).
  - iWARP: less commonly used.
- Fabric (network infrastructure): typically 25/40/100 Gb Ethernet or InfiniBand.
Advantages over Traditional Technologies such as Fibre Channel (FC):
| Function | NVMe-oF RDMA | Fibre Channel (FC) |
| --- | --- | --- |
| Latency | Very low (10–20 µs) | Higher (around 100 µs) |
| Throughput | 25/40/100/200 GbE | 16/32/64 GFC |
| CPU utilization | Minimal thanks to RDMA | Higher (classic I/O stack) |
| Scalability | Better; runs on standard Ethernet infrastructure | Limited; more expensive infrastructure |
| Flexibility & cost | Lower cost due to standard Ethernet components | Higher cost of FC switches and HBAs |
| Native NVMe support | Yes | No (uses SCSI stack) |
Why is NVMe-oF RDMA ideal for video production and color grading?
- Extremely Low Latency:
  - NVMe-oF RDMA offers ultra-low latency, which is crucial for real-time video processing and playback.
- High Throughput:
  - Handling 4K/8K RAW video or large EXR sequences requires immediate data access.
  - RDMA bypasses the operating system and CPU during data transfers, enabling smooth playback even with multiple concurrent video streams.
  - Modern NVMe-oF configurations achieve transfer speeds in the hundreds of gigabits per second.
  - This allows simultaneous processing of multiple high-resolution video tracks (e.g. 4x 8K ProRes RAW) from a single central storage system without bandwidth limitations.
- Shared and Scalable Storage:
  - Unlike local RAID or DAS solutions, multiple workstations can share a single high-performance NVMe array with no performance degradation.
  - This is essential for collaborative tasks like team-based color grading (DaVinci Resolve, Baselight), online/offline editing, rendering, and mastering.
- Reliability and Low Latency Variability:
  - In professional environments, predictable latency is key.
  - RDMA avoids CPU bottlenecks, eliminating frame-load jitter and ensuring consistent system responsiveness.
- Efficiency with Large Files:
  - Video files typically involve large, linear data transfers, an ideal use case for NVMe-oF RDMA, which delivers peak performance especially with large data blocks.
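The last point can be illustrated with simple arithmetic: reading one large frame costs far fewer I/O round trips with large blocks than with small ones (the ~51 MB frame size is an assumed figure for a 10-bit 4K DPX frame):

```python
import math

# Assumed size of one 4K DPX frame (10-bit RGB, ~51 MB).
frame_bytes = 51_000_000

# I/O requests needed to read one frame at different block sizes.
for block in (4 * 1024, 64 * 1024, 1024 * 1024):
    ios = math.ceil(frame_bytes / block)
    print(f"{block // 1024:>5} KiB blocks -> {ios:>6} requests per frame")
```

With 1 MiB blocks, a frame is fetched in a few dozen requests instead of more than ten thousand, which is why per-request latency matters far less for large, linear transfers.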
NVMe-oF over RDMA represents the most advanced solution for accessing high-performance storage over a network.
In the field of professional video and post-production, it offers a unique combination of low latency, high throughput, and centralized data access. Compared to traditional solutions like Fibre Channel or iSCSI, it is more cost-effective, flexible, and scalable—making it ideal for modern 4K/8K film productions.

The image shows the results of a benchmark test performed on an HP Z8 client running Windows 11, equipped with a 100 Gb Ethernet NIC, using the ATTO Disk Benchmark utility.
The measured data throughput for both read and write operations comfortably exceeds the required parameters.
Check this video “What is RDMA?”
Video © RoCE Initiative