Windows native NVMe driver delivers major SSD performance gains

New benchmark results show that Microsoft’s native NVMe driver can bring substantial storage performance improvements on Windows. The biggest gains appear in random read workloads, where results climbed by as much as 64.89%, while processor usage also dropped noticeably. The driver first appeared in Windows Server 2025, and it can also be enabled on Windows 11 through registry changes.

New NVMe driver improves bandwidth, latency, and CPU usage

According to benchmark data published by StorageReview, Microsoft’s native NVMe driver improves storage performance in three main areas. The first is random read throughput and IOPS, where 4K and 64K workloads posted clear gains. That translates into faster data access when the system is under heavy load or handling several tasks at once.

The second area is latency. The new driver cuts 4K and 64K random read latency by a significant margin, leading to faster response times in workloads that are sensitive to delay. When higher bandwidth and lower latency are combined, the effect becomes much more visible in demanding use cases.

The third area is processor efficiency. In sequential read and write operations, the benchmarks showed lower CPU usage across different block sizes.

With data transfers handled more efficiently, the processor has more headroom for background processes and heavier workloads. Lower power consumption could also follow as a side effect.

StorageReview used a high-end test platform built around two 128-core AMD EPYC 9754 processors, codenamed Bergamo, paired with 768GB of DDR5-4800 memory.

The system also included 16 Solidigm P5316 30.72TB PCIe 4.0 SSDs in a JBOD setup. The benchmarks were run on Windows Server 2025 using FIO.
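StorageReview has not published its complete FIO job files, but a minimal job along these lines would reproduce the 4K random read pattern tested here. The queue depth, job count, and target device below are illustrative assumptions, not the site’s actual configuration:

```ini
; Illustrative FIO job for a 4K random read test on Windows
; (iodepth, numjobs, and filename are assumed values)
[global]
ioengine=windowsaio   ; Windows-native asynchronous I/O engine
direct=1              ; bypass the OS cache to measure the device
time_based
runtime=60
group_reporting

[randread-4k]
rw=randread
bs=4k
iodepth=32            ; assumed queue depth
numjobs=8             ; assumed number of parallel workers
filename=\\.\PhysicalDrive1   ; illustrative target device
```

Varying `bs` (e.g. 64k, 128k) and `rw` (randread, read, write) across runs would cover the workload matrix discussed in the results below.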

The largest improvement showed up in random read performance. 4K random read throughput increased from 6.1 GiB/s to 10.058 GiB/s, which represents a 64.89% jump.

64K random read performance rose from 74.291 GiB/s to 91.165 GiB/s, a gain of 22.71%. Sequential 64K reads stayed nearly unchanged, while 128K sequential reads improved by 6.65%.

Sequential write results were more mixed. With a 64K block size, performance increased by 12.13%, moving from 44.67 GiB/s to 50.087 GiB/s. At 128K, there was no real benefit, and results were effectively flat with a slight 0.79% decline.
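The percentage figures quoted above follow directly from the raw before/after throughput numbers. A quick sanity check, using the values reported in this article:

```python
# Recompute the quoted throughput deltas from the raw before/after figures (GiB/s).
def pct_change(before: float, after: float) -> float:
    """Relative change, in percent, from `before` to `after`."""
    return (after - before) / before * 100

results = {
    "4K random read":       (6.1, 10.058),
    "64K random read":      (74.291, 91.165),
    "64K sequential write": (44.67, 50.087),
}

for workload, (before, after) in results.items():
    print(f"{workload}: {pct_change(before, after):+.2f}%")
# 4K random read: +64.89%
# 64K random read: +22.71%
# 64K sequential write: +12.13%
```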

Latency figures also painted a mixed picture. Random read latency improved sharply. 4K random read latency fell from 0.169 ms to 0.104 ms, a reduction of 38.46%. 64K random read latency dropped from 0.239 ms to 0.207 ms, improving by 13.39%.

Sequential write latency moved in the opposite direction. At 64K, write latency increased from 0.399 ms to 0.558 ms, a 39.85% rise. At 128K, the increase was smaller, from 1.022 ms to 1.149 ms, or 12.43%. That suggests the penalty becomes less severe at larger block sizes.

CPU usage results were more consistently positive. In 64K sequential reads, processor usage dropped from 44.89% to 37.11%. In 128K sequential reads, it fell from 61.56% to 49.56%. Sequential writes followed a similar pattern. CPU usage declined from 70.44% to 57.78% in 64K writes and from 58.44% to 47.33% in 128K writes.
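Because CPU usage is itself a percentage, the drops above are absolute percentage-point changes; the relative savings in CPU time are what matter for headroom. A short sketch of the distinction, using the figures quoted in this article:

```python
# Distinguish the absolute percentage-point drop from the relative CPU-time saving.
cpu_results = {
    "64K sequential read":   (44.89, 37.11),
    "128K sequential read":  (61.56, 49.56),
    "64K sequential write":  (70.44, 57.78),
    "128K sequential write": (58.44, 47.33),
}

for workload, (before, after) in cpu_results.items():
    points = before - after                      # absolute drop in percentage points
    relative = (before - after) / before * 100   # relative reduction in CPU time
    print(f"{workload}: -{points:.2f} points ({relative:.1f}% less CPU time)")
```

By this measure, every workload in the table frees up roughly 17–20% of the CPU time the old driver consumed.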

These findings suggest that Microsoft’s native NVMe driver does more than increase raw throughput. It also improves how efficiently system resources are used, which matters more as faster SSDs become common across both consumer and enterprise systems.

Microsoft includes the native NVMe driver, nvmedisk.sys, in both Windows Server 2025 and Windows 11 25H2. It is not enabled by default at this stage.

Users who want to use it need to turn it on through specific registry edits. Microsoft’s decision to keep the driver optional for now appears to be tied to compatibility requirements and third-party vendor support.
