SATA performance tests (SSD)

This week we received a few prototypes from the factory, and now it’s time to test them out. The SATA performance of this board has been discussed quite a lot recently, and for good reason: the RPi Compute Module 4 is limited to a single PCIe lane, which provides a theoretical throughput of 5 Gbps. So, let’s check what we’ve got here.

  • Disks: Cheap Crucial BX500 250GB SSDs (x4)
  • Configurations: RAID 0, RAID 1, RAID 1+0 and Standalone SSD
  • Test cases (read/write):
    • Sequential
    • 512K
    • 4K
    • 4K QD32
  • Tools: fio
  • Additionally, we’ve set up an NFS server to test performance over the network

P.S. 3.5″ HDD tests are coming soon.
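For the NFS part of the test, the export can be kept minimal. A sketch (the /mnt/datadir path matches the fio commands below; the subnet and options are assumptions, not the exact configuration used):

```shell
# /etc/exports — share the test directory read/write with the local subnet
/mnt/datadir 192.168.1.0/24(rw,sync,no_subtree_check)
```

```shell
# reload the export table and list what is being served
sudo exportfs -ra
sudo exportfs -v
```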

# run fio tests
# tests taken from

cd /mnt/datadir/
/usr/local/bin/fio --loops=5 --size=500m --filename=fiotest.tmp --stonewall --ioengine=libaio --direct=1 \
	--name=SeqRead --bs=1m --rw=read \
	--name=SeqWrite --bs=1m --rw=write \
	--name=512Kread --bs=512k --rw=randread \
	--name=512Kwrite --bs=512k --rw=randwrite \
	--name=4KQD32read --bs=4k --iodepth=32 --rw=randread \
	--name=4KQD32write --bs=4k --iodepth=32 --rw=randwrite \
	--name=4Kread --bs=4k --rw=randread \
	--name=4Kwrite --bs=4k --rw=randwrite
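If you want to collect the results programmatically rather than scraping fio’s text output, fio can also emit JSON with --output-format=json. A small Python sketch, assuming fio’s usual JSON layout where per-job bandwidth is reported in KiB/s:

```python
import json

def job_bandwidth(fio_json: str) -> dict:
    """Return {job name: (read MB/s, write MB/s)} from fio JSON output."""
    data = json.loads(fio_json)
    results = {}
    for job in data["jobs"]:
        # fio reports bandwidth in KiB/s; convert to MB/s for readability
        rd = job["read"]["bw"] * 1024 / 1e6
        wr = job["write"]["bw"] * 1024 / 1e6
        results[job["jobname"]] = (round(rd, 1), round(wr, 1))
    return results

# Example with a trimmed-down fio JSON document
sample = json.dumps({"jobs": [
    {"jobname": "SeqRead", "read": {"bw": 512000}, "write": {"bw": 0}},
    {"jobname": "SeqWrite", "read": {"bw": 0}, "write": {"bw": 256000}},
]})
print(job_bandwidth(sample))
# → {'SeqRead': (524.3, 0.0), 'SeqWrite': (0.0, 262.1)}
```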
All in one chart: [chart image]
All in one table: [table image]
RAID in action: [image]

23 thoughts on “SATA performance tests (SSD)”

    • Thanks for the questions.
      There is room for improvement here: disks are detected when plugged or unplugged while the system is running, but RAID configurations suffer (a restart helps).
      We will look into that later.

      • Would it be possible to test btrfs soft-RAID1? Testing with 3.5″ disks would be ideal, but whatever works is fine. Thanks for your work!

      • Adding to what I’ve written to you via email yesterday… I just realised that you combine ‘/usr/local/bin/fio --loops=5 --size=500m’ with ultra-cheap BX500 SSDs that suffer from really low sustained write performance. As long as continuous writes fit into the SLC cache, write speeds of ~500 MB/s are possible, but once the cache is full, write performance drops below 150 MB/s.

        Cache size usually depends on SSD capacity; for example, on the 240GB BX500, write performance drops drastically after 13GB of continuous writes. Which capacity do you use?

        4 write tests × 5 loops × 500MB = 10GB of data written per test run. If you use the 120GB models and their SLC cache is smaller than 10GB, that would explain the 254 MB/s ‘standalone’ sequential write numbers. And if that’s true, you would have to throw away all your numbers, since the benchmark design is flawed (letting the SSD become an artificial bottleneck while testing).

        Testing for this is easy:

        cd /path/to/mountpoint
        for i in {1..10} ; do
            iozone -e -I -a -s 1000M -r 16384k -i 0 -i 1
        done

          • At least the WD Green and Blue suffer from the same ‘problem’. They also use an SLC cache, so sustained write performance drops drastically after some GB have been written. High write performance is only restored after some idle time, once the cache contents have been flushed to normal pages.

            That’s why it’s important to test for this, which can be done easily just by repeating 1GB writes (e.g. with iozone as outlined above) and noticing at which point performance drops.
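            The repeated-write check described above can also be sketched in a few lines of Python (the file path and sizes are placeholders; point it at a file on the mounted SSD):

            ```python
            import os
            import time

            def timed_write(path: str, size_mb: int, chunk_mb: int = 16) -> float:
                """Write size_mb of random data to path, fsync, and return MB/s."""
                chunk = os.urandom(chunk_mb * 1024 * 1024)
                start = time.monotonic()
                with open(path, "wb") as f:
                    for _ in range(size_mb // chunk_mb):
                        f.write(chunk)
                    f.flush()
                    os.fsync(f.fileno())  # make sure the data actually hit the device
                return size_mb / (time.monotonic() - start)

            # Repeat 1GB writes and watch for the pass where throughput drops —
            # that is the point where the SLC cache is exhausted. Example loop:
            # for i in range(10):
            #     print(f"pass {i}: {timed_write('/mnt/datadir/tmpfile', 1000):.0f} MB/s")
            ```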

          • Hi,
            I would test it with KIOXIA drives, since those have high IOPS (above 75,000 IOPS R/W), while the majority of SSDs like WD or Crucial have maybe half that. Or maybe try the Samsung PRO series.

            Your results with the BX500 are consistent with my experience with this drive. Its performance is low, yet still faster than an HDD.

  1. Looking forward to more testing. While it’s not stellar, it’s very competitive against a lot of the other low-cost ARM solutions fielded to date… and it’s going to be, heh, pretty much all open. The only other real play at the moment (one could say Hardkernel MIGHT have one, but the notion of just dropping in drives is silly, which rules them out) is the Helios-64, and they’ve somewhat muddled the situation with bad logistics.

  2. Get this up and running quickly, guys.
    There’s so much need for this. When will you release? Soon, I hope.
    Come out with a date.

