Benchmark SSD on Linux: How to measure the same things as CrystalDiskMark does in Windows
I want to benchmark an SSD (possibly with encrypted filesystems) and compare it to benchmarks done by CrystalDiskMark on Windows.



[Screenshot: CrystalDiskMark on Windows]



So how can I measure approximately the same things as CrystalDiskMark does?



For the first row (Seq) I think I could do something like



LC_ALL=C dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc

sudo su -c "echo 3 > /proc/sys/vm/drop_caches"
LC_ALL=C dd if=tempfile of=/dev/null bs=1M count=1024


But I am not sure about the dd parameters.
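For reference, here is an annotated version of those dd calls (a sketch; it uses a much smaller size than CrystalDiskMark's ~1000 MB Seq test so a dry run is quick):

```shell
# bs=1M          -> 1 MiB per write() call, matching the Seq test's block size
# count=16       -> total size = bs * count (16 MiB here; use 1024 for 1 GiB)
# conv=fdatasync -> call fdatasync() at the end, so the reported speed includes
#                   flushing the page cache to the device
# conv=notrunc   -> do not truncate an existing tempfile first
dd if=/dev/zero of=tempfile bs=1M count=16 conv=fdatasync,notrunc 2>&1

# Before the read test, drop the page cache so dd reads from the device
# rather than from RAM (needs root):
#   sync; echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null

dd if=tempfile of=/dev/null bs=1M 2>&1
rm -f tempfile
```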



For the random 512KB, 4KB, and 4KB (Queue Depth=32) read/write speed tests, I have no idea how to reproduce the measurements on Linux. How can I do this?



For testing read speeds, something like sudo hdparm -Tt /dev/sda doesn't make sense for me, since I want to benchmark, for example, encfs mounts.



Edit



@Alko, @iain



Perhaps I should say something about the motivation for this question: I am trying to benchmark my SSD and compare some encryption solutions. But that's another question (Best way to benchmark different encryption solutions on my system). While reading about SSDs and benchmarking on the web, I have often seen users post their CrystalDiskMark results in forums. That is the only motivation for this question: I just want to do the same on Linux. For my particular benchmarking, see my other question.










  • Why don't you use a benchmarking tool that works on both systems?
    – Alko
    Oct 8 '13 at 14:51










  • Found this, seems very useful, and in my brief testing on three separate drives gave very similar numbers to the actual CrystalDiskMark... github.com/buty4649/fio-cdm
    – ljwobker
    Dec 14 '17 at 18:29
Tags: performance, dd, ssd, benchmark
edited Apr 13 '17 at 12:36 by Community
asked Oct 6 '13 at 9:05 by student
4 Answers






I'd say fio would have no trouble producing those workloads. Note that despite its name, CrystalDiskMark is actually a benchmark of a filesystem on a particular disk - it can't do raw I/O to the disk alone. As such it will always include filesystem overhead (not necessarily a bad thing, but something to be aware of, e.g. because the filesystems being compared might not be the same).



An example based on replicating the output in the screenshot above, supplemented by information from the CrystalDiskMark manual (this isn't complete but should give the general idea):



fio --loops=5 --size=1000m --filename=/mnt/fs/fiotest.tmp --stonewall --ioengine=libaio --direct=1 \
  --name=Seqread --bs=1m --rw=read \
  --name=Seqwrite --bs=1m --rw=write \
  --name=512Kread --bs=512k --rw=randread \
  --name=512Kwrite --bs=512k --rw=randwrite \
  --name=4kQD32read --bs=4k --iodepth=32 --rw=randread \
  --name=4kQD32write --bs=4k --iodepth=32 --rw=randwrite
rm -f /mnt/fs/fiotest.tmp


BE CAREFUL - this example permanently destroys the data in /mnt/fs/fiotest.tmp!



A list of fio parameters can be found at http://fio.readthedocs.io/en/latest/fio_doc.html.
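If you want to post-process the numbers rather than read fio's text report, `--output-format=json` is easier to parse reliably. A small sketch (the `summarize` helper and the inline sample are made up for illustration; real fio JSON has the same `jobs[].read.bw` / `jobs[].write.bw` fields, reported in KiB/s):

```python
import json

def summarize(fio_json: str) -> dict:
    """Return {job_name: (read_MBps, write_MBps)} from fio --output-format=json."""
    data = json.loads(fio_json)
    out = {}
    for job in data["jobs"]:
        # fio reports bandwidth in KiB/s in the "bw" field
        read_mb = job["read"]["bw"] / 1024
        write_mb = job["write"]["bw"] / 1024
        out[job["jobname"]] = (round(read_mb, 1), round(write_mb, 1))
    return out

# Tiny hand-made sample in the same shape as fio's real JSON output:
sample = json.dumps({"jobs": [
    {"jobname": "Seqread", "read": {"bw": 524288}, "write": {"bw": 0}},
    {"jobname": "Seqwrite", "read": {"bw": 0}, "write": {"bw": 131072}},
]})
print(summarize(sample))
# -> {'Seqread': (512.0, 0.0), 'Seqwrite': (0.0, 128.0)}
```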






answered Sep 13 '17 at 21:38 by Anon, edited Sep 18 '17 at 6:30
  • I tried fio in Ubuntu 16.04 and CrystalDiskMark in Windows 7. Some numbers match up while others don't. The sequential r/w was off by a factor of 2, meaning the Linux values were 50% of those reported by CDM v3.0.4 (note: current version is 6.0.0, but old versions are still available for download). To fiddle with the disparity I set bs=4m instead of 1m. That made the numbers closer. Trying 8m and 32m made it even closer. Ultimately, like Anon said, his answer is not complete and, like @Alko said, we need the same tool on both OSes. Also note the newest CDM 6 uses different tests than the OP. Nice info Anon
    – Vahid Pazirandeh
    Jan 14 at 10:00






  • @VahidPazirandeh Interesting, github.com/buty4649/fio-cdm/blob/master/fio-cdm has the same 1m settings; maybe the documentation of CDM is not good enough.
    – inf3rno
    Jan 21 at 0:27

































You can use iozone and bonnie. They can do what CrystalDiskMark does and more.



I personally used iozone a lot while benchmarking and stress testing devices, from personal computers to enterprise storage systems. It has an auto mode which does everything, but you can tailor it to your needs.
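For reference, an iozone invocation that roughly mirrors CrystalDiskMark's 4KB/512KB rows might look like this (a sketch based on iozone's documented flags; the sizes are illustrative, and for real runs the file size should exceed RAM unless O_DIRECT is used):

```shell
#   -s 64m          file size for the test (use >RAM for realistic numbers)
#   -r 4k -r 512k   record sizes matching CrystalDiskMark's 4KB/512KB rows
#   -i 0 -i 1 -i 2  tests: 0=write/rewrite, 1=read/reread, 2=random read/write
#   -I              use O_DIRECT to bypass the page cache
#   -f PATH         test file (put it on the filesystem under test, e.g. an
#                   encfs mount)
if command -v iozone >/dev/null 2>&1; then
  iozone -s 64m -r 4k -r 512k -i 0 -i 1 -i 2 -I -f /tmp/iozone.tmp
  rm -f /tmp/iozone.tmp
fi
```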






  • How can I reproduce the CrystalDiskMark measurements in detail with this?
    – student
    Oct 8 '13 at 15:51






  • I'll try my best to write a howto, but I need the list of tests that Crystal Disk Mark conducts. Are there any other tests which the software runs besides the ones visible on the screenshot?
    – bayindirh
    Oct 10 '13 at 18:47






  • Just the ones in the screenshot.
    – trr
    May 16 '17 at 4:53

































I created a script that tries to replicate the behavior of CrystalDiskMark 5 with fio. (Revised: it now does all tests available in the various versions of CrystalDiskMark up to CrystalDiskMark 6, including the 512K and 4KQ8T8 tests, plus more improvements.)



The script depends on fio and df. If you do not want to install df, erase lines 19 through 21 (the script will no longer display which drive is being tested) or check the comments for a modified version.



#!/bin/bash

LOOPS=5 #How many times to run each test
SIZE=1024 #Size of each test, multiples of 32 recommended for Q32 tests to give the most accurate results.
WRITEZERO=0 #Set whether to write zeroes or randoms to testfile (random is the default for both fio and crystaldiskmark); dd benchmarks typically only write zeroes which is why there can be a speed difference.

QSIZE=$(($SIZE / 32)) #Size of Q32Seq tests
SIZE+=m
QSIZE+=m

if [ -z $1 ]; then
TARGET=$HOME
echo "Defaulting to $TARGET for testing"
else
TARGET="$1"
echo "Testing in $TARGET"
fi

DRIVE=$(df $TARGET | grep /dev | cut -d/ -f3 | cut -d" " -f1 | rev | cut -c 2- | rev)
DRIVEMODEL=$(cat /sys/block/$DRIVE/device/model)
DRIVESIZE=$(($(cat /sys/block/$DRIVE/size)*512/1024/1024/1024))GB

echo "Configuration: Size:$SIZE Loops:$LOOPS Write Only Zeroes:$WRITEZERO
Running Benchmark on: /dev/$DRIVE, $DRIVEMODEL ($DRIVESIZE), please wait...
"

fio --loops=$LOOPS --size=$SIZE --filename=$TARGET/.fiomark.tmp --stonewall --ioengine=libaio --direct=1 --zero_buffers=$WRITEZERO --output-format=json \
  --name=Bufread --loops=1 --bs=$SIZE --iodepth=1 --numjobs=1 --rw=readwrite \
  --name=Seqread --bs=$SIZE --iodepth=1 --numjobs=1 --rw=read \
  --name=Seqwrite --bs=$SIZE --iodepth=1 --numjobs=1 --rw=write \
  --name=512kread --bs=512k --iodepth=1 --numjobs=1 --rw=read \
  --name=512kwrite --bs=512k --iodepth=1 --numjobs=1 --rw=write \
  --name=SeqQ32T1read --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=read \
  --name=SeqQ32T1write --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=write \
  --name=4kread --bs=4k --iodepth=1 --numjobs=1 --rw=randread \
  --name=4kwrite --bs=4k --iodepth=1 --numjobs=1 --rw=randwrite \
  --name=4kQ32T1read --bs=4k --iodepth=32 --numjobs=1 --rw=randread \
  --name=4kQ32T1write --bs=4k --iodepth=32 --numjobs=1 --rw=randwrite \
  --name=4kQ8T8read --bs=4k --iodepth=8 --numjobs=8 --rw=randread \
  --name=4kQ8T8write --bs=4k --iodepth=8 --numjobs=8 --rw=randwrite > $TARGET/.fiomark.txt

SEQR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "Seqread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "Seqread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
SEQW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "Seqwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "Seqwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
F12KR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "512kread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "512kread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
F12KW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "512kwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "512kwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
SEQ32R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "SeqQ32T1read"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "SeqQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
SEQ32W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "SeqQ32T1write"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "SeqQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
FKR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
FKW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
FK32R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ32T1read"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
FK32W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ32T1write"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
FK8R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ8T8read"' | grep bw_bytes | sed 's/ "bw_bytes" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }')/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ8T8read"' | grep iops | sed 's/ "iops" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }' | cut -d. -f1)"
FK8W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ8T8write"' | grep bw_bytes | sed 's/ "bw_bytes" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }')/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ8T8write"' | grep '"iops" '| sed 's/ "iops" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }' | cut -d. -f1)"

echo -e "
Results from /dev/$DRIVE, $DRIVEMODEL ($DRIVESIZE):
\033[0;33m
Sequential Read: $SEQR
Sequential Write: $SEQW
\033[0;32m
512KB Read: $F12KR
512KB Write: $F12KW
\033[1;36m
Sequential Q32T1 Read: $SEQ32R
Sequential Q32T1 Write: $SEQ32W
\033[0;36m
4KB Read: $FKR
4KB Write: $FKW
\033[1;33m
4KB Q32T1 Read: $FK32R
4KB Q32T1 Write: $FK32W
\033[1;35m
4KB Q8T8 Read: $FK8R
4KB Q8T8 Write: $FK8W
"

rm $TARGET/.fiomark.txt $TARGET/.fiomark.tmp


Which will output results like this:



Results from /dev/sdb, Corsair Force GT (111GB):  

Sequential Read: 533MB/s IOPS=0
Sequential Write: 125MB/s IOPS=0

512KB Read: 457MB/s IOPS=914
512KB Write: 133MB/s IOPS=267

Sequential Q32T1 Read: 534MB/s IOPS=16
Sequential Q32T1 Write: 134MB/s IOPS=4

4KB Read: 32MB/s IOPS=8224
4KB Write: 150MB/s IOPS=38460

4KB Q32T1 Read: 195MB/s IOPS=49951
4KB Q32T1 Write: 121MB/s IOPS=31148

4KB Q8T8 Read: 129MB/s IOPS=33149
4KB Q8T8 Write: 132MB/s IOPS=33796


(Revision: the results are now color coded; to remove the color coding, remove all instances of \033[x;xxm (where x is a number) from the echo command at the bottom of the script.)



When run without arguments, the script tests the speed of your home drive/partition. You can also pass a path to a directory on another hard drive if you would like to test that instead. While running, the script creates hidden temporary files in the target directory, which it cleans up after it finishes (.fiomark.tmp and .fiomark.txt).



You cannot see the test results as they complete, but if you cancel the command while it is running, you will see the results of the tests that finished, and the temporary files will be deleted afterwards as well.



After some research, I found that CrystalDiskMark results on the same model of drive as mine seem to match the results of this fio benchmark relatively closely, at least at a glance. As I do not have a Windows installation, I cannot verify for certain how close they really are on the same drive.



Note that you may sometimes get slightly off results, especially if you're doing something in the background while the tests are running, so running the test twice in a row to compare results is advisable.



These tests take a long time to run. The default settings in the script are currently suitable for a regular (SATA) SSD.



Recommended SIZE setting for different drives:




  • (SATA) SSD: 1024(default)

  • (ANY) HDD: 256

  • (High End NVME) SSD: 4096

  • (Low-Mid End NVME) SSD: 1024(default)


A high-end NVMe SSD typically has around ~2GB/s read speeds (Intel Optane and Samsung 960 EVO are examples, though in the latter's case I would recommend 2048 instead due to slower 4KB speeds); a low-to-mid-end one can have anywhere between ~500-1800MB/s read speeds.



The major reason these sizes should be adjusted is how long the tests would otherwise take. Older/weaker HDDs, for example, can have 4KB read speeds as low as 0.4MB/s; try waiting for 5 loops of 1GB at that speed. The other 4KB tests typically run at around 1MB/s, and there are 6 of them, each running 5 loops. Do you want to wait for 30GB of data to be transferred at those speeds, or lower that to 7.5GB instead (at 256MB/s it's a 2-3 hour test)?



Of course, the ideal way to handle that situation would be to run the sequential & 512K tests separately from the 4K tests (so run the sequential and 512K tests with something like 512m, and then run the 4K tests at 32m).



More recent HDD models are higher end though and can get much better results than that.



And there you have it. Enjoy!






  • Have you checked how your script behaves with fio on Windows?
    – Anon
    Nov 11 at 19:03






  • (A note to readers other than Cestarian: if you're making a new tool that uses fio then, if at all possible, don't scrape the human-readable fio output - use --output-format=json and parse the JSON. Fio's human-readable output is not meant for machines and is not stable between fio versions. See this YouTube video of a case where scraping fio's human output led to an undesirable outcome.)
    – Anon
    Nov 11 at 19:26












Thanks, I'll keep that in mind. Sadly I no longer have a Windows installation, so I can't test that... but I decided to look up CrystalDiskMark results for my SSD, and it seems my memory was wrong, since the results add up to what I'm getting with fio after all. It was a misunderstanding on my end that led me to think the results were slower :/ I will correct it. I will also update this answer soon to use the JSON output for futureproofing, and perhaps a GUI version (I started working on it, but gtkdialog is poorly documented and zenity is limited, so I'm having a hard time).
    – Cestarian
    Nov 11 at 20:16






  • @Cestarian, great script, but not working "out of the box" on CentOS7. I had to modify it a bit.
    – Igor
    17 hours ago










  • @Igor interesting, I just tested it now and it runs fine on my manjaro laptop. Based on the lines you erased, maybe CentOS does not have df installed by default?
    – Cestarian
    7 mins ago

































I'm not sure the various deeper tests make any real sense when considering what you're doing in detail.



Settings like block size and queue depth are parameters for controlling the low-level input/output behavior of the ATA interface your SSD sits on.



That's all well and good when you're just running a basic test against a drive fairly directly, such as against a large file in a simply partitioned filesystem.



Once you start talking about benchmarking an encfs, these parameters don't particularly apply to your filesystem any more; the filesystem is just an interface into something else that eventually backs onto a filesystem that backs onto a drive.



I think it would be helpful to understand what exactly you're trying to measure, because there are two factors in play here: the raw disk I/O speed, which you can test by timing various dd commands without encfs (I can give examples if this is what you want), and the relative throughput of the encryption algorithm, where the process will be CPU-bound by the encryption. In the latter case, parameters like queue depth aren't particularly relevant.



In both regards, a timed dd command will give you the basic throughput statistics you seek, but you should consider what you intend to measure and the relevant parameters for that.



This link seems to provide a good guide to disk speed testing using timed dd commands, including the necessary coverage of defeating buffers/caches and so forth. Likely it will provide the information you need. Decide which you're more interested in, though: disk performance or encryption performance. One of the two will be the bottleneck, and tuning the non-bottleneck isn't going to help anything.
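A minimal timed-dd sketch along those lines (TESTDIR is a hypothetical mount point for the filesystem under test, e.g. an encfs mount; it defaults to /tmp here so the example is runnable as-is):

```shell
TESTDIR=${TESTDIR:-/tmp}   # point this at the mount you want to measure

# Write test: conv=fdatasync makes dd's reported speed include the flush to
# the device, not just writes into the page cache.
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

# Between write and read, drop the page cache so the read test hits the
# device (or the encryption layer) rather than RAM -- needs root:
#   sync; echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null

# Read test:
dd if="$TESTDIR/ddtest" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$TESTDIR/ddtest"
```

Run once against the raw filesystem and once against the encfs mount; the difference between the two tells you whether the disk or the encryption is the bottleneck.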






share|improve this answer





















    Your Answer








    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "106"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f93791%2fbenchmark-ssd-on-linux-how-to-measure-the-same-things-as-crystaldiskmark-does-i%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    4 Answers
    4






    active

    oldest

    votes








    4 Answers
    4






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    12














    I'd say fio would have no trouble producing those workloads. Note that despite its name CrystalDiskMark is actually a benchmark of a filesysystem on a particular disk - it can't do I/O raw to the disk alone. As such it will always have filesystem overhead in it (not necessarily a bad thing but something to be aware of e.g. because the filesystems being compared might not be the same).



    An example based on replicating the output in the screenshot above supplemented by information from the CrystalDiskMark manual (this isn't complete but should give the general idea):



    fio --loops=5 --size=1000m --filename=/mnt/fs/fiotest.tmp --stonewall --ioengine=libaio --direct=1 
    --name=Seqread --bs=1m --rw=read
    --name=Seqwrite --bs=1m --rw=write
    --name=512Kread --bs=512k --rw=randread
    --name=512Kwrite --bs=512k --rw=randwrite
    --name=4kQD32read --bs=4k --iodepth=32 --rw=randread
    --name=4kQD32write --bs=4k --iodepth=32 --rw=randwrite
    rm -f /mnt/fs/fiotest.tmp


    BE CAREFUL - this example permanently destroys the data in /mnt/fs/fiotest.tmp!



    A list of fio parameters can be seen on http://fio.readthedocs.io/en/latest/fio_doc.html .






    share|improve this answer



















    • 2




      I tried fio in Ubuntu 16.04 and CrystalDiskMark in Windows 7. Some numbers match up while others don't. The sequential r/w was off by a factor of 2. Meaning, the Linux values were 50% of those reported by CDM v3.0.4 (note: current version is 6.0.0, but old versions still available for download). To fiddle w/ the disparity I set bs=4m instead of 1m. That made the numbers closer. Trying 8m and 32m made it even closer. Ultimately like Anon said his answer is not complete and like @Alko, we need the same tool on both OSes. Also note the newest CDM 6 uses different tests than OP. Nice info Anon
      – Vahid Pazirandeh
      Jan 14 at 10:00






    • 1




      @VahidPazirandeh Interesting, github.com/buty4649/fio-cdm/blob/master/fio-cdm has the same 1m settings, maybe the documentation of cdm is not good enough.
      – inf3rno
      Jan 21 at 0:27
















    12














    I'd say fio would have no trouble producing those workloads. Note that despite its name CrystalDiskMark is actually a benchmark of a filesysystem on a particular disk - it can't do I/O raw to the disk alone. As such it will always have filesystem overhead in it (not necessarily a bad thing but something to be aware of e.g. because the filesystems being compared might not be the same).



    An example based on replicating the output in the screenshot above supplemented by information from the CrystalDiskMark manual (this isn't complete but should give the general idea):



    fio --loops=5 --size=1000m --filename=/mnt/fs/fiotest.tmp --stonewall --ioengine=libaio --direct=1 
    --name=Seqread --bs=1m --rw=read
    --name=Seqwrite --bs=1m --rw=write
    --name=512Kread --bs=512k --rw=randread
    --name=512Kwrite --bs=512k --rw=randwrite
    --name=4kQD32read --bs=4k --iodepth=32 --rw=randread
    --name=4kQD32write --bs=4k --iodepth=32 --rw=randwrite
    rm -f /mnt/fs/fiotest.tmp


    BE CAREFUL - this example permanently destroys the data in /mnt/fs/fiotest.tmp!



    A list of fio parameters can be seen on http://fio.readthedocs.io/en/latest/fio_doc.html .






    share|improve this answer



















    • 2




      I tried fio in Ubuntu 16.04 and CrystalDiskMark in Windows 7. Some numbers match up while others don't. The sequential r/w was off by a factor of 2. Meaning, the Linux values were 50% of those reported by CDM v3.0.4 (note: current version is 6.0.0, but old versions still available for download). To fiddle w/ the disparity I set bs=4m instead of 1m. That made the numbers closer. Trying 8m and 32m made it even closer. Ultimately like Anon said his answer is not complete and like @Alko, we need the same tool on both OSes. Also note the newest CDM 6 uses different tests than OP. Nice info Anon
      – Vahid Pazirandeh
      Jan 14 at 10:00






    • 1




      @VahidPazirandeh Interesting, github.com/buty4649/fio-cdm/blob/master/fio-cdm has the same 1m settings, maybe the documentation of cdm is not good enough.
      – inf3rno
      Jan 21 at 0:27














    12












    12








    12






    I'd say fio would have no trouble producing those workloads. Note that despite its name CrystalDiskMark is actually a benchmark of a filesysystem on a particular disk - it can't do I/O raw to the disk alone. As such it will always have filesystem overhead in it (not necessarily a bad thing but something to be aware of e.g. because the filesystems being compared might not be the same).



    An example based on replicating the output in the screenshot above supplemented by information from the CrystalDiskMark manual (this isn't complete but should give the general idea):



    fio --loops=5 --size=1000m --filename=/mnt/fs/fiotest.tmp --stonewall --ioengine=libaio --direct=1 
    --name=Seqread --bs=1m --rw=read
    --name=Seqwrite --bs=1m --rw=write
    --name=512Kread --bs=512k --rw=randread
    --name=512Kwrite --bs=512k --rw=randwrite
    --name=4kQD32read --bs=4k --iodepth=32 --rw=randread
    --name=4kQD32write --bs=4k --iodepth=32 --rw=randwrite
    rm -f /mnt/fs/fiotest.tmp


    BE CAREFUL - this example permanently destroys the data in /mnt/fs/fiotest.tmp!



    A list of fio parameters can be seen on http://fio.readthedocs.io/en/latest/fio_doc.html .






    answered Sep 13 '17 at 21:38 by Anon (edited Sep 18 '17 at 6:30)








    • 2




      I tried fio in Ubuntu 16.04 and CrystalDiskMark in Windows 7. Some numbers match up while others don't. The sequential r/w was off by a factor of 2. Meaning, the Linux values were 50% of those reported by CDM v3.0.4 (note: current version is 6.0.0, but old versions still available for download). To fiddle w/ the disparity I set bs=4m instead of 1m. That made the numbers closer. Trying 8m and 32m made it even closer. Ultimately like Anon said his answer is not complete and like @Alko, we need the same tool on both OSes. Also note the newest CDM 6 uses different tests than OP. Nice info Anon
      – Vahid Pazirandeh
      Jan 14 at 10:00






    • 1




      @VahidPazirandeh Interesting, github.com/buty4649/fio-cdm/blob/master/fio-cdm has the same 1m settings, maybe the documentation of cdm is not good enough.
      – inf3rno
      Jan 21 at 0:27














    3














    You can use iozone and bonnie. They can do what CrystalDiskMark does and more.

    I personally used iozone a lot while benchmarking and stress-testing devices from personal computers to enterprise storage systems. It has an auto mode which does everything, but you can tailor it to your needs.
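    As a hedged starting point (not CDM's exact workloads), an iozone invocation covering similar ground might look like the command below. The flags are from the iozone man page: -e includes flush in timing, -I uses O_DIRECT, -s sets the file size, -r the record size, and -i selects tests (0=write/rewrite, 1=read/reread, 2=random read/write); the sizes here are assumptions:

    ```shell
    # Candidate iozone command for a CDM-like 4k random/sequential pass.
    # Printed rather than executed so it can be reviewed and adjusted first.
    cmd="iozone -e -I -s 1g -r 4k -i 0 -i 1 -i 2 -f /mnt/fs/ioz.tmp"
    echo "$cmd"
    ```

    Repeat with -r 512k (and without -i 2) to approximate the sequential and 512KB rows.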






    • 3




      How to reproduce with this the crystalmark measurements in detail?
      – student
      Oct 8 '13 at 15:51






    • 1




      I'll try my best to write a howto, but I need the list of tests that Crystal Disk Mark conducts. Are there any other tests which the software runs besides the ones visible on the screenshot?
      – bayindirh
      Oct 10 '13 at 18:47






    • 1




      Just the ones in the screenshot.
      – trr
      May 16 '17 at 4:53
















    answered Oct 8 '13 at 14:32 by bayindirh



    3














    I created a script that tries to replicate the behavior of CrystalDiskMark 5 with fio. (Revised: it now covers all tests available in CrystalDiskMark versions up to 6, including the 512K and 4K Q8T8 tests, plus other improvements.)



    The script depends on fio and df. If you do not want to install df, erase lines 19 through 21 (the script will no longer display which drive is being tested) or check the comments for a modified version.
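    For reference, the drive-detection step those lines perform can be exercised on its own. A sketch fed a canned df output line so it runs anywhere; note that the final rev|cut|rev trim drops one trailing character, so it assumes /dev/sdXN-style names and would mangle NVMe's /dev/nvme0n1p1 naming:

    ```shell
    # Same pipeline as lines 19-21 of the script: take df's device field
    # and strip the partition digit to get the bare block device name.
    dfline="/dev/sdb1      117220824 54321 116000000   1% /home"
    echo "$dfline" | grep /dev | cut -d/ -f3 | cut -d" " -f1 | rev | cut -c 2- | rev
    ```

    On the sample line above this prints sdb, which is then used to look up the model and size under /sys/block.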



    #!/bin/bash

    LOOPS=5 #How many times to run each test
    SIZE=1024 #Size of each test, multiples of 32 recommended for Q32 tests to give the most accurate results.
    WRITEZERO=0 #Set whether to write zeroes or randoms to testfile (random is the default for both fio and crystaldiskmark); dd benchmarks typically only write zeroes which is why there can be a speed difference.

    QSIZE=$(($SIZE / 32)) #Size of Q32Seq tests
    SIZE+=m
    QSIZE+=m

    if [ -z $1 ]; then
    TARGET=$HOME
    echo "Defaulting to $TARGET for testing"
    else
    TARGET="$1"
    echo "Testing in $TARGET"
    fi

    DRIVE=$(df $TARGET | grep /dev | cut -d/ -f3 | cut -d" " -f1 | rev | cut -c 2- | rev)
    DRIVEMODEL=$(cat /sys/block/$DRIVE/device/model)
    DRIVESIZE=$(($(cat /sys/block/$DRIVE/size)*512/1024/1024/1024))GB

    echo "Configuration: Size:$SIZE Loops:$LOOPS Write Only Zeroes:$WRITEZERO
    Running Benchmark on: /dev/$DRIVE, $DRIVEMODEL ($DRIVESIZE), please wait...
    "

    fio --loops=$LOOPS --size=$SIZE --filename=$TARGET/.fiomark.tmp --stonewall --ioengine=libaio --direct=1 --zero_buffers=$WRITEZERO --output-format=json \
    --name=Bufread --loops=1 --bs=$SIZE --iodepth=1 --numjobs=1 --rw=readwrite \
    --name=Seqread --bs=$SIZE --iodepth=1 --numjobs=1 --rw=read \
    --name=Seqwrite --bs=$SIZE --iodepth=1 --numjobs=1 --rw=write \
    --name=512kread --bs=512k --iodepth=1 --numjobs=1 --rw=read \
    --name=512kwrite --bs=512k --iodepth=1 --numjobs=1 --rw=write \
    --name=SeqQ32T1read --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=read \
    --name=SeqQ32T1write --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=write \
    --name=4kread --bs=4k --iodepth=1 --numjobs=1 --rw=randread \
    --name=4kwrite --bs=4k --iodepth=1 --numjobs=1 --rw=randwrite \
    --name=4kQ32T1read --bs=4k --iodepth=32 --numjobs=1 --rw=randread \
    --name=4kQ32T1write --bs=4k --iodepth=32 --numjobs=1 --rw=randwrite \
    --name=4kQ8T8read --bs=4k --iodepth=8 --numjobs=8 --rw=randread \
    --name=4kQ8T8write --bs=4k --iodepth=8 --numjobs=8 --rw=randwrite > $TARGET/.fiomark.txt

    SEQR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "Seqread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "Seqread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    SEQW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "Seqwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "Seqwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    F12KR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "512kread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "512kread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    F12KW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "512kwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "512kwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    SEQ32R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "SeqQ32T1read"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "SeqQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    SEQ32W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "SeqQ32T1write"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "SeqQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FKR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FKW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FK32R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ32T1read"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FK32W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ32T1write"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FK8R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ8T8read"' | grep bw_bytes | sed 's/ "bw_bytes" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }')/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ8T8read"' | grep iops | sed 's/ "iops" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }' | cut -d. -f1)"
    FK8W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ8T8write"' | grep bw_bytes | sed 's/ "bw_bytes" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }')/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ8T8write"' | grep '"iops" '| sed 's/ "iops" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }' | cut -d. -f1)"

    echo -e "
    Results from /dev/$DRIVE, $DRIVEMODEL ($DRIVESIZE):
    \033[0;33m
    Sequential Read: $SEQR
    Sequential Write: $SEQW
    \033[0;32m
    512KB Read: $F12KR
    512KB Write: $F12KW
    \033[1;36m
    Sequential Q32T1 Read: $SEQ32R
    Sequential Q32T1 Write: $SEQ32W
    \033[0;36m
    4KB Read: $FKR
    4KB Write: $FKW
    \033[1;33m
    4KB Q32T1 Read: $FK32R
    4KB Q32T1 Write: $FK32W
    \033[1;35m
    4KB Q8T8 Read: $FK8R
    4KB Q8T8 Write: $FK8W
    "

    rm $TARGET/.fiomark.txt $TARGET/.fiomark.tmp


    Which will output results like this:



    Results from /dev/sdb, Corsair Force GT (111GB):  

    Sequential Read: 533MB/s IOPS=0
    Sequential Write: 125MB/s IOPS=0

    512KB Read: 457MB/s IOPS=914
    512KB Write: 133MB/s IOPS=267

    Sequential Q32T1 Read: 534MB/s IOPS=16
    Sequential Q32T1 Write: 134MB/s IOPS=4

    4KB Read: 32MB/s IOPS=8224
    4KB Write: 150MB/s IOPS=38460

    4KB Q32T1 Read: 195MB/s IOPS=49951
    4KB Q32T1 Write: 121MB/s IOPS=31148

    4KB Q8T8 Read: 129MB/s IOPS=33149
    4KB Q8T8 Write: 132MB/s IOPS=33796


    (Revision: the results are now color-coded; to remove the color coding, remove all instances of \033[x;xxm (where x is a number) from the echo command at the bottom of the script.)
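    For context on those sequences: echo -e (and printf) interpret \033 as the ESC byte that starts an ANSI color code. A minimal sketch:

    ```shell
    # \033[0;33m switches the terminal foreground to yellow, \033[0m resets.
    printf '\033[0;33m%s\033[0m\n' "Sequential Read: 533MB/s"
    ```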



    When run without arguments, the script tests the speed of the drive/partition your home directory is on. You can also pass a path to a directory on another drive if you would like to test that instead. While running, the script creates hidden temporary files in the target directory, which it cleans up after it finishes (.fiomark.tmp and .fiomark.txt).



    You cannot see the test results as they complete, but if you cancel the command while it is running, you will still see the results of the tests that had finished, and the temporary files will be deleted afterwards as well.



    After some research, I found that CrystalDiskMark results on the same model of drive as mine seem to match the results of this fio benchmark relatively closely, at least at a glance. As I do not have a Windows installation, I cannot verify for certain how close they really are on the same drive.



    Note that you may sometimes get slightly off results, especially if you're doing something in the background while the tests are running, so it is advisable to run the test twice in a row and compare results.



    These tests take a long time to run. The default settings in the script are suitable for a regular (SATA) SSD.



    Recommended SIZE setting for different drives:




    • (SATA) SSD: 1024 (default)

    • (Any) HDD: 256

    • (High-end NVMe) SSD: 4096

    • (Low/mid-end NVMe) SSD: 1024 (default)


    A high-end NVMe drive typically has around ~2GB/s read speeds (Intel Optane and the Samsung 960 EVO are examples, though in the latter's case I would recommend 2048 instead due to its slower 4KB speeds); a low/mid-end drive can have anywhere between ~500-1800MB/s read speeds.



    The major reason these sizes should be adjusted is how long the tests would take otherwise. On older/weaker HDDs, 4KB read speeds can be as low as 0.4MB/s, and the other 4KB tests typically run at around 1MB/s. There are six 4KB tests, each running 5 loops: do you want to wait for 30GB of data to be transferred at those speeds, or lower that to 7.5GB instead? (Even with SIZE=256 it is a 2-3 hour test.)
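    The arithmetic behind that estimate can be sketched as follows, assuming six 4KB tests at 5 loops each and an assumed ~1MB/s 4KB throughput on an old HDD:

    ```shell
    # Total data moved by the 4KB tests and a rough duration estimate.
    SIZE_MB=256      # per-test size (the HDD recommendation above)
    LOOPS=5
    TESTS=6          # 4k, 4kQ32T1, 4kQ8T8 - read and write each
    SPEED_MBS=1      # assumed 4KB throughput of an older HDD
    TOTAL_MB=$((TESTS * LOOPS * SIZE_MB))
    echo "${TOTAL_MB} MB total, roughly $((TOTAL_MB / SPEED_MBS / 60)) minutes at ${SPEED_MBS} MB/s"
    ```

    That works out to 7680 MB (about 7.5GB) and roughly two hours, which is where the 2-3 hour figure comes from; with the default SIZE=1024 the same tests would move about 30GB.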



    Of course, the ideal way to handle that situation would be to run the sequential & 512K tests separately from the 4K tests (e.g. run the sequential and 512K tests with something like 512m, then run the 4K tests at 32m).



    More recent HDD models are higher-end, though, and can get much better results than that.



    And there you have it. Enjoy!






    • Have you checked how your script behaves with fio on Windows?
      – Anon
      Nov 11 at 19:03






    • 1




      (A note to readers other than Cestarian: if you're making a new tool that uses fio then, if at all possible, don't scrape the human-readable fio output - use --output-format=json and parse the JSON. Fio's human-readable output is not meant for machines and is not stable between versions of fio. See this YouTube video of a case where scraping fio's human output led to an undesirable outcome)
      – Anon
      Nov 11 at 19:26












    • Thanks, I'll keep that in mind. Sadly I no longer have a Windows installation, so I can't test that... but I decided to look up CrystalDiskMark results for my SSD and it seems my memory was wrong, since the results add up to what I'm getting on fio after all. It was a misunderstanding on my end that led me to think the results were slower :/ I will correct it. I will also update this answer soon to use the JSON output for futureproofing, and perhaps a GUI version (I started working on it, but gtkdialog is poorly documented and zenity is limited, so I'm having a hard time)
      – Cestarian
      Nov 11 at 20:16






    • 1




      @Cestarian, great script, but not working "out of the box" on CentOS7. I had to modify it a bit.
      – Igor
      17 hours ago










    • @Igor interesting, I just tested it now and it runs fine on my manjaro laptop. Based on the lines you erased, maybe CentOS does not have df installed by default?
      – Cestarian
      7 mins ago
















    3














    I created a script that tries to replicate the behavior of crystaldiskmark 5 with fio. (Revised: Now does all tests available in the various versions of crystaldiskmark up to crystaldiskmark 6 including 512K and 4KQ8T8 tests + more improvements)



    The script depends on fio and df. If you do not want to install df, erase line 19 through 21 (the script will no longer display which drive is being tested) or check the comments for a modified version.



    #!/bin/bash

    LOOPS=5 #How many times to run each test
    SIZE=1024 #Size of each test, multiples of 32 recommended for Q32 tests to give the most accurate results.
    WRITEZERO=0 #Set whether to write zeroes or randoms to testfile (random is the default for both fio and crystaldiskmark); dd benchmarks typically only write zeroes which is why there can be a speed difference.

    QSIZE=$(($SIZE / 32)) #Size of Q32Seq tests
    SIZE+=m
    QSIZE+=m

    if [ -z $1 ]; then
    TARGET=$HOME
    echo "Defaulting to $TARGET for testing"
    else
    TARGET="$1"
    echo "Testing in $TARGET"
    fi

    DRIVE=$(df $TARGET | grep /dev | cut -d/ -f3 | cut -d" " -f1 | rev | cut -c 2- | rev)
    DRIVEMODEL=$(cat /sys/block/$DRIVE/device/model)
    DRIVESIZE=$(($(cat /sys/block/$DRIVE/size)*512/1024/1024/1024))GB

    echo "Configuration: Size:$SIZE Loops:$LOOPS Write Only Zeroes:$WRITEZERO
    Running Benchmark on: /dev/$DRIVE, $DRIVEMODEL ($DRIVESIZE), please wait...
    "

    fio --loops=$LOOPS --size=$SIZE --filename=$TARGET/.fiomark.tmp --stonewall --ioengine=libaio --direct=1 --zero_buffers=$WRITEZERO --output-format=json
    --name=Bufread --loops=1 --bs=$SIZE --iodepth=1 --numjobs=1 --rw=readwrite
    --name=Seqread --bs=$SIZE --iodepth=1 --numjobs=1 --rw=read
    --name=Seqwrite --bs=$SIZE --iodepth=1 --numjobs=1 --rw=write
    --name=512kread --bs=512k --iodepth=1 --numjobs=1 --rw=read
    --name=512kwrite --bs=512k --iodepth=1 --numjobs=1 --rw=write
    --name=SeqQ32T1read --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=read
    --name=SeqQ32T1write --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=write
    --name=4kread --bs=4k --iodepth=1 --numjobs=1 --rw=randread
    --name=4kwrite --bs=4k --iodepth=1 --numjobs=1 --rw=randwrite
    --name=4kQ32T1read --bs=4k --iodepth=32 --numjobs=1 --rw=randread
    --name=4kQ32T1write --bs=4k --iodepth=32 --numjobs=1 --rw=randwrite
    --name=4kQ8T8read --bs=4k --iodepth=8 --numjobs=8 --rw=randread
    --name=4kQ8T8write --bs=4k --iodepth=8 --numjobs=8 --rw=randwrite > $TARGET/.fiomark.txt

    SEQR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "Seqread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "Seqread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    SEQW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "Seqwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "Seqwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    F12KR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "512kread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "512kread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    F12KW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "512kwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "512kwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    SEQ32R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "SeqQ32T1read"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "SeqQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    SEQ32W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "SeqQ32T1write"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "SeqQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FKR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FKW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FK32R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ32T1read"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FK32W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ32T1write"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FK8R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ8T8read"' | grep bw_bytes | sed 's/ "bw_bytes" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }')/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ8T8read"' | grep iops | sed 's/ "iops" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }' | cut -d. -f1)"
    FK8W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ8T8write"' | grep bw_bytes | sed 's/ "bw_bytes" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }')/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ8T8write"' | grep '"iops" '| sed 's/ "iops" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }' | cut -d. -f1)"

    echo -e "
    Results from /dev/$DRIVE, $DRIVEMODEL ($DRIVESIZE):
    33[0;33m
    Sequential Read: $SEQR
    Sequential Write: $SEQW
    33[0;32m
    512KB Read: $F12KR
    512KB Write: $F12KW
    33[1;36m
    Sequential Q32T1 Read: $SEQ32R
    Sequential Q32T1 Write: $SEQ32W
    33[0;36m
    4KB Read: $FKR
    4KB Write: $FKW
    33[1;33m
    4KB Q32T1 Read: $FK32R
    4KB Q32T1 Write: $FK32W
    33[1;35m
    4KB Q8T8 Read: $FK8R
    4KB Q8T8 Write: $FK8W
    "

    rm $TARGET/.fiomark.txt $TARGET/.fiomark.tmp


    Which will output results like this:



    Results from /dev/sdb, Corsair Force GT (111GB):  

    Sequential Read: 533MB/s IOPS=0
    Sequential Write: 125MB/s IOPS=0

    512KB Read: 457MB/s IOPS=914
    512KB Write: 133MB/s IOPS=267

    Sequential Q32T1 Read: 534MB/s IOPS=16
    Sequential Q32T1 Write: 134MB/s IOPS=4

    4KB Read: 32MB/s IOPS=8224
    4KB Write: 150MB/s IOPS=38460

    4KB Q32T1 Read: 195MB/s IOPS=49951
    4KB Q32T1 Write: 121MB/s IOPS=31148

    4KB Q8T8 Read: 129MB/s IOPS=33149
    4KB Q8T8 Write: 132MB/s IOPS=33796


    (Revision: The results are now color coded, to remove the color coding remove all instances of 33[x;xxm (where x is a number) from the echo command at the bottom of the script.)



    The script when run without arguments will test the speed of your home drive/partition. You can also enter a path to a directory on another hard drive if you would like to test that instead. While running the script creates hidden temporary files in the target directory which it cleans up after it finishes running (.fiomark.tmp and .fiomark.txt)



    You cannot see the test results as they complete, but if you cancel the command while it is running before it finishes all the tests, you will get to see the results of completed tests and the temporary files will get deleted afterwards as well.



    After some research, I found that the crystaldiskmark benchmark results on the same model of drive as I have seem to relatively closely match the results of this fio benchmark, at least at a glance. As I do not have a windows installation I can not verify how close they really are for certain on the same drive.



    Note that you may sometimes get slightly off results, especially if you're doing something in the background while the tests are running, so running the test twice in a row to compare results is advisable.



    These tests take a loong time to run. The default settings in the script currently are suitable for a regular (SATA) SSD.



    Recommended SIZE setting for different drives:




    • (SATA) SSD: 1024(default)

    • (ANY) HDD: 256

    • (High End NVME) SSD: 4096

    • (Low-Mid End NVME) SSD: 1024(default)


    A High End NVME typically has around ~2GB/s read speeds (Intel Optane and Samsung 960 EVO are examples; but in the latter's case I would recommend 2048 instead due to slower 4kb speeds.), a Low-Mid End can have anywhere between ~500-1800MB/s read speeds.



    I created a script that tries to replicate the behavior of CrystalDiskMark 5 with fio. (Revised: it now covers all tests available in the various versions of CrystalDiskMark up to CrystalDiskMark 6, including the 512K and 4K Q8T8 tests, plus other improvements.)



    The script depends on fio and df. If you do not want to install df, erase lines 19 through 21 (the script will then no longer display which drive is being tested) or check the comments for a modified version.



    #!/bin/bash

    LOOPS=5 #How many times to run each test
    SIZE=1024 #Size of each test, multiples of 32 recommended for Q32 tests to give the most accurate results.
    WRITEZERO=0 #Set whether to write zeroes or randoms to testfile (random is the default for both fio and crystaldiskmark); dd benchmarks typically only write zeroes which is why there can be a speed difference.

    QSIZE=$(($SIZE / 32)) #Size of Q32Seq tests
    SIZE+=m
    QSIZE+=m

    if [ -z "$1" ]; then
    TARGET=$HOME
    echo "Defaulting to $TARGET for testing"
    else
    TARGET="$1"
    echo "Testing in $TARGET"
    fi

    DRIVE=$(df $TARGET | grep /dev | cut -d/ -f3 | cut -d" " -f1 | rev | cut -c 2- | rev)
    DRIVEMODEL=$(cat /sys/block/$DRIVE/device/model)
    DRIVESIZE=$(($(cat /sys/block/$DRIVE/size)*512/1024/1024/1024))GB

    echo "Configuration: Size:$SIZE Loops:$LOOPS Write Only Zeroes:$WRITEZERO
    Running Benchmark on: /dev/$DRIVE, $DRIVEMODEL ($DRIVESIZE), please wait...
    "

    fio --loops=$LOOPS --size=$SIZE --filename=$TARGET/.fiomark.tmp --stonewall --ioengine=libaio --direct=1 --zero_buffers=$WRITEZERO --output-format=json \
    --name=Bufread --loops=1 --bs=$SIZE --iodepth=1 --numjobs=1 --rw=readwrite \
    --name=Seqread --bs=$SIZE --iodepth=1 --numjobs=1 --rw=read \
    --name=Seqwrite --bs=$SIZE --iodepth=1 --numjobs=1 --rw=write \
    --name=512kread --bs=512k --iodepth=1 --numjobs=1 --rw=read \
    --name=512kwrite --bs=512k --iodepth=1 --numjobs=1 --rw=write \
    --name=SeqQ32T1read --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=read \
    --name=SeqQ32T1write --bs=$QSIZE --iodepth=32 --numjobs=1 --rw=write \
    --name=4kread --bs=4k --iodepth=1 --numjobs=1 --rw=randread \
    --name=4kwrite --bs=4k --iodepth=1 --numjobs=1 --rw=randwrite \
    --name=4kQ32T1read --bs=4k --iodepth=32 --numjobs=1 --rw=randread \
    --name=4kQ32T1write --bs=4k --iodepth=32 --numjobs=1 --rw=randwrite \
    --name=4kQ8T8read --bs=4k --iodepth=8 --numjobs=8 --rw=randread \
    --name=4kQ8T8write --bs=4k --iodepth=8 --numjobs=8 --rw=randwrite > $TARGET/.fiomark.txt

    SEQR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "Seqread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "Seqread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    SEQW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "Seqwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "Seqwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    F12KR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "512kread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "512kread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    F12KW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "512kwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "512kwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    SEQ32R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "SeqQ32T1read"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "SeqQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    SEQ32W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "SeqQ32T1write"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "SeqQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FKR="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kread"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kread"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FKW="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kwrite"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kwrite"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FK32R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ32T1read"' | grep bw_bytes | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ32T1read"' | grep -m1 iops | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FK32W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ32T1write"' | grep bw_bytes | sed '2!d' | cut -d: -f2 | sed s:,::g)/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ32T1write"' | grep iops | sed '7!d' | cut -d: -f2 | cut -d. -f1 | sed 's: ::g')"
    FK8R="$(($(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ8T8read"' | grep bw_bytes | sed 's/ "bw_bytes" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }')/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A15 '"name" : "4kQ8T8read"' | grep iops | sed 's/ "iops" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }' | cut -d. -f1)"
    FK8W="$(($(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ8T8write"' | grep bw_bytes | sed 's/ "bw_bytes" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }')/1024/1024))MB/s IOPS=$(cat $TARGET/.fiomark.txt | grep -A80 '"name" : "4kQ8T8write"' | grep '"iops" '| sed 's/ "iops" : //g' | sed 's:,::g' | awk '{ SUM += $1} END { print SUM }' | cut -d. -f1)"

    echo -e "
    Results from /dev/$DRIVE, $DRIVEMODEL ($DRIVESIZE):
    \033[0;33m
    Sequential Read: $SEQR
    Sequential Write: $SEQW
    \033[0;32m
    512KB Read: $F12KR
    512KB Write: $F12KW
    \033[1;36m
    Sequential Q32T1 Read: $SEQ32R
    Sequential Q32T1 Write: $SEQ32W
    \033[0;36m
    4KB Read: $FKR
    4KB Write: $FKW
    \033[1;33m
    4KB Q32T1 Read: $FK32R
    4KB Q32T1 Write: $FK32W
    \033[1;35m
    4KB Q8T8 Read: $FK8R
    4KB Q8T8 Write: $FK8W
    "

    rm $TARGET/.fiomark.txt $TARGET/.fiomark.tmp


    Which will output results like this:



    Results from /dev/sdb, Corsair Force GT (111GB):  

    Sequential Read: 533MB/s IOPS=0
    Sequential Write: 125MB/s IOPS=0

    512KB Read: 457MB/s IOPS=914
    512KB Write: 133MB/s IOPS=267

    Sequential Q32T1 Read: 534MB/s IOPS=16
    Sequential Q32T1 Write: 134MB/s IOPS=4

    4KB Read: 32MB/s IOPS=8224
    4KB Write: 150MB/s IOPS=38460

    4KB Q32T1 Read: 195MB/s IOPS=49951
    4KB Q32T1 Write: 121MB/s IOPS=31148

    4KB Q8T8 Read: 129MB/s IOPS=33149
    4KB Q8T8 Write: 132MB/s IOPS=33796


    (Revision: the results are now color coded. To remove the color coding, delete every \033[x;xxm escape (where x is a number) from the echo command at the bottom of the script.)
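    If you redirect the colored output to a log file, the escapes show up as literal garbage. A small filter like the following strips them (a sketch, assuming GNU sed, which understands the \x1b hex escape):

    ```shell
    # Strip ANSI color escapes such as \033[0;33m from piped text
    # (sketch; assumes GNU sed, which supports \x1b in regexes).
    strip_ansi() {
      sed 's/\x1b\[[0-9;]*m//g'
    }

    printf '\033[0;33mSequential Read: 533MB/s\033[0m\n' | strip_ansi
    # prints: Sequential Read: 533MB/s
    ```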



    The script, when run without arguments, will test the speed of your home drive/partition. You can also pass a path to a directory on another drive if you would like to test that instead. While running, the script creates hidden temporary files in the target directory (.fiomark.tmp and .fiomark.txt) and cleans them up when it finishes.



    You cannot see the test results as they complete, but if you cancel the command before it finishes all the tests, you will see the results of the completed tests, and the temporary files will be deleted afterwards as well.



    After some research, I found that published CrystalDiskMark results for the same drive model as mine seem to match the results of this fio benchmark fairly closely, at least at a glance. As I do not have a Windows installation, I cannot verify how close they really are on the same drive.



    Note that you may sometimes get slightly skewed results, especially if something is running in the background during the tests, so it is advisable to run the test twice in a row and compare.



    These tests take a long time to run. The default settings in the script are suitable for a regular (SATA) SSD.



    Recommended SIZE setting for different drives:




    • (SATA) SSD: 1024 (default)

    • (Any) HDD: 256

    • (High-end NVMe) SSD: 4096

    • (Low-to-mid-end NVMe) SSD: 1024 (default)


    A high-end NVMe drive typically has around ~2GB/s read speeds (Intel Optane and the Samsung 960 EVO are examples, though for the latter I would recommend 2048 instead due to its slower 4KB speeds); a low-to-mid-end NVMe drive can have anywhere between ~500-1800MB/s read speeds.



    The main reason to adjust these sizes is how long the tests would take otherwise. Older/weaker HDDs, for example, can have 4KB read speeds as low as 0.4MB/s, and the other 4KB tests typically run at around 1MB/s. We have six of them, each running 5 loops: do you wait for 30GB of data to be transferred at those speeds, or lower that to 7.5GB instead (even at SIZE=256 it is a 2-3 hour test)?
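    As a quick sanity check on those numbers (an illustrative sketch, not part of the script): six 4KB tests, each looping LOOPS times over SIZE megabytes, moves this much data in total:

    ```shell
    # Total data moved by the six 4KB tests at a given SIZE setting
    LOOPS=5
    TESTS=6   # 4k, 4kQ32T1 and 4kQ8T8, read and write each
    for SIZE in 1024 256; do
      echo "SIZE=${SIZE}m -> $((TESTS * LOOPS * SIZE))MB across the 4KB tests"
    done
    # prints:
    # SIZE=1024m -> 30720MB across the 4KB tests
    # SIZE=256m -> 7680MB across the 4KB tests
    ```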



    Of course, the ideal way to handle that situation would be to run the sequential and 512K tests separately from the 4K tests (e.g. run the sequential and 512K tests with something like 512m, and then run the 4K tests at 32m).



    More recent HDD models are higher-end, though, and can get much better results than that.



    And there you have it. Enjoy!






    answered Nov 6 at 18:19









    Cestarian

    83911125




    83911125












    • Have you checked how your script behaves with fio on Windows?
      – Anon
      Nov 11 at 19:03










      (A note to readers other than Cestarian: if you're making a new tool that uses fio then if at all possible don't scrape the human-readable fio output - use --output-format=json and parse the JSON. Fio's human-readable output is not meant for machines and is not stable between fio versions. See this YouTube video of a case where scraping fio's human output led to an undesirable outcome)
      – Anon
      Nov 11 at 19:26












    • Thanks, I'll keep that in mind. Sadly I no longer have a Windows installation so I can't test that... but I decided to look up CrystalDiskMark results for my SSD and it seems my memory was wrong, since the results add up to what I'm getting from fio after all. It was a misunderstanding on my end that led me to think the results were slower :/ I will correct it. I will also update this answer soon to use the JSON output for futureproofing, and perhaps a GUI version (I started working on it, but gtkdialog is poorly documented and zenity is limited so I'm having a hard time)
      – Cestarian
      Nov 11 at 20:16










      @Cestarian, great script, but not working "out of the box" on CentOS7. I had to modify it a bit.
      – Igor
      17 hours ago










    • @Igor interesting, I just tested it now and it runs fine on my manjaro laptop. Based on the lines you erased, maybe CentOS does not have df installed by default?
      – Cestarian
      7 mins ago


















    I'm not sure the various deeper tests make much sense when you consider what you're actually doing.



    Settings like block size and queue depth control the low-level I/O behavior of the ATA interface your SSD sits on.



    That's all well and good when you're running a basic test fairly directly against a drive, such as against a large file on a simply partitioned filesystem.



    Once you start benchmarking an encfs, these parameters don't particularly apply to your filesystem any more; the filesystem is just an interface into something else that eventually backs onto a filesystem that backs onto a drive.



    I think it would help to understand exactly what you're trying to measure, because there are two factors in play here: the raw disk I/O speed, which you can test by timing various dd commands without encfs (I can give examples if this is what you want), and the throughput of the encryption algorithm, which will make the process CPU-bound. In the latter case, parameters such as queue depth aren't particularly relevant.



    In either case, a timed dd command will give you the basic throughput statistics you seek, but you should consider what you intend to measure and which parameters are relevant to it.



    This link seems to provide a good guide to disk speed testing using timed dd commands, including the necessary coverage of defeating buffers/caches and so forth. It will likely provide the information you need. Decide which you're more interested in, though: disk performance or encryption performance. One of the two will be the bottleneck, and tuning the non-bottleneck won't gain you anything.
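    For reference, a minimal timed-dd sketch along those lines (an illustration only, not a full benchmark: for real measurements the file should live on the filesystem under test, be much larger than RAM or preceded by a cache drop, and use conv=fdatasync so flush time is counted in the reported rate):

    ```shell
    # Write test: conv=fdatasync makes dd include the final flush in its rate.
    TESTFILE=/tmp/ddtest.tmp
    dd if=/dev/zero of="$TESTFILE" bs=1M count=8 conv=fdatasync 2>&1 | tail -n1

    # For a read test that hits the disk rather than the page cache, drop
    # caches first (needs root):
    #   sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
    dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n1

    rm -f "$TESTFILE"
    ```

    dd prints the elapsed time and MB/s on its final line, which is the throughput figure being discussed here.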






        answered Oct 8 '13 at 14:21









        iain






























