While trying to decide which I/O scheduler to use, I ran the Androbench storage performance benchmark tool against the cfq, deadline, noop, vr and sio schedulers to find out.
All tests were run on deodexed JVQ on Galaxian [email protected]
The file system used was /data (ext4).
I used the standard buffer sizes (256KB for sequential I/O, 4KB for random I/O). To speed the tests up, I used a read file size of 16MB and a write file size of 1MB.
I did test with the recommended sizes of 32MB and 2MB, but the rates were the same, so I kept the smaller sizes to speed up the tests.
I know it's not very scientific, but it gives a rough estimate of the relative performance.
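If anyone wants to repeat this, the scheduler can be checked and switched at runtime through sysfs. This is a sketch assuming the internal eMMC appears as mmcblk0 (the device name may differ on your ROM), and the echo needs root:

```shell
# Show the available schedulers; the active one is shown in [brackets]
cat /sys/block/mmcblk0/queue/scheduler

# Switch to noop (as root); takes effect immediately,
# but does not survive a reboot unless set in an init script
echo noop > /sys/block/mmcblk0/queue/scheduler
```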
MBRS   = MB/s, sequential reads
MBWS   = MB/s, sequential writes
RRIOPS = random read IOs/sec
RWIOPS = random write IOs/sec
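To compare the random figures with the sequential ones in the same units, multiply IOPS by the 4KB buffer size. A quick sketch (the helper name is mine, not Androbench's):

```python
def iops_to_mbs(iops, block_bytes=4 * 1024):
    """Convert an IOPS figure to MB/s for a given block size."""
    return iops * block_bytes / 1_000_000

# e.g. 1152 random read IOPS at 4KB blocks:
print(round(iops_to_mbs(1152), 2))  # -> 4.72 MB/s
```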
Test #1
          MBRS   MBWS  RRIOPS  RWIOPS
cfq      19.40    4.8    1014      35
deadline 19.39    6.1    1119      42
noop     19.47    9.0    1098      43
vr       17.76    7.7    1105      45
sio      18.30    6.9    1152      49

Test #2
          MBRS   MBWS  RRIOPS  RWIOPS
cfq      19.18    3.3    1057      33
deadline 26.27    5.5    1151      43
noop     19.08    8.6    1059      41
vr       19.55    6.4    1120      50
sio      19.16    6.3    1122      45

Test #3
          MBRS   MBWS  RRIOPS  RWIOPS
cfq      19.25    3.8    1120      34
deadline 18.97    7.8    1146      43
noop     19.59    9.3    1161      45
vr       19.14    6.6    1227      53
sio      19.46    7.5    1172      50
My conclusion: use noop, or sio if it proves stable.
As you can see, sequential reads (MB/s) and random read IOPS are very high, so I don't think we need to worry about those too much. As you know (from Quadrant), it's writes that are VERY slow.
Unsurprisingly, the slowest scheduler is the default, cfq.
The noop scheduler produced the fastest sequential write rate on every run.
The vr and sio schedulers produced the greatest number of random write IOPS.
I think I'll use the noop scheduler: it uses little CPU, is fairly well established, and its random write IOPS aren't far off the best.
If someone can be arsed, I think we should test with all background processes off, at a fixed 1GHz clock speed, run say 10 sets of tests, and sort the results in a spreadsheet.
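Until someone does the full 10-run spreadsheet, here's a quick sketch that averages the three runs above per scheduler (MBWS and RWIOPS figures copied straight from the tables):

```python
# Sequential write MB/s and random write IOPS from the three runs above
runs = {
    "cfq":      {"MBWS": [4.8, 3.3, 3.8], "RWIOPS": [35, 33, 34]},
    "deadline": {"MBWS": [6.1, 5.5, 7.8], "RWIOPS": [42, 43, 43]},
    "noop":     {"MBWS": [9.0, 8.6, 9.3], "RWIOPS": [43, 41, 45]},
    "vr":       {"MBWS": [7.7, 6.4, 6.6], "RWIOPS": [45, 50, 53]},
    "sio":      {"MBWS": [6.9, 6.3, 7.5], "RWIOPS": [49, 45, 50]},
}

def mean(xs):
    return sum(xs) / len(xs)

# Print schedulers ordered by average sequential write rate, best first
for sched in sorted(runs, key=lambda s: -mean(runs[s]["MBWS"])):
    m = runs[sched]
    print(f"{sched:8s}  MBWS {mean(m['MBWS']):.2f}  RWIOPS {mean(m['RWIOPS']):.1f}")
```

On these averages, noop leads on sequential writes while vr edges ahead on random write IOPS, which matches the per-run observations.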
It would be good to see these results run against an RFS file system too!
It would also be good to test I/O against, say, a /cache file system with journalling switched off to understand the actual improvement.
I am surprised at the read speed: the IO rate is equivalent to 5 or 6 15K SAS/FC physical disks. Wahoo!!!
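That "5 or 6 disks" claim is easy to sanity-check. Taking roughly 180 random IOPS per 15K RPM spindle (a ballpark assumption on my part, not a measured figure):

```python
# Rough equivalence: measured random read IOPS vs a single 15K RPM disk
measured_rriops = 1150   # approximate figure from the tables above
disk_iops = 180          # assumed random IOPS for one 15K spindle (ballpark)

equivalent_disks = measured_rriops / disk_iops
print(f"about {equivalent_disks:.1f} disks' worth")  # -> about 6.4 disks' worth
```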