Going with the examples in the sysbench man page for disk IO - a three-part process (prepare, run, cleanup):
Usage example:
$ sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw prepare
$ sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw run
$ sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw cleanup
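Note these use the old 0.4.x syntax. On sysbench 1.0 or later, where --test= and --num-threads= were dropped, the equivalent should be roughly this (untested here):
$ sysbench fileio --file-total-size=3G --file-test-mode=rndrw --threads=16 prepare
$ sysbench fileio --file-total-size=3G --file-test-mode=rndrw --threads=16 run
$ sysbench fileio --file-total-size=3G --file-test-mode=rndrw --threads=16 cleanup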
I'll try these in my home dir on the laptop first to get a feel for the timescale, as it's about a DVD-sized operation - it only took seconds really:
sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw prepare
sysbench 0.4.12: multi-threaded system evaluation benchmark
128 files, 24576Kb each, 3072Mb total
Creating files for the test...
du -cks test_file.*
24576 test_file.99
3145728 total
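(du output trimmed to the last file - 128 files x 24576KB = 3145728KB = 3GB, which matches the total.)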
Those are the files created - now for part two:
stevee@AMDA8 ~ $ sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw run
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 16
Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Done.
Operations performed: 6035 Read, 4015 Write, 12805 Other = 22855 Total
Read 94.297Mb Written 62.734Mb Total transferred 157.03Mb (2.7694Mb/sec)
177.24 Requests/sec executed
Test execution summary:
total time: 56.7022s
total number of events: 10050
total time taken by event execution: 149.7922
per-request statistics:
min: 0.01ms
avg: 14.90ms
max: 331.71ms
approx. 95 percentile: 115.01ms
Threads fairness:
events (avg/stddev): 628.1250/55.78
execution time (avg/stddev): 9.3620/0.53
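The quoted transfer rate is just data over time: 157.03MB / 56.7022s ≈ 2.77MB/sec, and likewise 10050 events / 56.7022s ≈ 177 requests/sec. Now the same run on the other three machines - going by the results table further down, the localhost prompt below is the 32-bit HP Pavilion: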
stevee@localhost ~ $ sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw run
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 16
Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Done.
Operations performed: 6013 Read, 3999 Write, 12801 Other = 22813 Total
Read 93.953Mb Written 62.484Mb Total transferred 156.44Mb (1.704Mb/sec)
109.05 Requests/sec executed
Test execution summary:
total time: 91.8072s
total number of events: 10012
total time taken by event execution: 791.5647
per-request statistics:
min: 0.01ms
avg: 79.06ms
max: 540.69ms
approx. 95 percentile: 253.78ms
Threads fairness:
events (avg/stddev): 625.7500/32.42
execution time (avg/stddev): 49.4728/1.27
stevee@DELLMINT ~ $ sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw run
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 16
Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Done.
Operations performed: 6007 Read, 4000 Write, 12800 Other = 22807 Total
Read 93.859Mb Written 62.5Mb Total transferred 156.36Mb (2.1502Mb/sec)
137.62 Requests/sec executed
Test execution summary:
total time: 72.7172s
total number of events: 10007
total time taken by event execution: 379.5267
per-request statistics:
min: 0.01ms
avg: 37.93ms
max: 643.15ms
approx. 95 percentile: 219.55ms
Threads fairness:
events (avg/stddev): 625.4375/54.05
execution time (avg/stddev): 23.7204/2.10
stevee@hpmint ~ $ sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw run
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 16
Extra file open flags: 0
128 files, 24Mb each
3Gb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Done.
Operations performed: 6001 Read, 4004 Write, 12801 Other = 22806 Total
Read 93.766Mb Written 62.562Mb Total transferred 156.33Mb (3.0281Mb/sec)
193.80 Requests/sec executed
Test execution summary:
total time: 51.6261s
total number of events: 10005
total time taken by event execution: 72.0259
per-request statistics:
min: 0.02ms
avg: 7.20ms
max: 430.52ms
approx. 95 percentile: 64.73ms
Threads fairness:
events (avg/stddev): 625.3125/101.84
execution time (avg/stddev): 4.5016/0.48
Don't forget to remove the test files:
sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw cleanup
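The cleanup stage just deletes the test_file.* files it created, so a plain rm test_file.* in the test directory does the same job. Collecting the results from all four machines: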
| | AMDA8 | Dell 64 bit | HP Pav 64 bit | HP Pav 32 bit |
| --- | --- | --- | --- | --- |
| Data Total | 157.03MB | 156.36MB | 156.33MB | 156.44MB |
| Time elapsed | 56.7022s | 72.7172s | 51.6261s | 91.8072s |
| Transfer rate | 2.7694MB/sec | 2.1502MB/sec | 3.0281MB/sec | 1.704MB/sec |
Here's the table from the Slow Can You Go DD post to compare - I've changed some drives around since then.
| | AMDA8 | Dell 64 bit | HP Pav 64 bit | HP Pav 32 bit |
| --- | --- | --- | --- | --- |
| CPU Utilization | 99%CPU | 99%CPU | 24%CPU | 14%CPU |
| Time elapsed | 3.6 secs avg | 3.8 secs avg | 52.68 secs | 1:05 = 65 sec |
| Transfer rate | 1.2 Gb/s avg (150MB/s) | 1.1 Gb/s avg (137MB/s) | 81.5 MB/s | 65.4 MB/s avg |
A big difference in "benchmarking" results between this test and the previous one from the older post using dd. The drive in the 32-bit Celeron PC is the same one, but it shows 65.4MB/s for the dd test against only 1.704MB/sec for sysbench - the dd figure is nearly 40 times higher.
Similar multiples for the others too...
Which again goes to show how much the test workload affects apparent performance: dd streams one big sequential file, while sysbench here is doing random 16KB reads and writes across 128 files with periodic fsync() calls, so the drive spends much of its time seeking. (The dd test was one suggested in B. Gregg's book.) For system comparison or baselines dd is fine I guess, but for accurate transfer rates for real disk/file IO, maybe not.
While I have all 4 set up to go, I'd better re-run the dd tests and verify those numbers...
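For reference, the dd test was something along these lines - a sketch only, since the block size and count here are my assumptions (about 4GB written), and the %CPU figures suggest it ran under time:
$ /usr/bin/time dd if=/dev/zero of=ddfile bs=1M count=4096
$ rm ddfile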
| | AMDA8 | Dell 64 bit | HP Pav 64 bit | HP Pav 32 bit |
| --- | --- | --- | --- | --- |
| CPU Utilization | 29%CPU | 27%CPU | 30%CPU | 13%CPU |
| Time elapsed | 37.7196 s | 43.1529 s | 42.9582 s | 73.8415 s |
| Transfer rate | 114 MB/s | 99.5 MB/s | 100 MB/s | 58.2 MB/s |
A re-run of each to show any caching increases:
| | AMDA8 | Dell 64 bit | HP Pav 64 bit | HP Pav 32 bit |
| --- | --- | --- | --- | --- |
| CPU Utilization | 28%CPU | 28%CPU | 30%CPU | 13%CPU |
| Time elapsed | 38.8037 s | 43.1416 s | 42.7921 s | 66.4704 s |
| Transfer rate | 111 MB/s | 99.6 MB/s | 100 MB/s | 64.6 MB/s |
Yeah well..? Considering the Dell64 and HP64 have had disk changes...
Hmm...disk IO - not an exact science it seems...
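One variable you can pin down on re-runs is the Linux page cache - flushing it between tests makes each run start cold:
$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches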
You can also time the cached/non-cached reads for each disk on a system with hdparm: -T times reads from the buffer cache, while -t times buffered reads from the disk itself:
stevee@DELLMINT ~ $ sudo hdparm -t /dev/sd*
/dev/sda:
Timing buffered disk reads: 288 MB in 3.00 seconds = 95.93 MB/sec
/dev/sda1:
Timing buffered disk reads: 290 MB in 3.00 seconds = 96.66 MB/sec
/dev/sdb:
Timing buffered disk reads: 166 MB in 3.01 seconds = 55.18 MB/sec
/dev/sdb1:
Timing buffered disk reads: 166 MB in 3.01 seconds = 55.11 MB/sec
/dev/sdb2:
Timing buffered disk reads: read(2097152) returned 1024 bytes
/dev/sdb5:
Timing buffered disk reads: 94 MB in 3.02 seconds = 31.12 MB/sec
/dev/sdc:
Timing buffered disk reads: 72 MB in 3.08 seconds = 23.37 MB/sec
/dev/sdc1:
Timing buffered disk reads: 70 MB in 3.00 seconds = 23.31 MB/sec
/dev/sdd:
Timing buffered disk reads: 70 MB in 3.00 seconds = 23.31 MB/sec
/dev/sdd1:
Timing buffered disk reads: 72 MB in 3.09 seconds = 23.29 MB/sec
/dev/sde:
Timing buffered disk reads: 70 MB in 3.08 seconds = 22.72 MB/sec
/dev/sde1:
Timing buffered disk reads: 70 MB in 3.07 seconds = 22.79 MB/sec
/dev/sde2:
Timing buffered disk reads: read(2097152) returned 1024 bytes
/dev/sde5:
Timing buffered disk reads: 70 MB in 3.06 seconds = 22.89 MB/sec
stevee@DELLMINT ~ $ sudo hdparm -T /dev/sd*
/dev/sda:
Timing cached reads: 1736 MB in 2.00 seconds = 867.79 MB/sec
/dev/sda1:
Timing cached reads: 1780 MB in 2.00 seconds = 890.03 MB/sec
/dev/sdb:
Timing cached reads: 1610 MB in 2.00 seconds = 805.15 MB/sec
/dev/sdb1:
Timing cached reads: 1604 MB in 2.00 seconds = 802.30 MB/sec
/dev/sdb2:
read(2097152) returned 1024 bytes
/dev/sdb5:
Timing cached reads: 1610 MB in 2.00 seconds = 804.92 MB/sec
/dev/sdc:
Timing cached reads: 1558 MB in 2.00 seconds = 778.69 MB/sec
/dev/sdc1:
Timing cached reads: 1702 MB in 2.00 seconds = 850.94 MB/sec
/dev/sdd:
Timing cached reads: 1736 MB in 2.00 seconds = 867.37 MB/sec
/dev/sdd1:
Timing cached reads: 1578 MB in 2.00 seconds = 788.44 MB/sec
/dev/sde:
Timing cached reads: 1542 MB in 2.00 seconds = 770.48 MB/sec
/dev/sde1:
Timing cached reads: 1696 MB in 2.00 seconds = 847.88 MB/sec
/dev/sde2:
read(2097152) returned 1024 bytes
/dev/sde5:
Timing cached reads: 1656 MB in 2.00 seconds = 828.38 MB/sec
stevee@AMDA8 ~ $ sudo hdparm -T /dev/sd*
/dev/sda:
Timing cached reads: 1622 MB in 2.00 seconds = 811.00 MB/sec
/dev/sda1:
Timing cached reads: 1684 MB in 2.00 seconds = 841.46 MB/sec
/dev/sda2:
read(2097152) returned 1024 bytes
/dev/sda5:
Timing cached reads: 1742 MB in 2.00 seconds = 870.23 MB/sec
This is the sort of order of magnitude differences between the memory cache speeds above, and the physical drive below:
stevee@AMDA8 ~ $ sudo hdparm -t /dev/sd*
/dev/sda:
Timing buffered disk reads: 334 MB in 3.02 seconds = 110.71 MB/sec
/dev/sda1:
Timing buffered disk reads: 332 MB in 3.00 seconds = 110.62 MB/sec
/dev/sda2:
Timing buffered disk reads: read(2097152) returned 1024 bytes
/dev/sda5:
Timing buffered disk reads: 144 MB in 3.01 seconds = 47.90 MB/sec
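So on AMDA8 that's roughly 870MB/s from cache against 110MB/s off the platters - about 8 times. To grab both figures for every whole disk in one pass (the /dev/sd[a-z] glob skips the partitions, including the extended-partition entries that throw those read() errors), a quick loop does it:
$ for d in /dev/sd[a-z]; do sudo hdparm -tT $d; done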
For comparison, here's an SSD added to HPMINT against one of its spinning disks - almost 3 times faster:
stevee@hpmint ~ $ sudo hdparm -t /dev/sde1
/dev/sde1:
Timing buffered disk reads: 238 MB in 3.01 seconds = 79.19 MB/sec
stevee@hpmint ~ $ sudo hdparm -t /dev/sdf1
/dev/sdf1:
Timing buffered disk reads: 656 MB in 3.00 seconds = 218.50 MB/sec