Simple tests -- hdparm and dd
Updated on Fri, 2008-08-29 16:30. Originally created by wbetts on 2008-06-25 16:15.
Starting with the basics, hdparm:
Five samples of each of the following (a sample-collection loop is sketched after the command lists below):
On eastwood:
- hdparm -t -T /dev/hd{a,c,e,f,g,h}1
- hdparm -t -T /dev/mapper/VolGroup00-LogVol00 (/ on hda)
On newman:
- hdparm -t -T /dev/md{0,1}
- hdparm -t -T /dev/mapper/VolGroup00-LogVol00 (/ on hda)
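For reference, here is a minimal sketch of how the five samples could be collected and summarized as mean +/- standard deviation; the device list and the awk parsing of hdparm's "MB/sec" figures are illustrative assumptions rather than the exact procedure used, and it must be run as root:

    # five hdparm runs per device, then mean +/- sample std. dev. of the MB/sec figures
    for dev in /dev/hda1 /dev/hdc1 /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1
    do
        for i in 1 2 3 4 5
        do
            hdparm -t -T $dev
        done | awk -v d="$dev" '
            /Timing cached reads/        { c[++nc] = $(NF-1) }   # cached-read MB/sec
            /Timing buffered disk reads/ { b[++nb] = $(NF-1) }   # buffered-read MB/sec
            END {
                for (i = 1; i <= nc; i++) { cs += c[i]; bs += b[i] }
                for (i = 1; i <= nc; i++) { cv += (c[i]-cs/nc)^2; bv += (b[i]-bs/nb)^2 }
                printf "%s: cached %.2f +/- %.2f MB/s, buffered %.2f +/- %.2f MB/s\n",
                       d, cs/nc, sqrt(cv/(nc-1)), bs/nb, sqrt(bv/(nb-1))
            }'
    done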
Drive | Timing cached reads (MB/s) | Timing buffered disk reads (MB/s) |
eastwood - LV on hda | 1162.59 +/- 3.84 | 54.39 +/- 0.42 |
eastwood - hda1 | 1167.32 +/- 2.31 | 54.55 +/- 0.21 |
eastwood - hdc1 | 1168.34 +/- 4.75 | 35.95 +/- 0.68 |
eastwood - hdc1 *** | 1027.43 +/- 4.89 | 42.65 +/- 0.39 |
eastwood - hde1 | 1167.39 +/- 3.41 | 57.5 +/- 0.04 |
eastwood - hdf1 | 1167.72 +/- 2.03 | 59.34 +/- 0.26 |
eastwood - hdg1 | 1166.46 +/- 2.63 | 57.27 +/- 0.09 |
eastwood - hdh1 | 1166.20 +/- 3.59 | 54.36 +/- 0.10 |
newman - LV on hda | 1180.26 +/- 10.32 | 51.70 +/- 1.51 |
newman - md0 (RAID5) | 1211.01 +/- 43.51 | 66.67 +/- 0.30 |
newman - md1 (RAID0) | 1197.65 +/- 15.77 | 113.67 +/- 0.45 |
*** -- For this test, I moved this disk to the Promise controller in position hde1 to see if it would improve. While there appears to be improvement in this test, the IOzone results show no improvement. Even with this "improved" hdparm result, it is clear this disk really is inferior for some reason.
Items of note:
- The Logical Volumes on the two nodes (hda in both cases) are essentially identical (as expected).
- hdc on eastwood is significantly slower for some reason (not expected). If this holds up in other tests and is also true on newman, it will skew the RAID tests, since hdc is used in the RAID5 array on newman. (I originally hypothesized that this was a feature/flaw of the onboard Intel controller, since the masters and slaves on the Promise controller are indistinguishable, but after further testing (dd, IOzone and some hardware swapping), I have concluded that this disk -- and likely all of the "-00FUA0" models -- is simply not up to par with the "-00GVA0" models.)
- Other than hdc, the individual disks on eastwood all perform within a few percent of each other.
- ext2 and ext3 are essentially identical (as expected for reading).
- The RAID arrays on newman are noticeably faster than the individual disks (as expected for reading stripes in parallel). The RAID5 array, however, is not nearly as fast as the RAID0 array. This could be the "hdc" effect, in which case the comparison is not really apples-to-apples.
On to dd:
Here are the test commands, using 1GB write/read and then 10GB write/read:
time dd if=/dev/zero of=/$drive/test.zero bs=1024 count=1000000
time dd of=/dev/zero if=/$drive/test.zero bs=1024 count=1000000
time dd if=/dev/zero of=/$drive/test.zero bs=1024 count=10000000
time dd of=/dev/zero if=/$drive/test.zero bs=1024 count=10000000
This sequence was run once on eastwood and five times on newman (hda on eastwood was accidentally tested twice, with surprisingly different results in the 10GB tests). This is essentially an idealized test, and the results are unlikely to be matched in a production system: during the test there should be little or no rotational or seek latency, because the disks are mostly empty and the reading and writing proceed sequentially rather than randomly, and there should be no contention from multiple processes.
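A rough sketch of how this sequence might be wrapped per filesystem is below; the mount-point names are placeholders, not the actual layout. In the table that follows, the REAL/USER/SYS columns presumably come from the shell's time builtin, while the dd TIME and dd RATE columns come from the summary line dd itself prints.

    # loop over the test filesystems (placeholder names) and over the 1GB and 10GB sizes
    for drive in disk1 disk2 disk3
    do
        for count in 1000000 10000000
        do
            time dd if=/dev/zero of=/$drive/test.zero bs=1024 count=$count   # write
            time dd of=/dev/zero if=/$drive/test.zero bs=1024 count=$count   # read back
        done
        rm -f /$drive/test.zero        # clean up the test file
    done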
DRIVE | OPERATION | REAL (s) | USER (s) | SYS (s) | dd TIME (s) | dd RATE (MB/s) |
eastwood - hda | 1GB write | 9.848 | 0.436 | 7.947 | 9.84596 | 104 |
eastwood - hda | 1GB read | 2.426 | 0.401 | 2.026 | 2.42409 | 422 |
eastwood - hda | 10GB write | 287.458 | 5.170 | 88.529 | 286.89 | 35.7 |
eastwood - hda | 10GB read | 289.671 | 3.647 | 24.788 | 289.328 | 35.4 |
eastwood - hda | 1GB write | 12.032 | 0.512 | 8.462 | 11.0798 | 92.4 |
eastwood - hda | 1GB read | 2.734 | 0.402 | 2.333 | 2.73233 | 375 |
eastwood - hda | 10GB write | 203.015 | 5.146 | 89.103 | 202.397 | 50.6 |
eastwood - hda | 10GB read | 192.057 | 4.195 | 24.033 | 191.866 | 53.4 |
eastwood - hdc | 1GB write | 6.242 | 0.386 | 4.273 | 6.22181 | 165 |
eastwood - hdc | 1GB read | 2.695 | 0.434 | 2.262 | 2.69342 | 380 |
eastwood - hdc | 10GB write | 232.640 | 3.874 | 44.764 | 232.588 | 44 |
eastwood - hdc | 10GB read | 253.590 | 3.583 | 26.264 | 253.474 | 40.4 |
eastwood - hdc (moved to Promise controller)*** | 1GB write | 4.594±0.011 | 0.4582±0.009 | 4.136±0.015 | 4.592±0.011 | 223±0.7 |
eastwood - hdc (moved to Promise controller)*** | 1GB read | 2.604±0.010 | 0.4512±0.015 | 2.154±0.015 | 2.602±0.010 | 393.4±1.5 |
eastwood - hdc (moved to Promise controller)*** | 10GB write | 231.193±0.923 | 4.3206±0.095 | 45.051±0.144 | 231.146±0.907 | 44.28±0.16 |
eastwood - hdc (moved to Promise controller)*** | 10GB read | 222.119±0.458 | 3.758±0.048 | 26.514±0.724 | 222.074±0.451 | 46.14±0.09 |
eastwood - hde | 1GB write | 5.920 | 0.377 | 4.295 | 5.86033 | 175 |
eastwood - hde | 1GB read | 2.680 | 0.415 | 2.266 | 2.67825 | 382 |
eastwood - hde | 10GB write | 185.871 | 4.072 | 44.518 | 185.808 | 55.1 |
eastwood - hde | 10GB read | 192.924 | 3.648 | 25.640 | 192.841 | 53.1 |
eastwood - hdf | 1GB write | 5.864 | 0.395 | 4.293 | 5.8289 | 176 |
eastwood - hdf | 1GB read | 2.691 | 0.401 | 2.290 | 2.68856 | 381 |
eastwood - hdf | 10GB write | 174.073 | 3.899 | 44.818 | 174.004 | 58.8 |
eastwood - hdf | 10GB read | 282.014 | 3.847 | 27.576 | 281.878 | 36.3 |
eastwood - hdg | 1GB write | 11.149 | 0.489 | 8.396 | 11.1013 | 92.2 |
eastwood - hdg | 1GB read | 2.721 | 0.440 | 2.281 | 2.71868 | 377 |
eastwood - hdg | 10GB write | 181.620 | 5.316 | 87.690 | 181.573 | 56.4 |
eastwood - hdg | 10GB read | 183.700 | 3.880 | 24.359 | 183.613 | 55.8 |
eastwood - hdh | 1GB write | 10.714 | 0.536 | 8.175 | 10.6598 | 96.1 |
eastwood - hdh | 1GB read | 2.710 | 0.427 | 2.284 | 2.7075 | 378 |
eastwood - hdh | 10GB write | 190.202 | 5.180 | 87.511 | 190.156 | 53.9 |
eastwood - hdh | 10GB read | 194.078 | 4.013 | 24.908 | 194.012 | 52.8 |
newman - hda | 1GB write | 10.670±1.308 | 0.504±0.026 | 8.242±0.065 | 10.628±1.281 | 97.6 ±12.99 |
newman - hda | 1GB read | 3.582±1.222 | 0.440±0.012 | 2.281±0.041 | 2.720±0.043 | 376.6 ±6.0 |
newman - hda | 10GB write | 269.355±5.300 | 5.173±0.131 | 88.529±0.373 | 268.738±5.294 | 38.1 ±0.7 |
newman - hda | 10GB read | 211.516±4.548 | 4.331±0.263 | 24.237±0.281 | 211.124±1.035 | 48.5 ±1.0 |
newman - raid0 | 1GB write | 10.012±0.459 | 0.525±0.029 | 8.605±0.128 | 9.972±0.453 | 102.9 ±4.6 |
newman - raid0 | 1GB read | 2.764±0.029 | 0.426±0.020 | 2.338±0.043 | 2.762±0.029 | 370.8 ±3.7 |
newman - raid0 | 10GB write | 123.691±3.427 | 5.130±0.378 | 86.677±0.700 | 123.636±3.427 | 82.9 ± 2.4 |
newman - raid0 | 10GB read | 88.485±1.196 | 3.969±0.198 | 25.252±0.263 | 88.415±1.180 | 115.8 ±1.5 |
newman - raid5 | 1GB write | 9.701±0.158 | 0.535±0.010 | 8.923±0.070 | 9.693±0.156 | 105.6 ±1.7 |
newman - raid5 | 1GB read | 2.974±0.103 | 0.439±0.029 | 2.535±0.115 | 2.971±0.104 | 345.0 ±12.2 |
newman - raid5 | 10GB write | 205.112±2.087 | 5.692±0.106 | 100.492±0.552 | 205.090±2.068 | 49.9 ±0.5 |
newman - raid5 | 10GB read | 156.661±0.334 | 4.528±0.343 | 38.091±1.381 | 156.623±0.347 | 65.4 ±0.2 |
Observations:
- 1GB reads are likely dominated by file caching in RAM, since the file had just been written. 1GB writes may be faster than the disk can actually sustain because some of the writes may only have been buffered in RAM rather than actually written to disk when the dd command ended. (One way to reduce these caching effects in future runs is sketched after this list.)
- We see again that hdc is not able to perform as well as the drives on the Promise controller (hde and up). Based on these results and the IOzone results after switching it to the Promise controller, the conclusion is that this drive is not up to par. An open question is whether this applies to all of the "-00FUA0" model drives or just this one.
- The discrepancy between the first two hda tests on eastwood is disturbing, and may be worth further testing.
- Eastwood's hdf (ext2) 10GB read is surprisingly low. Perhaps this whole test suite should be run several more times to average out possible aberrations.
- From these results and the hdparm results, it appears that uncontested single drives max out at about 55-60 MB/sec for both reads and writes, while RAID0 is roughly twice as fast. The RAID5 results are not encouraging, but they may be affected by the presence of hdc in the array.
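One possible way to reduce the caching effects noted above is sketched here; it assumes a 2.6.16 or newer kernel (for /proc/sys/vm/drop_caches), GNU dd's conv=fsync option, and root access, so treat it as a suggestion for future runs rather than something that was actually done in these tests:

    # flush dirty pages and drop the page cache before timing a read,
    # so the data really comes from disk rather than from RAM
    sync
    echo 3 > /proc/sys/vm/drop_caches
    time dd of=/dev/zero if=/$drive/test.zero bs=1024 count=1000000

    # force writes to reach the disk before dd exits, so the write timing
    # includes the final flush instead of leaving data buffered in RAM
    time dd if=/dev/zero of=/$drive/test.zero bs=1024 count=1000000 conv=fsync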