Summary thoughts

Some conclusions and general suggestions, some based on the data, some on general principles:

1.  RAID0 can significantly improve performance over a single drive, provided it isn't bottlenecked by some other limitation of the hardware.  Of course, with X drives in the array and no redundancy, it is roughly X times more likely to suffer a failure than a single drive (a back-of-the-envelope estimate follows).  From the test results so far, not much can be said about RAID5.  In principle, however, RAID5 should outperform bare drives and fall somewhat short of RAID0.  Like RAID0, it should improve as more drives are added (within reason).
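
To put a number on that reliability cost, here is a back-of-the-envelope estimate; the per-drive failure probability p is an assumed figure, not something measured in these tests:

    % Assuming each of the X drives fails independently within a given
    % period with probability p, a RAID0 array (no redundancy) is lost
    % if any single drive fails:
    P_{fail} = 1 - (1 - p)^X \approx X p  \quad (p \ll 1)
    % Example with an assumed p = 0.03/year and X = 4:
    % 1 - 0.97^4 \approx 0.115, close to the simple estimate 4p = 0.12.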

2.  The limitations of P-ATA/IDE drives and controllers (in particular, two drives sharing a channel cannot transfer data simultaneously) are something to watch out for when planning drive layouts.  (Fortunately, IDE is obsolete at this point, so it is unlikely to be a factor in future purchases.)

3.  The filesystems that will be accessed the most should be placed on disks in such a way as to allow simultaneous access if at all possible.  In the case of STAR database servers, /tmp (or wherever temp space is directed), swap space, the database files, and if possible the system files should each be on individual drives (and, for IDE drives, on separate channels); a rough way to check such a layout is sketched after this item.  (Of course, if servers are having to go into swap space at all, then performance is already suffering, and more RAM would probably help more than anything else.)
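
As an aid for checking such a layout, here is a minimal Python sketch that maps a few mount points to their backing devices by reading /proc/mounts.  The mount points listed are hypothetical examples, and the trailing-digit heuristic for finding the parent disk does not understand md or LVM devices; treat it as an illustration, not a definitive tool:

    #!/usr/bin/env python3
    # Sketch: report which block device backs each mount point of
    # interest, so one can verify that busy filesystems sit on
    # separate drives.
    import re

    # Hypothetical mount points for a database slave: temp space,
    # database files, and the root (system) filesystem.
    MOUNT_POINTS = {"/tmp", "/var/lib/mysql", "/"}

    def backing_devices():
        """Map each mount point of interest to its source device."""
        mapping = {}
        with open("/proc/mounts") as f:
            for line in f:
                source, target = line.split()[:2]
                if target in MOUNT_POINTS:
                    mapping[target] = source
        return mapping

    def parent_disk(device):
        """Crude heuristic: strip a trailing partition number, so that
        /dev/sda1 and /dev/sda2 both map to /dev/sda (wrong for md/LVM)."""
        return re.sub(r"\d+$", "", device)

    if __name__ == "__main__":
        disks = {}
        for target, source in sorted(backing_devices().items()):
            disk = parent_disk(source)
            print("%-20s -> %s (disk %s)" % (target, source, disk))
            disks.setdefault(disk, []).append(target)
        for disk, targets in disks.items():
            if len(targets) > 1:
                print("WARNING: %s share %s" % (", ".join(targets), disk))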

4.  For the STAR database server slaves, if they are using temp space (whether /tmp or redirected somewhere else), then, as with swap space, more RAM would likely help.  To gain a bit in writing to the temp space, though, make it ext2 rather than ext3, whether RAIDed or not; ext3 is ext2 plus a journal, and skipping the journal avoids its write overhead (a simple timing sketch follows).  Presumably it matters not if the filesystem gets corrupted during a crash -- at worst, just reformat it.  It is, after all, only temp space...
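
To see the journaling cost directly, one could time synchronous small-file writes on the temp filesystem, once with it formatted ext3 and once ext2.  Below is a minimal sketch of such a test; TEST_DIR, the file count, and the chunk size are arbitrary assumptions, not values from these tests:

    #!/usr/bin/env python3
    # Minimal synchronous-write timing sketch.  Point TEST_DIR at the
    # temp filesystem under test; the counts/sizes are arbitrary choices.
    import os
    import time

    TEST_DIR = "/tmp"      # hypothetical: the temp space being tested
    N_FILES = 200          # number of small files to write
    CHUNK = 64 * 1024      # bytes written per file

    def timed_writes():
        """Create, sync, and delete N_FILES files; return elapsed seconds."""
        data = b"x" * CHUNK
        start = time.time()
        for i in range(N_FILES):
            path = os.path.join(TEST_DIR, "fswrite_%d.tmp" % i)
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # force data (and any journal) to disk
            os.unlink(path)
        return time.time() - start

    if __name__ == "__main__":
        elapsed = timed_writes()
        print("%d x %d KB synchronous writes in %.2f s (%.1f files/s)"
              % (N_FILES, CHUNK // 1024, elapsed, N_FILES / elapsed))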