Not long ago I upgraded my rig, and the new hardware includes a dual 80 GB Serial ATA RAID 0 array.
I initialized the array with two partitions (one for the OS and applications, the other for data) using different cluster sizes (bigger for the data partition, for slightly better performance, as advised by the partition manager).
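To give an idea of the trade-off I was weighing, here is a rough slack-space calculation; the 4 KB and 64 KB cluster sizes and the file sizes are just example values, not necessarily my actual settings:

```python
# Rough slack-space estimate for different cluster sizes.
# 4 KB vs 64 KB and the file sizes below are example values only.

def slack_per_file(file_size: int, cluster: int) -> int:
    """Bytes wasted in the last, partially filled cluster of one file."""
    remainder = file_size % cluster
    return 0 if remainder == 0 else cluster - remainder

files = [3_500, 120_000, 45_000_000]  # hypothetical file sizes in bytes

for cluster in (4 * 1024, 64 * 1024):
    wasted = sum(slack_per_file(size, cluster) for size in files)
    print(f"cluster {cluster // 1024:>2} KB -> {wasted} bytes of slack over {len(files)} files")
```

Bigger clusters waste more space per file but mean fewer allocation units to manage, which is where the small performance gain for large data files comes from.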
Since I sometimes experience stability faults, I wonder whether that kind of partitioning is a poor choice for a RAID array.
Is anyone experienced with similar configurations? Could the problem be related to the partitioning, or should I look at other possible causes?
RAID arrays
With RAID 0 (striping) you double the risk of failure: if one disk fails, that's it. So back up regularly and check the disks. Normally you do no partitioning at all, because the trick is that the "partition" is in reality two disks (and you avoid making a mess of the stripe tables).
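To put a rough number on that doubled risk, here is a back-of-the-envelope calculation; the per-disk failure probability is only an assumed value for illustration, not a measured figure for any drive:

```python
# RAID 0 failure-risk arithmetic: the stripe set is lost if EITHER disk fails.
p = 0.03  # assumed chance that one disk fails within some period

single_disk_loss = p
raid0_loss = 1 - (1 - p) ** 2  # probability that at least one of two disks fails

print(f"single disk:      {single_disk_loss:.4f}")
print(f"RAID 0 (2 disks): {raid0_loss:.4f}  (~2x for small p)")
```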
The array itself only knows clusters and sectors, not partitions. But OK, it works.
If you have a fault, you should check the drives for faulty sectors immediately.
SCSI drives can correct themselves by keeping a list of defective sectors; the others need regular checking and deletion of bad ones, which of course can mean deleting active content. So before you check, you should back up, then run the disk check, and then restore whatever got lost.
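Something along these lines is what I mean by the backup/check/restore routine: after the disk check, compare the live tree against the backup copy and list the files that disappeared or changed, so only those need restoring. The two directory paths are placeholders, not real locations:

```python
# Compare a data tree against its backup and report files lost or changed
# after a disk check. Paths are placeholders for your own locations.
import hashlib
from pathlib import Path

BACKUP = Path("E:/backup")   # assumed backup location
LIVE   = Path("D:/data")     # assumed data partition

def digest(path: Path) -> str:
    """MD5 of a file, read in 1 MB chunks."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for backed_up in BACKUP.rglob("*"):
    if not backed_up.is_file():
        continue
    live = LIVE / backed_up.relative_to(BACKUP)
    if not live.exists():
        print(f"missing after check: {live}")
    elif digest(live) != digest(backed_up):
        print(f"changed/corrupted:   {live}")
```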
You see, it's a lot of work for the performance you gain, and the risk stays high.
So I'd rather recommend RAID 10, like this:
4 disks
each partitioned in two (-> 8 partitions)
partitions 1/2 and 3/4 mirrored (RAID 1) (-> 4 mirrored partitions)
the mirror pairs striped (RAID 0) (-> 2 volumes)
BTW, that's why it's called 10: you do 1 (mirroring) first, then 0 (striping) (see the little sketch below).
In case of a failure, the mirror tells you which disk to check, repair, or replace.
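Here is a toy sketch of how a logical block ends up on the four disks in such a stripe-of-mirrors layout; the disk numbering and chunk size are arbitrary choices for the example, not anything your controller dictates:

```python
# Toy mapping of a logical block onto a 4-disk RAID 10 (stripe of mirrors),
# just to illustrate why it is "first 1, then 0".
STRIPE_BLOCKS = 1                  # blocks per stripe chunk (1 for clarity)
MIRROR_PAIRS = [(1, 2), (3, 4)]    # disks 1/2 mirrored, disks 3/4 mirrored

def locate(logical_block: int):
    """Return the mirrored disk pair and block offset holding one logical block."""
    chunk = logical_block // STRIPE_BLOCKS
    pair = MIRROR_PAIRS[chunk % len(MIRROR_PAIRS)]        # RAID 0 across the pairs
    offset = (chunk // len(MIRROR_PAIRS)) * STRIPE_BLOCKS \
             + logical_block % STRIPE_BLOCKS              # RAID 1 inside the pair
    return pair, offset

for lb in range(6):
    disks, off = locate(lb)
    print(f"logical block {lb} -> disks {disks}, offset {off}")
```

Each logical block exists on both disks of its pair, so losing one disk only degrades that mirror; the stripe across the pairs is what gives the speed back.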
Yes, I think current drives are fast enough, and in most cases it's possible to somehow repair the damage without having to reinstall everything and without always falling back on the backup.
But the 4-disk solution is pretty cool: normally you check the damaged disk and that's it; the mirror takes care of finding out which file(s) need restoring, so you don't get surprises like: start CoolEdit -> nothing happens -> ah, that was the file last time.
But I agree, that's a lot of noise- and heat-spreading hardware for a failure that luckily doesn't happen daily.
If you have a big-$$$ job with hard delivery dates, OK, then it's a) affordable (only $$) and b) highly recommended. At work we use towers that look like midi towers but contain only a controller with 8 HDs. They're cool: self-monitoring units that send you a message like "here, unit 5: my disk 2 died"; you take #2 out, run checkdisk, plug it back in, and receive "here, unit 5: disk 2 too small". Well, you see, there is still some room left for intelligent solutions. How about a spare partition as reserved repair space?
I ran some tests.
No file system problems.
I also benchmarked file system performance, and I think the gain is worth the risk.
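Roughly this kind of sequential read/write test is what I mean by benchmarking; the path, file size and block size are arbitrary example values, and the read pass may be served from the OS cache if the file fits in RAM:

```python
# Simple sequential write/read throughput test; values are examples only.
import os
import time

TEST_FILE = "D:/data/raid_bench.tmp"   # assumed path on the data partition
SIZE_MB = 256                           # total amount written
BLOCK = 1 << 20                         # 1 MB per write/read

buf = os.urandom(BLOCK)

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())               # make sure the data really hit the disks
write_mb_s = SIZE_MB / (time.time() - start)

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(BLOCK):
        pass
read_mb_s = SIZE_MB / (time.time() - start)

os.remove(TEST_FILE)
print(f"write: {write_mb_s:.1f} MB/s, read: {read_mb_s:.1f} MB/s")
```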
Dedicated hardware is for requirements that aren't like mine, or at least that's what I think. I don't need RAID 5, nor can I afford the price of the board and of the extra HDDs needed to go to RAID 10 (that mode is also supported by my motherboard).
So I'll do backups.