Warren, I was doing a little research on the IDE vs SCSI RAID topic and I haven't found any hard numbers really comparing the two, but I did find this article about a guy running several IDE RAID arrays on Linux and one SCSI RAID array:

http://www.research.att.com/~gjm/linux/ide-raid.html
http://www.acc.umu.se/~sagge/scsi_ide/

I haven't read through his numbers very closely yet, but he seems happy with his IDE RAID arrays. I found several comparisons that said SCSI is much better than IDE in RAID configurations, but they did not offer any performance numbers.

With IDE you get a lot of disk space with okay performance. The performance problem is in the design of the bus: IDE was designed for one device per channel. You can put two devices on a channel, but they share it, and while one disk is working the other sits idle. SCSI, on the other hand, is designed for multiple disks per channel (up to 15 on a wide bus, though most SCSI controllers handle 7), and all of those disks can be active at the same time, unlike IDE. So if the performance of one IDE disk meets your requirements and you just need more space, IDE RAID is the way to go. If you really need blazing speed and incredible space, SCSI is the way to go. And IDE is much cheaper, unless you get an awesome deal.

As for the switch stuff, one of the cheap switches with a GIG port should work well for you. I don't know if I agree with your friend about possibly using a hub. A switch sends traffic only to the port it is destined for, which keeps excess traffic off your systems, whereas a hub sends every packet to every system and turns the whole network into one big collision domain. His reasoning is that the server's port on the switch has to send and receive all the traffic anyway. What he is forgetting is that with a switch, packets from the workstations only go to the server's port; on a hub they also get sent to every other workstation. Likewise, the packets the server sends out go to every workstation on a hub, where a switch would send them only to the port they are intended for. Does that make sense?

I was just looking in Data Comm Warehouse, and they have a D-Link switch with 22 10/100 ports and 2 GIG ports for $900 (model DES-3225G). Linksys also looks like they have a 24-port 10/100 switch with GIG capability. Worth looking into.

The bonding stuff I did in Solaris required the Cisco 5509 I was using to support it, and the performance hit and trouble made it not worth the headache. We eventually moved to GIG.

If you get a lab set up and want to test with a cheap switch, I could unplug my 16-port SMC 10/100 switch and bring it to your lab for a few hours of testing. I can't leave it there, because it keeps all the systems in my house connected. If you're interested, I could also bring three PPro systems with me to temporarily turn into clients for testing; if the switch isn't there, they serve no purpose.

Later,
Dusty

RAM drive would be awesome!!!!!!!
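P.S. If you want to see whether one IDE disk really meets your throughput needs, or how badly two disks on the same IDE channel step on each other compared to two on separate channels, something like the quick Python sketch below would give you rough numbers. It's off the top of my head and untested; the /dev/hda and /dev/hdb paths in the comments are just examples, so point it at whatever disks you actually have, run it as root on an idle box, and remember the numbers will look optimistic if the data is already cached.

#!/usr/bin/env python
# Rough sequential-read throughput check. Not a real benchmark, just
# enough to see what a disk (or two disks at once) can actually do.

import sys
import threading
import time

CHUNK = 1024 * 1024          # read 1 MB at a time
TOTAL = 256 * 1024 * 1024    # read 256 MB from each device

def read_device(path, results, index):
    # Sequentially read TOTAL bytes from the device and record MB/sec.
    done = 0
    start = time.time()
    f = open(path, 'rb')
    try:
        while done < TOTAL:
            data = f.read(CHUNK)
            if not data:
                break
            done += len(data)
    finally:
        f.close()
    elapsed = time.time() - start
    results[index] = (path, done / elapsed / (1024.0 * 1024.0))

def main(paths):
    # One reader thread per device, all running at the same time, so you
    # can compare one disk alone against two disks sharing a channel.
    results = [None] * len(paths)
    threads = [threading.Thread(target=read_device, args=(p, results, i))
               for i, p in enumerate(paths)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for entry in results:
        if entry is not None:
            path, rate = entry
            print('%s: %.1f MB/sec' % (path, rate))

if __name__ == '__main__':
    # e.g.  python diskcheck.py /dev/hda
    #       python diskcheck.py /dev/hda /dev/hdb
    main(sys.argv[1:])

Run it once with just the first disk, then with both disks on the same cable, then with the second disk moved to the other channel; the difference between the last two runs is the shared-channel penalty I was talking about.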
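P.P.S. Once a lab switch is hooked up, a dumb TCP blast test along the lines of the Python sketch below would tell you pretty quickly whether a cheap switch holds up when several clients hammer the server at once. This is just a sketch (ttcp or netperf would do the same job better), and the port number and transfer size are arbitrary. Run the server piece on the LTSP box, kick off the client on a few workstations at the same time, and compare the per-client Mbit/sec figures.

#!/usr/bin/env python
# Dumb TCP throughput test: one receiver, one or more senders.
# Run "python nettest.py server" on the server box, then
# "python nettest.py client <server-ip>" on each workstation.

import socket
import sys
import threading
import time

PORT = 5001                   # arbitrary test port
CHUNK = 64 * 1024             # send/receive 64 KB at a time
TOTAL = 100 * 1024 * 1024     # each client pushes 100 MB

def handle(conn, addr):
    # Count bytes from one client and report Mbit/sec when it finishes.
    received = 0
    start = time.time()
    while 1:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.time() - start
    conn.close()
    print('%d bytes from %s: %.1f Mbit/sec'
          % (received, addr[0], received * 8 / elapsed / 1000000.0))

def server():
    # Accept any number of clients, one thread each.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('', PORT))
    s.listen(5)
    while 1:
        conn, addr = s.accept()
        threading.Thread(target=handle, args=(conn, addr)).start()

def client(host):
    # Blast TOTAL bytes of junk at the server as fast as possible.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, PORT))
    chunk = b'x' * CHUNK
    sent = 0
    while sent < TOTAL:
        s.sendall(chunk)
        sent += CHUNK
    s.close()

if __name__ == '__main__':
    if len(sys.argv) > 1 and sys.argv[1] == 'server':
        server()
    else:
        client(sys.argv[2])

If the per-client numbers fall apart as soon as three or four workstations send at the same time, that is the cheap-switch "crap out" Brian was warning about.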
------------------------------------------------

> > To get anywhere near the theoretical max of ethernet you are going to need better hardware than SMC, Linksys, or D-link. They do offer some switches with GIG ports, so you would need to look at the max throughput of the entire switch to see if it will work for you.

> Brian Chee at UH said about the same thing. He actually suggested buying HUBS instead of switches if all traffic is going to a single port, because cheap switches tend to "crap out" under that kind of load.

> > I have done channel bonding on Solaris (bonding 4 FE ports in a quad eth card) connecting to Cisco instead of buying GIG ETH. The problem was it was very CPU intensive. Sounds like you need to build a small lab and start testing. What hardware does LTSP recommend? Maybe you should look at GIG ETH for your server.

> Yeah. I'll determine if I need GIG ETH after testing. I'm collecting a few donated computers right now for some initial testing, but I need another 15.

> > On the disk side, there is NO performance benefit of IDE RAID and some performance hit for RAID 5. You are just getting more size and some protection.
> >
> > Dusty

> Are you sure? IDE RAID 0 can easily hit 55-70 MB/sec throughput on two 7200 rpm disks. True that isn't redundant, but IDE RAID 5 (3 disk) can do about 45-55 MB/sec. IDE RAID lacks the onboard cache of *real* SCSI RAID controllers, and CPU usage is much higher, but I think the cost/performance tradeoff is worth it. Most of the time I'm looking at this 1GHz CPU Linux machine (with one disk), and the bottleneck always appears to be the disk clicking away, with 5% CPU usage. The LTSP servers will be dual 1GHz, so I'm sure it can handle the extra overhead just fine.

> With the cost savings over SCSI RAID, we could probably afford a Platypus battery-backed RAM drive. 350 MB/sec Reiserfs journal. Whoopass.