To get anywhere near the theoretical max of Ethernet you are going to need
better hardware than SMC, Linksys, or D-Link. They do offer some switches
with gigabit ports, but you would need to look at the aggregate throughput of
the entire switch to see if it will work for you. I have done channel bonding
on Solaris (bonding 4 FE ports in a quad Ethernet card) connecting to a Cisco
switch instead of buying gigabit Ethernet (a rough Linux equivalent is
sketched at the bottom of this message). The problem was that it was very CPU
intensive.

Sounds like you need to build a small lab and start testing. What hardware
does LTSP recommend? Maybe you should look at gigabit Ethernet for your
server.

On the disk side, there is NO performance benefit to IDE RAID, and RAID 5
actually costs you some write performance. You are just getting more capacity
and some protection.

Dusty

-------------------------------------------------------
> 
> Thanks for the recommendations everyone.
> 
> Just one question...
> This will be configured with the Linux Terminal Server with 100Mbit ethernet
> in one port of the switch. Thin client computers will be in other ports.
> That will be about 60KB/sec of X data per client when idle, up to 700KB/sec
> during moderate activity.
> 
> Theoretical max for 100Mbit Ethernet: 100Mbit / 8 ~ 12.5MB/sec
> Theoretical max bandwidth requirement for 23 clients: 23 x 700KB/sec ~ 16.1MB/sec
> 
> Meaning I could be in trouble. To make matters worse, I doubt the NIC can
> reach 90% of the theoretical 12.5MB/sec. Anyone know if I can kludge my way
> around this problem with TWO NIC interfaces on the server, using two ports
> on the switch? For example, 192.168.0.1 and 192.168.0.2 are the LTSP
> interfaces, and a quick hack to the BOOTP server will round robin either as
> the LTSP server.
> 
> If this configuration works, then perhaps the Kingston 32-port managed
> switch would be a good deal in order to get more thin clients onto the
> server.
> 
> Warren Togami
> warren@togami.com
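
P.S. A rough sketch of what channel bonding looks like on Linux, since LTSP
runs there rather than on Solaris. This assumes the 2.4 kernel bonding
driver; the interface names, the address, and mode=0 (round robin) are
illustrative, so check Documentation/networking/bonding.txt in your kernel
tree. The switch ports also have to be grouped into a trunk (Cisco calls it
EtherChannel) for this to work at all.

    # /etc/modules.conf -- load the bonding driver at boot
    alias bond0 bonding
    options bond0 mode=0 miimon=100

    # bring up the bond, then enslave the two physical NICs
    ifconfig bond0 192.168.0.1 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1

Keep in mind this is the same kind of setup that was CPU intensive on
Solaris, so watch the server's load average while you test it.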
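
P.P.S. On the two-NIC round robin idea: before hacking the BOOTP server, a
cruder thing to try in the lab is to split the clients statically between the
two interfaces in dhcpd.conf. This sketch assumes ISC dhcpd and the stock
LTSP root path; the MAC addresses and client IPs are made up:

    # /etc/dhcpd.conf -- half the clients mount from .1, the rest from .2
    host ws001 {
        hardware ethernet 00:A0:24:00:00:01;             # made-up MAC
        fixed-address 192.168.0.101;
        option root-path "192.168.0.1:/opt/ltsp/i386";   # first server NIC
    }
    host ws002 {
        hardware ethernet 00:A0:24:00:00:02;             # made-up MAC
        fixed-address 192.168.0.102;
        option root-path "192.168.0.2:/opt/ltsp/i386";   # second server NIC
    }

One caveat: with both NICs on the same subnet, the X traffic going back out
of the server follows the routing table, not the address a client booted
from, so it may all pile onto one interface anyway. Measure it in the lab
before trusting it.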