[luau] No hard drive, only compact flash card

MonMotha monmotha at indy.rr.com
Sun Jul 6 13:29:00 PDT 2003


Matthew John Darnell wrote:
> Aloha,
> 
> Does anyone have any experience with booting and running a full Linux
> server install from a 1.0GB CompactFlash Card or similar?
> 
> By full server install I mean, apache, sendmail, mysql, gc++, etc.  No X
> needed, only command line.
> 
> Seems like it would be possible, 500MB for the OS and 50MB for the apps.
> 
> I wonder how fast/slow they are for access compared to a hard drive.
> 
> I see 1.0GB card for $299 retail, they will only be getting
> cheaper/faster/higher density.
> 
> Aloha,
> Matt
>

CF cards are *VERY* slow compared to hard drives, especially on writes.  My 
little 32MB cards can manage about 1-4MB/sec on reads, but only about 
100-300kB/sec on writes!  This is *REALLY* slow.  You will NOT want to even THINK 
about swapping to it.  In other words, make sure you have enough RAM, because 
there won't be any swap.  RAM is cheap these days, so this shouldn't be a 
problem.  However, last time I checked, distros like Red Hat complain a lot if 
you don't set up swap for them (I think Red Hat used to refuse to install 
outright in that situation?)
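If you want real numbers for your own card instead of my anecdotes, a crude
dd-based check is enough to show the read/write asymmetry.  This is just a
sketch; CF_MOUNT is an assumption (it defaults to /tmp so the script runs
anywhere, but point it at the card's mount point to measure the card):

```shell
#!/bin/sh
# Crude sequential read/write throughput check.
# CF_MOUNT is an assumed variable -- set it to where the card is mounted.
CF_MOUNT=${CF_MOUNT:-/tmp}
TESTFILE="$CF_MOUNT/ddtest.$$"

echo "write:"
# 8MB of zeros; GNU dd prints a bytes/sec summary when it finishes.
time dd if=/dev/zero of="$TESTFILE" bs=1048576 count=8

echo "read:"
time dd if="$TESTFILE" of=/dev/null bs=1048576

rm -f "$TESTFILE"
```

Note that the read number will be inflated by the buffer cache unless you
unmount and remount the card between the write and the read.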

People tend to overexaggerate the erase-cycle limitations of flash.  CF cards 
usually do wear leveling to prevent the same sectors from being used over and 
over, and when a sector has reached its maximum use count, it is simply retired 
and remapped (like bad sectors on IDE hard drives).  The entire card isn't 
useless.  If you're really concerned about this, you can get nicer flash cards 
that present themselves as raw flash rather than ATA flash, and run a real flash 
filesystem like jffs2 on them.  jffs2 includes on-the-fly compression (which I 
think can be disabled, but it may actually help read/write speed in this case), 
plus all the bad block handling and wear leveling you could need.
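For the raw-flash route, the usual shape of the setup looks something like the
following -- the erase block size and the mtdblock number are assumptions about
your particular chip, so check them against your hardware:

```shell
# Build a jffs2 image from a directory tree on the build host.
# -e must match the flash chip's erase block size (0x20000 = 128kB
# is common, but verify for your part).
mkfs.jffs2 -r /path/to/rootdir -e 0x20000 -o root.jffs2

# On the target, the kernel's MTD layer exposes the raw chip as
# /dev/mtdblockN; mount it with the jffs2 filesystem type:
mount -t jffs2 /dev/mtdblock0 /mnt/flash
```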

However, because writes are so slow, I'd recommend keeping really dynamic 
things like /tmp in a ramdisk (use tmpfs; it takes up only as much RAM as its 
current contents need).  You might also want to do something with /var, like 
unpacking it to a ramdisk at startup and tarring it back to CF at shutdown -- 
though of course this makes unclean shutdowns REALLY bad.  Or you could just 
skip logging to /var/log entirely and use a ring buffer, like busybox's 
syslogd does.
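A sketch of that setup -- the sizes and the /var.tar.gz location are made up
for illustration, and the mount/tar halves would live in your init and
shutdown scripts:

```shell
# /etc/fstab line for /tmp (tmpfs only consumes RAM for what's
# actually in it):
#   tmpfs  /tmp  tmpfs  defaults,size=16m  0  0

# Boot-time half of the /var trick (init-script fragment):
mount -t tmpfs -o size=8m tmpfs /var
tar -xzpf /var.tar.gz -C /

# Shutdown half -- anything written since the last clean shutdown
# is lost if the box dies uncleanly:
tar -czpf /var.tar.gz -C / var
```

For the ring-buffer alternative, busybox's syslogd takes a -C flag to log into
a circular buffer in RAM instead of a file, and you read it back with the
companion logread applet.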

I'm still curious whether even 500MB would be needed for "the os".  You seem to 
be used to very bloated desktop OSes (like Red Hat) that are designed to have 
everything abstracted two or three times (remember, you can always fix the 
problem by adding another layer of indirection).  I will say that I have "the 
os" in well under 4MB, where "the os" is defined as the kernel, core apps like 
the stuff in /bin and /sbin, and libraries like glibc in /lib; this does not 
include /usr, of course.  Again, you can save a fair amount on smaller systems 
by playing tricks with smaller versions of libraries, but on a system with full 
apps like mysql and gcc it won't be worth it (as I think gcc completely and 
utterly requires glibc).
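An easy sanity check on any running box is to tally up what "the os" as
defined above actually occupies:

```shell
# Measure the pieces that make up "the os"; /usr (apps, toolchain,
# headers) is deliberately left out of this definition.
du -sh /bin /sbin /lib /etc
```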

Toolchains are big, but they're not that big.  I've seen full x86->ARM 
toolchains in about 50-70MB, but that has to include all the foreign libs. 
Here, those would be considered part of "the os" or "the apps", depending on 
their usage, since they're needed to run stuff locally anyway.  The static libs 
will sometimes pose problems because they tend to be rather large, but at least 
headers are usually pretty small :)

--MonMotha
