[LUAU] Intel Doubles Down on Linux

Jim Thompson jim at netgate.com
Wed Jul 20 22:41:51 PDT 2005


On Jul 20, 2005, at 5:46 PM, Tim Newsham wrote:

>> What makes you think it's not Intel-dominated now?    Show of  
>> hands, please: how many in the audience here
>> run Linux on anything other than an x86 processor?
>>
>
> I have a sparc running BSD and solaris (and more off topic, a  
> parisc running hpux), does this count?  No, they're not in wide  
> use.. the x86 boxen plus vmware is so much more useful :)

Yes, it counts.

>
>> Oh phleze.... X must die.
>>
>
> Hear!  Hear!

I used X in the dark ages, and it was better than a DECwriter.   
Better yet, it took up less space on my desk than multiple glass terminals.

I remember the delight when I left X behind, with its ugliness.  I  
don't want to configure every last aspect of the UI so that I can  
work hard to make it look even uglier than the apparently color-blind  
author of the program chose.  I just want to be able to print.  Or  
pick a font.  Or have two programs work the same way.  Cut buffer,  
clipboard, WTF?!

Now, I'm so very happy to be on an OSX box, where it all seems to  
mostly Just Work, without the agony of twiddling with everything to  
make the system barely functional.

Die, X, die.  Now.  BitBlt on wheels is so last century.  I don't  
want fully generalized mediocrity.  Just have it not suck.

>> Seriously, if linux had managed to carry gnome onto raw hardware,  
>> rather than surfing the packets through an "X server", then they  
>> might have had something.   Better, if *nix had aligned around  
>> something like NeWS, then Windows would seem completely creaky in  
>> the GUI department.
>>
>
> Plan 9 got it right.  Provide a generalized system for resource  
> access (including devices), allow it to be accessed over the  
> network, and provide a device for performing graphics operations.  
> Then it's just a matter of writing a simple graphics interface and  
> it can magically be used remotely (and even recursively) wherever  
> you want it.  All at minuscule sizes (by comparison).  Here's to  
> good design.
>
>
>> Any computer architecture that needs "anti-virus" software has  
>> failed.
>>
>
> I'm sorry.  I have to take issue with this.  The need for anti- 
> virus software doesn't prove anything other than popularity.  There  
> is no existing security technology that can prevent viruses.   
> Abstinence is the only solution.

May I suggest that you (re)read the US government's "rainbow  
books" (the TCSEC and its companion volumes), and re-familiarize  
yourself with the Biba security model?

The TCSEC has, as a central theme, the extremely strong notion of a  
Trusted Computing Base, or TCB (i.e., the implementation of the  
Reference Monitor concept).  In essence, the TCB is the central  
policy enforcement  mechanism for the computer system, mediating the  
actions of all system users and user processes.  Among the important  
characteristics of the TCB is that it be always invoked (i.e.,  
unbypassable, mediates each and every access) and self-protecting  
(i.e., cannot be modified by user code).  The  consequence of  
requiring architectures that provide such mechanisms is to limit the  
ability of hostile code to subvert the TCB.  Beginning at the C1
level of trust, fundamental protection mechanisms are required that  
provide protection of the system programs and data from unprivileged
users.  Many existing systems (e.g., PCs running DOS) lack even these  
basic protections required at C1, thus allowing a virus executed by  
any user to infect any part of the system, even those most basic to  
system operation and integrity.  WinXP goes further: by default it  
logs users in as something a lot like 'root' on a *nix box, putting  
the entire system at risk.
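To make the "always invoked, self-protecting" property concrete, here is a toy sketch (my own illustration, not anything from the TCSEC) of the reference monitor idea: every access by every subject is funneled through a single check that user code cannot bypass or modify.

```python
class ReferenceMonitor:
    """Toy reference monitor: mediates each and every access."""

    def __init__(self):
        # policy: (subject, object) -> set of permitted operations
        self._policy = {}

    def grant(self, subject, obj, op):
        self._policy.setdefault((subject, obj), set()).add(op)

    def check(self, subject, obj, op):
        # Unbypassable by convention of this sketch: no code path
        # reaches an object except through this check.  Default deny.
        return op in self._policy.get((subject, obj), set())


monitor = ReferenceMonitor()
monitor.grant("alice", "/etc/passwd", "read")

assert monitor.check("alice", "/etc/passwd", "read")       # permitted
assert not monitor.check("alice", "/etc/passwd", "write")  # denied
assert not monitor.check("virus", "/etc/passwd", "write")  # hostile code denied
```

In a real TCB this check lives below user code (in the kernel, enforced by hardware protection rings), which is exactly what DOS-era systems lacked.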

Commencing with the B2 level of trust, I expect that there will be no  
fundamental design flaws that allow the security mechanisms in the  
TCB to be circumvented.  Thus, in the absence of penetration paths, a  
virus would be limited to attacking users on an individual basis.   
This means that the rate at which it could propagate would be  
reduced, as would the damage it could inflict.

You could argue that a virus capable of infecting each and every user  
in the system (one that was present in the text editor, for instance)  
would be reasonably effective at accomplishing some missions (e.g.,  
denial of service).  Thus, the value of an intact TCB in the face of  
an otherwise completely infected user population is moot.  However,  
it is still true that a strong and self-protecting TCB, at a minimum,  
forces a virus to infect users one at a time.  It can also prevent  
some forms of attack (see Mandatory Access Control, below), and  
assure the existence and protection of the audit data by which  
viruses may be detected and traced.

In fact, a strong TCB represents the central protection mechanism  
that a  virus must overcome in order to infect the text editor in the  
first place.

Mandatory Access Control provides those mechanisms that enforce  
corporate policy dealing with the sharing of data.  Examples of such  
policies would be: "only members of the payroll staff may read or  
change payroll data," and "classified data may only be accessed by  
those having the appropriate clearances."  Beginning at the B1 level,  
the TCSEC requires computer systems to be capable of enforcing MAC as  
well as DAC.  That is, the system must be able to enforce those more  
formal rules dealing with either, or both, of levels of sensitivity  
(e.g., the DoD classification scheme) and categories of information  
(e.g., payroll, medical, R&D, corporate planning).  Thus, the ability  
of a user to access and manipulate data is based upon the comparison  
of the attributes of users (e.g., "member of payroll department,"  
"member of R&D staff," "management," or "clearance level") with the  
attributes of the data to be accessed (e.g., payroll data, R&D data,  
classification level).  Because it is required that the TCB control  
and protect these attribute designators ("labels"), they constitute a  
"hard barrier" for a virus, effectively limiting the scope of what it  
may do; in a properly designed and implemented system a virus would  
be unable to effect any changes to the labels.  This means, for  
instance, that a virus that is being executed by someone in the  
PAYROLL department would be limited to doing damage strictly within  
the set of data that is labeled accordingly.  It would have the  
potential to modify or destroy PAYROLL data, but not access R&D or  
MEDICAL data.
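The comparison of user attributes against data attributes is just a dominance check over (level, categories) pairs.  A minimal sketch, with made-up levels and categories for illustration:

```python
# Hypothetical label lattice: a subject may access an object only if the
# subject's label dominates the object's label -- sensitivity level at
# least as high AND category set a superset.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2}

def dominates(subject_label, object_label):
    s_level, s_cats = subject_label
    o_level, o_cats = object_label
    return LEVELS[s_level] >= LEVELS[o_level] and s_cats >= o_cats

payroll_clerk = ("CONFIDENTIAL", {"PAYROLL"})
payroll_data  = ("CONFIDENTIAL", {"PAYROLL"})
rnd_data      = ("CONFIDENTIAL", {"R&D"})

assert dominates(payroll_clerk, payroll_data)   # payroll staff: allowed
assert not dominates(payroll_clerk, rnd_data)   # R&D category: denied
```

The point of the "hard barrier" is that these labels live in the TCB: a virus running as the payroll clerk executes with the clerk's label and cannot change it, so the category check fails no matter what the virus does.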

Additionally, a virus could not change any labels, which means that  
it is unable to prevent PAYROLL data from being passed to anyone who  
is not a member of the payroll staff.  Likewise, a virus could not  
cause "SECRET" data to be downgraded.  In short, MAC is an extremely  
strong  mechanism, which prevents any process, including a virus,  
from making properly labeled information available to users who are  
not authorized for the information.  Systems that achieve TCSEC  
levels of B2 or greater essentially guarantee that information will  
not be "compromised," i.e., no malicious code can violate the  
restrictions implied by the labels.

It needs to be noted that the way in which mandatory controls are  
typically used is to prevent compromise, which is to say that the  
emphasis is on preventing "high" data from being written into a  
"low" file (the Bell-LaPadula *-property).  This does not, in  
itself, prohibit viruses from propagating, either via a "low" user  
writing into a "high" file, or a "high" user importing software from  
a "low" file.  However, it should be noted further that the  
mandatory controls provide the opportunity for implementing similar  
controls for writing (or importation) as for reading.  Such controls  
are usually seen as implementing mandatory integrity policies (this  
is the Biba model), such that the ability to modify files is based  
upon a set of integrity labels, analogous to the classification  
labels used to regulate the reading of data.  Some systems exist  
(e.g., the Honeywell SCOMP) that have implemented such mechanisms.
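The two write rules run in opposite directions, which is easy to see side by side.  A toy sketch (my own illustration): the confidentiality rule forbids writing "high" data into a "low" object, while the integrity rule forbids a "low"-integrity subject from writing into a "high"-integrity object.

```python
LEVELS = {"LOW": 0, "HIGH": 1}

def blp_can_write(subject_secrecy, object_secrecy):
    # Bell-LaPadula *-property: write only at your level or above,
    # so secret data can never leak downward (no write-down).
    return LEVELS[object_secrecy] >= LEVELS[subject_secrecy]

def biba_can_write(subject_integrity, object_integrity):
    # Biba integrity: write only at your level or below, so untrusted
    # code can never taint trusted files (no write-up).
    return LEVELS[object_integrity] <= LEVELS[subject_integrity]

assert not blp_can_write("HIGH", "LOW")   # confidentiality: blocked
assert blp_can_write("LOW", "HIGH")       # BLP alone permits writing up...
assert not biba_can_write("LOW", "HIGH")  # ...Biba blocks exactly that path,
                                          # which is how a virus propagates
```

This is why confidentiality labels alone don't stop viruses: BLP happily lets a "low" subject write into a "high" file, and it takes the Biba dual to close that path.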

It may be argued that viruses present no new technical challenges.   
The attacks they carry out are the attacks that have been postulated  
since the advent of time-sharing.  However, the intellectual process  
is such that one determines a threat, or attack  scenario, and then  
develops specific countermeasures.  Thus, the classical approach has  
led us to consider attacks and develop responses on an individual  
basis.  A virus not only propagates, but may also carry out any or  
all known attacks, thus potentially presenting us with a universal  
set of attacks in one set of hostile code.

Clearly, there are no universal cures; no single set of procedures  
and technical measures guaranteed to stop any and all possible virus  
attacks.  However, this is not different from any other everyday  
security situation.  Specific mechanisms tend to be designed to  
combat  specific dangers, in the same way that vaccines are developed  
to combat specific diseases.  These preventive security measures are  
intended to raise the cost of attacks, or to make it less likely that  
a specific class of attack will be successful.  Similarly for  
viruses.  While  viruses can exploit any and all flaws in our  
computer systems and networks, they also tend to be classes of  
attacks with which we are already familiar.

Given this, while your concern for our continued vulnerability to  
virus attacks is valid, a dispassionate analysis shows that our  
previous experience in computer security is relevant - the protective  
measures and technology we have developed are directly applicable,  
and provide a good baseline for making headway against these attacks.

All this said, what is truly revolutionary about viruses is that they  
may change the way in which we will have to view the processing and  
communications support available to us, in the same way that "letter  
bombs" would cause us to radically change the way we view the postal  
system, i.e., from beneficial and useful to hostile and potentially  
dangerous.  Where we have previously put great confidence in our  
computing resources ("If the computer said it, it must be correct"),  
we will now have to consider those resources as potentially hostile.

It is here, if anywhere, that we may find some solace in operating  
systems such as OSX, Linux and BSD.  Not because of their (lack of)  
mainstream popularity, but because they are not chock-full of legacy  
security issues, all of which not only leave the barn door open to  
potential infection, but also remain unfixed, because fixing them  
would wreak havoc in the installed base.

>> And all of this in the service of writing documents, (typically in  
>> some proprietary binary format (Word)), reading email, and surfing  
>> the web.
>>
>
> Jim, please don't stifle Bill's ability to innovate.  ;-)

We're (all) waiting for that to start.

jim





More information about the LUAU mailing list