Great Circle Associates Firewalls
(April 1994)
 


Subject: Re: system()
From: Marcus J Ranum <mjr@tis.com>
Date: Tue, 5 Apr 94 22:24:42 EDT
To: dorian@cobalt.house.gov, earle@uunet.uu.net
Cc: firewalls@GreatCircle.COM

This is a bit "off topic" for the general issue of firewalls, but it
raises a fundamental question about how you develop secure software,
so I'll soapbox a little bit.

>Do you plan on stripping out every ``dangerous'' thing as it comes along?
>Are you going to disable popen() too?  Why not just fix the problem where
>it exists instead of disabling every single problem that arises?  Pretty
>soon we'll be left with nothing, then everyone loses.

	There are 2 ways to design secure software.

	The first way is to identify everything that's dangerous, and to
eliminate it, until you have something that you trust.
	The second way is to design it such that, given a few basic
assumptions, it's not going to be dangerous at all.

	Guess which one works better.
	Guess which one requires more skill, and more familiarity with
		the systems model under which the software is being built.

	Generally, what happens with the first approach is that you
wind up with something that's full of security holes, and then
you've got to fix them. This can
sometimes consume more total effort than doing it right in the first
place. Sendmail leaps to mind as an example of a case where "penetrate
and patch" is a very, very expensive approach. [Anyone who complains,
"but sendmail is free!" simply doesn't get it]

	With respect to finding holes in programs, you tend to find
after a while that some things are Bad Ideas. Bad Ideas are stuff
like:

	-> having privileged programs that execute other programs based
		on what someone else tells them to do (for example,
		using system() or popen() in a privileged program --
		see the sketch after this list)
	-> having privileged programs when sensibly designed unprivileged
		programs will do
	-> having large, complex privileged programs instead of small
		simple privileged programs
	-> using system access software that relies on "privileged ports"
	-> having privileged programs that write to disk files based
		on what someone tells them to do
	-> having programs that rely on complex built-in permissions
		systems instead of relying on the operating system's
		built-in permissions
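
	As a concrete illustration of the first item (and of letting the
operating system's own permission checks do the work), here is a
minimal sketch in C. The setuid print helper, the /usr/ucb/lpr path,
and the buffer size are all invented for illustration; this is not
code from any particular program.

	/*
	 * Hypothetical setuid "print a file for the user" helper.
	 * Bad Idea #1: building a shell command out of something the
	 * user controls.  A "filename" like   foo; rm -rf $HOME
	 * gets handed to /bin/sh and run -- with privilege.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>
	#include <sys/types.h>
	#include <sys/wait.h>

	void print_file_badly(const char *userfile)
	{
		char cmd[1024];

		sprintf(cmd, "/usr/ucb/lpr %s", userfile); /* unquoted, unchecked */
		system(cmd);                               /* spawns a whole shell */
	}

	/*
	 * Safer shape: shed the privilege first, then hand the kernel an
	 * exact argv.  There is no shell in the middle to reinterpret the
	 * string, and the OS's permission checks apply to the real user,
	 * not to root.  (Error checking omitted for brevity.)
	 */
	void print_file_better(const char *userfile)
	{
		pid_t pid = fork();

		if (pid == 0) {
			setgid(getgid());       /* give up the privilege */
			setuid(getuid());
			execl("/usr/ucb/lpr", "lpr", userfile, (char *)NULL);
			_exit(127);             /* exec failed */
		} else if (pid > 0) {
			waitpid(pid, (int *)NULL, 0);
		}
	}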

	People who have been around UNIX security and who have seen
(in nauseating profusion) examples of software which is misdesigned
from a security standpoint will probably agree that bugs tend to
be instances of categories of oversight like the ones I list above.
I tend to avoid those kinds of things because I've seen folks who
are much smarter than I am write insecure software by ignoring
these basics.

	That's one reason why (not to put too fine a point on it)
when I looked at the xMosaic code and saw that it used:
	sprintf(buf,"mv %s %s",oldname,newname);
	system(buf);
	to rename files, my heart sank. Not because that code is
necessarily buggy in itself, but because it indicates a lack of
awareness of how to program under UNIX that makes me inclined to
suspect that subtler design issues like security weren't handled
with any more expertise.
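
	For the curious, here is roughly what the two approaches look
like side by side. This is a sketch, not the actual xMosaic code; the
buffer size and the error handling are mine.

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <errno.h>

	/*
	 * What the shell-out idiom amounts to.  If oldname or newname
	 * can be influenced by anyone else, a value like  foo; rm -rf $HOME
	 * goes straight to /bin/sh, which happily runs the embedded command.
	 */
	void rename_badly(const char *oldname, const char *newname)
	{
		char buf[1024];

		sprintf(buf, "mv %s %s", oldname, newname); /* unquoted, unchecked */
		system(buf);                                /* spawns a whole shell */
	}

	/*
	 * The UNIX way: ask the kernel directly.  No shell, no word
	 * splitting, no metacharacter expansion, and the caller's own
	 * permissions apply.
	 */
	int rename_sanely(const char *oldname, const char *newname)
	{
		if (rename(oldname, newname) < 0) {
			fprintf(stderr, "rename %s to %s: %s\n",
			    oldname, newname, strerror(errno));
			return -1;
		}
		return 0;
	}

	(The one thing mv does that rename(2) doesn't is move files
across filesystems; if you need that, copy and unlink -- don't hand
untrusted strings to a shell.)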

>Are you going to disable popen() too?  Why not just fix the problem where
>it exists instead of disabling every single problem that arises?  Pretty
>soon we'll be left with nothing, then everyone loses.

	The problem of "pretty soon we'll be left with nothing" is a
real one. The tradeoff is that you've got to decide which of these
two assumptions your security model rests on:

	What we don't know can hurt us.
	What we don't know doesn't bother us until we know about it.

	Believe it or not, most of the software people run is built
with the latter assumption. It's only when someone spills the beans
about there being a hole you can toss a moose through that everyone
gets upset.

	Obviously, we have to work with what we've got, which is
why lots of us use stuff that's substandard from a security
perspective (hey, we use NFS internally, too). Because you need it
to get your job done.

	The "what we don't know doesn't bother us" approach means
that the software security model becomes "penetrate and patch" --
whenever a new bug is found, you install a new version of the
software. Yeech. Aren't you tired of replacing your version of
sendmail?? Vendors certainly are!!! That's why nothing ever gets
fixed fast enough.

	A better approach is to design stuff *right* the first
time, and to have some degree of assurance that it won't be able
to hurt you.

	"Pretty soon we'll be left with nothing" really is the problem.
There's so much seriously broken code out there that if we
threw it all away, most operating systems would be unusable.
Throwing it all away's not an option -- but we need to do ANYTHING
we can to encourage a shift in mindset away from "see, it works,
it's cool, we'll worry about security later" toward "see,
it works, it's cool, and here's how it takes advantage of
the way your system enforces permissions to ensure that no
matter what ugly stuff it does, it can't hurt you."

mjr.

