[iwar] [fc:Castles.Built.on.Sand:.Why.Software.is.Insecure]

From: Fred Cohen (fc@all.net)
Date: 2002-01-30 17:34:44



Castles Built on Sand: Why Software is Insecure
by Josh Ryder 
last updated January 30, 2002

We have all heard reports of vulnerabilities being discovered in various
software. But what actually makes one piece of software more or less secure
than its competitors? Theoretically, all software starts in the same place -
with the very first sketch on somebody's napkin over dinner. It grows from
there; the environment in which it is developed, who controls the project,
and, most importantly, who works on the project all contribute to the
outcome.
Unfortunately, the outcome is not always what the developers had in mind.
Many software programs are plagued by programming flaws that may lead to
security vulnerabilities. This article will offer a brief overview of some
of the factors that may contribute to insecure software.

What are the Costs of Insecure Software?

Before entering a discussion of the reasons behind insecure software, it may
be useful to put the discussion into the context of the practical
consequences: in other words, what are the real-world costs of insecure
software? Ostensibly, organizations invest considerable money in software
because it will make their operations run more smoothly and effectively.
However, the flaws that are inherent in many programs may negate many of the
benefits they offer. The following are several real-world costs associated
with having insecure software installed on your systems.

Maintenance 

If an organization has a network administrator worth his salt, he's probably
spending a reasonable amount of his time installing security patches on the
company's machines. Finding, researching, and deploying these so-called
"updates" is a time-consuming, often disheartening task. This is, of course,
to say nothing of the overtime that admins are forced to put in when new
malware is released that exploits a software vulnerability (a la CodeRed and
Outlook). Trying to stay on top of the latest problems can be a full-time
job in and of itself. Add to this the actual responsibility of deploying all
of these changes and it is easy to see how an IT department's request for
more personnel may be justified after all.

Lost Productivity 

When a piece of software is compromised at work, everyone suffers. In the
best case just one machine is affected - nothing more important than the
junior programmer's system. Of course, this means that whatever is dependent
upon this junior programmer cannot be accomplished immediately and must be
delayed until such time as they are able to complete their task. In the
worst case, a network-wide outage/infection occurs that can stop an entire
office in its tracks. When Nimda was released, whole companies were forced
to sit back in collective horror as their networks ground to a halt as the
infection spread. Those administrators fortunate enough to discover the
problem before their systems were infected often took heavy-handed measures
ranging from disabling the vulnerable program on all internal systems to
shutting off e-mail access to the entire company until the patches could be
distributed. 

Information Loss/Dissemination

Let's suppose for a minute that you are corresponding with some potential
clients over the cost of building an addition to their office. Now let's
also suppose that somewhere on your hard drive you have a document that
contains your internal cost estimate for the project -- a cost that is much
lower than that which you quoted to the potential client. What would happen
if that client somehow came by that file? I suspect that, at the very least,
they would be displeased, and at worst they would never use your company's
services again and would make sure that knowledge of your business practices
was widely disseminated.

Lost Business Through Compromised Communication

If you can't communicate, you can't work. If you are an architectural firm
that needs to send out drawings to a client on the other side of the globe,
but your network is unusable because of a new worm, you are out of
luck. Likewise, if all of your computers mail everyone in each of their
address books, far too much communication is occurring. And if all of those
e-mails just happen to have your company's payroll information attached...

Bandwidth 

Oh sweet, precious bandwidth. For those of us not lucky enough to have
access to an unmetered pipe, bandwidth can be a real concern. For home
users, a monthly bandwidth cap is often set on their connections; anywhere
from one to five gigabytes seems to be the norm. These quotas can be reached
with alarming speed if someone manages to exploit a hole in your computer's
security and install an ftp server. Or, perhaps a more relevant example
would (again) be CodeRed and Nimda. If you believe the statistics, CodeRed
cost in excess of $2.6 billion globally and Nimda another $590 million. And
that is no drop in the bucket.

Why is Software so Insecure?

Internet Explorer is one of many examples of insecure software. Some call
Internet Explorer the browser that made the Internet accessible to the
masses. Others call it an accident waiting to happen (again and again).
Let's think about where IE grew up. Started as a skunkworks project
alongside Windows 95, Internet Explorer was first released in 1995 as part
of the Internet Jumpstart Kit found in Microsoft Plus! for Windows 95. While
it was well integrated with the new operating system, few users adopted it,
as much more mature and feature-rich applications already existed. With the
release of IE 2.0 in 1996 it was obvious that MS was not abandoning this
software. Cross platform support was added for Macintosh users, and several
emerging technologies such as cookies, VRML and RealAudio were now
supported. Later that year IE 3.0 was released, and the browser wars truly
began. Finally MS had released a product that was capable of competing with
Netscape on even ground. Internet Explorer was no longer merely a Web
browser; it was the launch point for most of the user's Internet needs.
Users could read their e-mail, check newsgroups, view videos, listen to
music, and (believe it or not) browse the Web too.

Within nine days of its release, the very first exploit for IE 3.0 was
discovered and released to the public. The rest, as they say, is history.

So how could a company with the resources of Microsoft develop and release a
product that is so obviously flawed? The Microsoft campus contains some of
the most brilliant designers and programmers the world has to offer. Many of
the development practices present at MS are used throughout the industry
(which many would say contributes to the problem, but that is a different
topic altogether). Why, then, do even the imposing minds at Microsoft
seem incapable of writing software that can be trusted?

First of all, we should note that there is more than one way for software to
be insecure. Methods for exploiting and circumventing security in programs
are as varied as the applications they are attacking. Perhaps the
fundamental problem is that software is not necessarily designed and
constructed with security in mind. Until recently, security has been
something of an afterthought in the computer industry, both among vendors
and among customers and users. However, as the previous section pointed out,
it is increasingly obvious that insecurity is a very costly concern.
Apart from the obvious problem that security has not been a fundamental part
of the development process, there are other underlying problems with
software that add to insecurity. The following section discusses some of
these problems. 

Complexity and Integration of Programs

Today when a program is written, it cannot be assumed that only it and the
operating system will be running at any given time. The terms integration
and interconnectivity have been chanted like a mantra for several years now.
For those old enough to recall, applications used to be relatively
single-purpose. WordPerfect 5.1, arguably one of the best word processors in
existence, was only a word processor. It didn't have any notion of programs
other than itself and the operating system. It performed a specific task,
and performed it well. However, when Microsoft released their Office suite
the world was forever changed. Now not only could you create documents,
spreadsheets, and presentations using one application suite, you could
actually use these tools together. For example, if you had a spreadsheet in
Excel that you wanted to include in your report, all you had to do was copy
and paste it into your report document.

As time moved on, more and more components were either integrated with
one another, or were at least able to talk amongst themselves. Soon certain
options and functions were only available if other, sometimes unrelated,
services were enabled. If you wanted to share files across a network, you
had to enable the Microsoft file-sharing protocol. Along with this protocol,
usually unbeknownst to the user, several other features such as printer
sharing and remote registry management were also enabled. To put it another
way, the "package deal" was delivering more than the user expected and
possibly wanted. This makes the job of securing the system even more
difficult, as the user may not have an accurate picture of what is and is
not set up on the computer.

The code required to construct and run such sophisticated multi-application
software is not only highly complex, it is also huge. A single
program, like Microsoft Word, consists of millions of lines of code. It is
almost beyond the scope of human possibility to ask the software developers
to perform the comprehensive quality assessment necessary to ensure optimal
security. Even if it were possible, it would be very easy to overlook one or
two bugs; after all, even if considerable resources are made available for
quality assessment, there is still a finite time in which to conduct it.
Hackers, on the other hand, have endless hours to prod and probe for a
single vulnerability to exploit.

Need to Rush Software Out Without Adequate Testing

Part of the problem may be the "necessity" to roll out commercial software
by deadlines imposed by finance and marketing needs. Ask any software
developer what it's like to code during a crunch and you'll probably get an
earful of complaints. "They want us to do too much in too little time," "I
hope no one ever looks at this code again!" and "It's terrible, but it
works" are oft-repeated phrases. The Almighty "hack" -- a so-called elegant
solution to a programming problem, or, more usually, a short-sighted but
"clever" way of covering up flaws in the system -- becomes the required
method of development. Suddenly design and coding style are thrown out the
window in favor of down-and-dirty "do what works, we'll fix it later" coding.
Initially some of the more idealistic (and typically youthful) coders feel
that this sort of programming is wrong; this feeling usually passes quickly
under the tutelage of the more experienced team members.
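
To make this concrete, here is a hypothetical C sketch (invented for
illustration, not drawn from any particular product) of the kind of "do
what works" shortcut that crunch-time coding produces: copying input into a
fixed-size buffer without checking its length. The unsafe version compiles,
passes the demo, and ships -- and is a classic buffer overflow waiting to
be exploited; the bounded version is the "we'll fix it later" work that
rarely happens.

    #include <stdio.h>
    #include <string.h>

    /* The crunch-time hack: assumes no name is ever 32 characters or
       longer, and never checks. Works fine on the happy path. */
    void greet_unsafe(const char *name) {
        char buf[32];
        strcpy(buf, name);       /* overflows buf on long input */
        printf("Welcome, %s!\n", buf);
    }

    /* The deferred fix: bound the copy and terminate explicitly. */
    void greet_safe(const char *name) {
        char buf[32];
        strncpy(buf, name, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';   /* strncpy may not terminate */
        printf("Welcome, %s!\n", buf);
    }

    int main(void) {
        greet_unsafe("demo");    /* the input the demo happens to use */
        greet_safe("a name far longer than the 31 characters buf can hold");
        return 0;
    }

The difference is a few lines of code; the difference in consequences is a
crashed test machine versus a remotely exploitable hole in a shipped
product.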

Under these time pressures, errors that would normally be obvious and easy to
catch slip through because inadequate attention is given to reviewing and
testing the code. This is not to say that flaws are not introduced into the
system at earlier stages in the development cycle, but I suspect that most
of the big errors are made during this time of accelerated development. Poor
program or feature specifications are compounded by designers who can only
assume that the specification is complete, and then further by programmers
who each have their own unique idea of what the design is really calling
for. 

With added features comes the added responsibility of ensuring that each of
those features works both by itself and with the other components of the
system. If you have a subsystem of 15 components, each of which relies on
the others at some point, adding another component requires you to test not
only the newest piece, but all of the other connected pieces. This can be a
very expensive and time consuming process; it is a process that is often cut
short, or is poorly done in the interests of shipping on time or reducing
costs. 
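
A rough back-of-the-envelope sketch in C, assuming for simplicity that
every component can interact with every other, shows why this burden grows
so quickly: n mutually dependent components have n*(n-1)/2 pairwise
interactions, so the 15-component subsystem above already implies 105
interactions, and adding the 16th piece pushes that to 120.

    #include <stdio.h>

    /* Pairwise interactions among n mutually dependent components:
       n * (n - 1) / 2. A crude lower bound on the integration testing
       needed; three-way and higher interactions grow faster still. */
    int main(void) {
        for (int n = 2; n <= 16; n++)
            printf("%2d components -> %3d pairwise interactions\n",
                   n, n * (n - 1) / 2);
        return 0;
    }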

Closed Source Patches

An argument that you hear all the time from the Open Source community is:
How can we trust your software if we cannot see what the code is doing?
Closed source software is often patched by closed source patches --
essentially you are adding even more unknowns to the security puzzle every
time you patch something. When Windows 2000 Service Pack 2 was released, one
of the biggest concerns was that suddenly MS Exchange 2000 Server would no
longer run. This was not documented, and it was a flaw discovered time and
time again by hapless users who, perhaps naively, assumed that the bug fixes
in Service Pack 2 would not adversely affect the way that other applications
on their systems ran. In fact, even after taking a look at the full list of
documented changes found in SP2, one would not be able to guess or deduce
that any problems would occur with MS Exchange 2000.

Poor Training 

Many software developers do not understand the risks that they are exposing
their users to by creating poorly written code. While many explanations are
possible, at the root of the problem we can usually find that poor training
is the culprit. Either as students the young software developers were not
adequately instructed in good coding styles and practices, or they simply
chose to ignore what advice was given to them. For those fledgling software
developers who actually attended some post-secondary education in a relevant
field, many of the assignments and problems that they were asked to solve
were one-shot standalone entities, meaning that once a problem was solved by
whatever code they had written, they would "throw away" their work and never
look at it again. Why spend extra effort on making your code clean and safe
when the quick and dirty solution still got you full marks?

The write-debug-rewrite cycle is common practice among students and
junior coders; that is to say that rather than spending time on design, they
will code as much as they can, debug the software when it doesn't work, and
continue rewriting and debugging until such time as the code does what they
want it to. Once the goal has been accomplished, it is left alone and the
next problem is attacked. Audits for proper memory management, coding style,
or compatibility with other code are not adequately conducted. If this trend
is not stopped early on, it will grow into a much larger problem until it
becomes standard operating practice.
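
As a hypothetical illustration of that cycle (invented for this article, in
C), consider a small exercise that reads lines and reports the longest one.
It prints the right answer, so by the write-debug-rewrite standard it is
"done" -- yet it leaks memory on every new longest line, never checks the
malloc() result, and never frees anything: exactly the kind of flaw a
functional test will not catch but a memory-management audit would.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char line[256];
        char *longest = NULL;
        while (fgets(line, sizeof(line), stdin)) {
            if (longest == NULL || strlen(line) > strlen(longest)) {
                longest = malloc(strlen(line) + 1); /* old buffer leaked,
                                                       result unchecked */
                strcpy(longest, line);
            }
        }
        if (longest != NULL)
            printf("longest line: %s", longest);
        return 0;   /* nothing is ever freed */
    }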

Conclusion 

Underlying all the causes of insecure software is a simple philosophical
flaw: software is not designed with security in mind. However, there may be
a light at the end of the tunnel. Recently, Bill Gates issued a company-wide
memo to all Microsoft employees dictating that from this moment forward,
security will be a programming priority. In the memo, entitled "Trustworthy
Computing," Gates states: "Trustworthy Computing is the highest priority for
all the work we are doing. We must lead the industry to a whole new level of
Trustworthiness [sic] in computing...Eventually, our software should be so
fundamentally secure that customers never even worry about it...So now, when
we face a choice between adding features and resolving security issues, we
need to choose security" (Complete Text of the Bill Gates "Trustworthy
Computing" Memo). Whether or not this resolution will be realized is
impossible to tell. However, the fact that the memo was issued indicates the
growing awareness that software developers must place a much higher priority
on secure coding. Whether or not they do so remains to be seen. Until they
do, organizations that base their operations on bug-weakened software will
continue to be castles built on sand.
