[iwar] [risks] Risks Digest 21.84 (fwd)

From: Fred Cohen (fc@all.net)
Date: 2002-01-05 18:48:13



Date: Fri, 28 Dec 2001 14:38:29 +0100
From: Paul van Keep <paul@sumatra.nl>
Subject: Peak time for Eurorisks

There was a veritable plethora of Euro-related gaffes just before the final
changeover.  A short overview of what has happened:

* A housing society in Eindhoven (The Netherlands) has sent out bills for
next month's rent in euros, but the amount on them is the same as last
month's rent, which was in guilders: roughly a 2.2x increase.  The society
has declared that anyone who paid the incorrect amount will be fully
refunded and will get a corrected bill later.

* A branch of ING bank in Blerick inadvertently inserted the wrong cassette
into an ATM, which began to give out euro bills instead of guilders.  This
was illegal before January 1st, 2002.  The bank has traced the bills, and
all but 10 euros have already been returned.

* Rabobank has made an error in its daily processing of 300,000 automatic
transfers.  Instead of transferring guilders, the transfers were made in
euros, again 2.2x what should have been transferred (a short sketch of this
kind of conversion slip appears after this list).  The bank hopes to have
corrected the mistake before 6pm tonight (due to the Euro changeover, no
banking activity will take place in any of the euro countries on Monday).

* (Just somewhat related:) Two companies thought they were being really
smart by putting euro bills in Christmas gifts for their employees.  They
have been ordered by the Dutch central bank to recover all those bills or
face charges.
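
For reference, the 2.2x factor in the first and third items comes from the
official fixed rate of 2.20371 guilders to the euro.  Below is a minimal C
sketch of the failure mode reported above, in which a guilder amount is
relabelled as euros instead of being converted; the function names and the
sample rent are illustrative, not taken from the reports.

  #include <stdio.h>

  #define NLG_PER_EUR 2.20371   /* official fixed guilder/euro rate */

  /* Correct conversion: divide the guilder amount by the fixed rate. */
  static double guilders_to_euros(double nlg) {
      return nlg / NLG_PER_EUR;
  }

  /* Reported failure mode: the number is kept and only the currency label
   * changes, overcharging by a factor of about 2.2. */
  static double relabel_as_euros(double nlg) {
      return nlg;   /* bug: no conversion performed */
  }

  int main(void) {
      double rent_nlg = 1000.00;   /* illustrative rent in guilders */
      printf("correct bill:  EUR %.2f\n", guilders_to_euros(rent_nlg));
      printf("reported bill: EUR %.2f (%.2fx too high)\n",
             relabel_as_euros(rent_nlg),
             relabel_as_euros(rent_nlg) / guilders_to_euros(rent_nlg));
      return 0;
  }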

------------------------------

Date: Thu, 3 Jan 2002 22:44:37 +0100
From: Paul van Keep <paul@sumatra.nl>
Subject: More Euro blues

The Euro troubles keep coming in. Even though banks have had four to five
years to prepare for the introduction of the Euro, things still go wrong.
Yesterday and today over 200 smaller Dutch post offices have had to remain
closed because of computer problems relating to the Euro changeover. It is
still unclear whether the situation will finally be resolved tomorrow.

------------------------------

Date: Tue, 01 Jan 2002 06:42:26 -0500
From: "Daniel P.B. Smith" <dpbsmith@bellatlantic.net>
Subject: Harvard admissions e-mail bounced by AOL's spam filters

According to today's Globe, AOL's spam filters rejected e-mail sent by
Harvard's admissions department to anxious applicants.  The interesting
thing is that "AOL officials could not explain" why their servers identified
these e-mail messages as spam.  No explanation, no responsibility,
apparently no indication of anything that Harvard could do to avoid the
problem in the future.  Just one of those things, like the weather.

Despite the jokes, USPS "snail mail" is highly reliable.  Those of us who
have used e-mail for years know that it is much less reliable; for example,
Verizon DSL's mail server slowed to a crawl for several months last year,
and during that period fewer than half of the e-mail messages I sent from
another account to test it were actually received.  Antispam filters
decrease this reliability further.

The facile name "e-mail" was helpful in the eighties as a characterization
of a form of electronic communication.  However, there is a risk that the
name may mislead people into assuming that it is comparable in reliability
to postal mail.

Let us hope that organizations do not begin to use e-mail for communications
more important than university admissions letters, in the name of "security"
(and cost reduction).

  [The AOL Harvard problem was noted by quite a few RISKS readers.  TNX]

------------------------------

Date: Wed, 26 Dec 2001 21:19:22 -0800
From: Henry Baker <hbaker1@pipeline.com>
Subject: "Buffer Overflow" security problems

I'm no fan of lawyers or litigation, but it's high time that someone defined
"buffer overflow" as being equal to "gross criminal negligence".

Unlike many other software problems, this problem has had a known cure since
at least PL/I in the 1960's, where it was called an "array bounds
exception".  In my early programming days, I spent quite a number of unpaid
overtime nights debugging "array bounds exceptions" from "core dumps" to
avoid the even worse problems which would result from not checking the array
bounds.
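
As a concrete illustration of the distinction Baker is drawing, here is a
minimal C sketch (the buffer size and helper names are invented for the
example): the unchecked copy silently overwrites adjacent memory when the
input is too long, while the checked copy turns the same mistake into a
reportable error, roughly what an array bounds exception provides
automatically.

  #include <stdio.h>
  #include <string.h>

  #define BUF_SIZE 8

  /* Unchecked: copies whatever the caller supplies.  If the input is
   * longer than the destination, adjacent memory is silently overwritten
   * -- the classic buffer overflow. */
  static void copy_unchecked(char *dst, const char *src) {
      strcpy(dst, src);
  }

  /* Checked: refuses input that does not fit and reports the error, the
   * moral equivalent of an array bounds exception. */
  static int copy_checked(char *dst, size_t dst_size, const char *src) {
      if (strlen(src) >= dst_size)
          return -1;
      strcpy(dst, src);
      return 0;
  }

  int main(void) {
      char buf[BUF_SIZE];
      if (copy_checked(buf, sizeof buf, "far too long for this buffer") != 0)
          fprintf(stderr, "input rejected: would overflow a %d-byte buffer\n",
                  BUF_SIZE);
      /* copy_unchecked(buf, "far too long for this buffer");  -- would
       * corrupt memory */
      return 0;
  }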

I then spent several years of my life inventing "real-time garbage
collection", so that no software -- including embedded systems software --
would ever again have to be without such basic software error checks.

During the subsequent 25 years I have seen the incredible havoc wreaked upon
the world by "buffer overflows" and their cousins, and continue to be amazed
by the complete idiots who run the world's largest software organizations,
and who hire the bulk of the computer science Ph.D.'s.  These people _know_
better, but they don't care!

I asked the CEO of a high-tech company whose products are used by a large
fraction of you about this issue and why no one was willing to spend any
money or effort to fix these problems, and his response was that "the
records of our customer service department show very few complaints about
software crashes due to buffer overflows and the like".  Of course not, you
idiot!  The software developers turned off all the checks so they wouldn't
be bugged by the customer service department!

The C language (invented by Bell Labs -- the people who were supposed to be
building products with five 9's of reliability -- 99.999%) then taught two
entire generations of programmers to ignore buffer overflows, and nearly
every other exceptional condition, as well.  A famous paper in the
Communications of the ACM found that nearly every Unix command (all written
in C) could be made to fail (sometimes in spectacular ways) if given random
characters ("line noise") as input.  And this after Unix became the de facto
standard for workstations and had been in extensive commercial use for at
least 10 years.  The lauded "Microsoft programming tests" of the 1980's were
designed to weed out anyone who was careful enough to check for buffer
overflows, because they obviously didn't understand and appreciate the
intricacies of the C language.
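
The CACM experiment referred to (presumably the 1990 "fuzz" study of Unix
utility reliability) is simple enough to reproduce in a few lines.  The
following C sketch is an assumption about how one might do it today, not
code from the paper: pipe a stream of random bytes into a command and look
at how it exits.  The default target and the byte count are arbitrary
illustrative choices.

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Feed a command a stream of random bytes ("line noise") on stdin and
   * report its exit status; a crash or nonzero status suggests the
   * command mishandles unexpected input. */
  int main(int argc, char **argv) {
      const char *target = (argc > 1) ? argv[1] : "cat";  /* illustrative */
      FILE *pipe_out;
      long i;

      srand((unsigned) time(NULL));
      pipe_out = popen(target, "w");
      if (pipe_out == NULL) {
          perror("popen");
          return 1;
      }
      for (i = 0; i < 100000; i++)
          fputc(rand() & 0xFF, pipe_out);
      printf("'%s' exit status: %d\n", target, pclose(pipe_out));
      return 0;
  }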

I'm sorry to be politically incorrect, but for the ACM to then laud "C" and
its inventors as a major advance in computer science has to rank right up
there with Chamberlain's appeasement of Hitler.

If I remove a stop sign and someone is killed in a car accident at that
intersection, I can be sued and perhaps go to jail for contributing to that
accident.  If I lock an exit door in a crowded theater or restaurant that
subsequently burns, I face lawsuits and jail time.  If I remove or disable
the fire extinguishers in a public building, I again face lawsuits and jail
time.  If I remove the shrouding from a gear train or a belt in a factory, I
(and my company) face huge OSHA fines and lawsuits.  If I remove array
bounds checks from my software, I will get a raise and additional stock
options due to the improved "performance" and decreased number of calls from
customer service.  I will also be promoted, so I can then make sure that
none of my reports will check array bounds, either.

The most basic safeguards found in "professional engineering" are cavalierly
and routinely ignored in the software field.  Software people would never
drive to the office if building engineers and automotive engineers were as
cavalier about buildings and autos as the software "engineer" is about his
software.

I have been told that one of the reasons for the longevity of the Roman
bridges is that their designers had to stand under them when they were first
used.  It may be time to put a similar discipline into the software field.

If buffer overflows are ever controlled, it won't be due to mere crashes,
but due to their making systems vulnerable to hackers.  Software crashes due
to mere incompetence apparently don't raise any eyebrows, because no one
wants to fault the incompetent programmer (and his incompetent boss).  So we
have to conjure up "bad guys" as "bogeymen" in (hopefully) far-distant
lands who "hack our systems", rather than noticing that in pointing one
finger at the hacker, we still have three fingers pointed at ourselves.

I know that it is my fate to be killed in a (real) crash due to a buffer
overflow software bug.  I feel like some of the NASA engineers before the
Challenger disaster.  I'm tired of being right.  Let's stop the madness and
fix the problem -- it is far worse, and has caused far more damage, than any
Y2K bug, and yet the solution is far easier.

Cassandra, aka Henry Baker <hbaker1@pipeline.com>

------------------------------

Date: Wed, 26 Dec 2001 21:19:22 -0800
From: Peter G Neumann <Neumann@CSL.sri.com>
Subject: "Buffer Overflow" security problems (Re: Baker, RISKS-21.84)

Henry, Please remember that an expressive programming language that prevents
you from doing bad things would with very high probability be misused even
by very good programmers and especially by programmers who eschew
discipline; and use of a badly designed programming language can result in
excellent programs if done wisely and carefully.  Besides, buffer overflows
are just one symptom.  There are still lots of lessons to be learned from an
historical examination of Fortran, Pascal, Euclid, Ada, PL/I, C, C++, Java,
etc.

Perhaps in defense of Ken Thompson and Dennis Ritchie, C (and Unix, for that
matter) was created not for masses of incompetent programmers, but for Ken
and Dennis and a few immediate colleagues.  That it is being used by so many
people is not the fault of Ken and Dennis.  So, as usual in RISKS cases,
blame needs to be much more widely distributed than it first appears.  And
pursuing Henry's name-the-blame game, whom should we blame for Microsoft
systems used unwisely in life- and mission-critical applications?  OS
developers?  Application programmers?  Programming language developers?
Users?  The U.S. Navy?  Remember the unchecked divide-by-zero in an
application that left the U.S.S. Yorktown missile cruiser dead in the water
for 2.75 hours (RISKS-19.88 to 94).  The shrinkwrap might disclaim liability
for critical uses, but that does not stop fools from rushing in.
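
The Yorktown failure reportedly traced back to a zero entered into a
database field that was then used, unvalidated, as a divisor.  A minimal C
sketch of the difference between trusting such an operand and checking it at
the boundary (the numbers and function names are illustrative):

  #include <stdio.h>

  /* Unchecked: an operator-supplied zero propagates into the division and,
   * depending on the platform, traps or produces garbage that downstream
   * code then consumes. */
  static double rate_unchecked(double total, double count) {
      return total / count;
  }

  /* Checked: the questionable input is rejected at the boundary and the
   * caller is told so, instead of the whole application going down. */
  static int rate_checked(double total, double count, double *out) {
      if (count == 0.0)
          return -1;        /* refuse to divide; let the caller recover */
      *out = total / count;
      return 0;
  }

  int main(void) {
      double rate;
      if (rate_checked(1234.0, 0.0, &rate) != 0)
          fprintf(stderr, "invalid count: division skipped\n");
      return 0;
  }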

Nothing in the foregoing to the contrary notwithstanding, it would be very
helpful if designers of modern programming languages, operating systems, and
application software would more judiciously observe the principles that we
have known and loved lo these many years (and that some of us have even
practiced!).  Take a look at my most recent report, on principles and their
potential misapplication, for DARPA's Composable High-Assurance Trustworthy
Systems (CHATS) program, now on my Web site:
http://www.csl.sri.com/neumann/chats2.html 

------------------------------

Date: Sat, 29 Dec 2001 17:19:30 -0500
From: "Laura S. Tinnel" <ltinnel@teknowledge.com>
Subject: Sometimes high-tech isn't better

We're all aware that many companies have buried their heads in the sand on 
the security issues involved with moving to high-tech solutions in the name 
of convenience, among other things. When we're talking about on-line sales, 
educational applications, news media, and the like, the repercussions of 
such are usually not critical to human life, and therefore the trade-off is 
made. However, I've just encountered something that is, well, disconcerting 
at best.

Earlier today, as I sat unattended in an examination room for a half hour
waiting for the doctor to show up, I carefully studied the new computer
systems they had installed in each patient room.  Computers that access ALL
patient records on a centralized server located elsewhere in the building,
all hooked up using a Windows 2000 domain on an Ethernet-based LAN.
Computers that contained accessible CD and floppy drives and that could be
rebooted at the will of the patient.  Computers hooked up to a hot LAN jack
(oh, for my trusty laptop instead of that Time magazine...).  Big mistake
#1 - the classic insider problem.

Once the doctor arrived and we got comfy, I started asking him about the
computer system. (I just can't keep my big mouth shut.) Oh, he was SO proud
of their newfangled system. So I asked the obvious question: what would
prevent me from accessing someone else's records while I sat here unattended
for a half hour waiting for you to show up? With a big grin on his face, he
said, "Lots of people ask that question. We have security here; let me show
you." Big mistake #2 - social engineering. Then he proceeded to show me that
the system is locked until a password is entered. Of course, he said, if
someone stole the password, then they could get in, but passwords are
changed every 3 months. And, he continued, that's as secure as you can get
unless you use retinal scans. (HUH?) I know all about this stuff, for you
see "my dear", I have a master's degree in medical information technology,
and I'm in charge of the computer systems at XXXX hospital.

OK. Time to fess up. Doc, I do this for a living, and you've got a real
problem here.

1. Have you thought about the fact that you have a machine physically in
   this room that anyone could reboot and install Trojan software on?
   A: Well, that's an issue.

2. Have you thought about the fact that there's a live network connection
   in this room and anyone could plug in and have instant access to your
   network?
   A: You can really do that??? There's a guy that brings his laptop in
   here all the time.

3. I assume you are using NTFS (yes); have you locked down the file system
   and set the security policies properly? You do understand that it is
   wide open out of the box.
   A: I don't know what was done when the computers were set up.

4. Have you thought beyond just the patient privacy issue to the issue of
   unauthorized modification of patient records? What are you doing to
   prevent this? What could someone do if they modified someone else's
   records? Make them very ill? Possibly kill them?
   A: That's a big concern. (Well, duh?)

Then there was a big discussion about access to their prescription fax
system that could allow people to illegally obtain medication. I didn't
bother to ask whether or not they were in any way connected to the Internet.
They either have that or modems to fax out the prescriptions. At least he
said he'd talk to his vendor to see how they have addressed the other
issues. Perhaps they have addressed some of these things and the doctor I
was chatting with simply didn't know.

I'm not trying to come down on these doctors as I'm sure they have very good
intentions. I personally think having the medical records on-line is a good
idea in the long term as it can speed access to records and enable remote
and collaborative diagnoses, potentially saving lives. But I'm not convinced
that today we can properly secure these systems to protect the lives they
are intended to help save. (Other opinions are welcome.) And with the state
of medical malpractice lawsuits and insurance, what could a breach in a
computer system that affects patient health do to the medical industry if it
becomes reliant on computer systems for storage/retrieval of all patient
records?

A couple of things. First, I'm not up on the state of cyber security in
medical applications. I was wondering if anyone out there is up on these
things or if anyone else has seen stuff like this.

Second, if a breach in the computer system was made and someone was
mistreated as a result, who could be held liable? The doctors for sure.
What about the vendor that sold and/or set up the system for them? Does "due
diligence" enter in? If so, what is "due diligence" in cyber security for
medical applications?

Third, does anyone know if the use of computers for these purposes in a
physician's office changes the cost of malpractice insurance? Is this just
too new and not yet addressed by the insurance industry? Is there any set of
criteria for "certification" of the system for medical insurance purposes,
possibly similar to those required by the FDIC for the banking industry? If
so, are the criteria really of any value?

  [This is reproduced here from an internal e-mail group, with Laura's
  permission.  A subsequent response noted the relative benignness of past
  incidents and the lack of vendor interest in good security -- grounds that
  we have been over many times here.  However, Laura seemed hopeful that the
  possibility of unauthorized modification of patient data by anyone at all
  might stimulate some greater concerns.  PGN]

