Return-Path: XPUM04@prime-a.central-services.umist.ac.uk
Received: from G.SEI.CMU.EDU by ubu.cert.sei.cmu.edu (5.61/2.3)
        id AA21738; Thu, 7 Jun 90 17:55:47 -0400
Received: from SEI.CMU.EDU by g.sei.cmu.edu (5.61/2.5)
        id AA16405; Thu, 7 Jun 90 17:55:43 -0400
Received: from nsfnet-relay.ac.uk by sei.cmu.edu (5.61/2.3)
        id AA14628; Thu, 7 Jun 90 17:54:57 -0400
Received: from sun.nsfnet-relay.ac.uk by vax.NSFnet-Relay.AC.UK 
           via Janet with NIFTP  id aa27071; 7 Jun 90 19:51 BST
From: Anthony Appleyard <XPUM04@prime-a.central-services.umist.ac.uk>
To: KRVW <@NSFnet-Relay.AC.UK:KRVW@sei.cmu.edu>
Date:         Tue, 05 Jun 90 14:09:36 BST 
Message-Id:   <$TGVGDBVHCQBX at UMPA>
Subject:      Virus-L vol 0 issue #0630



Virus-L Digest Thu, 30 Jun 88, Volume 0 : Issue #0630

Today's Topics

Re: OS/2 and virii
Re: Authentication of programs
Re: Forwarded comments on worms from Joseph Beckman
Re: say NO to constructive viruses :-)
VM/CMS viruses
The Sunnyvale Slug
VM/CMS viruses
Re: OS/2 and virii
Re: OS/2 and virii
Some BYTES from fidonet (sorry)        (this message is 781 lines long)
RE: OS/2 and virii
do you believe in magic?

------------------------------

Date:         Thu, 30 Jun 88 07:49:00 EDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
From:         WHMurray@DOCKMASTER.ARPA
Subject:      Re: OS/2 and virii
In-Reply-To:  Message of 27 Jun 88 19:15 EDT from "David.Slonosky%QueensU.CA at
              CUNYVM.CUNY.EDU"


>Assuming OS/2 is released, does anyone have a feeling for whether
>multitasking will have an enhanced impact of the viruses? That is,
>the virus in your WordPerfect program sees the Microsoft Windows
>program and enters it then goes dormant until April 1, 1992...

OS/2, per se, should not make the problem better or worse.  However,
hardware, such as the 80386, that provides process-to-process isolation,
creates the potential for implementing containment for executing untrusted
code.  If you think back, much of the discussion in this forum has dealt
with the difficulty of doing this employing first generation PC hardware.

William Hugh Murray, Fellow, Information System Security, Ernst & Whinney
2000 National City Center Cleveland, Ohio 44114
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840

--------------------

Date:         Thu, 30 Jun 88 08:12:00 EDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
From:         WHMurray@DOCKMASTER.ARPA
Subject:      Re: Authentication of programs
In-Reply-To:  Message of 28 Jun 88 08:36 EDT from "Kenneth R. van Wyk"


>In-Reply-To:  Message of Thu,
>DOCKMASTER.ARPA>
>
>Sounds like that could be an effective way of ensuring mail integrity.
>How about all data transfers on the Internet?  I assume that the scheme
>which you pointed out is just for SMTP.  That leaves FTPs up for grabs.

No reason at all why it could not apply equally well to FTP.  The
simplest application is for the author to sign the file with his private
key before putting it in the file server.  A more sophisticated
application would be for the manager of the (trusted) file server to
also sign it with his key.  The MAILSAFE mechanism cares not how many
signatures or seals are applied to the data object.  It takes them off,
verifies them, and reports them to the user in "last applied, first removed"
order.
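The "last applied, first removed" peeling can be sketched in Python. This is a minimal illustration only: HMAC seals stand in for MAILSAFE's real public-key signatures (whose format is not documented here), and all keys and payloads are made up.

```python
import hashlib
import hmac

def sign(data: bytes, key: bytes) -> bytes:
    """Wrap data with an HMAC seal (a stand-in for a real signature)."""
    seal = hmac.new(key, data, hashlib.sha256).digest()
    return seal + data                      # seal prepended, like an outer envelope

def verify_and_strip(data: bytes, key: bytes) -> bytes:
    """Remove the outermost seal, raising if it does not verify."""
    seal, payload = data[:32], data[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(seal, expected):
        raise ValueError("seal does not verify")
    return payload

# The author signs first; the file-server manager then signs over that.
author_key, server_key = b"author-secret", b"server-secret"
package = sign(sign(b"program image", author_key), server_key)

# Verification peels seals in "last applied, first removed" order.
payload = verify_and_strip(package, server_key)   # server's seal comes off first
payload = verify_and_strip(payload, author_key)   # then the author's
assert payload == b"program image"
```

Any number of layers can be stacked this way; each verifier only ever touches the outermost seal.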

William Hugh Murray, Fellow, Information System Security, Ernst & Whinney
2000 National City Center Cleveland, Ohio 44114
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840

--------------------

Date:         Thu, 30 Jun 88 08:54:00 EDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
From:         WHMurray@DOCKMASTER.ARPA
Subject:      Re: Forwarded comments on worms from Joseph Beckman
In-Reply-To:  Message of 29 Jun 88 08:34 EDT from "Kenneth R. van Wyk"


>Please note that this was not designed to be malicious.  Personally, I
>feel that classifying code as a virus or worm based on the perception of
>maliciousness is misleading.  By so doing, you are "divining" the intent
>of the creator (always a dangerous thing to do).

Yes, dangerous, even if the creator is omniscient and perfect in
implementing his intent and only his intent.  Otherwise, it may be
irrelevant.

William Hugh Murray, Fellow, Information System Security, Ernst & Whinney
2000 National City Center Cleveland, Ohio 44114
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840

--------------------

Date:         Thu, 30 Jun 88 13:16:00 EDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
From:         WHMurray@DOCKMASTER.ARPA
Subject:      Re: say NO to constructive viruses :-)
In-Reply-To:  Message of 29 Jun 88 17:03 EDT from "me! Jefferson Ogata"


me! Jefferson Ogata writes:

>If you write a program carefully enough and test
>it appropriately to make sure it does ONLY what you WANT it to do,
>fine.  These programs are fairly simple, so that's not much of a
>problem.  Bacteria are a very different subject; they can exist in
>billions of environments.  Computer viruses have a very clearly
>defined area of possible propagation.

me!, that's the whole point.  Computer viruses do not have a very
clearly defined area of possible propagation.  Even though you can write
a set of assumptions about the target community, you have a high
potential for error.  How many users and nodes in the INTERNET?  Not
only do you not know those numbers with a high degree of confidence,
they are so volatile as to not even be knowable.

Even if you knew the environment, how would you test to insure that the
virus behaved as you expected?  You can test its behavior in the target
execution environment, but how do you test its behavior in the target
population?

It is hubris, and it is exactly this kind of hubris that concerns
people.  What we hear is someone saying that he can predict the behavior
of an environment dependent entity in an environment that our experience
tells us that he cannot understand perfectly.

The analogy to recombinant DNA is apt.  The creator of a variant can
know a great deal about the behavior of the variant in any environment
that he can predict.  What worries the rest of us is how accurately and
completely he can predict.

All that having been said, I believe the world to be a very resilient
place.  It can tolerate a great deal of tampering.  Nonetheless, there
may be limits.  We ought to try to err on the safe side.

William Hugh Murray, Fellow, Information System Security, Ernst & Whinney
2000 National City Center Cleveland, Ohio 44114
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840

--------------------

Date:         Thu, 30 Jun 88 14:38:47 EDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
Comments:     In-Reply-To: 24 Jun 88 10:13:00 URZ   From Bernd <BG0 at DHDURZ2>
From:         Otto Stolz +49 7531 88 2645 <RZOTTO@DKNKURZ1>
Subject:      VM/CMS viruses

>  buy an XT/AT/370 and to steal a VM/CMS to develop a virus

VM/SP (CMS is part of it) is a mainframe operating system and hence
does NOT run on any PC or compatible.

There is a similar product available (I think it's called "VM bond")
which runs on a PC.  We do not use it, as it is much too expensive.
Even if you develop some virus for this system, it surely won't run
on a mainframe under VM/SP (or whatever), as the machine architecture,
object code, &c are totally different.

The only code you can port between those two VM varieties is source
code in standard languages using only standard library members and
depending in no way whatsoever on internal data representations
(including character data)  --  and even then you've got to be very
lucky and/or clever to succeed :-)

Best regards
              Otto

--------------------

Date:         Thu, 30 Jun 88 16:13:00 EDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
From:         Woody <WWEAVER@DREW>
Subject:      The Sunnyvale Slug

Reprinted without permission from _Personal_Computing_, July 1988 issue.

Beware: It's Virus Season

* "We would like to take this opportunity to convey our universal message
of peace to all Macintosh users around the World," read the message on
Aldus Publishing's FreeHand graphics program for the Macintosh.  This
seemingly cheerful message was actually a virus program. Fortunately the
virus simply displayed the message and went away; Aldus replaced packages
with the virus and nobody lost data.  But the incident, which was the first
time a virus was embedded in a commercial package being shipped,
illustrates the danger of virus programs to business users.

  Virus software, also known as a worm or Trojan horse, latches on to an
application, data file, or DOS command, and once activated, does whatever
the writer of the virus intends.  A northern California company had a more
serious bout with a virus program recently.

  The company, which requests anonymity, is currently working with Panda
Systems, Wilmington, Del. (302-764-4722), to eliminate the virus, dubbed
the Sunnyvale Slug, which Panda Systems has removed from all but one of the
company's computers.  Panda says the virus in the last computer either
worked its way into CMOS or is on an EPROM chip in the computer.

  The Sunnyvale Slug performs various nefarious deeds.  Some are benign,
such as displaying the remark: "Greetigs from Sunnyvale.  Can you find
me?"  Others are destructive, such as altering the COPY command so it
destroys files instead of copying them.

  Several programs are available for fighting less sophisticated viruses.
Panda Systems makes a package called Dr. Panda Utilities ($80).
Lasertrieve, Inc., Metuchen, NJ (201-906-1901) offers VirAlarm, which
constantly checks files on a hard disk for viruses.  And a number of virus
prevention packages are available in CompuServe's Special Interest Groups
and other on-line services.

   -- Patrick Honan


Anyone know anything more about the Sunnyvale Slug?


Woody Weaver WWEAVER@DREW

# disclaimer: I have no business relationship with the magazine _Personal_
_Computing_ or the anti-viral companies Panda Systems or Lasertrieve, Inc.

--------------------

Date:         Thu, 30 Jun 88 16:30:48 EDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
From:         "David M. Chess" <CHESS@YKTVMV>
Subject:      VM/CMS viruses

Otto Stolz writes:
>  VM/SP (CMS is part of it) is a mainframe operating system and hence
>  does NOT run on any PC or compatible.

and other related things.   There are a few errors in the item
that I'd like to clarify slightly (although I imagine there are
folks out there who could do it better than I can):

  - The various IBM Personal Computer/370 models have both the
    usual PC 8086-series CPU, and (to quote from an announcement
    letter) a "System/370 processor card to execute System/370
    instructions, handle paging, etc."  These machines are
    "capable of running most CMS programs as well as Personal
    Computer programs."  They are, in particular, *binary*
    compatible to a large extent with mainframe VM/CMS systems.
  - Otto's last paragraph is therefore wrong, I'm afraid.  I'm
    not sure exactly how helpful a PC/370 system would be in
    developing mainframe viruses (that would depend on the
    details of the virus), but it would be much more helpful
    than Otto suggests.
  - PC/VM Bond is an entirely separate package, which includes
    things like terminal emulation, execution of host commands
    from DOS, "emulated disk" support (things on the host that
    appear to DOS to be hard disks), and other stuff like that
    there.  No particular relation to VM/PC and the PC/370
    systems, except that they both have to do with host -
    workstation cooperative processing.

DC

--------------------

Date:         Thu, 30 Jun 88 17:09:01 EDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
From:         Ed Nilges <EGNILGES@PUCC>
Subject:      Re: OS/2 and virii
In-Reply-To:  Your message of Thu, 30 Jun 88 07:49:00 EDT

OS/2 and its predecessor (in terms of size and overcomplexity) MVS on
mainframes, as well as corporate announcements of software that is
never delivered (vaporware), represent viruses that are never recognized
as such.  This is because such systems are not created by individuals.

Actually, this difference in terminology and recognition may not be
altogether pernicious, since with all their faults, OS/2 and similar big
systems represent the outcome of a social process in which
people agree that the end result will be such and such.  Lone
terrorists try to justify their actions on the basis that the West
indulges in "mass terrorism" in the form of nuclear defense.  This
argument ignores the fact that we vote for the nuclear umbrella.  Like-
wise, we as computer professionals "vote" to use MVS (or whatever).

Having said this, I think it would be healthy to keep a sense of
proportion concerning viruses and to bear in mind that they appear
to have a tendency to flourish in (overly?) complex systems.  If I
have true control over my system, viruses are a matter of concern,
rather than some sort of electronic Ragnarok or Twilight of the Gods.
Also, we need to encourage our manufacturers to provide systems
that are auditable and checkable-uppable.  At ASPLOS '87, amid all
the hoopla about RISC (which will complicate compilers enormously
in "simplifying" architecture), Niklaus Wirth issued a clarion call
to computer professionals to remember the virtues of simplicity
and reliability.

--------------------

Date:         Thu, 30 Jun 88 16:34:04 CDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
From:         Len Levine <len@evax.milw.wisc.edu>
Subject:      Re: OS/2 and virii

>William Hugh Murray says:

>OS/2, per se, should not make the problem better or worse.  However,
>hardware, such as the 80386, that provides process-to-process isolation,
>creates the potential for implementing containment for executing untrusted
>code.  If you think back, much of the discussion in this forum has dealt
>with the difficulty of doing this employing first generation PC hardware.

Since OS/2 permits MSDOS code to be run so that older programs still
work, and since some of these MSDOS programs do not use the exec calls
but do their work directly, why can't a virus writer ask for MSDOS mode
and attack the system that way?

I understand that OS/2 is in danger unless/until the permission to
allow MSDOS code is removed.

len@evax.milw.wisc.edu

--------------------

Date:         Thu, 30 Jun 88 17:00:49 CDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
From:         Len Levine <len@evax.milw.wisc.edu>
Subject:      Some BYTES from fidonet (sorry)


The following was taken without permission from the Fidonews.
It is very long, nearly 800 lines, but deals well with the
issues.


Volume 5, Number 26                                  27 June 1988
+---------------------------------------------------------------+
|                                                  _            |
|                                                 /  \          |
|                                                /|oo \         |
|        - FidoNews -                           (_|  /_)        |
|                                                _`@/_ \    _   |
|        International                          |     | \   \\  |
|     FidoNet Association                       | (*) |  \   )) |
|         Newsletter               ______       |__U__| /  \//  |
|                                 / FIDO \       _//|| _\   /   |
|                                (________)     (_/(_|(____/    |
|                                                     (jm)      |
+---------------------------------------------------------------+
Editor in Chief                                       Dale Lovell
Editor Emeritus:                                   Thom Henderson
Chief Procrastinator Emeritus:                       Tom Jennings
Contributing Editors:                                   Al Arango

[...]

Lee Kemp, Communet 1:221/162.14

7 June 1988

K I L L I N G   V I R U S E S

1. INTRODUCTION

Numerous utilities have been released for detecting "virus"
programs before they damage hard disks or get passed on.

Unfortunately this won't work. Existing viruses will continue to
spread from diskettes already infected, and will re-infect
computers that have been through it before. New more virulent
strains will be developed to overcome each new detection utility
released (perhaps by infecting the utilities themselves!)

I believe I've figured out a method that WILL work. The World
Health Organization managed to stamp out smallpox through a
coordinated international campaign and I believe we can do the
same for ALL computer viruses - but it will require a coordinated
campaign.

This article lays out the basic idea and asks for help. Help
through any constructive criticisms or alternative proposals,
help through negative, destructive "flames" if the idea is all
wrong, so I'll stop wasting time on it, and help from any
software developers willing to "send code".

However I am not interested in corresponding about whether
destructive Trojan viruses actually exist and whether they will
become a serious problem. It's far too late to be discussing THAT.

First, some reasons why it's worth working on the solution I
propose.

1.1 There is NO way to detect viruses

None of the methods currently used have the SLIGHTEST chance of
detecting a reasonably well designed Trojan, let alone a genuine
virus. Tests that are just done when software is first received,
and just consist of running a utility over it once, or installing
a TSR monitor, are ALREADY completely useless.

Any jerk can write a Trojan that won't do anything suspicious
while it's being tested, and the test utilities themselves are
likely to be a target for more sophisticated viruses.

Ideas like continually monitoring disk writes are ok for the
first generation of Trojans but simply won't work with the next
generation. Actually they will become positively dangerous. A
virus could simply recognize the particular TSR that's monitoring
it, grab the interrupts back, and send reassuring messages to the
SysOp, while it doesn't even bother to WAIT before starting to
infect other software! A false sense of security is MUCH worse
than the knowledge that anything you make available for download
COULD be a virus.

Source code for IBM ROM BIOS is available in the Technical
Reference manuals for anyone who wants to write Trojans that
access disk controllers directly. Also there are ways to do
apparently "legitimate" disk writes that do no immediate damage
but can trigger an infection later.

Much more sophisticated approaches to delayed action are
available than using the DOS date function.

Checksums of operating system files and their dates and times are
easily bypassed.
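A sketch of why: the recorded checksum has to live somewhere the infector can also reach. The digest algorithm, file contents, and storage scheme below are illustrative, not any particular product's design.

```python
# Illustrative only: a naive integrity check whose recorded digest
# lives where an infector can rewrite it.
import hashlib

def digest_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

clean = b"COMMAND.COM image"
recorded = digest_of(clean)            # entry in a checksum database

# An infector that can patch the file can usually also patch the
# recorded digest (and restore the file's date and time stamps),
# so the naive check still "passes" after infection.
infected = clean + b" + virus"
recorded = digest_of(infected)         # attacker updates the record too

assert digest_of(infected) == recorded     # detection silently defeated
```

The check only means something when the recorded digests are kept out of reach of anything that can modify the files, which real-mode DOS cannot guarantee.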

Proper testing requires at least the sort of insulation from the
hardware and operating system that is provided by a 386 running
in virtual 8086 mode. Worse, there are even ways around THAT,
which I won't go into here.

Anyone familiar with the secure design of operating systems
understands that there is NO way an application program can be
prevented from doing whatever it damn well pleases when the
underlying CPU hardware doesn't run in a protected mode. OS/2 and
Unix run in a protected mode but MSDOS normally doesn't, and
CAN'T on XTs and ATs.

Even protected mode isn't enough, given the practical realities
of normal security precautions. Controlled experiments with Unix
viruses have achieved root privileges in less than an hour, with
an average of 30 minutes. (F. Cohen, "Computer Viruses: Theory
and Experiments", University of Southern California, August 1984,
cited in Wood and Kochan, "Unix System Security", Hayden Book
Company, 1985)

The SERIOUS work in computer security is being done on how to
protect a system when you have complete source code for
everything run on it - and THAT is damn difficult. Ada and Pascal
are languages intended to allow you to figure out what the source
code actually does, but C is the language of micro applications
and it's designed for efficiency, not correctness proofs. Take a
look at the fast table driven CRC routines used in most FidoNet
mailers these days. How many C programmers have the faintest idea
what they ACTUALLY do?
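For the curious, the table-driven trick itself is simple once unpacked. Here is a minimal Python sketch of one common variant (CRC-16/XMODEM, polynomial 0x1021), not any specific mailer's C code:

```python
# Table-driven CRC-16 (XMODEM variant, polynomial 0x1021): a minimal
# sketch of the kind of routine used in mailers of the era.

def make_crc16_table(poly: int = 0x1021) -> list:
    """Precompute the 256-entry lookup table from the polynomial."""
    table = []
    for byte in range(256):
        crc = byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
        table.append(crc)
    return table

CRC16_TABLE = make_crc16_table()

def crc16(data: bytes, crc: int = 0x0000) -> int:
    """One table lookup per byte replaces eight bit-by-bit steps."""
    for byte in data:
        crc = ((crc << 8) & 0xFFFF) ^ CRC16_TABLE[((crc >> 8) ^ byte) & 0xFF]
    return crc

# Standard published check value for this variant:
assert crc16(b"123456789") == 0x31C3
```

The table is just the bit-by-bit algorithm run once per possible byte and cached, which is exactly the step most readers of the C version never see spelled out.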

Serious computer security work also deals with problems like
"compiler viruses", that install themselves in any software
compiled, including new versions of the compiler. Who REALLY
knows what's in most microcomputer object code - not even the
authors!

There is NO serious work being done on protection from real mode
applications running on 80x86 machines. Because it SIMPLY CAN'T
BE DONE.

Now sit back and think about the implications of that for 3000
FidoNet nodes around the world continually exchanging software
with each other and their users! This network can spread a deadly
virus around the world within DAYS (if not hours).


1.2 We don't have time for testing

ANY partially useful testing system for the next generation of
viruses would require tests EVERY time a copy of ANY software is
made available for distribution, and fairly elaborate procedures
to ensure the testing is done on an uninfected machine with
uninfected test utilities.

Even factory fresh diskettes from major software houses have
ALREADY been infected, so what's to stop the latest upgrade of
some commercial package infecting a machine that's been carefully
kept "clean"? Even Harvard couldn't persuade Lotus to let them
retain their policy of ONLY running software for which they had
compiled the source code themselves.

BBS SysOps just don't have the time to properly test files they
make available for download, even to detect fairly crude Trojans.
Neither do end users. Even PARTIALLY useful serious tests simply
won't be widely used until AFTER there has been some MAJOR damage
done. The time wasted on serious testing will then be almost as
damaging as actual loss of data.


1.3 We can't afford to just keep quiet and hope

The first generation of diskette based viruses has now become of
sufficient public interest for full page articles in daily
newspapers as well as computer magazines. It is therefore CERTAIN
that some warped people will take up the challenge to design the
next generation. It is also very likely to happen SOON.

In case anybody thinks that mentioning all this could give people
ideas, I should point out that the technical points made above
will be obvious to anybody TRYING to figure out how to get past
present detection utilities. People who have ALREADY shown much
more sophistication with Unix viruses will now be focussing their
attention on personal computer diskette based viruses as a result
of the newspaper publicity if nothing else.

I am deliberately refraining from mentioning some approaches that
are obvious to me, but that may not be thought of immediately by
just ANYBODY contemplating a virus program, in case that can give
an extra breathing space. But I KNOW that there ARE ways to
unleash delayed action virus programs that CANNOT be detected by
ANY feasible method. I don't think it will be long before these
more sophisticated approaches become general public knowledge
too.

A basic issue is that viruses involve quite different problems
from simple Trojans. They can spread WITHOUT doing overt damage.

I am writing a separate technical paper on all this, which shows
that FidoNet itself is in special "clear and present danger" with
more than the usual problems faced by all BBSes. Anyone wanting a
copy should NetMail me direct, explaining why they have a
legitimate "need to know" and with an undertaking not to pass on
the information. This paper is not available for file request but
will be crashed direct to responsible software developers
interested in working on solutions.

I sent a message about these problems to the International
Coordinator of FidoNet nearly three months ago. He replied to
other matters in the same message in a manner indicating that he
had not understood anything I said to him, and he did not reply
at all on this issue. Judging from that and other indications, I
do not believe that the authorities within FidoNet who ought to
take the initiative to do something about this situation are
likely to do so. (For that matter my experience with the IC is
that he also thinks he can avoid other serious problems by
sticking his head in the sand). I see no choice but to now pass
on the information direct to interested and responsible software
developers.

There's no point refraining from public discussion, when full
page articles about computer viruses are appearing in daily
newspapers, and when people responsible for administration of
FidoNet won't listen AT ALL.

Ok, so looking at new solutions is not just necessary; it's also
URGENT.


2. WHY IT'S URGENT

"Computer AIDS" is likely to have as devastating an effect on
BBSes and FidoNet as the original AIDS has already had on gays,
and is now having on the wider community. Unless something is
done NOW, we are CERTAIN to eventually be hit by some really
deadly virus that has been spread to literally thousands of
public access BBS systems through FidoNet and will then, months
later, cause literally millions of dollars worth of damage to
data on literally tens of thousands of users' hard disks. The
problem is THAT serious.

Apart from jerks, there are economic interests that actually
stand to GAIN from destructive viruses, because public domain
software, and the "sharing" of other software that often occurs
among people who use public domain software, directly competes
with their own commercial interests.

As Nicholas Rothwell points out in his article on "Computer
AIDS":

          But what if one does not want to create trouble, but
          rather to destroy trust? For that is what is at stake.
          Without open communication, without "public domain"
          software, without free exchange of academic and
          technical software, the personal computer revolution is
          hamstrung.

There are plenty of technically competent people in FidoNet who
are out to destroy trust and are opposed to open communication.
I'll be going into that in a later article.

Last month a report for the European Commission issued a formal
blunt warning that computer networks across the member nations of
the European Community were not safe:

          Unless action is taken to improve levels of computer
          and network security, the consequences for individual
          enterprises could be severe, even catastrophic.

For FidoNet the consequences would be catastrophic, not just
"severe". It is one of the world's largest computer networks, but
with virtually NO security and LOTS of jerks.

We are supposed to be a network dedicated to the free exchange of
information. If instead we become known as a network that has
done millions of dollars worth of PREDICTABLE damage that COULD
have been avoided by SIMPLE countermeasures, then both individual
SysOps and IFNA could be held legally responsible for the damage
resulting from negligently failing to take those countermeasures.

Quite apart from the legal consequences, a really devastating
virus attack would give the words "BBS" and "FidoNet" a brand
recognition somewhere between Tylenol and anal sex.

If the countermeasures aren't in place BEFORE major damage is
done, there will be an atmosphere of incredible paranoia about
using ANY software from BBSes and user groups. It could even be
quite difficult working on solutions in that atmosphere. Also
interests opposed to public domain software and the free exchange
of information could take the opportunity to impose regulation
and controls on a VERY unpopular minority.


3. WHAT IS TO BE DONE?

Fortunately there IS an approach that CAN stop viruses, and COULD
be widely used as soon as it's available. I hope I've given enough
reasons for people to take a serious interest in my proposals
despite their unfamiliarity. Here goes.

The way biological immune systems develop antibodies to attack
foreign bodies is to first identify what SHOULD be present and
then deduce what should not. We can do that using "digital
signatures" just as the immune system recognizes molecular
signatures.

Since there is NO practical way to determine whether unknown
software is a virus or not, the ONLY feasible approach to "safe
software" is to identify KNOWN software and use nothing else.

The logic of that is both compelling and frightening. It has
already led to quite serious restrictions on the "promiscuous"
way that people exchange software. If the current trend
continues, open BBSes will become less and less viable and the
"free exchange of information" will be replaced by tightly
controlled distribution channels.

AT PRESENT the only way to identify "known" software, is through
writing and compiling it yourself, or buying it from a commercial
distributor in a shrink wrapped package. Even these precautions
aren't worth much against compiler viruses and given the level of
security at most software publishers.

Turned around, the same logic suggests we need to find another
way to identify "known" software. Fortunately there IS another
way, suitable for BBS public domain and shareware software.


3.1 Authentication by digital signature

Software authors and publishers can use public key encryption or
other digital signature techniques to authenticate their software
releases with their personal "signature". This merely requires
that they run a standard encryption utility on each package
before distribution. An explanation of how public key encryption
works is in FidoNews 428.

Authentication is the key to killing viruses while preserving the
free exchange of information. It doesn't actually kill them, but
it provides a way to use only "known" software WITHOUT tightly
controlled distribution channels.

Essentially the use of public key digital signatures just
establishes that any two items that can be decrypted with the
same "public key" signature MUST have both come from the same
person. It's that person's personal signature and there is no way
that anybody else could "forge" it.

Because an item can be decrypted with a particular public key, it
must have been encrypted using the corresponding "secret key"
that was generated simultaneously with the public key by the
person who published that public key. Since this "secret key"
cannot be deduced from knowledge of the public key, the person
who encrypted using it must therefore be the person who published
the original public key (unless they let somebody else get hold
of a copy!)
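That secret-key/public-key relationship can be demonstrated with a toy RSA-style signature. The primes below are absurdly small (real keys are hundreds of digits long) and the package name is made up; this shows only the mathematics, not any real signing utility.

```python
# Toy RSA-style digital signature, purely to illustrate the
# public/secret key relationship described above.
import hashlib

p, q = 61, 53                 # secret primes
n = p * q                     # 3233, published as part of the public key
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent
d = pow(e, -1, phi)           # secret exponent (2753), kept private

def sign(message: bytes) -> int:
    """'Encrypt' a message digest with the SECRET key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """'Decrypt' the signature with the PUBLIC key and compare digests."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"SOME.ARC release 1.0")   # hypothetical package name
assert verify(b"SOME.ARC release 1.0", sig)
```

Because only the holder of d could have produced a signature that e recovers, anyone with the public key can check authorship without being able to forge it.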

A digital signature doesn't prove who the person that uses it
really is, or how trustworthy they are, or whether they
originally wrote the document they are signing.

But it DOES allow each software author (or other distributor) to
establish their own "reputation".

In practice most users won't want to keep the public keys of
large numbers of software authors and publishers, and new authors
and publishers need a way to get their software accepted.
This requires intermediary "recommenders" and "software
certifiers" who publish "signed" lists of signatures which they
recommend as trustworthy, or reissue software they trust under
their own "signatures". They may also issue signed "warnings"
about infected software they have come across, and who signed it.

Some SysOps and user groups may want to advise their users which
signatures they personally recommend as trustworthy. That's up to
them and it's up to their users what notice to take of their
advice.

Some software collectors may want to keep close tabs on who
releases what, and reissue copies under their own signature as
evidence that they consider an item to be uninfected. That's
equivalent to the responsibility that anyone takes now, when they
pass on a copy of ANYTHING to anybody else. A valuable service
can be performed by such "software certifiers". When things
settle down, end users should be able to rely on relatively few
signatures, and with the side benefit of automatically produced
catalogs of software available.

It's very important that there be convenient ways for
recommendations and warnings to be passed on and accepted or
rejected automatically according to users' preferences as to which
advice they consider trustworthy. It's equally important that
there be no central authority who is the sole source of such
advice.

It IS possible for such "advice" to be processed automatically,
by users, with no hassles, despite coming from a multitude of
sources. I'll explain that later.

The essential point is that we ALL rely on such advice and
recommendations now, including published lists of Trojans. The
difference with my proposal is that we can automate it and know
where it's really coming from. More important, we can know which
software EXACTLY is being recommended or warned against.

Instead of lists warning about certain utility names and version
numbers, we will see (automatically processed) lists warning
about signatures. Although anybody can just adopt another
signature, getting other people to accept it will be a lot harder
than just using the RENAME command on a Trojan!

Life will actually be EASIER for SysOps as a result.


3.2 Establishing Reputations

For their recommendations and warnings to be accepted, a SysOp or
other software certifier needs a reputation for giving good
advice.

For their signatures to be recommended, or for their software to
be reissued under other people's signatures, a software author or
distributor needs a reputation for taking adequate precautions to
not release infected software. (For reissue most people would
want to read and recompile the source code themselves before
staking their own reputations on it.)

In both cases anybody can start again under another signature,
but the signatures that users will accept will be the ones that
have established a reputation over a period.


3.2.1 "I've never released infected software"

If an infected program is released under a particular signature,
anybody can PROVE that signature should not be trusted again.
(Although nobody can prove whether a virus was released
deliberately, accidentally, or as a result of allowing a secret
key to become known to somebody else, the effects are the same
and the consequences should be the same - don't trust the
signature of anybody who signed that software. They should never
have signed it. Whether it was deliberate or not is a matter for
law courts, not protection schemes.)

Proof consists of a copy of the infected software, as originally
signed by the person releasing it, together with details of the
infection, that can be tested by anyone reading about it.


3.2.2 "I've never signed bad advice"

If anybody gives signed bad advice that would be adopted
automatically by users who have decided to trust their advice,
anybody damaged can PROVE the advice was bad. Proof consists of a
copy of the signed advice, together with the proof that the
advisor got it wrong.

This can result in other software certifiers issuing signed
warnings to disregard advice from the signature that has been
proved to them to be unreliable.

Issuing a signed warning without being able to provide the proof
when requested can also be dealt with. Having asked for proof and
received none, other software certifiers will be willing to stake
their reputations by issuing signed warnings that warnings from a
particular signature should be disregarded.

Signed warnings from software certifiers they trust can
automatically prompt users to revise their lists of who to trust,
and to review what software on their systems may be dangerous as
a result of having accepted bad advice.

Being able to PROVE these things makes it possible to establish
a completely informal decentralized and automatic system for
distributing recommendations along with software. Such a system
can be fully automated so that it is almost completely
transparent to users, who only have to decide on accepting the
signatures of a few people whose recommendations and warnings
they will trust.

3.3 Implementation

All encrypted files would have a standard header including the
public key to be used (about 150 bytes). Decryption software can
look up the key (or a shorter hash of it) automatically in a
user's database of acceptable keys. Thus to decrypt a file, users
wouldn't even have to specify keys along with filenames. To
decide whether to trust some software, users wouldn't have to
look up their database manually. The key is either there or it
isn't, when the decryption software tries to process an encrypted
file.
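The lookup step described above is simple in practice. This sketch (in
modern Python; the header layout, key material and hash choice are my
illustrative assumptions, not part of any proposed standard) shows how
decryption software could reduce a public key to a short hash and test
it against the user's database: the key is either there or it isn't.

```python
# Hypothetical key-database lookup -- field sizes and hash choice
# are illustrative assumptions only.
import hashlib

def key_id(public_key: bytes) -> str:
    """Short hash of a public key, used as a database lookup handle."""
    return hashlib.sha256(public_key).hexdigest()[:16]

def is_trusted(header_key: bytes, trusted_db: dict) -> bool:
    """No manual lookup: the key is either in the database or not."""
    return key_id(header_key) in trusted_db

trusted_db = {}
authors_key = b"-- roughly 150 bytes of public key material --"
trusted_db[key_id(authors_key)] = "A. N. Author"  # accepted once by the user

assert is_trusted(authors_key, trusted_db)
assert not is_trusted(b"some unknown key", trusted_db)
```

Storing only short hashes keeps the database small and makes the
"shorter hashes as nodelist flags" idea mentioned later practical.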

Initially trusted signatures could be in standard format files
called PUBLIC.KEY, similar to individual nodelist lines. These
would normally be obtained direct by downloading or file request
from the trusted phone number contained within them.

Acceptance of those initial signatures as trustworthy would
result in automatic acceptance of subsequent files containing
recommendations or warnings signed with those signatures - until
the end user decides otherwise. After decrypting a
recommendation or warning, the software would automatically apply
it to the user's keys database.

Standard formats similar to the St Louis nodelist can be used to
distribute (signed) lists of recommendations and warnings about
particular public key signatures. Utilities similar to MAKENL and
XLATLIST (but with a "user friendly" interface) can be used
automatically together with the encryption software, to produce
customized end user databases of what signatures they will
automatically accept or reject.

End users just decide on a few signatures they INITIALLY consider
trustworthy, and then simply pass any encrypted files they
receive, whether software, recommendations or warnings, through
their encryption utility to automatically update their keys
database as well as to decrypt the software recommended by people
they trust and not warned against by people they trust.

The main complication for a full protection system is to avoid
the encryption utilities and key databases themselves becoming
infected, despite end users not fully understanding what it's all
about.

This can be achieved by writing the software so it HAS to be used
in a secure way, eg coldbooting from a "safe" floppy, encouraging
floppy backups of the encrypted versions of all software that is
accepted, and keyboard entry of checksums of PUBLIC.KEY files.

I'm drafting some proposed standards, specifications and end user
instructions for immediate and future software development.
Anyone interested in details of these proposals please file
request CRYPLIST from 1:221/162 for a list of what's available so
far. If anyone can suggest a simpler approach that is foolproof,
or can see loopholes in this one, please send NetMail to me at
1:221/162.14. Likewise for anyone interested in working on
software and standards. I'd like to start an echo, AREA:PUBKEY,
for serious discussions among interested software developers. It
really has to happen NOW.


3.4 Free Exchange of Information

Using this approach, software distributors, BBS SysOps and user
groups etc can freely distribute the encrypted versions of
software, recommendations and warnings from anybody, without
worrying about whether the software is infected or whether the
recommendations and warnings are reliable.

They simply notify end users not to accept anything that has not
been encrypted with a digital signature that the USER trusts
(whether directly or as a result of trusting recommendations and
ignoring warnings). The recommendations and warnings just get
distributed along with encrypted software.

This is an unfamiliar concept, but it can be implemented with
simple, "user friendly" utilities. It requires NO work "testing"
software and will ultimately be much easier to get across to end
users than the idea of software celibacy.

It also requires NO centralized authorities to put their stamp of
approval on things. Apart from the unlikelihood of such
authorities being established or accepted, there would be real
dangers of abuse judging from the way I've seen FidoNet
coordinators treat the nodelist. I'll be going into that in later
articles.


3.5 Isn't it too complicated?

Yes, for now. I'm hoping there will be SOME people who understand
AND agree with the point I'm making and will work on designing a
system as easy to use as possible for when it's needed. Once it's
needed, it will be needed BADLY, and the alternatives will be FAR
more complicated and basically won't work.

In that situation, which could come SOON, some SysOps and user
groups may go on distributing unencrypted software. But they will
lose popularity either gradually or very suddenly.

At least this proposal provides an alternative to pretending that
viruses can't get past detection utilities, and then wondering
why use of BBSes has suddenly become unpopular.

HOW safe to play it is up to each user. Some will be silly enough
to trust any PUBLIC.KEY they are offered. Each time their hard
disk gets trashed they will at least learn not to trust THAT
signature again, and eventually they may learn more. Once enough
users have been educated through experience, it will become
pointless attempting to release viruses - they aren't going to
travel far unencrypted and each signature used successfully is
going to reduce the number of gullible fools willing to accept
unknown signatures.

Of course some "software certifiers" may irresponsibly issue
software or recommendations under their signatures without
sufficient care. But they will quickly AND AUTOMATICALLY be
discredited and others more reliable will take their place.

Major software publishers will probably prefer to encourage users
to rely only on shrink wrapped factory fresh disks. They may even
welcome the additional incentive to do so. But they could end up
in the same position as religious moralists claiming that
monogamy is the only answer to AIDS. If a really devastating
virus gets out on a factory fresh diskette, that publisher will
probably go out of business and others may start publishing their
public keys in magazine ads and printed manuals (or shorter
hashes of the full keys).


3.6 How could this proposal get widely adopted?

End users that receive encrypted software and recommendations
distributed by BBSes, user groups or other end users are FORCED
to apply the encryption utility before they can use the software.

There are VERY good reasons why developers of FidoNet mailers and
related utilities should be using this system NOW, as I'm
explaining in a separate technical paper. If they do so,
responsible FidoNet SysOps will start accepting only the
encrypted versions (although initially decrypted versions would
continue circulating).

Once encrypted software starts being released, using it just
involves running the encryption utility on the files received,
with minimal inconvenience. Once FidoNet SysOps are doing that
for essential network software updates, it will be easy for other
software authors and publishers to adopt the same system and for
SysOps to make it available directly to their users. Thus the
system could take off rapidly, once it gains a foothold.

The "inconvenience" of running an encryption utility can be
hidden by the "convenience" of having a system that automatically
keeps track of ALL the user's software installation and backups,
superior to the cataloging utilities available now and with the
advantage of automatic enforcement of use. It can also be
integrated with concepts like Megalist, for automatic tracking of
what's available and where to get it.

Keeping track is ESSENTIAL for virus killing, to recover from
having trusted irresponsible recommendations by restoring
original uninfected software, and to be able to track down,
remove, and warn others about, the signatures that caused the
problem. It has to be done automatically, or it won't be done
properly.

However this can be implemented later, after basic FidoNet
software has been protected by a preliminary system. Initially
FidoNet utilities could just be protected with publication of the
PUBLIC.KEY files of their authors (eg in FidoNews, or shorter
hashes as nodelist flags), without the full mechanism for
exchanging recommendations and warnings and keeping track etc.


4. POSSIBLE PROBLEMS

4.1 Legal Hassles

The US National Security Agency has a policy opposed to the
widespread use of secure encryption techniques and has classified
some commercial public key encryption packages such as
Cryptmaster and PKcrypt as "weapons" subject to munitions export
controls. However this does NOT apply to publicly available
information including shareware and public domain software
available for download from BBSes (although some people have been
bluffed into believing it might).

Under the US Export Administration Regulations there is a General
Licence "GTDA" (part 379.3) which covers all such publicly
available technical data and is NOT overridden by the munitions
regulations (logical when you think about it - the US Government
is not so silly as to even TRY to prohibit export of publicly
available data!). Full details for anyone interested are
contained in a USEnet discussion as file GTDA.ARC (10K) available
for file request from 1:221/162.

Books containing detailed algorithms are readily available and
public domain or shareware source code would be in the same
category. (Some has already been released through BBSes and
USEnet).

I would recommend the following books for a thorough professional
understanding of secure cryptographic techniques:

Alan G. Konheim, "Cryptography: A Primer", John Wiley & Sons, New
York, 1981.

Carl H. Meyer and Stephen M. Matyas, "Cryptography: A new
dimension in computer data security", John Wiley & Sons, New
York, 1982.


4.2 Developing Encryption Software

Development of a standard public key encryption utility that can
be used widely involves no significant technical problems at all.

It's true that software with even the best key generation
algorithms runs extremely slowly, but each software author or
publisher only needs to generate their keys once and end users
don't need to do it at all (although there are extra benefits if
they do).

Actual encryption and decryption operates at quite acceptable
speeds with existing commercial packages and can no doubt be
further improved with the superior programming resources
available within FidoNet.

For our purposes a high speed "hybrid" implementation could be
quite acceptable, at least initially. This would use relatively
slow public key encryption to authenticate only a short hash of
the attached software, using a cryptographically secure hashing
method. The actual software need not be encrypted at all, but
just hashed with a more complex algorithm than the usual CRC and
producing at least 10 bytes of output. (That could also keep NSA
happy).
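The hybrid scheme is easy to picture. In this sketch (modern Python;
the toy RSA modulus and the choice of hash are illustrative
assumptions, far too small for real use) only a short cryptographic
hash of the package is put through the slow public-key operation,
while the software itself stays unencrypted:

```python
# Hash-then-sign "hybrid" sketch -- toy key sizes, illustration only.
import hashlib

p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))    # signer's secret exponent

def sign_package(data: bytes) -> int:
    """Slow public-key step runs on a short hash, not the whole file."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(digest, d, n)

def verify_package(data: bytes, sig: int) -> bool:
    """Anyone can re-hash the file and check with the public key."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(sig, e, n) == digest

software = b"the entire ARC file, distributed unencrypted"
sig = sign_package(software)
assert verify_package(software, sig)
assert not verify_package(software + b"infected!", sig)
```

Any change to the file changes its hash, so the old signature no
longer verifies, which is exactly the "signed hash inside the ARC
file" transition described above.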

A smooth transition could be achieved by using a normal ARC file,
which can just be used to unARC the software directly or ALSO
used to check a file within it that contains the "signed" hash of
the rest of the ARC file. In the long run though, once things get
really bad, it would be better to force users to actually run the
software provided for authentication, by actually encrypting
entire files. The transitional system would be useful for secure
distribution of a better system released later.

I am writing a draft proposal for the FidoNet Technical Standards
Committee suggesting standards to ensure utilities developed will
be secure, compatible, fast, widely adopted, suitable for porting
to all common computers and suitable for other FidoNet uses.
(Other uses with further software development include: private
and authenticated Email; a convenient, decentralized and secure
means for exchanging nodelist information and session passwords
between nodes; and Email voting systems.)


4.3 Is Public Key Encryption Secure?

Most digital signature techniques rely on the computational
difficulty of factorizing large composite numbers. This problem
has defied mathematicians for some 200 years, but it has not been
proved cryptographically secure or even "NP-complete" (a measure
of computational complexity which does NOT prove cryptographic
security). There is some indication that these methods CAN
eventually be cracked, or MAY have already been cracked with the
results classified.
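The difficulty being relied on can be felt even with a naive
trial-division sketch (modern Python, illustrative only). The toy
modulus from the earlier examples falls instantly, but the work grows
with the square root of n, so real moduli of hundreds of digits are
far out of reach of this approach:

```python
# Naive trial-division factoring -- fine for toy numbers, hopeless
# for real key sizes.
def factor(n: int) -> int:
    """Return the smallest nontrivial factor of n, or n if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

assert factor(3233) == 53        # the toy modulus: 3233 = 53 * 61
assert factor(13) == 13          # primes come back unchanged
```

Better algorithms than trial division exist, of course, which is why
the article is careful to say the problem is unproven rather than
unbreakable.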

Fortunately this problem need not concern us unduly as it is
unlikely that a major breakthrough in higher mathematics will
first come to light as a result of its discovery by someone
warped enough to want to launch destructive viruses! (Not that
some mathematicians aren't pretty warped, but they'd probably
prefer to take the public kudos for announcing the solution!)

If new developments make any particular digital signature system
insecure, alternatives are available and can be implemented
quickly. (Unlike virus detection programs, which simply won't
work AT ALL against the next generation of viruses.) Standards
for file headers etc should provide for later upgrades.
The main thing is to have the machinery in place for when it's
needed, and improve it later.


4.4 Developing End User Software

Some pretty neat software will need to be written quickly for end
users' automatic key databases, tracking, etc. It has to end up
being a lot more professional and "user friendly" than most
public domain and shareware software and provide lots of extra
benefits like software cataloging, to gain wide acceptance BEFORE
a disaster hits.

That's why I wrote this article. Now who's going to do the hard
stuff?

Oh well, there it is. Sorry about the length, but if nobody pays
any attention, guess who'll be saying "I told you so".

 ------------ that's all folks (Len Levine) --------------------

--------------------

Date:         Thu, 30 Jun 88 19:35:04 EDT
Reply-To:     Malcolm Ray <malcolm@JVAX.CLP.AC.UK>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
Comments:     Warning -- original Sender: tag was malcolm@JVAX.CLP.AC.UK
From:         MALCOLM@JVAX.CLP.AC.UK
Subject:      RE: OS/2 and virii

> OS/2, per se, should not make the problem better or worse.  However,
> hardware, such as the 80386, that provides process-to-process isolation,
> creates the potential for implementing containment for executing untrusted
> code.  If you think back, much of the discussion in this forum has dealt
> with the difficulty of doing this employing first generation PC hardware.

Is that strictly true?  I know next to nothing about either the 386 or OS/2,
but it seems to me that any system which boots from a disk remains vulnerable.
In other words, the hardware memory management of the new processors will
defeat any attempt at cross-process infection by viruses of the (in DOS terms)
.COM/.EXE-hidden kind, but boot-infectors will still have the run of the
machine.

> William Hugh Murray, Fellow, Information System Security, Ernst & Whinney
> 2000 National City Center Cleveland, Ohio 44114
> 21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840

Regards, Malcolm

- ----------------------------------------------------------------------
Malcolm Ray            JANET:    malcolm@uk.ac.clp.jvax
Senior Systems Officer        BitNet:    malcolm@jvax.clp.ac.uk
City of London Polytechnic    No other routes please!

We ought not to be over-anxious to encourage innovation in cases of doubtful
improvement, for an old system must ever have two advantages over a new one:
it is established, and it is understood. -- Charles Caleb Colton (1825)

--------------------

Date:         Thu, 30 Jun 88 18:48:55 EDT
Reply-To:     Virus Discussion List <VIRUS-L@LEHIIBM1>
Sender:       Virus Discussion List <VIRUS-L@LEHIIBM1>
From:         me! Jefferson Ogata <OGATA@UMDD>
Subject:      do you believe in magic?

>From:         WHMurray@DOCKMASTER.ARPA

>Even if you knew the environment, how would you test to insure that the
>virus behaved as you expected?  You can test its behavior in the target
>execution environment, but how do you test its behavior in the target
>population.

>It is hubris, and it is exactly this kind of hubris that concerns
>people.  What we hear is someone saying that he can predict the behavior
>of an environment dependent entity in an environment that our experience
>tells us he that he cannot understand perfectly.

God forbid that  anyone  should  actually  write  an  operating  system  or
compiler.  Just  think  of  the damage that would cause to everyone's data.
It's THIS kind of unjustified fear (superstition)  that  prevents  progress
along  so  many  different  paths.  A  virus is a PROGRAM. That program has
certain characteristics that one can use to one's advantage. It seems to me
that a number of people out there are terrified by one word: 'virus'.  This
fear  of  viruses prevents them from even considering the possibilities for
code using viruses.

The simple truth is that viruses ARE controlled entities. They do what they
are supposed to do when they are properly written. The  COMMAND.COM  virus,
for  example,  infects  COMMAND.COM.  It performs a deterministic action on
COMMAND.COM, then trashes the disk. It doesn't infect  other  environments,
only  those  running  under DOS with COMMAND.COM. It would be difficult, in
fact, to write viruses with potential for infecting multiple  environments.
If  we're  following  the  analogy of biological viruses, consider how they
work, which is quite  similar  to  computer  viruses.  A  biological  virus
consists  of  a  head  and  some  legs. The virus has a key which matches a
particular site on a cell, called the active site. Viruses can only  infect
those  cells  that  have  an active site on them corresponding to the virus
key. This site is the location where the virus DNA  material  is  injected.
Because  of  this,  biological  viruses  are well-suited to the fighting of
cancer. If a virus can be tailored to find an active site that exists  only
on  cancer  cells,  it  will destroy cancer cells throughout the body. When
there are no more cancer cells, the virus will die. If biological computers
can be added to these viruses, they  can  be  programmed  to  die  after  n
generations,  thus  preventing  the  possibility of spread to other animals
whose cells might carry the same active site.

Of course, if we continue to fear  biological  viruses  and  gene-altering,
we'll  never design such viruses. Here's another possible beneficial virus:
a CRC-checking virus that infects a program, computes its  CRC,  and  waits
until  the  CRC changes, thereby detecting infection by OTHER viruses. When
it detects a virus, it warns the user and destroys itself.  Thus  the  virus
would  be  easily  removed  from  programs if necessary; just write another
virus that infects  programs  and  destroys  itself.  It  will  infect  the
program,   the   CRC-checker   will   complain   and   die,  and  then  the
CRC-checker-deleter will die after infecting one (?) program.
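Setting aside the self-replication, the detection idea in that
paragraph is just change detection. A minimal sketch in modern Python,
with zlib.crc32 standing in for whatever checksum the hypothetical
CRC-checking virus would carry:

```python
# Change detection via CRC -- the non-viral core of the idea above.
import zlib

def crc(data: bytes) -> int:
    """32-bit CRC of a program image."""
    return zlib.crc32(data) & 0xFFFFFFFF

program = b"original executable bytes"
baseline = crc(program)            # snapshot taken at install time

infected = program + b"viral payload"
assert crc(program) == baseline    # an untouched file still matches
assert crc(infected) != baseline   # any change to the bytes is flagged
```

A plain CRC can be deliberately forged, which is why the article's
hybrid scheme earlier calls for a cryptographically secure hash rather
than "the usual CRC" when a signature depends on it.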

The point is that there  are  many  possibilities  for  beneficial  use  of
computer  viruses.  It is ridiculous to fear them BECAUSE they are viruses.
That, for want of a better term, is program racism.

- Jeff

--------------------

*** end of Virus-L issue ***
