Re: [iwar] Worm Ideas

From: e.r. (fastflyer28@yahoo.com)
Date: 2001-08-17 09:40:23


From: "e.r." <fastflyer28@yahoo.com>
To: iwar@yahoogroups.com
In-Reply-To: <9lj0la+37hb@eGroups.com>
Date: Fri, 17 Aug 2001 09:40:23 -0700 (PDT)
Subject: Re: [iwar] Worm Ideas

I was right.  Smart guy you are.  I have just a few minor suggestions to
get this past the entire committee.  Cool stuff.  As they said in Star
Wars, "Use the Force, young Ellis"!  Call me.
--- ellisd@cs.ucsb.edu wrote:
> The following two sites have proposed worms that I find very 
> interesting.  I still think the worms that are proposed here are 
> somewhat limited in scope.  The text for each follows the link.
> 
> http://www.cs.berkeley.edu/~nweaver/warhol.html
> 
> A Warhol Worm: An Internet plague in 15 minutes!
> by Nicholas C Weaver (nweaver@cs.berkeley.edu)
> 
> The following is an analysis of a worst-case virulence for a computer
> worm, using existing mechanisms and a modified infection strategy.
> Such a "Warhol Worm" could infect every vulnerable machine in a 15
> minute time period, outpacing human defense. It is important to
> understand the possible threat, in order to develop better defenses.
> 
> "In the future, everybody will have 15 minutes of fame"
> -Andy Warhol
> 
> 
> 
> The recent outbreak of the Code Red active worm has demonstrated how 
> vulnerable our infrastructure is. But the worm could have been a 
> thousand times worse. It could have contained a malicious payload: 
> corrupting data, reflashing BIOSes, and potentially destroying 
> machines. It could have included attacks for different servers, or a 
> secondary email component, to increase its reach. 
> 
> But although it was fast, the 12 hours it took to reach epidemic
> levels still allows for an organized response. But by simply changing
> the infection pattern, it is possible for a malicious programmer to
> build a "Warhol Worm", able to attack all vulnerable machines,
> worldwide, in 15 minutes. A reactive, human defense would fail before
> such an onslaught. It is an important exercise to realize just how
> vulnerable we are.
> 
> An active worm such as Code Red or the original Morris worm takes
> advantage of a security hole in a server. It scans through the
> Internet, looking for machines running that service. Then it tries to
> break into that service. If successful, it infects the target machine
> with another copy of itself. Over a period of several hours, it goes
> from an initial machine to Internet-wide contamination. The difference
> between an active worm and a Warhol worm is minor, just a different
> strategy of choosing hosts to infect.
> 
> There are 4 billion Internet addresses, and let us assume that 1
> million machines are vulnerable to a particular Warhol Worm. Assume
> that the worm is under 100k in size and is multithreaded, allowing it
> to probe and attack many machines simultaneously. Further assume that
> the targeted servers have good network connections, allowing a running
> copy of the worm to scan 100 machines per second, probe and infect 10
> machines per second, and that it takes 1 second to actually transfer
> the worm over to a new target. Finally, assume that a Warhol Worm
> author has planned ahead and quietly constructed a list of 50,000
> computers with good network connections and a running copy of the
> targeted server, of which an unknown 25% are actually infectible by
> the worm.
> 
> The worm starts out on a single machine, with its list of 50,000
> targets. The worm goes through the list in order, and probes and
> infects the machines. When it successfully takes over another machine,
> it divides the list in half, sends half over to the newly infected
> machine and keeps the other half. This quick division of labor allows
> the worm to infect every vulnerable machine on the list in under a
> minute.
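A minimal sketch (not from the article) of the divide-and-conquer arithmetic behind the hitlist phase: halving a 50,000-entry list repeatedly exhausts it in about log2(50,000) ≈ 16 generations, which is why the phase finishes in well under a minute at the assumed probe and infection rates.

```python
# Toy model of the hitlist "divide in half" phase: each successful
# infection splits the remaining target list between parent and child,
# so the number of halving generations is ~log2 of the list size.

def halving_generations(list_size: int) -> int:
    """Count halvings until every sublist holds a single target."""
    generations = 0
    while list_size > 1:
        list_size = (list_size + 1) // 2  # split list with the new child
        generations += 1
    return generations

print(halving_generations(50_000))  # -> 16
```

Since every generation the number of active worms can double, sixteen short generations suffice even with a one-second infection latency per hop.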
> 
> Now roughly 12,000 machines will be infected, and the second stage
> begins. The worm first attempts to infect all the hosts on its subnet,
> before beginning to choose new targets in the general Internet. But
> instead of just picking random machines, or scanning through the net
> in a linear order, the worms use a pseudo-random order.
> 
> A Warhol worm contains a generator for a pseudo-random permutation of
> all 4 billion Internet addresses. Each worm infected during the
> hitlist phase starts at its address and goes through the permutation,
> looking for new targets to infect, while each worm infected during the
> second phase starts at a random location. If it finds another copy of
> itself running, it picks a new, random address and starts from there.
> This will have the worm behave with both the features of a random
> probe (scattering through the net) and a sequential probe (minimizing
> duplication of effort and guaranteeing that the entire Internet will
> be efficiently scanned). Even if the worms did not infect any other
> machines during this phase, they would take about 50-100 hours to scan
> the entire internet at 100 scans/machine/second. If the computers on
> the hit list are chosen for speed and connectivity, this would be
> significantly lower.
> 
> But with 1 million vulnerable machines, any given scan and probe has a
> .025% chance of finding a vulnerable machine. Thus, of the 1.2 million
> scans per second the initial worms send out, 300 will reveal new
> targets. By the second minute after release, the worm will have
> infected a total of 30,000 machines. After the third minute, there
> will be over 70,000 infected machines. It becomes quite obvious that
> complete infection will be achieved within the 15 minute timeframe.
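The growth figures above can be checked with a quick logistic approximation using the article's own assumptions (4 billion addresses, 1 million vulnerable hosts, 100 scans per second per infected machine). The author notes his math is deliberately rough; this continuous model grows a little faster than his minute-by-minute figures, but either way it saturates well inside the 15-minute window.

```python
# Continuous approximation of the worm's spread under the article's
# assumptions. Each infected host issues 100 scans/second; a scan hits
# a not-yet-infected vulnerable host with probability
# (vulnerable - infected) / 4e9.

ADDRESSES = 4_000_000_000
VULNERABLE = 1_000_000
SCAN_RATE = 100            # scans per second per infected machine

def spread(infected: float, seconds: int) -> float:
    for _ in range(seconds):
        susceptible = VULNERABLE - infected
        infected += infected * SCAN_RATE * susceptible / ADDRESSES
    return infected

n = 12_000                 # infected at the end of the hitlist phase
for minute in (1, 2, 3, 15):
    print(f"after minute {minute}: ~{spread(n, 60 * minute):,.0f} infected")
```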
> 
> Normally, once a worm which uses random probes infects about 1/2 of 
> the available hosts, the rate of new infections slows down 
> considerably. A fully coordinated worm, where the tasks of scanning 
> the internet are perfectly divided, will only slow down once every 
> target is infected. The pseudo random/random combination is a 
> compromise, allowing the worm to do a comprehensive scan of the 
> internet without relying on transmitting information between worms, 
> just the ability to check to see if a potential target is already 
> infected. 
> 
> The Internet will slow down during this process, but not as much as
> may be expected. With only a few bytes to scan a target to see if a
> server is running, roughly 5k to infect a target, and 100k to transfer
> on successful infection, the amount of data involved is surprisingly
> small: even at the peak of infection with a million infested machines,
> this represents only a hundred million scans per second worldwide, a
> network load not significantly higher than Code Red employed.
> 
> There is no current defense against a malicious Warhol worm. The
> author of a Warhol worm could easily cause hundreds of billions in
> real damage, through corrupted data, destroyed machines, and lost
> time. By the time the world realizes what is happening, all the damage
> would be done.
> 
> 
> Appendix 1: Justification of assumptions
> 
> 100k worm size: 100 kilobytes of data is a reasonable size for a worm:
> it is sufficient for the infection code and a small but effective
> malicious payload. With some patience and cleverness, the size could
> probably be reduced to 50k or less.
> 
> 100 scans/second: Scanning a single machine to see if it is running
> the vulnerable service requires only about a kilobit of data to be
> sent and received, so this requires only about a 100kbps link for each
> active worm. Since server machines targeted by active worms tend to
> have good network connections, this is easily achievable. Many
> machines should be capable of 1000 scans per second.
> 
> 10 attacks/second: 10 attacks per second requires transferring the
> whole worm over. For servers, 1 megabyte/second of data is a little
> high for some, but many machines do have that level of bandwidth. A
> lower limit of one attack per second would still be sufficient, as it
> would only slow the initial phase of expansion, perhaps to slightly
> more than a minute.
> 
> During the second phase, infection rates are limited by the rate of
> probing, not the rate of attacking.
> 
> 1 second to infect a machine: Taking over and infecting a machine
> requires that the probe be successful, the worm transferred over, and
> the worm to start running. Since the worm is only 100k, this can
> easily be accomplished in a second for machines connected through
> broadband links. Note that a latency of several seconds will not
> significantly slow down the propagation during the permutation
> scanning phase, but only slows down the hitlist phase.
> 
> 1M vulnerable hosts: Code Red I and II seem to have infected somewhere
> between 500,000 and 1,000,000 machines on the net. A smaller number of
> potential hosts would slow down the rate of infection slightly, but
> even with only 500,000 vulnerable machines, the spread would still
> take roughly 15 minutes: it would double every 2 minutes instead of
> every minute, until all vulnerable machines are infected.
> 
> Initial hit list scan: The initial hit-list scan is a slow scan which
> is done in advance, just to determine a set of potentially vulnerable
> machines. In the case of a web server, a simple web-crawler could be
> used; for a different service, another form of scanner. Such scans,
> since they are only used to detect the program running, not the
> existence of the vulnerability, can be done weeks in advance and would
> not attract notice. Much of the information may already be publicly
> available.
> 
> In practice, it would be very hard to detect and trace the scan used
> to create such a hitlist. The Honeynet project has revealed just how
> many scans occur all the time. A slow scan done in advance, just to
> determine the version of a web-server and its response time, would be
> nearly undetectable amid all the noise generated by script kiddies.
> 
> Also, the hitlist can be created long before an actual vulnerability
> is discovered. The attacker simply collects a catalog of machines
> running potentially targeted services, and constructs the worm with
> the exception of the actual exploitation code. Only once an exploit is
> available is the final worm compiled and released, using the premade
> hitlist.
> 
> Pseudo random scanning: The pseudo-random permutation is easy to
> construct: just use a 32 bit block cipher and a key selected in
> advance. To generate the Nth entry in the permutation, just encrypt N.
> To get the index of a given machine K, just decrypt K.
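As an illustration of the encrypt-to-enumerate trick just described, here is a toy 32-bit block cipher built as a 4-round Feistel network. The construction and key are mine, not the article's: it is a genuine permutation of the 32-bit address space, but it has no cryptographic strength.

```python
import hashlib

KEY = b"preselected-worm-key"   # hypothetical key chosen in advance

def _f(half: int, rnd: int) -> int:
    """16-bit round function derived from the key (illustrative only)."""
    digest = hashlib.sha256(KEY + bytes([rnd]) + half.to_bytes(2, "big"))
    return int.from_bytes(digest.digest()[:2], "big")

def encrypt(n: int) -> int:
    """Nth entry of the pseudo-random permutation of the 2^32 addresses."""
    left, right = n >> 16, n & 0xFFFF
    for rnd in range(4):
        left, right = right, left ^ _f(right, rnd)
    return (left << 16) | right

def decrypt(k: int) -> int:
    """Index of address k in the permutation (inverse of encrypt)."""
    left, right = k >> 16, k & 0xFFFF
    for rnd in reversed(range(4)):
        left, right = right ^ _f(left, rnd), left
    return (left << 16) | right

# Every input maps to a unique output, and decrypt inverts encrypt:
assert all(decrypt(encrypt(i)) == i for i in range(1_000))
```

A worm that starts at index decrypt(own_address) and then walks encrypt(i), encrypt(i+1), ... visits every address exactly once, which is exactly the random-looking but complete coverage described above.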
> 
> This does result in some redundancy in scanning, as multiple worms may
> probe a range in the permutation until an already infected machine is
> found. However, the resulting loss in efficiency is comparatively
> minor, even during the worm's rapid expansion. Complete efficiency of
> scanning could be accomplished by having the worms communicate with
> each other. Such communication would make such a worm even more
> virulent, possibly allowing complete infection in 5 minutes.
> 
> Normally, a worm like Code Red has its infection rate start to slow as
> it reaches saturation, as random probing becomes less effective.
> Because this strategy combines the effects of random and comprehensive
> probes, the rate stays higher, longer.
> 
> Coordinated worms, and worms which exploit the structure of the net in
> dividing the workload, would be even faster. The point of this
> exercise, however, is to show that even very simple modifications to
> random infection strategies can produce super virulence. Once you get
> a worm which propagates in under an hour, what difference is there
> between 5 minutes and 15?
> 
> Appendix 2: Possible defenses
> 
> Unfortunately, Warhol worms are very fast, so human-mediated defenses
> fail. A transition to IPv6 would work wonders, since it eliminates
> random probing as a viable means of finding new hosts. Short of that,
> more variation in applications would help: if individual flaws
> affected fewer hosts, the smaller vulnerable population would slow the
> spread of such worms.
> 
> Getting programmers to write their servers in bounds-checked languages
> would help, as this would eliminate roughly half of the security
> flaws. But it isn't a panacea; there is nothing in a Warhol worm which
> requires that the exploit used be a buffer overflow.
> 
> One factor which may slow the spread is the limited rate at which
> routers can process ARP requests. The worm would experience the
> effects of network congestion, which would slow the spread. I have no
> current estimates of how much this will affect things.
> 
> 
> Thanks to Michael Constant for his assistance in helping design the 
> strategies used by such a worm. 
> 
> 
> I am currently writing an abstract simulator to model the virulence of
> an abstract Warhol Worm for various parameters. I AM NOT, AND WILL
> NOT, EVER WRITE SUCH A WORM! "I have no need to destroy things, just
> to prove a point"
> 
> My math is deliberately sloppy when it comes to evaluating the point
> where the geometric growth slows down. But the simulator confirms
> these times. I will be releasing initial code for the simulator in a
> couple of days, as well as data runs, but initial results say that it
> takes roughly 7 minutes and 30 seconds for 1M vulnerable hosts, and 12
> minutes and 45 seconds if there are only 500k potential hosts in the
> network, as expected.
> 
> 
> Why I'm keeping this page up!
> 
> Unfortunately, the results are too simple: any reasonably intelligent
> programmer who thinks about the problem will come up with a similar
> solution, or a coordinated worm which efficiently divides the search
> space. Coordinated worms are even faster but do require a level of
> worm-to-worm communication. The point of this was just to demonstrate
> that coordinated worms are NOT a prerequisite for truly fast
> propagation.
> 
> Secondly, the current worms, taking 12 hours to a day, are still fast
> enough to foil most human reaction: look at the continued spread of
> the Code Red variants, roughly 3 weeks after initial infection.
> Changing this to under an hour would only slightly increase the reach
> in the current environment.
> 
> Finally, I am a personal believer in the doctrine of full disclosure,
> especially when the "bad guys" (skilled, malicious programmers) could
> easily come up with the same results without documentation. It is
> important for the rest of us to consider and evaluate what the
> realistic threat is. Saying nothing would help nobody.
> 
> We have always known, even before the Morris worm, that connected,
> homogeneous computer networks are vulnerable to fast-moving, active
> worms. This, however, is an important result because it is a reminder
> of just HOW vulnerable they are. Human-mediated defenses cannot stop a
> fast active worm.
> 
> I will not remove this page. 
> 
> 
> Copyright 2001 by Nicholas C Weaver, nweaver@cs.berkeley.edu. It may 
> be reproduced only in its entirety and with credit to its author.
> 
> The term "Warhol Worm" is an original term coined by the author. 
> 
> If you mirror this article, please let me know where it is and from 
> where you got it. I am interested in how this article gets spread 
> around. 
> 
> 
> 
> ***************************************************************
> 
> http://www.silicondefense.com/flash/
> 
> 
> 
> Stuart Staniford, Gary Grim, Roelof Jonkman
> Silicon Defense, 8/16/2001
> 
> In a recent very ingenious analysis, Nick Weaver at UC Berkeley
> proposed the possibility of a Warhol Worm that could spread across the
> Internet and infect all vulnerable servers in less than 15 minutes
> (much faster than the hours or days seen in Worm infections to date,
> such as Code Red).
> 
> In this note, we observe that there is a variant of the Warhol
> strategy that could plausibly be used and that could result in all
> vulnerable servers on the Internet being infected in less than thirty
> seconds (possibly significantly less). We refer to this as a Flash
> Worm, or flash infection.
> 
> We have run out of hyperbolic adjectives to describe how seriously
> vulnerable the Internet is to security disruptions, so we won't
> comment further on the social implications of this.
> 
> The recent Code Red worm operated by probing random addresses on the 
> Internet in the CRv2 version. The later Code Red II worm used a 
> mixture of random probing of local subnets, and probing the entire 
> Internet randomly. 
> 
> The Warhol worm would first use a list of 50,000 or so sites to start
> the infection from (this list would be built by scanning in advance),
> and then use a clever co-ordinated scanning technique to scan and
> infect the rest of the Internet. In simulations, Weaver estimated such
> a worm could infect one million vulnerable servers in significantly
> less than 15 minutes, using plausible estimates of the scan rate a
> worm could achieve.
> 
> The nub of our observation is that it is reasonable to think that a 
> determined attacker could have obtained a list of all or almost all 
> servers with the relevant service open to the Internet in advance of 
> the release of the worm. This could be done in a number of ways: 
> 
> Straight scanning from a single site 
> 
> For the well-funded three-letter agency with an OC12 connection to the
> Internet, we believe a scan of the entire Internet address space can
> be conducted in a little less than two hours (we estimate about
> 750,000 syn packets per second can be fit down the 622Mbps of an OC12,
> allowing for ATM/AAL framing of the 40 byte TCP segments; the return
> traffic will be smaller in size than the outbound). Faster links could
> scan even faster.
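The OC12 estimate is easy to reproduce. Assuming each 40-byte SYN occupies two 53-byte ATM cells once LLC/SNAP and AAL5 framing are added (my framing assumption, consistent with the note's ~750,000 packets/second figure):

```python
# Back-of-envelope check of the single-site scan estimate: SYNs per
# second down a 622 Mbps OC12, and the time to cover all 2^32 addresses.

OC12_BPS = 622_000_000
ATM_CELL_BYTES = 53
CELLS_PER_SYN = 2          # 40-byte TCP/IP SYN + framing -> two cells

syns_per_second = OC12_BPS / (CELLS_PER_SYN * ATM_CELL_BYTES * 8)
scan_hours = 2**32 / syns_per_second / 3600

print(f"~{syns_per_second:,.0f} SYNs/second")   # ~733,000
print(f"full scan in ~{scan_hours:.1f} hours")  # ~1.6 hours
```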
> 
> Distributed scanning 
> 
> For the hacker community, it will be more feasible to scan the
> Internet from a platform of a few thousand zombies, similar to what
> they have assembled from time to time for DDOS attacks. Distributed
> scanning has been seen in the wild.
> 
> Stealthy scans 
> 
> Portscans are so common and so widely ignored that even a fast scan of
> the entire Internet would be unlikely to attract law enforcement
> attention or more than mild comment in the incident response
> community. However, for attackers wishing to be especially careful, a
> randomized stealthy scan taking several months would be very unlikely
> to attract much attention, as most intrusion detection systems are not
> currently capable of detecting such scans (but see Spade/Spice).
>                              
> DNS searches 
> 
> Using widely available spam mail lists, a list of domains can be 
> assembled. The DNS can then be searched for the IP addresses of 
> mail-servers (via MX records), or web servers (by looking for 
> www.domain.com). 
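A minimal sketch of this DNS-search step, assuming only a harvested domain list as input. Plain A-record lookups suffice for the www hosts shown here; MX lookups for mail servers would need a resolver library such as dnspython. All names below are illustrative.

```python
import socket

def web_candidates(domains: list[str]) -> list[str]:
    """Turn harvested domains into likely web-server hostnames."""
    return [f"www.{d}" for d in domains]

def resolve_all(hosts: list[str]) -> dict[str, str]:
    """Map each hostname to an IPv4 address, skipping failures."""
    found = {}
    for host in hosts:
        try:
            found[host] = socket.gethostbyname(host)
        except OSError:
            pass  # NXDOMAIN or resolver failure: not a usable target
    return found

hosts = web_candidates(["example.com", "example.org"])
# resolve_all(hosts) performs live lookups when run with network access
```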
>                              
> Spiders 
>                                   
> For web server worms (like Code Red), it would be possible to use
> web-crawling techniques similar to search engines in order to produce
> a list of most Internet-connected web sites (some sites with no links
> from other sites would be missed). This would be unlikely to attract
> serious attention.
> 
> Given that an attacker has the determination and foresight to assemble
> a list of all or most Internet-connected addresses with the relevant
> service open, a worm can spread most efficiently by simply attacking
> addresses on that list. There are about 12 million web servers on the
> Internet (according to Netcraft), so the size of that particular
> address list would be 48MB, uncompressed. The initial copy of the worm
> can be programmed to divide the list into n blocks, and then to find
> and infect the first address in each block (or an especially chosen
> high-bandwidth address in that block), and then hand the child worm
> the list of addresses for that block. That copy of the worm can then
> reiterate the process to infect everything in its block. A threaded
> worm could begin infecting hosts before it had received the full host
> list from its parent to work on, to maximize the parallelization
> process, and it could start work on looking for multiple children in
> parallel.
> 
> A related design would call for most of the address list to be located
> in pre-assigned chunks on one or a number of high-bandwidth servers
> that were well-known to the worm. Each copy of the worm would receive
> an assignment from its parent, and then fetch the address list from
> there.
> 
> This process will result in relatively little wasted effort. For
> example, if the worm had a list of web servers, and a zero-day IIS
> vulnerability, about 26% of the list would be vulnerable. No server
> would be probed twice. If n were 10, then the infection tree for the 3
> million vulnerable servers would be just 7 layers deep.
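The seven-layer figure follows from counting how many levels of a 10-ary tree are needed to hold 3 million nodes, a quick check:

```python
# Depth of an n-ary infection tree needed to cover `total` hosts:
# the root infects n children, each of those infects n more, and so on.

def infection_tree_depth(total: int, n: int = 10) -> int:
    covered, layer, depth = 1, 1, 0
    while covered < total:
        layer *= n          # next layer is n times wider
        covered += layer
        depth += 1
    return depth

print(infection_tree_depth(3_000_000))  # -> 7
```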
> 
> The spread rate of such a worm would likely be constrained by one of
> two things. The worm itself is likely to be small (CRv2 was about 4k,
> and the hypothetical Warhol worm is 100K to allow for more malicious
> payload code). Thus the address list is much larger than the worm
> itself at the start of the list. Thus the propagation of the worm
> could literally be limited by the time required to transmit the host
> list out of the initial infection site or servers where it was stored.
> Since all the children of the infection will have much smaller lists
> to transmit, they are less likely to limit the worm spread (unless a
> first-generation child has less than 1/n of the initial copy's
> bandwidth available to it). The exact time required to transmit the
> list will depend on the bandwidth of the storage sites, and the amount
> of traffic there. For some examples, however, we point out that the
> 48MB of address list could be pushed down an otherwise empty T1 line
> in just over 4 minutes. It could be pushed down an OC12 link in less
> than a second. Thus starting the worm on a high-bandwidth link is
> desirable for the attacker, and bandwidth is probably a concern at the
> next layer also. Compression of the list could make the list delivery
> much faster.
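The T1 and OC12 figures above check out: 12 million addresses at 4 bytes each is 48 MB, and dividing by the line rate gives the quoted times.

```python
# Time to push the 48 MB uncompressed address list down each link.

ADDRESSES = 12_000_000
LIST_BITS = ADDRESSES * 4 * 8      # 4 bytes per IPv4 address

for link, bps in (("T1", 1_544_000), ("OC12", 622_000_000)):
    print(f"{link}: {LIST_BITS / bps:.1f} seconds")
# T1: ~249 seconds (just over 4 minutes); OC12: ~0.6 seconds
```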
> 
> The other possible limitation is simply the latency required to infect
> each new layer in the tree. Given that probes can be issued in
> parallel, and substantially more threads can be spawned than n (the
> number of children), we do not have to add up the time required for a
> given copy to cycle through its list, but simply take the maximum
> infection latency. Weaver hypothesizes 1 second for this latency, but
> with n=10, it perhaps might take a little longer to get 10 copies of
> the worm through a given site's link. However, not much longer - if a
> 5KB worm can get 50% utilization through a 256k DSL uplink, it can
> transmit ten copies of itself in about three seconds. That leads to a
> sub-thirty-second limit on the total infection time (given an
> infection tree seven layers deep and a design where the new worm
> children go to a server for their addresses).
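Making that closing arithmetic explicit (all numbers are the note's assumptions): ten 5 KB worm copies through half of a 256 kbps uplink, repeated over a seven-layer tree.

```python
# Per-layer and total infection latency under the note's assumptions.

WORM_BITS = 5_000 * 8          # 5 KB worm
COPIES = 10                    # n children per infected host
EFFECTIVE_BPS = 256_000 // 2   # 50% utilization of a 256 kbps uplink
LAYERS = 7                     # depth of the infection tree

per_layer = COPIES * WORM_BITS / EFFECTIVE_BPS
total = LAYERS * per_layer
print(f"{per_layer:.2f} s per layer, {total:.0f} s total")  # 3.12 s, 22 s
```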
> 
> In conclusion, we argue that a small worm that begins with a list 
> including all likely vulnerable addresses, and that has initial 
> knowledge of some vulnerable sites with high-bandwidth links, can 
> infect almost all vulnerable servers on the Internet in less than 
> thirty seconds. 
> 
> Acknowledgements: We thank Nick Weaver at UC Berkeley and Vern Paxson
> at the AT&T Center for Internet Research for a very helpful discussion
> of these ideas.
> 
> 
> 





This archive was generated by hypermail 2.1.2 : 2001-09-29 21:08:40 PDT