

What if smart people wrote computer viruses?

(not accepting new messages)
^     All messages            5-20 of 20  1-4 >>
  Messages 18-20 deleted by author between 02-27-2004 01:49 PM and 04-11-2008 10:07 AM
john mandisa nkoana
07:59 AM ET (US)
Deleted by topic administrator 03-05-2001 07:27 PM
D. R. Arthur
02:04 PM ET (US)

"The virii of today will be the operating systems of tomorrow!" -- D. R. Arthur, circa 1988.

The real question is to determine the character of these virii. If they are bad, extermination seems the wiser path; but if clever people wrote virii, they would in turn create good ones.

Examples include:

1. Flagging bad code segments in your operating system that could crash, either notifying you that they exist or creating jump code on demand to route around the bad stuff, i.e. code morphing in the current tech vernacular.

2. Testing mass storage media and moving or compressing data for block-size optimization. This kind of controlled virus would most likely be used inside Oracle or some other closed DBMS environment to enhance performance.

3. Retrovirii code would do much more sophisticated things to segments of instructions for performance, something I have worked on theoretically since the 1980s.

Many parts of life have a stem rooted in virii.

Edited 11-10-2000 02:06 PM
08:05 AM ET (US)
What if smart people wrote computer viruses?
Spyware is a sort of virus, and it is made by smart people with James Bond's Q in mind.
Keith Dawson
05:34 PM ET (US)
A belated comment for **Chris Adams** re: #4: See The View from Softpro [1] in the last TBTF -- the uptick in OpenBSD may already have begun.

[1] http://tbtf.com/archive/2000-07-20.html#s09
Edited 08-10-2000 05:36 PM
Chirag Patnaik
01:53 PM ET (US)
Regarding the reference to "The Mote in God's Eye" in No. 8: the main reason the ship died was that the people were not aware of the danger and refused (for reasons not relevant here) to call in the people who were aware of it. The same analogy can be stretched to the real world: people may not realise the danger, and even on realising it they may refuse to call in the experts.
Bill Cheswick
12:43 PM ET (US)
The Internet security field has always been an arms race, and I think it is safe to say that theory has nearly always led practice. Sniffing was alluded to in the original Ethernet design paper, for example. Various friends and I have sat in hot tubs speculating when various Bad Things would appear in the wild, generally many years before they did. (Security people are paid to think bad thoughts.) No, I am not willing to supply you with a list, certainly absent a hot tub and a level of personal trust.

A number of the ideas they are kicking around have been around for a while. Tom Duff wrote a shell virus over ten years ago.

Meaner replicating software (viruses/worms) is certainly on the way, but I think the statement:

   If such a worm were competently developed and released
   into the world, the fate of the Internet would be in
   the hands of those who controlled it.

is way too strong. Here is why:

1) Exponential growth is very hard to control, and slow, stealthy replication is very difficult. There are lessons in telomeres and cancer, myxomatosis, Telescript, and the Morris worm. Robert Morris knew this, and got it wrong. I am not convinced that his worm would have remained undetected long even without the replication bug.
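Bill's first point can be illustrated with a back-of-the-envelope sketch of my own (the replication numbers are made up for illustration): exponential spread forces a brutal choice between speed and stealth.

```python
# Toy illustration (not from the thread): why exponential spread is hard
# to keep stealthy. Even a modest replication factor makes the infected
# population -- and hence the network noise -- explode within a few cycles.
import math

def generations_to_reach(targets, fanout):
    """Generations needed for one infected host, each infecting `fanout`
    new hosts per generation, to reach `targets` total infections."""
    return math.ceil(math.log(targets) / math.log(1 + fanout))

# With each host infecting just 2 others per cycle, a million hosts fall
# in about 13 cycles. Stealth demands replication so slow that the worm
# takes months to spread -- ample time for one noisy copy to be noticed.
print(generations_to_reach(1_000_000, 2))   # 13
```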

2) One of the Internet's great strengths is that there are teams of experts always ready to tackle a difficult problem. Even back in 1988, the worm was decoded and analyzed by at least three different groups. The SYN packet attacks at Panix in 1996(?) caused the creation of an instant mailing list of industry greybeards that sifted through many solutions and chose and quickly installed good mitigation algorithms. The recent DDOS attacks have caused a similar response, though it is unclear how well some of these new mitigations will work or be deployed.

A truly nasty worm will get similar treatment.

3) The proposed worms have a number of Achilles' heels. Several were mentioned by other off-liners here. Stealth spread will be impeded by the mix of attacked systems. There are solutions to polymorphic viruses.


All this said, I have no doubt that we will see uglier portable programs. There are a number of easy, widely-connected spaces available for propagation. Thanks to exponential growth, the global cost of the cleanup will be huge.
Chris Adams
11:56 AM ET (US)
I agree in general about the damage/spread relationship with viruses, but I think there's a simpler explanation for NewLove: it happened right after LoveLetter got all of that attention, which is when people were thinking about security. The problem is that such attention will almost always drop off after the initial scare is over. LoveLetter was called a stupidity virus because the recipient had to run it; while users stopped running things right after the LoveLetter media-fest, that vigilance started to decline the instant things died down. I'm sure that within a few months you'd be able to get a large number of people to run such a thing again.

As far as the cry-wolf and limited security resources problems go, those are very real and very dangerous. Security needs a lot more attention than it gets, particularly as it should be implemented from the beginning, not grafted in much later if someone has the time (e.g. Microsoft style). Many companies do not devote enough resources to security but I have a feeling that will change as the security community starts to remind shareholders how easily such expensive damage could have been prevented. A real threat cannot be ignored forever.

I like the OpenBSD approach of taking security very seriously, because in addition to defanging theoretical monster viruses it will also stop the very real threats like the legions of script kiddies, the smaller population of attackers who know what they're doing, industrial espionage types and even limit damage from internal threats (e.g. stupidity or disgruntled employees). Warning about uberviruses would be crying wolf at this point but the other threats are very real.
David M. Chess
11:03 AM ET (US)
It's a relatively well-known rule of thumb in the anti-virus field (such as it is) that the most damaging viruses aren't the ones that spread most successfully. The "NewLove" virus, for instance, was basically an instant-destruction version of the "LoveLetter" virus; but "NewLove" went nowhere, whereas "LoveLetter" was briefly widespread. A disease that instantly kills its host never gets much of a chance to spread.

On the other hand, a virus that spread "benignly" for a long time, and only after that long time launched its payload, would most likely be detected and wiped out before it got to the "launch payload" day. So there's a hard tradeoff for the bad guys.
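The tradeoff Chess describes can be sketched with a toy expected-value model (my own illustration; the parameters are arbitrary): hosts that are destroyed stop spreading, so lethality caps total spread.

```python
# Toy model (not from the thread): hosts that are "killed" (wiped or
# disconnected) immediately stop spreading, so higher lethality means
# fewer total infections over the same number of cycles.
def total_infected(steps, fanout, kill_prob):
    """Discrete-generation spread: each live infected host infects
    `fanout` new hosts, then dies with probability `kill_prob`
    (treated as a deterministic fraction, for a simple expected-value model)."""
    live, total = 1.0, 1.0
    for _ in range(steps):
        new = live * fanout
        total += new
        live = (live + new) * (1 - kill_prob)
    return total

# An instantly destructive virus (kill_prob=1.0) reaches far fewer hosts
# than a benign one (kill_prob=0.0) over the same ten cycles.
print(total_infected(10, 2, 0.0) > total_infected(10, 2, 1.0))  # True
```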

Certainly most of the viruses out there now are inelegant and semi-functional hacks, and Someone Who Really Knew What He Was Doing could cause much more trouble than the typical virus causes (the typical virus, in fact, never infects anyone, and just gets traded from K001 HAX0R D00D to K00l HAX0R D00D without ever infecting an actual user).

On the other hand, blithe claims that an "ubervirus" could magically evade detection because (to quote the HNN story) it "can be coded in different forms, so that there will be hundreds of different code signatures", have no teeth: anti-virus programs have been dealing with polymorphic viruses that have *millions* of different forms for years, and doing it successfully. Saying that simple polymorphism "will make it difficult for anti virus vendors to develop a program that will search for code signatures" just shows a lack of knowledge about what anti-virus programs already do.
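A grossly simplified sketch of why simple polymorphism doesn't defeat scanners (my own toy model, not how any real AV engine is implemented): each copy is re-encrypted with a fresh key, so the raw bytes differ, but emulating the decryptor recovers an invariant body that can be signatured.

```python
# Toy model of polymorphism (assumption: a one-byte XOR "encryptor",
# vastly simpler than real polymorphic engines). Raw byte signatures
# differ between copies, but running the decryptor -- which is what AV
# emulators do -- recovers an invariant body that CAN be signatured.
import os

BODY = b"toy-virus-body"

def polymorphic_copy():
    """Each 'copy' re-encrypts the body with a fresh one-byte key."""
    key = os.urandom(1)[0] or 1            # fresh "mutation" per copy
    return key, bytes(b ^ key for b in BODY)

def emulate_and_normalize(copy):
    """What an AV emulator does: run the decryptor, recover the body."""
    key, enc = copy
    return bytes(b ^ key for b in enc)

# Two copies usually share no raw-byte signature, yet after emulated
# decryption every copy reduces to the same invariant form.
print(emulate_and_normalize(polymorphic_copy()) == BODY)  # True
```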

While in theory it's possible to make a virus that can't be perfectly detected, in practice no virus has so far proven to be a major problem for the anti-virus companies to detect well enough, once they had a sample in hand.

We have never seen a virus that was discovered only after having spread undetected onto a large number of systems for a long period of time. Now it's possible that there are two distinct kinds of viruses in the world: the quickly-detected ones that we've found, and the uberstealthy viruses that spread and hide so well that we've never found even a single one. But that seems real unlikely to me! If there were "stealthy" viruses out there, it seems to me we would have found at least *one*, and then discovered that Yow it's on thousands of systems and has been for months. But that's never happened.

On the one hand it's certainly true that a really well-written virus could cause lots of trouble. On the other hand, it's easy to overestimate that trouble by making vague claims about uberviruses that could do magical things (like automatically discovering and exploiting new vulnerabilities, or uploading themselves to read-only FTP servers) that we don't know how to make ordinary application software do, let alone a virus. Remember there's nothing magical about a virus: it's just a program that spreads. If it's hard to imagine how a program could reliably do X, it's unlikely that a virus that reliably does X is going to be along anytime soon.

Sure, a virus could periodically poll a set of websites for instructions. There have in fact been at least a couple of viruses that did that! In both cases, the websites were found and taken down in short order. This is the sort of thing that governments and ISPs understand pretty well, after all. If instead the websites had to be replaced with "remove yourself without destroying any host data" instructions, that'd be quite doable as well. Again, there's no magic here.

So while uberviruses sound scary, and stories about them might in the short run scare some people into more properly securing their systems, I'm afraid of a cry-wolf effect: if we say "systems should be made more secure because uberviruses are going to be launched at any moment", and then no uberviruses are launched, people will conclude that they don't need to make their systems more secure. Better, I think, to point out that systems need to be made more secure because of *ordinary* viruses, because of script k1dd13z trolling for vulnerabilities, and because of the real possibility of targeted industrial espionage. Better to goad people into action using threats that we know really exist, rather than trying to do it with far-out speculation that might turn out to be false.

This isn't to say that there aren't nightmare scenarios that security and anti-virus folks worry about and try not to mention in public! *8) There certainly are. If there are clever things we could do to our systems that would blunt attacks like "samhain" and Mr. Temmingh's virus, it'd be good to do those things. But I'd give priority, myself, to first fixing the dozens of security problems on the typical system that can be exploited *without* an ubervirus. Given that we have limited resources, it seems logical to worry about the existing threats first...
Keith Dawson
10:19 AM ET (US)
**Ted** -- thanks, fixed that date error at [1] and noted at [2]. Doh, I knew that.

**John** -- the benign / invisible virus is an interesting meme. I'm getting up a book order at Powell's, and may add "Computer One" to it.

As for the beneficial virus / symbiont: they can run amok too, just as the purely evil virii can. The best example in fiction I know of is Niven and Pournelle's "The Mote in God's Eye." The Watchmakers were like elves -- spacefarers would leave things outside their door while sleeping and in the morning would find them "magically" improved. Unfortunately the Watchmakers multiplied like crazy and if left unchecked could destroy the most disciplined and battle-hardened spaceship. The solution: periodically suit everyone up and open all the airlocks (before this crop of Watchmakers had gotten sufficiently advanced to develop little bitty spacesuits...).

I heard that at MIT in the early days of Project Athena, the hacker undergrads who were given some privs on the network were referred to as Watchmakers. Guess it was graduation that effected the opening of the airlocks...

[1] http://tbtf.com/archive/2000-07-20.html#s07
[2] http://tbtf.com/emendations.html
John Carlyle-Clarke
09:12 AM ET (US)
In the book "Computer One" by Warwick Collins, an interesting idea is floated (not sure if it has appeared elsewhere). It is that if you viewed virii as subject to "natural" selection or an analogous process, the most successful virii should be those that can exist and replicate efficiently whilst being almost undetectable. This would mean virii that did *nothing* except exist and replicate, hid well, used minimum resources, and generally did not impact a host system at all.

This produces the interesting result that the more you accept this is true, and likely to have happened, the less likely it is we would know it. In this novel, he postulates that there are many such entities going undetected in systems and mutating/reproducing constantly! An interesting idea, I thought.
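Collins's selection argument can be caricatured in a few lines of simulation (entirely my own toy, with made-up parameters): give each replicator an "impact" trait, remove the noticeable ones, and watch the population drift toward invisibility.

```python
# Toy selection model (my illustration of the idea, not from the book):
# each replicator has an "impact" trait in [0, 1); the probability of
# being detected and removed each generation equals its impact, and
# survivors refill the niche with slightly mutated offspring.
import random

def evolve(generations=100, pop=200, seed=1):
    random.seed(seed)
    population = [random.random() for _ in range(pop)]
    initial_mean = sum(population) / pop
    for _ in range(generations):
        # noticeable replicators get detected and wiped
        survivors = [x for x in population if random.random() > x]
        if not survivors:                    # pathological edge case
            survivors = [min(population)]
        # survivors clone themselves with small mutations
        while len(survivors) < pop:
            parent = random.choice(survivors)
            child = min(1.0, max(0.0, parent + random.gauss(0, 0.02)))
            survivors.append(child)
        population = survivors
    return initial_mean, sum(population) / pop

# Mean impact collapses: the "fittest" virus is the one that does
# nothing visible at all -- which is exactly why we'd never notice it.
initial, final = evolve()
print(final < initial)  # True
```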

You could extend this by analogy with the bacteria that live in the human gut, and say that the most successful virus would be one that makes its host healthier and provides a useful service (e.g. cleaning the registry, fixing MS bugs, optimising networking).

The scenarios of intelligent agents/genetic algorithms running amok have been well covered, but I thought this was an interesting take on the idea that could be relevant to this thread.
Ted Anderson
07:33 AM ET (US)
**Keith**, my main reason for posting is to note a date error in the original story: the Morris worm struck in 1988, not 1998.

And while I'm at it: in (3), **Darby** said "with a bomb, you get to race against the force of gravity". Weapon designers have noticed this regrettable shortcoming and taken countermeasures; for example, bomb dispensers typically throw smaller bombs out of planes at truly staggering speeds. Also, of course, "smart bombs" have active propulsion.
Matt Stiles
04:56 AM ET (US)
Wouldn't it be staggering if various governments didn't already have sophisticated plans for doing this, including code written and sites established ready to roll? Surely it would be the first thing any e-warfare group would think up....

Chris Adams
03:18 AM ET (US)
If something like this ever becomes real, I predict a large boost in the OpenBSD userbase. Pro-active security and defense-in-depth are fast becoming requirements for anyone doing serious work on the Internet.

The problem is that most people on the Internet act as if they are in a quiet Midwestern town; in reality, it's more like living in Harlem. Attacks are a question of when, not if.

Why do so many people act as if it's sufficient to secure a system against casual threats? In many cases, it's simply because securing most systems is a huge amount of work. A single Windows NT box will take at least a solid day to set up in a secure fashion (and that's just shutting down known risks; all bets are off if another security bug is found), more if you're using it for more than just a basic webserver. That's because the basic approach vendors like Microsoft (and, before the Linux kiddies get uppity, Red Hat) take is to default to an open system and require work to secure it. Ever see how many ways you could break into the typical developer box? Yes, it's easier to get stuff working when you disable all of the security. Connecting that box to the Internet, however, is equivalent to strapping someone into bondage gear, leaving them in the shower in a maximum-security prison and hoping nothing happens after the guards leave.

A much safer, easier and more effective approach with Internet connected boxes (kinda common these days) is to default to a secure state which requires you to explicitly create a security risk. Besides confirming that Microsoft still doesn't even understand basic computer security, the recent email viruses also demonstrate why the old "tons of vulnerable systems behind a [hopefully] secure firewall" approach is, shall we say, somewhat suboptimal.

Preventing uberviruses like the one described is straightforward if you treat security as a mandatory requirement from day one:

If there's no way for the virus to get in the box remotely, it has to be run by a local user, which significantly reduces both the rate of infection and the number of infected systems.

If the box is properly configured, a normal user can't infect the system, which significantly reduces both the rate of infection and the number of infected systems.

If virus-like activities are prevented (e.g. using various programs and system tweaks to defang many buffer overflows), it's harder for a local user to infect even their own account. Even something as simple as disabling the Windows Scripting Host's associations (WSH is very rarely, if ever, used by anything other than a virus) cuts down on the risk by an absurd factor.
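The default-deny stance Chris describes can be sketched as a configuration policy (my own illustration; the feature names are invented): everything risky is off until an administrator explicitly turns it on, leaving an audit trail.

```python
# Sketch of the "default to a secure state" principle (my illustration,
# not any real product's config system): every risky feature is off
# unless explicitly enabled, so a forgotten setting fails safe instead
# of failing open.
RISKY_FEATURES = {"scripting_host", "remote_admin", "macro_execution"}

def build_config(explicitly_enabled=()):
    """Default-deny: a feature is on only if it was deliberately listed;
    asking for an unknown feature is an error, not a silent no-op."""
    unknown = set(explicitly_enabled) - RISKY_FEATURES
    if unknown:
        raise ValueError(f"unknown features: {unknown}")
    return {f: (f in explicitly_enabled) for f in RISKY_FEATURES}

# A freshly installed box exposes nothing...
print(any(build_config().values()))                    # False
# ...and enabling remote admin is a conscious, auditable act.
print(build_config(["remote_admin"])["remote_admin"])  # True
```

This is the inverse of the Microsoft-style default Chris criticizes, where the administrator must find and close each hole after the fact.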

If security isn't treated like an afterthought and is instead treated as a fact of life, users are going to avoid at least the riskiest behaviour. It's amazing how many of my users suddenly decided they could stop running attached programs when company policy was changed to bill the user for the cost of cleaning up an infection. Unreasonable? I've yet to see a company that would turn off the burglar alarm when a new employee finds it difficult to remember a password, but that's almost exactly how many of them treat computer security.

(The reason I mention OpenBSD is just that it has such a good reputation in this regard. You could probably safely provide a public shell server using a default OpenBSD install. That's because the OpenBSD team members perform security work publicly for peer review, do things like code audits before bugs are found, and take security seriously. It's been 3 years since the last remote exploit...)
Edited 07-21-2000 03:19 AM