
TOPIC:

"Reflections on Trusting Trust" (from TBTF for 1999-10-05)

42
Pradeep
11-21-1999
11:20 PM ET (US)
I recently noticed that in The New Hacker's Dictionary, under the term "back door", the same story is mentioned, and ESR notes that at least one late-night download of the corrupted product did take place.
41
David M. Chess
11-18-1999
08:59 AM ET (US)
Karl: cool! I don't suppose you have any sort of reference to the description that you're remembering? I'd love to have a copy, so as to finally feel that I really Know the Answer. This is somehow one of those things where lots of people know that it really happened, but none can remember just where they got the knowledge... *8)
40
Karl Puder
11-17-1999
03:49 PM ET (US)
Re: entries of 10-07-1999 and 10-08-1999. I recall reading an earlier description of this account, and he did, in fact, implement this Trojan horse at one point during UNIX development. That account also described it as being done for research purposes, and, due to the looseness of the project at that time, files (including executables) were copied between development machines all the time. The infected cc escaped from Thompson's machine.

It was discovered only because part of the Trojan horse was implemented as a function (for programming convenience) rather than as in-line code, so when someone else (Ritchie?) was tracking down a bug in {login|cc} (I forget which), this extra name turned up in the symbol table in the listing file. He asked Thompson about it, and they fixed it. (Or did they just in-line that function? ;-)
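
The mechanism Karl describes comes down to two pattern matches buried in the compiler. Here is a minimal sketch of the idea in C; the trigger strings and emit_* helpers are hypothetical stand-ins, not anything from the real UNIX cc (Thompson's paper gives the authoritative pseudo-code):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-ins for real code generation. */
    static void emit_normal_code(const char *src)    { (void)src; puts("compiling normally"); }
    static void emit_login_backdoor(const char *src) { (void)src; puts("inserting password back door"); }
    static void emit_this_trojan(const char *src)    { (void)src; puts("re-inserting both checks"); }

    static void compile(const char *src)
    {
        if (strstr(src, "int login("))          /* source looks like login.c */
            emit_login_backdoor(src);
        else if (strstr(src, "void compile("))  /* source looks like the compiler itself */
            emit_this_trojan(src);
        else
            emit_normal_code(src);
    }

    int main(void)
    {
        compile("int login(const char *user, const char *pw) { /* ... */ }");
        compile("void compile(const char *src) { /* ... */ }");
        compile("int main(void) { return 0; }");
        return 0;
    }

The second check is what makes the hack self-perpetuating: once a compiler binary carries it, both checks can be deleted from the compiler's source and a rebuild still reproduces them. And implementing the checks as a named function, as Karl recounts, is exactly the kind of convenience that leaves a telltale extra symbol in a listing file.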
39
Scott Marlowe
11-03-1999
02:42 AM ET (US)
Another misconception I see a lot here is that people who work on open source are either not doing it for a living, or that they have this "religious" bent about writing OSS and not wanting money for it.

This is simply not the case. Linus Torvalds makes good money working at Transmeta. Part of his employment agreement is that he works on the kernel for part of the day at work.

Linus gets a paycheck, Transmeta gets a kernel AND a kernel developer (you can do some cool stuff on advanced hardware ONLY if you have someone who can make it do said cool stuff.) While they do not "sell" the work Linus produces, they receive a great deal for their money in the contributed labor of the other OSS programmers.

I.e., instead of owning stock in Microsoft and hoping for dividends, Transmeta owns stock in Linux, and they get exactly what they want from the kernel: fast response in fixing bugs and adding features.

Alan Cox works for Red Hat. They give him a paycheck to hack the kernel and give away his code. Is that a fair exchange? Sure. Suppose a Fortune 100 company finds a bug in the 2.0.38 kernel (the last ultra-stable, medium-performance kernel made before the newer 2.2.x series, which still aren't quite 100%...). They can send email STRAIGHT to Alan himself, or go through Red Hat, and have a fix in hours.

What does Red Hat get in exchange? The free work contributed by the other hackers, most of whom are employed to hack the kernel or device drivers or some other part of the code. None of them can take the code and pervert it to put the other guys out of business, and they don't want to.

Some kernel hackers work for Veritas, the company that makes commercial-grade RAID software and some other great packages for Unix.

One works for NASA and writes most of the network card drivers for the kernel.

Let me make my point clear. These are not (well, mostly not) 17-year-old script kiddies. These are professionals who do what they do as part of their job. They DO make a living, albeit in an odd way, by giving away their code. They stay closely coupled across oceans and continents.

No one company can really become Microsoft, but more importantly, none want to. By hiring 1/1000th the staff needed to build and maintain an entire operating system, they receive the whole operating system.

Microsoft's first priority is making money. So it is for the Linux distribution companies like Red Hat, SuSE, and TurboLinux, to name a few. The difference is that the Linux companies make their money primarily by charging up front for support with their boxed sets, and on professional support for larger accounts. A stable OS is a good thing here, because the better it runs, the less pre-paid initial support they have to provide.

For Microsoft, the main income is from sales and paid per-incident support. This means the focus doesn't need to be on great code, since bad code allows you to abandon the older OS and insist that the user upgrade to the new OS for bug fixes. Plus, you get to charge the customer for a support call to tell them that the fix will be in Windows 2000 (4.0, 3.5, etc. ... Office 2000, 97, etc. ...)

I'm not an MS software basher; I use NT at work, and it's a decent desktop OS. But I really don't like Microsoft the company.

They claim that they have a single clearinghouse for all NT problems, but in a recent posting to NTBugtraq regarding a security issue with SCSI drivers, they said:

**** QUOTE ****

Hi All -

We did an investigation of this issue and, while it does reproduce in some cases, it's not a Windows NT issue. The problem lies in the security of the third-party SCSI drivers.

Regards,

Secure@microsoft.com

**** END QUOTE ****

Under Linux, the same amount of response time would have yielded two or three possible patches, with accompanying discussion of which ones were best, and Linus picking one, all in less than a day. Read the kernel mailing list archives sometime to see the responsiveness of the kernel developers to security issues.
38
Scott Marlowe
11-02-1999
11:07 PM ET (US)
Amos, you say you can understand why OSS folks would build in a back door, but see no reason why a commercial outfit would. Huh?

1: OSS is peer-reviewed, with thousands of eyes going over the code. If someone stuck in a back door, it would eventually get found; the CVS system that maintains the code would have a record of who submitted it, and they would be instantly identified and branded a pariah in the OSS community.

2: Suppose MS wrote a back door into Windows 2000; how would you ever know? And if you happened to be competing against them in some field (like instant messaging) and they wanted to steal your source code, and you were running W2K, they could do it quietly, from an anonymous account in a cybercafe, and you'd never be able to catch them.

3: Every change made to the Linux kernel and the EGCS compilers goes out publicly, and folks look it over. If you're trying to sneak in a back door, someone will catch it.

4: If it's easy to sneak a back door into OSS software, it is doubly easy for a commercial entity, with NO peer review, to do so.

5: Read about RealNetworks' RealJukebox lately?

Really, I don't see why a commercial company would be less likely to build or use a back door, and since only a handful of people get to see the source code, they could quite possibly stick it in at the end of the build process, and only, say, Steve Ballmer and Bill Gates would know it was there.

Finally, if you want to see a secure, stable OS, try OpenBSD. At www.attrition.org, it is the one server platform listed as never having been cracked while running a web server.
37
David M. Chess
10-18-1999
11:44 AM ET (US)
There's a significant difference, I think, between the kind of imperfect quality control that caused the Challenger disaster (I have no special knowledge about how much of that was inevitable human fallibility and how much was some sort of irresponsibility), and the kind of intentional designed-in self-perpetuating oddness that Thompson talks about.

If you have good evidence that your systems' quality control is state-of-the-art and so on, you can reassure your customers that it is. If the development process of the people you get the software from has good standards of code review and employee screening and such, you can assure your customers that all reasonable steps have been taken to ensure that no random nasty person has programmed any bad surprises into this particular piece of software. But you can't, in general, assure your customers that your systems are immune to some primordial self-perpetuating Thompson-style hack several software generations back, unless you know something that I don't. *8)

In practice, I think the multi-generation self-perpetuating hack is awfully unlikely. If your software suppliers have good quality control to avoid errors, and good checks to avoid viruses and intentional Trojan horses, you're probably acting responsibly. The main lesson of Thompson's piece, I think, is that we should never forget about the latter...

DC
36
Mark Flynn
10-16-1999
06:37 AM ET (US)
David, you're reading me correctly, and your reply confirms my interpretation of Thompson's message.

My real question stems from a sense of wariness I sometimes feel when opening a new program.

As a consumer (non-programmer) of mission-critical software products, I'm dependent on others (programmers) to supply quality product.

System malfunctions, whether due to human error (accident, incompetence, mischievous or malicious intent) or an "act of God", can cost human lives.

Borrowing from Edward Tufte on the Challenger disaster in his book Visual Explanations, as quoted in Richard Rhodes' Visions of Technology (p. 348): "Like magicians, chartmakers (read: programmers) reveal what they choose to reveal."

Aside from requiring my passengers to sign an acknowledgement that they've been shown a copy of Thompson's essay, how can I assure them that their trust is not subject to the same type of systemic weakness that cost space shuttle passengers their lives?

/mf
35
David M. Chess
10-15-1999
04:01 PM ET (US)
I'm not positive what your questions mean, Mark *8) but I'm guessing that you're suggesting that neither of those pairs trust each other. And probably with good reason.

Ken T's essay is thought-provoking precisely because he points out that every time you run a piece of software that has access to anything that's valuable to you, you *are* in fact trusting anyone who ever worked on it, anyone who ever worked on the compilers and other tools that were used to build it, or the things that were used to build *them*. And so on back into the dawn of time. Sort of like if India realized one day that its arms supplier was a branch of, not the Pakistani government, but, say, the Australian... *8)

DC
34
Mark Flynn
10-14-1999
06:03 AM ET (US)
Non-programmer, referred by my trusted security advisor.

Do lawyers trust each other? Do Microsoft and Justice Dept officials trust each other? Do drivers of cars with exploding gas tanks trust car manufacturers? Do wrecked car owners trust insurance companies? Do White House interns trust the President's definition of sex? Does India trust Pakistan?

The answer, my friends, is above Lincoln's head -
"In God We Trust"

Below that are the two guarantees - taxes and ...

If I have to vote, I prefer optimism, fair maidens and horses that win.
33
Michael Hill
10-13-1999
11:56 AM ET (US)
I first read Thompson's article several years ago. I am a UNIX system administrator with over five years of experience, and a fairly accomplished programmer (over 15 years), so I understood about 95% of what he was saying.

I've been building gcc from source for quite a few years, most frequently on SunOS and Solaris. A couple of people mentioned trusting Sun's cc to build gcc; that applies under SunOS. What really gives me pause, however, is the fact that, in order to build gcc under Solaris, since Solaris doesn't come with a native cc, you have to *use a precompiled gcc binary downloaded from the Net* to bootstrap the build!! Now how's that for potential for abuse? :^)

Still, of course, Thompson's hack depended on knowledge of login's source. IOW, it was very specifically targeted, and the hacked compiler was looking for a particular sequence. Given that Solaris is (at this time) closed source, I daresay the few people with the need/ability to compile login aren't using gcc. :^) I'm not saying that a general target such as "anything that looks like login/passwd/rival-operating-system" isn't possible in theory, but I suspect it would be prohibitively difficult.

As I understand the Thompson thing, too, he put it in as a form of experimentation or "theoretical research"; I doubt that the hacked binary was actually very widely distributed. I personally don't attribute hostile motives to him for this. But (of course) I don't know him personally.
32
Greg Weiss
10-12-1999
12:54 PM ET (US)
Keith, thanks for pointing me to Vernor Vinge's work on this subject. I haven't read him yet; sounds interesting.

As for his fear that the US would pass laws requiring 25% of microprocessor real estate be dedicated for its own use, I'd merely point out that a much less intrusive approach would be more feasible and more likely. It'd never be done through Congress; it'd be done as an executive order for national security purposes and secretly classified. Also, just technically speaking, 25% is way more space than the government would ever need (at least for security-crippling purposes). They could compromise existing designs without incurring any floor-space (real estate) penalty, and could add their own security function with probably 1% extra real estate. (The newer microprocessors' floor space is increasingly dominated by large caches, which take up to 90% of the available real estate, so logic makes up only a small but still-critical fraction of the chip.)

I don't mean to be a conspiracy theorist here; I have no knowledge one way or the other about what is actually going on. But if I were a spook, it'd be my duty to look at and pursue such an approach and a legal justification for it. The real trick would be making the security-disabling microcode modifications non-obvious in such a way that plausible deniability could be retained ("oh, that was a bug; sorry"), and making sure that you planted several such bugs in case one was discovered. Designing and fabbing "truly secure" open source microprocessors requires overcoming many more barriers than writing open source security software.
31
Amos Satterlee
10-10-1999
10:14 PM ET (US)
As a non-programmer, I believe that I get the gist of Thompson's gambit without having a clue about the specific details of implementation.

The difference I see between an open source implementation and a commercial implementation is this: I can understand open source hackers building backdoors for their own individual or group's benefit, and these backdoors *could* be used for nefarious purposes. I don't understand the nefarious ends that a commercial producer would have for an undocumented backdoor.

Before you take me for a complete naif: what I don't understand is the difference between what Thompson describes and the document-tracking code that MS has already built into the Office suite. My comment above, then, pertains specifically to an undocumented backdoor.

The broader issue that Thompson seems to raise is the undocumented capabilities of *any* software. Where there are Easter eggs, there can just as easily be Trojan horses. I mean, W2K has how many gazillion lines of code...
30
Greg Roelofs
10-09-1999
01:46 PM ET (US)
Mike Lonergan's comment about trusting someone who "stands behind their product" is an interesting one. It seems to me that "financial risk" is the only sense in which commercial vendors stand behind their products, and it's an incredibly weak sense for market leaders such as Microsoft in operating systems or office software suites, or Adobe in image-editing software. If these vendors *actually* stood behind their products, their license agreements wouldn't read like the complete abdication of responsibility that they are, and things like UCITA wouldn't exist.

That said, I don't claim that OSS authors accept any more responsibility--in the legal sense--than do commercial software vendors. Hell, they can't afford to; they generally derive no income from their work. But in my experience, they do tend to accept responsibility, in the sense of fixing bugs and addressing security issues, to a far greater extent than most commercial vendors.

Finally, keep in mind that much of the "moral" rhetoric surrounding Open Source software is simply marketing. The fact that the software is often free in no way implies that marketing is irrelevant. On the contrary, the more market share (and mind share) free and Open Source software achieves, the more hardware and software there will be that supports it, and the greater the benefits will be to those who happen to like using it and/or creating it.

Full disclosure: I've been a freeware and Open Source author for 15 years, and code I've written not only ships with most free OSes but also with commercial ones like Solaris and OS/2. I have no problem using and recommending commercial software (e.g., Fireworks) over Open Source software (e.g., the GIMP) when there are clear technical benefits to doing so, and I've even been known to praise Microsoft on occasion (though rarely :-) ). Yes, there is some free software that I truly love--and one special image format--but I try not to be a zealot about any of it.
29
cloister bell
10-08-1999
06:28 PM ET (US)
i was given to understand that thompson had actually done this, and that the hack was discovered by someone who noticed (probably in the course of pursuing some other compiler bug) a block of object code in the compiler that didn't seem to come from any part of the source code. that block was then disassembled and analyzed to discover the trojan.

i fully admit, however, that i can't even come close to finding any sort of authoritative reference for my understanding; this is simply what i remember from when a friend originally described thompson's trick to me several years ago.
28
David M. Chess
10-07-1999
04:36 PM ET (US)
Is the consensus that Ken Thompson did in fact actually implement the back doors that he describes in his paper? I've heard various people talk as though it were true, but I've always suspected (and I've talked to other people who assumed) that he was just describing a hypothetical. Not that it matters a whole lot; the points are still valid and important. But it'd be nice to Know...

DC
$a='$a=%c%s%c; printf $a,39,$a,39'; printf $a,39,$a,39
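
DC's signature line is itself a quine, a program that prints its own source, which is the Stage I construction Thompson's whole hack is built on; the 39s are the character code for the single quote. An equivalent minimal C version, with 34 and 10 playing the double quote and newline, is below. It has to be left uncommented, since any comment would also have to appear in the output:

    #include <stdio.h>
    char *s = "#include <stdio.h>%cchar *s = %c%s%c;%cint main(void){printf(s,10,34,s,34,10,10);return 0;}%c";
    int main(void){printf(s,10,34,s,34,10,10);return 0;}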
27
Dave Long
10-07-1999
01:22 PM ET (US)
As has been mentioned here already, Thompson's trick requires collusion between the login source, toolchain source, and distributed toolchain binaries. If (as is encouraged by open source) you have multiple toolchains, or can make arbitrary semantic-preserving changes to the login source, the trick is less of an issue.

http://www.xent.com/sept99/0373.html
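
To make Dave's point concrete: a trojan like the one sketched earlier in the thread keys on some fixed pattern in the login source, so even a trivial semantics-preserving rewrite can keep the trigger from firing. A toy illustration in C (the trigger string is hypothetical; a smarter, token-level matcher would force a correspondingly larger rewrite, which is where having several independent toolchains helps):

    #include <stdio.h>
    #include <string.h>

    /* The (hypothetical) byte pattern a trojaned compiler might look for. */
    static int looks_like_login(const char *src)
    {
        return strstr(src, "int login(") != NULL;
    }

    int main(void)
    {
        const char *original  = "int login(const char *u, const char *p)";
        const char *rewritten = "int login (const char *u, const char *p)"; /* one added space */

        printf("original fires the trigger:  %s\n", looks_like_login(original)  ? "yes" : "no");
        printf("rewritten fires the trigger: %s\n", looks_like_login(rewritten) ? "yes" : "no");
        return 0;
    }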