IE 5.5 & 6 script security bug
General information
Forum: Visual FoxPro
Category: Other, Miscellaneous
Thread ID: 00580249
Message ID: 00583059
>>>I meant the full disclosure vs. bug secrecy stuff
>>
>>Ahhh, I see. Well, I posted a message in the chatter forum; personally, I'm against full disclosure. I'm not very sure here, but I don't think that Full Disclosure and Bug Secrecy are mutually exclusive, though they are often presented that way. Perhaps the best solution is a hybrid.
>>
>>Keep in mind, I'm not a security buff. Though I do know what it's like to find bad bugs in widely distributed software. I also know what it's like to lose time fixing problems like Code Red. So I'd say I'm more partial to bug secrecy.
>
>Mike;
>
>"Full Disclosure" can go too far. If by full disclosure the industry means publishing any security problem with full details (code, tools, etc.), I feel this is irresponsible. On the other hand, if we receive notice that there is a problem, along with a description of the vulnerability, that should be enough for the general public and IT professionals. This still puts pressure on the software vendor without putting sensitive material into the hands of potential hackers and copycats.


"History of Full Disclosure

During the early years of computers and networks, bug secrecy was the norm. When users and researchers found vulnerabilities in a software product, they would quietly alert the vendor. In theory, the vendor would then fix the vulnerability. After CERT was founded in 1988, it became a clearinghouse for vulnerabilities. People would send newly discovered vulnerabilities to CERT. CERT would then verify them, alert the vendors, and publish the details (and the fix) once the fix was available.

The problem with this system is that the vendors didn't have any motivation to fix vulnerabilities. CERT wouldn't publish until there was a fix, so there was no urgency. It was easier to keep the vulnerabilities secret. There were incidents of vendors threatening researchers if they made their findings public, and smear campaigns against researchers who announced the existence of vulnerabilities (even if they omitted details). And so many vulnerabilities remained unfixed for years.

The full disclosure movement was born out of frustration with this process. Once a vulnerability is published, public pressures give vendors a strong incentive to fix the problem quickly. For the most part, this has worked. Today, many researchers publish vulnerabilities they discover on mailing lists such as Bugtraq. The press writes about the vulnerabilities in the computer magazines. The vendors scramble to patch these vulnerabilities as soon as they are publicized, so they can write their own press releases about how quickly and thoroughly they fixed things. The full disclosure movement is improving Internet security.

At the same time, hackers use these mailing lists to learn about vulnerabilities and write exploits. Sometimes the researchers themselves write demonstration exploits. Sometimes others do. These exploits are used to break into vulnerable computers and networks, and greatly decrease Internet security. In his essay, Culp points to Code Red, Li0n, Sadmind, Ramen, and Nimda as examples of malicious code written after researchers demonstrated how particular vulnerabilities worked.

Those against the full-disclosure movement argue that publishing vulnerability details does more harm than good by arming the criminal hackers with tools they can use to break into systems. Security is much better served, they counter, by keeping the exact details of vulnerabilities secret.

Full-disclosure proponents counter that this assumes that the researcher who publicizes the vulnerability is always the first one to discover it, which simply isn't true. Sometimes vulnerabilities have been known by attackers (sometimes passed about quietly in the hacker underground) for months or years before the vendor ever found out. The sooner a vulnerability is publicized and fixed, the better it is for everyone, they say. And returning to bug secrecy would only bring back vendor denial and inaction.

That's the debate in a nutshell: Is the benefit of publicizing an attack worth the increased threat of the enemy learning about it? Should we reduce the Window of Exposure by trying to limit knowledge of the vulnerability, or by publishing the vulnerability to force vendors to fix it as quickly as possible?

What we've learned during the past eight or so years is that full disclosure helps much more than it hurts. Since full disclosure has become the norm, the computer industry has transformed itself from a group of companies that ignores security and belittles vulnerabilities into one that fixes vulnerabilities as quickly as possible. A few companies are even going further, and taking security seriously enough to attempt to build quality software from the beginning: to fix vulnerabilities before the product is released. And far fewer problems are showing up first in the hacker underground, attacking people with absolutely no warning. It used to be that vulnerability information was only available to a select few: security researchers and hackers who were connected enough in their respective communities. Now it is available to everyone.

This democratization is important. If a known vulnerability exists and you don't know about it, then you're making security decisions with substandard data. Word will eventually get out -- the Window of Exposure will grow -- but you have no control, or knowledge, of when or how. All you can do is hope that the bad guys don't find out before the good guys fix the problem. Full disclosure means that everyone gets the information at the same time, and everyone can act on it.

And detailed information is required. If a researcher just publishes vague statements about the vulnerability, then the vendor can claim that it's not real. If the researcher publishes scientific details without example code, then the vendor can claim that it's just theoretical. The only way to make vendors sit up and take notice is to publish details: both in human- and computer-readable form. (Microsoft is guilty of both of these practices, using their PR machine to deny and belittle vulnerabilities until they are demonstrated with actual code.) And demonstration code is the only way to verify that a vendor's vulnerability patch actually patched the vulnerability.


This free information flow, of both description and proof-of-concept code, is also vital for security research. Research and development in computer security has blossomed in the past decade, and much of that can be attributed to the full-disclosure movement. The ability to publish research findings -- both good and bad -- leads to better security for everyone. Without publication, the security community can't learn from each other's mistakes. Everyone must operate with blinders on, making the same mistakes over and over. Full disclosure is essential if we are to continue to improve the security of our computers and networks."
***************************************
JLK
Nebraska Dept of Revenue