Friday, 31 January, 2003, 10:44 GMT
How digital Armageddon was averted
Technology analyst Bill Thompson wants full disclosure about potential threats to the internet, such as the recent Slammer worm.
As the e-dust settles following last weekend's attack on the internet by the Slammer worm, security consultants, internet experts and doomsayers are all picking over the electronic traces.
Some are focusing on the appalling inability of systems administrators to keep even critical programs up to date.
The vulnerability in Microsoft's SQL Server 2000 database management system that Slammer exploited had been publicly documented and patched the previous summer, yet hundreds of thousands of supposedly competent engineers had left their systems open to attack.
Even Microsoft itself, according to leaked internal e-mails, was hit badly and had to shut down many systems.
Nothing could make it clearer that existing systems for informing people of security problems and the fixes available are totally inadequate for today's network.
Partly this is because there are simply too many problems to be fixed, a consequence of poor programming and testing.
Partly it is because fixing the systems takes time and effort which companies are not willing to expend.
Both need to be addressed before Nimda and Slammer are followed by the third and fourth horsemen of our cyber apocalypse.
Others concentrate on the way the net's infrastructure stood up to the attack. Traffic slowed badly in places, but we did not see a domino effect, with the whole network crashing under the strain.
The reason we did not have a digital Armageddon, despite the predictions of so many security experts, is that their expensively made guesses never allowed for the concerted action of the net community in responding to the attack.
Slammer may have taken only a few hours to spread over the whole net, but once it had been spotted people started filtering connections, patching servers and generally blocking it.
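The "filtering connections" step was straightforward because Slammer spread in single UDP packets aimed at SQL Server's resolution service on port 1434. A minimal firewall rule of the sort administrators deployed that weekend might look like this (iptables syntax is shown purely for illustration; the exact commands varied by platform, and border routers used equivalent access-control lists):

```shell
# Drop all inbound and forwarded traffic to UDP port 1434, the
# SQL Server Resolution Service port that Slammer used to spread.
# Must be run as root; rules shown in Linux iptables syntax.
iptables -A INPUT   -p udp --dport 1434 -j DROP
iptables -A FORWARD -p udp --dport 1434 -j DROP
```

Blocking the port at the network edge stopped new infection attempts from outside, buying time to patch the vulnerable servers behind it.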
Buried in some of the more technical articles about the worm and its operation we find a third strand of debate.
It seems that the way Slammer worked was based in large part on an idea first put forward by programmer David Litchfield.
In May 2002 he noticed the problem with the way SQL Server listened for incoming connections over the network, and pointed out that this created a potential security hole.
He alerted Microsoft, who fixed the program error and issued a patch - the same patch that large numbers of stressed-out engineers spent last weekend finding and applying.
But it seems that his example code was used by the people who wrote the Slammer worm, who relied on the fact that many sites would not have installed the patch.
The practice of giving every detail of a security problem, including examples of the code needed to exploit it, is called full disclosure.
Some lists, like BugTraq, are full disclosure lists and if you read them you will be told about the default administrator password in ProxyView, how to crash Hypermail and all sorts of other interesting stuff.
By and large, the people who find these bugs and figure out how they can be exploited behave responsibly.
Like Mr Litchfield, they tell the software company first, give them time to work out how to fix the problem and release a patch.
Then they go public to get some credit and make sure that the net community fully understands the problem.
This is an important public service but, as you might expect, full disclosure is not popular with the software companies.
Microsoft has waged a public relations campaign against it for years, culminating in the creation of the Organisation for Internet Safety, a group of insiders who agree not to go public in return for privileged access to Microsoft's own security briefings.
With Microsoft's backing, this group will continue to grow, reducing the number of full disclosure bug reports we see on lists and websites.
Yet understanding how an exploit works may be the only way to figure out how to defend against it, whether by configuring a firewall or deciding which pieces of software to disable. If that knowledge is restricted, we will be left relying on what the vendor says when we try to protect our systems.
The main problem with this sort of control is that it is so hard to find someone trustworthy and intelligent enough to act as a censor.
In general, it is safer just to let people speak and rely on other sanctions to correct their views.
The censorship of security reports only protects the companies who release software with security holes.
If someone tries to blackmail a software company by threatening to expose a bug then let them sue, but saying that full details can only be given to trusted insiders is surely not acceptable.
It is only a matter of time before David Litchfield is blamed for the SQL Slammer worm, because he figured out how it could be done and told the world.
But he should be praised for alerting responsible administrators and Microsoft themselves to the danger that their program posed to the net.
One of Microsoft's senior managers, Scott Culp, wrote that full disclosure of security problems is like shouting "Fire!" in a crowded theatre.
But, as security expert Bruce Schneier pointed out, there actually is a fire, so maybe shouting about it is not such a bad idea.
Bill Thompson is a regular commentator on the BBC World Service programme Go Digital.