New plans to scan e-mails for illegal images of child abuse may give the appearance that children are being safeguarded, but the schemes may not be as effective as they first seem, argues technology commentator Bill Thompson.
The US Attorney General has proposed tougher measures against child porn
Every time you send an e-mail it passes through a series of computers on its way to the intended destination.
Most of them are owned and managed by internet service providers, although if you use webmail from Yahoo, Google or Microsoft then it may take a different route.
But whoever provides your e-mail, the chances are they are having a look at every message you send or receive.
At the moment, their reasons are mostly benign, since they are looking for spam, viruses and other nasty stuff that we wouldn't want anyway.
Google mail users have got used to the fact that their e-mails are read by a machine looking for context-sensitive ads to put on the same page. And most of us have encountered a company that scans all incoming e-mail for rude or inappropriate words, even if the results sometimes appear absurd.
I used to edit an arts e-mail newsletter, and one issue was rejected by several recipients' systems simply because it carried an article on the Ars Electronica prize. Even with such flaws, this kind of scanning has obvious benefits.
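The newsletter was presumably tripped up by the kind of naive filter that matches banned words as bare substrings, so that an innocent name like "Ars" contains a rude word. A toy sketch of that failure mode (the banned list here is purely illustrative):

```python
# A toy keyword filter of the sort described above: it flags any message
# containing a banned word as a bare substring, with no regard for word
# boundaries, so "Ars Electronica" trips a filter aimed at rude words.
# The banned list is illustrative, not taken from any real product.

BANNED_SUBSTRINGS = {"ars"}

def is_rejected(message: str) -> bool:
    """Return True if the message contains any banned substring."""
    text = message.lower()
    return any(bad in text for bad in BANNED_SUBSTRINGS)
```

The false positive is built in: any message mentioning "Mars", "arsenal" or "Ars Electronica" is rejected alongside genuinely rude ones.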
And my internet service provider (ISP) helpfully lets me choose whether to have them look for spam or let it all through for me to deal with.
But if a plan being put forward by five US-based net companies goes ahead, the same approach could be used to look for e-mailed images of child abuse.
And the consequences for all net users could be more serious than just losing the odd legitimate message to the spam filters.
AOL, Yahoo, Microsoft, EarthLink and United Online have joined with the US National Center for Missing and Exploited Children (NCMEC) to create what they call a "Technology Coalition" to look for new ways to safeguard children.
Their first initiative is a plan to create a database of the images of child abuse they find, and process each to create a "digital fingerprint".
They will then look at e-mail attachments and images traded over peer-to-peer networks, swapped on messaging services, or posted on websites to try to spot illegal images.
However, they haven't yet said what will happen if they find one.
I rather hope they won't simply call the police, since with millions of images of all types being sent over the net every day, the chance of false positives, where an entirely innocent drawing of a tree happens to generate the same code as an image of abuse, must be quite high.
But the lack of detail is typical of this sort of proposal.
The real goal, as so often with big initiatives from large companies in areas of public concern, is to show that "something is being done" and to tell government - in this case the US Attorney General Alberto Gonzales - that the situation is under control and no new laws or regulations are needed.
The scheme may actually work, especially since recent research from Binghamton University, New York, indicates that every digital camera has a different "signature" that can be used to identify which pictures it took. Looking for photos taken with known abusers' cameras might pay dividends.
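The Binghamton result rests on the fact that every sensor stamps a faint, fixed noise pattern onto each photo; averaging the noise residuals of many photos from one camera estimates its signature, which can then be correlated against a suspect image. A synthetic toy version of that principle (not the published method, and with made-up numbers throughout):

```python
import random

# Toy model: a "camera" is a fixed per-pixel noise pattern added to
# every photo it takes. Averaging residuals over many photos recovers
# the pattern; correlating it with a new photo's residual identifies
# the camera. Entirely synthetic, for illustration only.

random.seed(0)
N = 256  # pixels per flattened toy image

def make_camera():
    """A camera is just a fixed per-pixel noise pattern here."""
    return [random.gauss(0, 1) for _ in range(N)]

def shoot(camera):
    """A flat 'scene' plus the camera's pattern plus shot noise."""
    brightness = random.uniform(0, 255)
    return [brightness + k + random.gauss(0, 0.5) for k in camera]

def residual(image):
    """Subtract the image mean, leaving mostly the noise pattern."""
    m = sum(image) / len(image)
    return [p - m for p in image]

def estimate_signature(images):
    """Average residuals over many photos from the same camera."""
    res = [residual(img) for img in images]
    return [sum(col) / len(col) for col in zip(*res)]

def pearson(x, y):
    """Plain Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

cam_a, cam_b = make_camera(), make_camera()
sig_a = estimate_signature([shoot(cam_a) for _ in range(20)])

same = pearson(sig_a, residual(shoot(cam_a)))   # high correlation
other = pearson(sig_a, residual(shoot(cam_b)))  # near zero
```

In this toy the same-camera correlation is strong and the cross-camera one negligible; real photographs make the residual extraction far harder, but the identification logic is the same.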
However, the initial funding for the new coalition is only $1m, or roughly four cents for each of AOL's 25 million customers; so the suspicion has to remain that this is an attempt to get friendly headlines rather than really make a difference.
Yet it may be enough to deter the sort of government interference in their business that ISPs in the UK seem about to experience, because while the Senate likes to talk about how it will regulate the industry, it rarely gets round to passing any actual laws.
In the UK, things tend to take a different course.
Just last week, the British Board of Film Classification expressed an interest in taking web content under its wing.
Vernon Coaker, a parliamentary under-secretary in the Home Office, told MPs that the government expected that "by the end of 2007, all ISPs offering broadband internet connectivity to the UK general public [will] put in place technical measures that prevent their customers accessing websites containing illegal images of child abuse identified by the IWF (Internet Watch Foundation)".
The clear implication is that if they don't do it voluntarily then the law will be changed to force them to do so.
The list of websites to which he referred is drawn up by the IWF, the self-proclaimed guardian of internet morality.
The body, which has no statutory authority and no real legal powers, provides a hotline for people to report images of child abuse and works with the police to get sites hosting such abhorrent content removed.
Not content with this role, it also provides ISPs with a list of sites and web pages it has not managed to remove, but which it considers unacceptable or illegal under current law.
The ISPs then stop their customers from viewing the sites concerned, although they generally don't tell you that the material is banned because it is considered illegal; instead they simply return a "page not found" error.
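The behaviour described here can be sketched in a few lines, with hypothetical names and a stand-in blocklist: a filter could honestly refuse a blocked page, but instead answers with the same response a genuinely missing page would get.

```python
# A sketch of the filtering behaviour described above. The blocklist
# entry and function names are hypothetical, standing in for the IWF
# list an ISP would actually consult.

BLOCKLIST = {"example.invalid/banned-page"}

def respond(url):
    """Return an (HTTP status, reason) pair for a requested URL."""
    if url in BLOCKLIST:
        # An honest block would use 403 Forbidden (a later standard,
        # RFC 7725, even defines 451 Unavailable For Legal Reasons).
        # Returning 404 instead makes the block indistinguishable
        # from a page that simply does not exist.
        return 404, "Not Found"
    return 200, "OK"
```

The design choice matters: a visitor who sees 404 has no way to tell censorship from a dead link, which is precisely the opacity being criticised.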
Both schemes, one for tracking images as they are exchanged and the other for stopping web users from accessing pages that contain banned material, offer the illusion of effectiveness while doing nothing to deal with the real problem of adult paedophiles using the network to help them abuse children.
It is clear that most of the trade in these appalling images happens on restricted servers, and most of the files are carefully encrypted or obfuscated before they are sent over the public network.
The real danger with media-friendly announcements of new internet coalitions or self-congratulatory annual reports on the number of abusive images seized by the authorities is that they encourage a belief that the situation is somehow under control, when it so clearly is not.
The tension between our freedom to use the network and the need to safeguard children is a driving issue for the internet's development, and we need to think far more deeply about it than we have managed to do so far.
There is no simple answer, and if we settle for one then we will neither protect children nor safeguard our liberties.
Bill Thompson is a regular commentator on the BBC World Service programme Digital Planet