We are starting to hit the limits of the net's current capacity to carry data, says Bill Thompson. But it isn't a reason to panic
The net's backbone was built thanks to the dotcom bubble
At the turn of the millennium, financial markets around the world realised that the valuations they were offering for companies whose business plans included the word "internet" were completely ridiculous, and that most of them were never going to make money.
Share prices for those that had already floated collapsed; second round venture funding for start-ups disappeared, even for good ideas with a solid track record; and the angel investors took their money elsewhere.
Individual investors - the "day traders" who had sunk their savings into stocks that looked like they would grow forever - lost the most money, but pension funds, insurance companies and other big holders of shares also suffered.
The companies, large and small, went under.
Who now remembers etoys.com or Online Publishing?
But the effect of the collapse was like that of a neutron bomb, a nuclear weapon designed to kill people with high doses of radiation while leaving buildings standing: the crash closed down companies but left the network itself intact.
The web servers went as companies like boo.com turned off their sites, but the cables in the ground, the routers that connected them together and the infrastructure of the internet itself remained in place.
Billions of dollars were spent making the network fast enough to support the anticipated growth in e-commerce and online activity, and when the revolution was halted in its tracks by the collapse it was already in place.
Most importantly, the long-haul links of the net's backbone were in place: the fibre-optic cables that cross continents and oceans, making geography largely irrelevant for most network use, most of the time.
After all, how often do we notice that many instant messages cross the Atlantic - twice - on their way between two people sitting in the same room?
One of the reasons the growth in broadband use in the West has been so trouble-free is that spare network capacity, paid for by the foolish investors of the late 1990s, was sitting there for us all to use once we got our high-speed home connections.
But now there are signs that we've used up our inheritance, pawned the last of the family silver and run down the estate by not looking to the future.
More people are online than ever before, and many of us are as profligate in our use of bandwidth as a decadent aristocrat who can't believe that the peasants will ever revolt.
We can see this most clearly in the growth of online video, where concerns about network congestion are already being expressed.
I've recently been playing with Joost, the newly announced video-streaming service from the people behind Skype and, before that, Kazaa.
It's still in beta, but already it's clear that it provides an easy-to-use front end and decent quality video, something that other streaming services are going to find hard to match.
Unfortunately it is a real bandwidth hog that will suck up as many bits per second as it can get, and because it is a peer-to-peer service it sends as well as receives.
Joost adoption rates are likely to be high, especially if they manage to sign up some interesting content, and when the BBC's iPlayer is finally made available it will add to the load.
Channel 4 and Five both have video-on-demand services, and it can't be long before Sky fully embraces the online audience too.
When that happens network congestion will become more and more common, and ISPs will find it increasingly difficult to maintain performance for their customers.
The problem of increasing demand lies behind the current debate, largely taking place in the US, about network neutrality and how far service providers should be able to shape the traffic they carry by charging different prices for different services.
Others are considering how to change the network itself to cope with the demand.
The Clean Slate project at Stanford University, for example, believes that "the internet's shortcomings will not be resolved by the conventional incremental and 'backward-compatible' style of academic and industrial networking research", and is trying to develop a new network architecture from scratch.
The idea of a clean slate is always appealing but the team will have to come up with something exceptional if they are to make any real impact.
For one thing, it isn't clear yet that today's internet really needs this sort of grand project or that the approach we have used for the last 30 years of packet-based networks should be abandoned.
The Internet Protocol, the core standard that determines how data moves around between computers, is a wonder of our age, as significant in its impact as the invention of the internal combustion engine, and it has proved its adaptability and capability again and again.
Ask the telephone companies, who watched IP-based telephony completely overturn their business models.
And sometimes just muddling along can lead to a solution that is not merely as good as one designed from scratch but actually superior.
I have always believed that evolution has given us a richness and complexity of life on earth far beyond the imaginative capacity of any creator, even a supernatural one. I don't see why the "get something that sort of works and then fix it" model we have always taken with the network should fail us now.
Bill Thompson is an independent journalist and regular commentator on the BBC World Service programme Digital Planet.