BBC News
Last Updated: Friday, 6 August, 2004, 09:32 GMT 10:32 UK
Nasa powers up with supercomputer
By Jo Twist
BBC News Online science and technology staff

[Image caption: The new supercomputer will plug in computing gaps]
US space agency Nasa is to get a massive supercomputing boost to help get its shuttle missions back in action after the 2003 shuttle disaster.

Project Columbia, a collaboration with two technology giants, will mean Nasa's computing power will be ramped up by 10 times to do complex simulations.

It will be one of the world's biggest Linux-based supercomputers.

The new supercomputer will help the agency model flight missions, climate research, and aerospace engineering.

The system will have 500 terabytes of storage, the equivalent of 800,000 CDs. It will use the might of 10,240 Intel Itanium 2 processors for complex computer simulations.
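As a rough sanity check on that comparison, assuming a standard 650MB CD (the article does not say which capacity it uses):

```python
# Rough check of the article's storage comparison, assuming a 650 MB CD.
CD_MB = 650
storage_tb = 500
total_mb = storage_tb * 1_000_000        # 500 TB expressed in MB (decimal units)
cds = total_mb / CD_MB
print(round(cds))                        # about 770,000; the article rounds up to 800,000
```

With a slightly smaller assumed CD capacity the figure comes out at exactly 800,000, so the article's equivalence is in the right ballpark.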

Nasa said the supercomputer will help remedy the shortfalls in its computing power that were highlighted by the Columbia shuttle disaster, in which seven astronauts were killed.

"This will enable Nasa to meet its immediate mission-critical requirements for return to flight, while building a strong foundation for our space exploration vision and future missions," said Sean O'Keefe, Nasa administrator.

'Considerable power'

Previously, supercomputers have taken far longer to deploy because they relied on custom-built specifications and processors, Richard Dracott, of Intel's enterprise platforms group, told BBC News Online.

Japan's Earth Simulator, for example, which is the fastest supercomputer in the world, took five years to get up and running, he said.

The Nasa project, which is based at its Ames Research Center in California, reinforces a move away from that approach.

Nasa's Project Columbia (Image: Intel)
Based on SGI Altix computers
Very large node cluster
Each Altix node has 1,000GB of memory
20 nodes of 512 processors each
Uses SGI switching technology and the Linux operating system
"This is the epitome of change in supercomputing," said Mr Dracott.

"It is using an off-the-shelf system, taking that and building a powerful system around 512 processors, which are then hooked together to give considerable power."

The increase in computing power for Nasa means researchers can do a lot more to help in future mission planning, as part of its Space Exploration Simulator.

It will play an integral part in other critical areas of scientific research, like climate change.

The Project Columbia supercomputer's shared memory means a large problem or scenario can be worked on by all the processors simultaneously.
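The principle can be illustrated with a minimal sketch (purely illustrative, not Nasa's software): every worker sees the same data in memory and updates its own slice of one large problem in place, with no copying of data between separate machines.

```python
from threading import Thread

def relax(grid, start, stop):
    # Each worker updates only its own slice of the one shared grid.
    for i in range(start, stop):
        grid[i] *= 0.5

grid = [float(i) for i in range(8)]   # one grid, visible to all workers
workers = [Thread(target=relax, args=(grid, s, s + 4)) for s in (0, 4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(grid)                           # every cell halved by the worker that owned its slice
```

The alternative, distributed-memory design would instead require each node to hold its own copy of part of the problem and exchange boundary data over a network.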

"The more computer power you have, the more you can do: you can simulate more events, and run that many more 'what if' scenarios to foresee other circumstances," said Mr Dracott.

"Taking advantage of gravity modelling would certainly be something that would be done much faster, with greater accuracy and quicker turnaround."

The increase in computing horse power also allows for more complex analyses of scenarios.

With very large-scale computing power, weather patterns, which are critical for shuttle missions, can for instance be simulated, merged and stored graphically.

They can also be modelled over a time period of weeks or months instead of over just a few days.

But the system will also be used to model the human impact on climate change and global warming.

More open

The off-the-shelf approach to putting together such massive computing power also opens up the supercomputing market to countries or organisations that could not previously afford to build such systems, according to Mr Dracott.

[Image caption: Kalpana Chawla, centre, died along with six of her Columbia crew colleagues]
Supercomputing has become critical for many scientific and research communities.

Supercomputers have been used in the Human Genome Project, and the US Army has just commissioned a supercomputer from IBM to help with its military research.

Increasingly, their use has also been driven by the power industry, which runs nuclear power station safety simulations.

Usually, supercomputers are built from thousands of two-processor nodes clustered together; Project Columbia will instead have just 20 nodes, each with 512 processors.
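The aggregate size implied by the article's figures (20 nodes of 512 processors, roughly 1,000GB of memory per node) works out as follows:

```python
# Aggregate machine size implied by the article's factfile figures.
nodes = 20
procs_per_node = 512
mem_gb_per_node = 1000

total_procs = nodes * procs_per_node            # 10,240 processors in all
total_mem_tb = nodes * mem_gb_per_node / 1000   # about 20 TB of shared memory
print(total_procs, total_mem_tb)
```

This matches the 10,240 Itanium 2 processors quoted earlier in the article.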

The first one to be deployed was named Kalpana after Kalpana Chawla, an Ames alumna, who was among the seven astronauts to die in the Columbia accident.

The rest of the nodes will be in action by the end of the year, said Intel and Silicon Graphics.

The system, worth $160m (£88m), will also be made available to other government agencies and US research facilities.


