By Mark Ward
Technology Correspondent, BBC News website
A better graphics card means PC games look more realistic
Every serious PC gamer knows what a difference a good graphics card can make to the fun they have.
But it is not just hardcore gamers who have recognised the worth of a PC graphics card.
Increasing numbers of research scientists have woken up to their potential too.
But the scientists in question are not using the cards to appreciate the detail in PC games such as The Witcher. Instead they are using them as cheap sources of supercomputer-class processing power.
"They give a phenomenal bang for the buck," said Mike Giles, professor of scientific computing at the University of Oxford.
Prof Giles said the way that graphics cards were built made them very good at the repetitive computational tasks many scientists use to test theories, models and predictions.
Professor Susan Hagness from the Department of Electrical and Computer Engineering at the University of Wisconsin-Madison has turned to graphics cards to quickly analyse breast scans to spot cancer in its early stages.
Prof Hagness said official figures suggested X-rays missed 20% of the cancers that were present when a woman underwent screening.
"There's clearly impetus to develop complementary technologies that can provide better and more robust tools to look for cancer," she said.
Women want scan results as quickly as possible
Prof Hagness's team is using microwaves to scan tissue and then pumping the results through hardware made by Canadian firm Acceleware, which bolts together several graphics cards into a mini computational cluster.
The dedicated hardware meant that results emerged in a matter of hours rather than days, said Prof Hagness.
This was essential when the technique began to be used in a clinical setting, she said.
"Any woman who undergoes screening mammography wants the results straight away."
Prof Giles, who is using graphics processors to do financial modelling, said the chips were very good at doing the same thing many different times.
By contrast the Intel or AMD chips inside a typical desktop machine were good at doing many different things at the same time.
Graphics cards had far more processing cores - the units that execute program instructions - than Intel or AMD chips, said Prof Giles, and each core could carry out one run of the same simple task.
The financial models that Prof Giles is running test the same algorithm on each core, but each core is fed different random numbers as input.
With the latest graphics processors offering more than 100 processing cores, that can add up to a lot of number crunching.
Even better, he said, each of those processing cores was as good at crunching numbers as a single Intel or AMD microprocessor.
Modelling financial markets helps firms weather stock movements
"Each core is logically very simple but its floating point capability is the same as an Intel chip," said Prof Giles.
Developments in the methods used to write code for the processors were also making graphics chips much more attractive, he said.
"In the early days you could only use graphics cards for graphics," said Prof Giles.
In particular, he said, graphics card maker Nvidia had released software tools called Cuda (Compute Unified Device Architecture) that made it much easier to write code.
"For a while there were only hard-to-use shader languages," said Prof Giles, "Cuda is a much more usable development environment."
By harnessing that processing power many scientists are getting results from simulations far faster than before.
PhD student Tobias Brandvik and Dr Graham Pullan, of the Whittle Laboratory in the University of Cambridge's engineering department, have sped up simulations of turbine blade designs 40-fold by using a few graphics cards.
Each blade, said Dr Pullan, was custom designed for the jet engine or power plant in which it will be used.
Existing design techniques employ computer models which divide the air flowing around simulated turbine blades into 500,000 cells.
Tiny improvements in efficiency have big rewards in turbine engines
But, he said, even with this formidable level of complexity the design process models the turbulent air rather than accurately representing or resolving it.
"With a cluster of graphical processing units, we could hope to use, say, 10 million cells," he said. The simulation run would take the same amount of time as existing models.
"Then, we would resolve some of the larger turbulent eddies," said Dr Pullan. "In general, the more we resolve and the less we model, the more accurate the theory."
An alternative would be to stick with 500,000 cells and try lots more blade designs or include blades upstream and downstream to get a better overall picture of air flowing through a turbine.
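The cell-based simulations Dr Pullan describes have the same structure that suits graphics cards: every cell applies an identical, simple update using only its neighbours. The toy one-dimensional step below is not the Whittle Laboratory's solver, just a minimal sketch under that assumption, but it shows why the work per time step grows in direct proportion to the cell count, so 10 million cells means roughly 20 times the arithmetic of 500,000.

```python
def diffusion_step(cells, alpha=0.1):
    """One time step of a toy 1D solver: every cell applies the same
    simple update from its neighbours' values (periodic boundaries),
    the kind of per-cell work that maps well onto many graphics cores."""
    n = len(cells)
    return [
        cells[i] + alpha * (cells[(i - 1) % n] - 2 * cells[i] + cells[(i + 1) % n])
        for i in range(n)
    ]

# A small demo grid with a single disturbance that spreads outwards;
# the per-step cost scales linearly with the number of cells.
grid = [0.0] * 8
grid[0] = 1.0
grid = diffusion_step(grid)
```

Because each cell's update reads only fixed neighbours and writes one value, every cell can in principle be handed to a separate graphics core in the same step.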
Tiny improvements in design can have a huge payoff, said Dr Pullan.
"It's all about making the absolute best efficiency possible," he said.
"Improvements of even 1% in fuel consumption, for jet engines, or 1% in electricity power generated, for steam or gas turbines for power generation, is highly sought after."