The challenge of understanding the brain could be helped by computer models
Professor Steve Furber is one of the pioneers of the UK's computer industry. He was a principal designer of the BBC Micro that gave many of Britain's current hi-tech workers their first taste of technology. He has now turned his attention to mimicking the human brain.
Most of the frontiers of science, from particle physics to radio astronomy, seem to be concerned with the incredibly small or the unimaginably large.
But there is a lump of stuff inside each of our heads that we could easily hold in our hands and look at, yet we have no idea how it works.
We know that our brains are built from a hundred billion small cells called neurons, and these cells sit in a biochemical bath and send electrical pulses to each other every so often.
It is a strange thing to realise that everything that we see, smell, hear, think, dream and say - indeed our very being - is just a consequence of those billions of cells inside our heads going "ping" from time to time.
We now have a fair idea of how those neurons are organised into major functional areas within the brain. Hi-tech scanners give us ever-more detailed glimpses into which brain areas are active, and in what order, when we receive particular inputs or think particular thoughts.
But we still have no idea of the spike "language" that the neurons use to talk to each other, nor how that spiking activity becomes coherent thoughts and actions.
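The "ping" behaviour described above is often captured computationally with very simple spiking-neuron equations. As a purely illustrative sketch (a textbook leaky integrate-and-fire model, not any specific model used by the researchers mentioned here), a single neuron might be simulated like this:

```python
# Minimal leaky integrate-and-fire neuron: an illustrative sketch only.
# All parameter values below are arbitrary assumptions for demonstration.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return spike times for a neuron driven by a list of input currents."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks back toward rest while integrating input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:       # threshold crossed: the cell goes "ping"
            spikes.append(step * dt)
            v = v_reset            # reset after spiking
    return spikes

# A constant suprathreshold input produces a regular train of spikes.
spike_times = simulate_lif([1.5] * 200)
```

The hard scientific question the article raises is not how to generate such spike trains, but what information the timing of those spikes carries between real neurons.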
Understanding the brain has turned out to be far more difficult than anyone imagined. Early AI focussed on symbolic logic, which computers are very good at but people are not, so it never really got at what it means for a human to be intelligent. Can we expect computers ever to begin to emulate the achievements of human intelligence?
There are two ways to look at this question. First, we can ask when computers may be powerful enough to simulate the detailed workings of the brain, to which the answer seems to be that we aren't there yet, but we are getting close.
Second, we can ask when we might know how to program those computers to perform this task, to which the answer is still unknown.
At the dawn of the computer age, 60 years ago, machines were a million million times too slow to model the brain in real time, but petaflop supercomputers have closed that gap.
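The scale of that "million million" gap can be sanity-checked with rough numbers. The estimates below are assumptions for illustration, not figures taken from the article:

```python
# Back-of-envelope check of the "million million" speed gap.
# Both machine speeds are rough illustrative estimates.
early_machine_ops = 1e3   # ~thousands of operations per second, circa 1950s
petaflop_ops = 1e15       # one petaflop = 10^15 operations per second

speedup = petaflop_ops / early_machine_ops
print(f"Speed-up factor: {speedup:.0e}")  # 1e+12, i.e. a million million
```

So if the early machines were roughly a million million times too slow, a petaflop machine is, on this crude reckoning, in the right ballpark for real-time simulation.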
The programming challenge remains immense, though initiatives such as EPFL's Blue Brain project in Switzerland are addressing it head-on.
That project is gathering huge quantities of biological data on the types and behaviours of neurons, and building high-fidelity biological models on a high-end IBM supercomputer.
As for improvements in computer software that might emerge from the quest to understand the inner workings of the brain, the potential for improvement in natural language interfaces is almost limitless.