The challenge to understand the brain could be helped by computer models
Professor Steve Furber is one of the pioneers of the UK's computer industry. He was a principal designer of the BBC Micro that gave many of Britain's current hi-tech workers their first taste of technology. He has now turned his attention to mimicking the human brain.
Most of the frontiers of science, from particle physics to radio astronomy, seem to be concerned with the incredibly small or the unimaginably large.
But there is a lump of stuff inside each of our heads that we could easily hold in our hands and look at, yet we have no idea how it works.
We know that our brains are built from a hundred billion small cells called neurons, and these cells sit in a biochemical bath and send electrical pulses to each other every so often.
It is a strange thing to realise that everything that we see, smell, hear, think, dream and say - indeed our very being - is just a consequence of those billions of cells inside our heads going "ping" from time to time.
We now have a fair idea of how those neurons are organised into major functional areas within the brain. Hi-tech scanners give us ever-more detailed glimpses into which brain areas are active, and in what order, when we receive particular inputs or think particular thoughts.
But we still have no idea of the spike "language" that the neurons use to talk to each other, nor how that spiking activity becomes coherent thoughts and actions.
Understanding the brain has turned out to be far more difficult than anyone imagined. Early AI focussed on symbolic logic, which computers are very good at but people are not, so it wasn't really getting at what it means for a human to be intelligent. Can we expect computers ever to begin to emulate the achievements of human intelligence?
There are two ways to look at this question. Firstly, we can ask when computers may be powerful enough to simulate the detailed workings of the brain; the answer seems to be that we aren't there yet, but we are getting close.
Secondly, we can ask when we might know how to program those computers to perform this task; the answer to that is still unknown.
At the dawn of the computer age, 60 years ago, machines were a million million times too slow to model the brain in real time, but petaflop supercomputers have closed that gap.
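The scale of the computation involved can be illustrated with a rough back-of-envelope calculation. The neuron count comes from earlier in the article; the connectivity, firing-rate and per-event cost figures below are illustrative assumptions, not measured values.

```python
# Rough estimate of the compute needed to model the brain in real time.
# Only the neuron count is from the article; the rest are assumptions.
NEURONS = 1e11                # ~100 billion neurons
SYNAPSES_PER_NEURON = 1e3     # assumed average connectivity
MEAN_FIRING_HZ = 10           # assumed mean spike rate
OPS_PER_SYNAPTIC_EVENT = 10   # assumed cost of handling one spike arrival

ops_per_second = (NEURONS * SYNAPSES_PER_NEURON
                  * MEAN_FIRING_HZ * OPS_PER_SYNAPTIC_EVENT)
petaflop = 1e15               # one petaflop machine

print(f"required: {ops_per_second:.0e} ops/s")
print(f"gap vs one petaflop: {ops_per_second / petaflop:.0f}x")
```

Under these assumptions the requirement lands within an order of magnitude or so of a petaflop machine, which is the sense in which the gap has closed; plausible changes to any of the assumed figures shift the answer by a factor of ten either way.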
The programming challenge remains immense, though initiatives such as EPFL's Blue Brain project in Switzerland are addressing this head-on.
That project is gathering huge quantities of biological data on the types and behaviours of neurons, and building high-fidelity biological models on a high-end IBM supercomputer.
Neurons are very complex living cells that have evolved to perform an information processing function within a living organism.
One of the great unknowns in understanding the brain is the extent to which the finer details of a neuron's structure are important to its information processing function, as opposed to being required to stay alive, maintain chemical balance, take up energy, or just being an artefact of evolution and the way the cell has developed within the organism.
At Manchester we make the assumption that most of the phenomena we are interested in arise at the network level, so we discard much of the biological detail in favour of modelling larger numbers of simpler neurons. But, as the famous paraphrase of Einstein insists, "everything should be as simple as possible, but no simpler."
How far can we go before we risk losing some vital aspect of the neuron's information processing function? This question will only be answered as we begin to understand the operational principles at work inside the brain - as we begin to learn the language of the spikes.
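One of the simplest neuron models used in this kind of large-scale work is the leaky integrate-and-fire neuron: the membrane potential leaks towards a resting level, integrates incoming current, and emits a spike (a "ping") when it crosses a threshold. The sketch below is a minimal illustration of that idea; all the constants are illustrative choices, not biologically fitted values.

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- a common simplification
# used when modelling large numbers of neurons. Constants are illustrative.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leak towards the resting potential, then integrate the input.
        v += dt / tau * (v_rest - v) + i_in
        if v >= v_threshold:      # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset           # reset the membrane after the spike
    return spikes

# A constant drive strong enough to cross threshold gives regular spiking.
spikes = simulate_lif([0.1] * 100)
```

Everything below the threshold-crossing rule has been thrown away: no dendritic geometry, no ion channels, no chemistry. Whether that is "as simple as possible" or already too simple is exactly the open question.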
Researchers around the world are using computer models to test the hypotheses of brain function that have emerged from work by neuroscientists and psychologists. What today's "brain modelling" computers offer is a platform that enables those models to be scaled up and to become increasingly accurate, and to enable scientists to get ever closer to the "big picture".
Where will this research lead us? The ultimate goal is the Grand Challenge of understanding the architecture of brain and mind but this is still some way beyond our grasp.
In the nearer term we can expect to see a growing understanding of brain subsystems, and from that understanding new computational approaches will emerge with applications in control, robotics and elsewhere.
The benefits of success in this research endeavour will be considerable, in directing therapies for brain injury and mental illness (it's always easier to fix something when you know how it works) and in the design of computers and computer software that will be less stupid and more able to cope with component failure (the adult brain loses a neuron a second without obvious ill effect).
We have recently begun collaborating with psychologists to build a computer model of normal human language capable of learning to read, comprehend and speak basic English words.
After training the model can be selectively "damaged" in ways that reproduce the patterns of behaviour observed in individuals who have suffered brain damage.
The model will then be used to test the effectiveness of various different speech therapies, and its predictions checked against the results of using those therapies with stroke patients who have language problems.
As the computing platforms used for this work scale up in performance, the accuracy and scope of the models they can support will scale up too, and we hope to gain an ever-deeper understanding of how the brain supports language, how it can fail, and the best ways to achieve recovery from those failures.
The need for computers to become better at coping with component failure is underlined by the trends in the semiconductor technology from which they are built.
As transistors approach atomic scales there is an inevitable degradation in the consistency of their operation and designers are searching for ways to build microchips that can tolerate high rates of transistor failure.
The brain is an existence proof that it is possible to accommodate high component failure rates without significant loss of functionality, and there is much to be learnt from biology about building reliable systems on unreliable technology.
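One reason the brain shrugs off continual component loss is that it represents information across populations of neurons rather than in any single cell. The toy sketch below (my illustration, not a brain model) shows the principle: when a value is carried by the average of many noisy units, killing a fraction of them at random barely moves the estimate.

```python
import random

# Toy illustration of population coding's fault tolerance: a value is
# represented by the average of many noisy units, so the estimate
# degrades gracefully when units fail. Illustrative, not a brain model.
random.seed(42)
TRUE_VALUE = 5.0
units = [TRUE_VALUE + random.gauss(0, 0.5) for _ in range(10_000)]

def estimate(population):
    """Decode the represented value as the population average."""
    return sum(population) / len(population)

healthy = estimate(units)
# "Lose" 20% of the units at random, as if those neurons had died.
survivors = random.sample(units, int(len(units) * 0.8))
damaged = estimate(survivors)
```

With 10,000 units the estimate after losing a fifth of them is almost indistinguishable from the healthy one; contrast that with a conventional computer, where a single failed transistor in the wrong place can be fatal.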
As for improvements in computer software that might emerge from the quest to understand the inner working of the brain, the potential for improvement in natural language interfaces is almost limitless.
At present you have to put a lot of effort into learning how to use your computer effectively. Imagine if this were turned around, and it became the computer's job to learn how to be useful to you, just like a good human personal assistant. This would require the computer to build a model of how you - and in particular your mind - work.
Any fear you may have of humanoid robots taking over the world once computers approach the capability to model the human brain can be dispelled relatively easily.
Any computer capable of running these models will be large, expensive and very power-hungry for the foreseeable future.
Biology will continue to offer the cheapest way of making portable, low-power brains (in highly dangerous embodiments) for a long time yet.