We use probability all the time, from pool to physics and from craps to car insurance. The laws of probability are well known and fairly simple, and they even appear to work. What we *don't* know as a species is what a probability actually means. There have been a few ideas on the subject, and some of them may even be right!

**Frequentism**

The frequentist position is the one you probably heard at school: perform an experiment lots of times, and measure the proportion of trials where you get a positive result. This proportion, in the limit as you perform the experiment more and more times, is the probability.
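The repeat-and-count recipe is easy to sketch. This is an illustrative snippet (the function name and the fair-coin experiment are my own invention, not anything the frequentists standardised): as the number of trials grows, the measured proportion settles towards the underlying probability.

```python
import random

def frequentist_estimate(trials, experiment):
    """Run `experiment` (a function returning True/False) `trials` times
    and return the proportion of positive results."""
    successes = sum(1 for _ in range(trials) if experiment())
    return successes / trials

random.seed(0)
flip = lambda: random.random() < 0.5  # a fair coin

# More trials -> the proportion drifts closer to the 'true' value of 0.5.
print(frequentist_estimate(100, flip))
print(frequentist_estimate(100_000, flip))
```

Of course, this only works because we can actually run the experiment, which is exactly where the trouble starts below.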

The problem comes in those cases where we haven't performed an experiment yet, or where no experiment could possibly be performed - there, frequentism can't help us. Also, there's the *Category Problem* (more often known as the reference class problem), which is normally expressed by asking questions like 'what is the probability that the sun will rise tomorrow?' Is it:

- Undefined, because we've never tested the sun to see if it will rise tomorrow.
- 1, because every time we've tested to see if the sun will rise in the past, it's risen.
- 1 - e, where e is the proportion of observable stars per day that go supernova.

In the first case, the category is 'the sun rising on date X'. In the second case, the category is 'the sun rising'. In the third case, the category is 'observable suns rising'. It's not immediately clear which of these is the correct set of 'experiments' to use.

**Objectivism**

Objectivism is the belief that probabilities are real; they're not just numbers we make up or guess, but have a real significance. For example, suppose you flip a coin without looking at it, and then cover it with a piece of paper. Many would say that although you might assign a 50% chance to its being heads, the coin itself is definitely one or the other. An objectivist would instead say that the *coin itself* has a 50% probability of being heads, regardless of whether anyone is thinking about the subject.

Objectivism comes unstuck when two people have differing probability estimates for the same thing. For example, suppose you are at a party where the host has scribbled either a zero or a one on everyone's forehead, and there are no mirrors^{1}. Now consider the question of what the probability is that your forehead reads zero - an objectivist would say that it is either one or zero, and the other guests know which - but that you can only *estimate* it as a half. This can feel somewhat unwieldy and unnatural.

One curious formula for the probability that the sun would rise was given by Laplace, an objectivist. He claimed that this probability was (d+1)/(d+2), where d is the number of days on which the sun has risen - and that this formula applied in all cases where we knew nothing (or where what we did know was swamped by what we didn't). The figure isn't pulled from thin air: it follows from assuming, to start with, that every possible 'sunrise rate' is equally likely, and then updating on d successful sunrises in a row. And it does seem to work.
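Laplace's rule is simple enough to compute directly. A small sketch (the function name is mine; the general form (s+1)/(n+2) for s successes in n trials reduces to the text's (d+1)/(d+2) when every trial succeeded):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: estimate the probability of the
    next success after seeing `successes` out of `trials` attempts."""
    return Fraction(successes + 1, trials + 2)

# Before any sunrise has been observed, the rule gives 1/2 - total ignorance.
print(rule_of_succession(0, 0))

# After roughly 5000 years of uninterrupted daily sunrises:
days = 5000 * 365
print(float(rule_of_succession(days, days)))
```

Note the pleasing behaviour at the extremes: with no data at all you get a half, and the estimate creeps towards one (without ever reaching it) as the sunrises pile up.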

**Subjectivism**

A subjectivist would tell you that probabilities are simply degrees of belief held by rational agents, with no objective reality. Unlike a frequentist, a subjectivist is happy to accept that we can deduce the probability that the sun will rise again tomorrow merely from its age, colour, chemical composition, and so forth. Unlike an objectivist, a subjectivist has no problem with different people assigning different probabilities to the same event, and all being correct.

In practice, it's quite tricky to get humans (or, if we ever meet any, other rational agents) to tell you what their degrees of belief are. We do all kinds of things: hedging our bets, bowing to peer pressure, being suspicious, trusting our friends, looking for patterns. In general, all the things which mark us as intelligent beings are seen as a downside by probability researchers.

To get round this, people normally call upon others to 'put their money where their probabilities are'. Specifically, when someone states their degree-of-belief in something, other experimental subjects are free to place small bets (usually with plastic tokens) for or against that belief, with appropriate odds. Confronted with material gain or loss, most people quickly change their quoted odds to be more accurate. So effective is this method that it's been designated by some as the fundamental meaning of probability: the willingness to take or place a bet.
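The link between belief and betting can be made concrete. A minimal sketch, with invented function names, assuming the usual convention that a belief of p corresponds to fair odds-against of (1-p)/p: if your quoted probability differs from the true one, the bet stops being break-even and someone profits at your expense.

```python
def fair_odds_against(p):
    """Odds-against implied by degree of belief p: how much you win
    per unit staked, if the event happens."""
    return (1 - p) / p

def expected_value(p_true, stake, odds_against):
    """Average profit of backing the event at the given odds, if the
    event really occurs with probability p_true."""
    return p_true * stake * odds_against - (1 - p_true) * stake

# If you believe p = 0.5 and quote even odds, the bet breaks even:
print(expected_value(0.5, 1.0, fair_odds_against(0.5)))  # 0.0

# But quote p = 0.8 for something that really happens half the time,
# and backing it at your own odds loses money on average:
print(expected_value(0.5, 1.0, fair_odds_against(0.8)))  # -0.375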

**Formalism**

The formalist definition of probability just defines it as a mathematical toy and neglects to say what it means or where it means it. As a theorist might say:

*'Hey, we just play about with it - if those crazy people we work for think it **means** something in the real world then that's their problem...'*

Specifically, a formalist would say that a probability is a function p(x) with the properties:

- p(x) >= 0 for all x
- The integral from negative infinity to positive infinity of p(x) = 1

Or, in English:

- The probability of any event is always positive or zero.
- The probabilities of all possible (mutually exclusive) events add up to one.

For example, the probability of flipping 'heads' on a coin is one half, which is positive. The probability of flipping 'tails' is also one half, which is still positive. The total of these two probabilities is half plus half, which is one. With this definition, all of the maths can be done to work out how probabilities add together, conditional probabilities, and so forth, without making any claims about how it applies to 'the real world', whatever that is.
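The formalist's checklist is mechanical enough to write down. A small sketch for the discrete case (the function name and tolerance are mine): a candidate assignment of numbers to mutually exclusive outcomes counts as a probability exactly when it passes both axioms.

```python
def is_probability(dist, tol=1e-9):
    """Check the formalist axioms for a discrete distribution:
    `dist` maps each mutually exclusive outcome to a number."""
    non_negative = all(p >= 0 for p in dist.values())  # axiom 1
    normalised = abs(sum(dist.values()) - 1) < tol     # axiom 2
    return non_negative and normalised

coin = {"heads": 0.5, "tails": 0.5}
print(is_probability(coin))                          # True
print(is_probability({"heads": 0.7, "tails": 0.5}))  # False: sums to 1.2
```

Notice that the function neither knows nor cares what 'heads' refers to in the real world - which is precisely the formalist's point.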

**Adhoccery**

Most people, though, seem happy not knowing what probability actually is, as long as it works - this is probably the dominant view: 'It's just this thing'. There are even those deluded fools who worship it as some kind of god. Indeed do many things come to pass.