In the early days of computing, man first made 'mechanical' computers. Through much research and development, electronic computers were eventually developed. These early systems used primitive technology in comparison with today's computers, but for their day they were fairly complex machines and they consumed a great deal of floor space. Their actual computational abilities were very limited in nature. A computer taking up half a warehouse could do no more, and probably less, than a simple desktop calculator of today.

Early designers and programmers adopted the binary system^{1} as a way to simplify the task of delivering instructions to the computer. They created 'switches' that were on (electrical current is present) or off (electrical current is absent). Using this as a starting point, they were able to do basic calculations. In short, they used 'bits', or Binary digITs. A bit is usually represented in the computer world by a 1 or a 0.

Another way of looking at it is as a True or a False, an ON or an OFF. If something equals one, then it exists, or is true; if it equals zero, then it doesn't exist, or is false. Machines were simplified because they didn't have to handle the digits 2 through 9 at all. It simply was or it wasn't. One or zero. True or false.
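If you have Python to hand, you can see this one-to-one correspondence between bits and truth values directly, since Python's `bool` type is built on exactly this idea (a minimal sketch):

```python
# True and False convert to the bits 1 and 0, and vice versa.
on, off = True, False

print(int(on))    # the 'on' switch as a bit: 1
print(int(off))   # the 'off' switch as a bit: 0
print(bool(1))    # and back again: True
print(bool(0))    # False
```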

In school, we are taught 'decimal maths' or 'base ten' arithmetic^{2} with ten as the point where we 'start over'. That is, we count through nine and then the number ten is 1 again, followed by a 0 (10), then the next number, eleven, is a 1 followed by another 1 (11). So we first learn to count to 10, then on from there maybe to 100.

In binary you do the same thing, but the digits which represent 2-9 do not exist, so 10 comes early. In other words, instead of the decimal 'columns' of 1 (ones), 10 (tens), 100 (hundreds), 1000 (thousands), you get the binary 1 (ones), 10 (twos), 100 (fours), 1000 (eights) and so on.
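The column values above can be checked with a short Python sketch that reads a string of binary digits left to right, doubling as it goes (the helper function name is our own invention for illustration):

```python
def binary_to_decimal(bits):
    """Convert a string of binary digits, e.g. '1010', to a decimal number."""
    total = 0
    for digit in bits:
        # Each column is worth twice the column to its right,
        # so shifting everything left doubles the running total.
        total = total * 2 + int(digit)
    return total

print(binary_to_decimal('1010'))  # one eight + one two = ten
print(binary_to_decimal('100'))   # one four = four
```

Python can also do this natively with `int('1010', 2)`, where the 2 tells it to read the string as base two.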

For example, as you know, zero is 0. One is 1. So far, so good. Decimal is the same as binary, to this point. But then, two becomes 10 because the digit which represents two (2) in the decimal world simply doesn't exist in the binary world. And three, of course, is represented by 11.

Now what? Four becomes 100. And five is 101, and six is 110 and seven 111. Eight, therefore, would be 1000 and nine 1001; finally, ten is 1010.

So count with me: zero, one, two, three, four, five, six, seven, eight, nine, ten. And now, write:

0 (zero), 1 (one), 10 (two), 11 (three), 100 (four), 101 (five), 110 (six), 111 (seven), 1000 (eight), 1001 (nine), and 1010 (ten).
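You can reproduce this count with a couple of lines of Python, using the built-in `format` function to render each number in base two:

```python
# Print zero through ten in decimal alongside their binary forms.
for n in range(11):
    print(n, '=', format(n, 'b'))
```

The last line printed is `10 = 1010`, matching the count above.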

Addition works the same as with decimal. Ten plus ten still equals twenty, but it is expressed as:

1010 (ten)
+1010 (ten)
=10100 (twenty)

That is, right to left: zero plus zero equals zero (0). One plus one equals two (10) – like decimal maths, you put down the 0 and carry the one. Zero plus zero plus the carried-over one equals one (1). Finally, one plus one equals two (10), which you write down in full because there is no column left to carry into.
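The carrying procedure just described can be spelled out as a short Python sketch (the function name is ours, chosen for illustration):

```python
def add_binary(a, b):
    """Add two binary strings digit by digit, e.g. '1010' + '1010' -> '10100'."""
    # Pad the shorter number with leading zeros so the columns line up.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = [], 0
    # Work right to left, just as you would on paper.
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 2))  # the digit you put down
        carry = total // 2             # the digit you carry
    if carry:
        result.append('1')
    return ''.join(reversed(result))

print(add_binary('1010', '1010'))  # 10100 (twenty)
```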

Even multiplication works the same as decimal, in that 0 times 0 equals 0, 0 times 1 equals 0, 1 times 0 equals 0 and 1 times 1 equals 1.

Ten times ten can be expressed as:

1010
x1010
=1100100 (one hundred)

If you care to do the maths, it's there, simple as can be. Just like regular old maths, but with only two digits to worry about.
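If you would rather let the machine do the maths, here is a sketch of binary long multiplication in Python. It uses the same shift-and-add method as on paper: for each 1 in the second number, add a copy of the first number shifted left by that column's position (the function name is ours, for illustration):

```python
def multiply_binary(a, b):
    """Multiply two binary strings, e.g. '1010' x '1010' -> '1100100'."""
    total = 0
    # Walk the digits of b right to left; each position is one column further left.
    for position, digit in enumerate(reversed(b)):
        if digit == '1':
            # Shifting left by one column multiplies by two,
            # just as appending a zero does in decimal long multiplication.
            total += int(a, 2) << position
    return format(total, 'b')

print(multiply_binary('1010', '1010'))  # 1100100 (one hundred)
```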

To take this a step further, beyond pure maths, alphabetic characters and other symbols may be represented in binary by assigning numeric codes to them. For example, using the American Standard Code for Information Interchange (ASCII), the code to represent the letter 'A' is decimal 65 (binary 1000001) and the letter 'a' is decimal 97 (binary 1100001). Alphabetic characters and their representation on computers are not the focus of this article. However, you may get an idea of how ASCII can be represented, as well as a chart or table of the decimal assignments for each letter and symbol, at this fun h2g2 Edited Entry: ASCII Art.
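Python's built-in `ord` function looks up a character's code number, so you can check those ASCII values and their binary forms yourself (a quick sketch):

```python
# Show the ASCII code and binary form for a couple of characters.
for character in ['A', 'a']:
    code = ord(character)  # the character's numeric code (ASCII for these two)
    print(character, code, format(code, 'b'))
```

This prints `A 65 1000001` and `a 97 1100001`.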

Quotable Quote: 'The world has only 10 kinds of people. Those who get binary, and those who don't.'^{3}