The Hidden Mysteries of the Enigmatic Zero - Is It a Number?
The biggest miscalculation the world made in recent times was celebrating the birth of the new millennium on 1 January 2000 - a whole year too early! That may not seem important until we look at why it happened at all. In fact, we were merely marking the passage of 1999 years; correctly, the completion of 2000 years should have been celebrated on 1 January 2001.
How did this glaring error occur? The reason is very simple: in the B.C. calendar era (also called B.C.E., depending on one's religious persuasion) the year 1 B.C. was the first year before A.D. 1 – there was no room reserved for a Zero year.
That recent event serves to bring into focus the constant and ongoing wrangle and perplexity surrounding the use and understanding of what Zero is, what it does, and what purpose – if any – it serves. It is, after all, the ONLY entity that can claim to be nothing and to exist at the same time!
Indeed, if the idea of a Zero had not existed, neither would computers in their present form. In our normal way of counting, we begin with a single digit and proceed through each symbol in increasing order. Decimal counting uses the ten symbols 0 through 9, while binary uses only the two symbols 0 and 1.
With decimal, it works this way: when the symbols for the first digit are exhausted, the next-higher digit (to the left) is incremented, and counting starts over at 0. So counting proceeds thus:
000, 001, 002, ... 007, 008, 009, (rightmost digit starts over, and next digit is incremented) 010, 011, 012, ... 090, 091, 092, ... 097, 098, 099, (rightmost two digits start over, and next digit is incremented) 100, 101, 102, ...
In binary, counting is the same except that only the two symbols 0 and 1 are used. Thus after a digit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next digit to the left, thus:
0001, (rightmost digit starts over, and next digit is incremented)
0010, 0011, (rightmost two digits start over, and next digit is incremented)
0100, 0101, 0110, 0111, (rightmost three digits start over, and the next digit is incremented)
1000, 1001, ...
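The odometer-style counting described above can be sketched in a few lines of code. The following is a minimal illustration (the function name `count` is mine, not from any library): the rightmost digit is incremented, and whenever a digit exhausts the symbols of its base, it resets to 0 and carries into the digit to its left – which is exactly where the zero symbol earns its keep.

```python
def count(base, width, steps):
    """Yield successive counts as fixed-width digit strings in `base`."""
    digits = [0] * width
    out = []
    for _ in range(steps):
        out.append("".join(str(d) for d in digits))
        # Increment the rightmost digit; when a digit exhausts its symbols,
        # it resets to 0 and the next digit to the left is incremented.
        i = width - 1
        while i >= 0:
            digits[i] += 1
            if digits[i] < base:
                break
            digits[i] = 0
            i -= 1
    return out

print(count(10, 3, 3))  # ['000', '001', '002']
print(count(2, 4, 4))   # ['0000', '0001', '0010', '0011']
```

The same loop serves both bases; only the point at which a digit "starts over" differs, which is why decimal and binary counting read so similarly above.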
In earlier times, the idea of using numbers – in this case 1 through 9 – was routinely applied to answering simple real-life problems: for example, how many head of cattle a farmer owned.
If the ancients wanted to solve a problem about, say, how many cows 2 farmers owned, then neither the concept of 0 nor that of -10 was going to provide a satisfactory answer. It was as basic as this: 5 cows for farmer A and 4 cows for farmer B equalled a total of 9 cows. They only considered the manipulation of numbers in concrete terms, like cows, pots, houses, wives...
The intellectual problem that arose (although they were not unduly troubled by it) was this: when a farmer owned 10 cows, and all were stolen, how many cows did he then have? From a modern perspective, you might say any ONE of: zero, null, zilch, nothing, nil, nix, nada, aught, cipher, cypher, goose egg, naught, zip.
But it was not that simple then – the concept of Zero just did not exist until aeons later!
So, who in fact discovered Zero?
Across the millennia of human history, zero made only shadowy appearances, only to disappear again – mainly because the mathematicians of the time did not grasp its major significance, or, if they did, held onto it for only a fleeting moment, like a bird, before letting it loose into the ether again.
If we examine the importance of the CONCEPT of Zero, whether standing alone or contained in a sequence of digits, we find that although it has no value in itself, it contributes value to the number by virtue of its placement – a value greater than the digits would otherwise have.
Consider this example:
The number 3104 is 2790 greater than the number 314 written without the Zero. But surely that must be incorrect? Yet if we install a zero in a sequence of digits, it then, almost magically, acquires a value by virtue of being exalted to the role of “place-holder”.
If we then look at the number 02 and remove the 0, then, unlike in the previous paragraph, the number still retains the value of 2. Move the 0 to the other side of the 2, and the number magically becomes 20 – an increase of 18 – simply by adding 0, nothing, zilch... get my point?
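The place-holder behaviour just described comes straight from positional notation: each digit contributes digit × base^position, so a zero contributes nothing itself yet pushes the digits to its left into higher positions. A small sketch (the helper name `place_values` is mine) makes both examples above explicit:

```python
def place_values(s):
    """Return each digit's positional contribution in the decimal string s."""
    n = len(s)
    # The digit at index i sits in the 10**(n - 1 - i) place.
    return [int(d) * 10 ** (n - 1 - i) for i, d in enumerate(s)]

print(place_values("3104"))  # [3000, 100, 0, 4] -- the 0 contributes nothing,
                             # but shifts the 3 and 1 into higher places
print(sum(place_values("3104")) - sum(place_values("314")))  # 2790
print(sum(place_values("20")) - sum(place_values("02")))     # 18
```

The zero's own term is always 0; the entire difference comes from the other digits being promoted to higher powers of ten.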
Having demonstrated this apparent enigmatic behavior, we might then be persuaded to believe that the 0 must ALWAYS be used in that way as place-holder to determine a value.
But how do we explain the fact that the advanced civilization of the Babylonians (from circa 1696 – 1654 BC) got away with NOT using a zero for over a millennium? In original texts inscribed on unbaked clay, they simply etched two wedge-like symbols where the “zero” would go – i.e. 31”4, not 3104. But, very mysteriously, no script has ever been found for the number 3140 – i.e. 314”.
The learned folk of the time appeared to think that the number notation should be read in the “context” of the script to determine its value.
If a reference to “context” above seems misplaced when looking back 4000 years, it becomes a bit creepy to acknowledge that we STILL USE IT, unwittingly, in a very similar way to this day. If one catches an express bus in New York, the fare quoted would be “five fifty” ($5.50); yet when we speak about the cost of an air flight we also say “five fifty”, and, without any confusion, mean $550! Weird? The Babylonians would have hooted.
What, then, can we conclude from all this? Simply that the early use of a Zero to denote an empty space is not really the use of a NUMBER at all – it is just a sort of unacknowledged, perhaps even lonely, punctuation mark allowing the correct interpretation of the value of a sequence of digits in the context of the problem.
Credits: Wikimedia Commons