## First Steps

The first point to make is that our all-time favourite, decimal, uses Base 10 - that is, every *ten* you count up, you knock the right-hand digit back to zero and add one to the digit on its left. This is called counting. Two other commonly used bases, binary and hexadecimal (hex), work in exactly the same way, except that binary (Base 2) wraps the digit back to zero every two counts (meaning you only get as high as '1' before going back to zero) whereas hex does so every sixteen counts. Since there aren't sixteen number symbols, hex has to borrow the first six letters of the alphabet, meaning hex counts 0, 1, 2 ... 8, 9, A, B, C, D, E, F before rolling back to zero. When writing numbers down, decimal numbers are written just as we normally do - e.g. 10 means ten. Hex numbers are preceded by "0x" and binary numbers are followed by a trailing "b".

The second point is that whatever base you express a number in, it is still the same quantity when you convert it to another base. 50 sheep in a field is, obviously, the same as 110010b sheep in a field.
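You can check this equivalence in Python, which accepts integer literals in all three bases (note that Python writes binary with a leading "0b" rather than the trailing "b" used in this text):

```python
# The same quantity of sheep, written three ways:
sheep_dec = 50          # decimal
sheep_bin = 0b110010    # binary (110010b in this text's notation)
sheep_hex = 0x32        # hexadecimal

# All three literals denote one and the same number.
print(sheep_dec == sheep_bin == sheep_hex)  # True
```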

To prove the above, and particularly if you are new to non-decimal ways of writing numbers, fire up Windows Calculator and, using the menu button, set the mode to "Programmer". You can then click on "HEX", "DEC" or "BIN" to enter numbers in hexadecimal, decimal and binary respectively, and it will convert them, as you type, into the other bases (ignore "OCT", that is only useful for people over 80). You may experiment by clicking on "DEC", typing "1++" and then pressing enter repeatedly to watch the numbers going up in each base. Note how they roll over when they reach a run of their highest digit, such as 999 rolling over to 1000, 0xFFF rolling over to 0x1000 and 111b rolling over to 1000b. Interesting.
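If you don't have Windows Calculator to hand, a short Python loop gives the same counting demonstration, printing each value in all three bases so you can watch the rollover points go by:

```python
# Count upward and show each value in decimal, hex and binary.
# Watch 9 -> 10 happen at different places in each column:
# decimal rolls at ten, hex at sixteen, binary at two.
for n in range(18):
    print(f"dec {n:>3}   hex {n:>2X}   bin {n:>5b}")
```

Running it shows, for example, that 15 is F in hex and 1111 in binary, and one count later 16 becomes 10 in hex and 10000 in binary.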

## CPU Maths

CPUs work internally on numbers using binary arithmetic. Binary numbers can be any number of bits long, but on most devices they are limited to multiples of 8 - i.e. for the Pilot, 8, 16, 24 and 32 bits long. Binary numbers are long-winded to write down, so we prefer hex (base 16). The use of hex is traditional and comes from the fact that binary numbers can be grouped into blocks of four bits (16 possible values), so each block can be written as a single hex character - making hex half-way between the human and CPU ways of looking at things.

For example, 10001000b can be grouped as 1000b : 1000b and each group converted to hex without reference to the other, in this case giving 0x88.
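The grouping above can be sketched in Python using a shift and a mask to pull out each four-bit block (nibble) independently:

```python
value = 0b10001000            # 10001000b from the text

high = (value >> 4) & 0xF     # top group of four bits:    1000b
low  = value & 0xF            # bottom group of four bits: 1000b

# Each four-bit group becomes exactly one hex digit:
print(format(high, 'X'), format(low, 'X'))  # 8 8
print(hex(value))                           # 0x88
```

Note how neither group needed to know anything about the other - that independence is exactly why hex and binary convert so naturally.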

NOTE: the standard habit of writing in decimal, i.e. base 10, is a very poor way of representing the data inside CPUs, since base 10 is not a power of the CPU's underlying base 2. In the example above, 10001000b is 136 in decimal, which bears no obvious (or easily calculable) relationship to the binary form. In summary, converting between binary and hex feels quite natural (after a while), but decimal (our favourite) is the odd one out. There is no mathematical reason at all why people count in decimal; everyone should learn to count, add and multiply in hex. It is the future.

## Truncation Confusion

If you offer up the pattern 1001, 1008, 1056 and I give you 1, 8, 56 in return, you can easily see I've been truncating the numbers. The problem with numbers in any base (decimal, hex or otherwise) is that if you chop out part of a number, the remaining part can be hard to recognise when shown in a different base.

For example:

0x3A is 58 in decimal. But if we chop off ("mask out", in computer terminology) the '3' from 0x3A, we are left with 0x0A - clear enough in hex, but 10 in decimal. Not obvious.
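A quick sketch of that masking in Python - the AND with 0x0F keeps only the low four bits, chopping off the '3':

```python
value = 0x3A                # 58 in decimal

masked = value & 0x0F       # mask out the high nibble (the '3')

print(hex(masked))          # 0xa  - obviously the right-hand digit of 0x3A
print(masked)               # 10   - not obviously related to 58 at all
```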

If I am given 0x12345678 and split it into groups of 8 bits, I get 0x12, 0x34, 0x56, 0x78 - clearly parts of the first number. But in decimal the original number is 305419896, and chopping that into the same 8-bit bytes gives 18, 52, 86, 120. Again, no obvious relationship at all.
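The byte-splitting above can be done with shifts and a mask, much like the nibble example earlier - each byte is the original number shifted right and ANDed with 0xFF:

```python
value = 0x12345678

# Pull out each 8-bit byte, most significant first:
parts = [(value >> shift) & 0xFF for shift in (24, 16, 8, 0)]

print([hex(p) for p in parts])   # ['0x12', '0x34', '0x56', '0x78']
print(parts)                     # [18, 52, 86, 120] - the same bytes in decimal
```

In hex you can read the four parts straight off the original number; in decimal you cannot, which is the whole point of this section.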