Time for the explanation!
As you may have guessed from the hints in earlier posts, this thread is all about bits. But what is a bit? We're all used to the base 10 (decimal) system, which uses the digits 0 to 9, and most of us have seen the base 2 (binary) system, which uses only the digits 0 and 1. A bit is a base 2 digit: either 0 or 1 (off or on).
Sometimes you may have declared a variable in your VBA code as a Byte data type. You can see in the VBA helpfile that a Byte is an unsigned, eight-bit number ranging in value from 0 to 255. All of this is easier to understand when we summarise it as follows:
Rich (BB code):
Decimal Binary (Byte)
0 0000 0000 'smallest value
1 0000 0001
2 0000 0010
3 0000 0011
4 0000 0100
...etc...
Each 0 and 1 in the binary column is a bit, and eight bits makes a byte. It's that simple!
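If you'd like to see those bit patterns for yourself, VBA has no built-in function for printing a number in binary, so here is a rough sketch using a home-made helper (ByteToBinary is my own name, not part of VBA) that peels the bits off one at a time with Mod and integer division:
Code:
Sub ShowByteBits()
    Dim b As Byte
    For b = 0 To 4
        Debug.Print b, ByteToBinary(b)
    Next b
End Sub

'Home-made helper: returns the eight bits of a Byte as a string
Function ByteToBinary(ByVal b As Byte) As String
    Dim i As Integer
    Dim s As String
    For i = 1 To 8
        s = (b Mod 2) & s   'peel off the least significant bit
        b = b \ 2           'move along to the next bit
    Next i
    ByteToBinary = s
End Function
Running ShowByteBits prints 0 through 4 alongside their eight-bit patterns, matching the table above (without the space between the two groups of four bits).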
Let's revisit the code:
Code:
Sub Predict_The_Output_III()
    Debug.Print 2 Or 4
    Debug.Print 2 And 4
    Debug.Print Not 2
End Sub
What data type(s) are these numbers? Are they bytes? We can determine this by using the TypeName() function:
Rich (BB code):
Debug.Print TypeName(2) 'returns Integer
Debug.Print TypeName(4) 'returns Integer
We can also use the TypeName() function to see that the output of each of these operations is an Integer:
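Rich (BB code):
Debug.Print TypeName(2 Or 4)  'returns Integer
Debug.Print TypeName(2 And 4) 'returns Integer
Debug.Print TypeName(Not 2)   'returns Integer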
Okay, so we're dealing with Integers here, not Bytes. You can see in the VBA helpfile that, in VBA, an Integer is a signed, sixteen-bit number ranging in value from -32,768 to 32,767. The 'sign factor' makes these slightly different to Bytes: the first (leftmost) bit determines whether the number is negative or positive.
Rich (BB code):
Decimal Binary (Integer)
-32768 1000 0000 0000 0000 'smallest (most negative) value
...etc...
-3 1111 1111 1111 1101
-2 1111 1111 1111 1110
-1 1111 1111 1111 1111
0 0000 0000 0000 0000
1 0000 0000 0000 0001
2 0000 0000 0000 0010
3 0000 0000 0000 0011
4 0000 0000 0000 0100
5 0000 0000 0000 0101
6 0000 0000 0000 0110
...etc...
32767 0111 1111 1111 1111 'largest positive value
I've included a few more numbers there so we can refer to them later on.
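You can also get a feel for these bit patterns without writing them out by hand: VBA's built-in Hex() function prints a number in hexadecimal, and each hex digit corresponds to exactly four bits (note that Hex() drops leading zeros):
Code:
Sub ShowIntegerBits()
    Dim n As Integer
    n = -3
    Debug.Print Hex(n)  'FFFD = 1111 1111 1111 1101
    n = -1
    Debug.Print Hex(n)  'FFFF = 1111 1111 1111 1111
    n = 6
    Debug.Print Hex(n)  '6    = 0000 0000 0000 0110
    n = 32767
    Debug.Print Hex(n)  '7FFF = 0111 1111 1111 1111
End Sub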
So far so good, but what does all of this have to do with OR, AND and NOT? OR, AND and NOT are all bitwise operators: they manipulate bits. We can use truth tables to summarise the results of the operations:
OR has the following truth table:
Rich (BB code):
0 Or 0 = 0
1 Or 0 = 1
0 Or 1 = 1
1 Or 1 = 1
AND has the following truth table:
Rich (BB code):
0 And 0 = 0
1 And 0 = 0
0 And 1 = 0
1 And 1 = 1
NOT is slightly different to OR and AND because it operates on a single bit: it is a unary operator. NOT has the following truth table:
Rich (BB code):
Not 0 = 1
Not 1 = 0
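A word of caution before we go on: if you type these truth tables straight into the Immediate window, remember that VBA applies the operator to all sixteen bits of an Integer at once. With values of 0 and 1, Or and And behave just as the tables suggest, but Not 0 returns -1, because every one of the sixteen bits gets flipped:
Rich (BB code):
Debug.Print 0 Or 1  ' 1
Debug.Print 1 And 1 ' 1
Debug.Print Not 0   '-1 (all sixteen bits flip to 1)
Debug.Print Not 1   '-2 (1111 1111 1111 1110)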
Now that we're armed with this information, let's try to predict the outputs of the procedure.
Firstly
Let's write the integers 2 and 4 in binary:
Rich (BB code):
Decimal Binary
2 0000 0000 0000 0010
4 0000 0000 0000 0100
If we apply our OR truth table to each bit we get the following:
Rich (BB code):
Decimal Binary
2 0000 0000 0000 0010
4 0000 0000 0000 0100
=
? 0000 0000 0000 0110
If you look back at the integer summary table you can see that ? = 6, which is the output shown in the Immediate window when you run the procedure.
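You can confirm this in the Immediate window, using Hex() to check the bit pattern:
Rich (BB code):
Debug.Print 2 Or 4      ' 6
Debug.Print Hex(2 Or 4) ' 6 = 0000 0000 0000 0110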
Secondly
If we apply our AND truth table to each bit we get the following:
Rich (BB code):
Decimal Binary
2 0000 0000 0000 0010
4 0000 0000 0000 0100
=
? 0000 0000 0000 0000
If you look back at the integer summary table you can see that the output ? = 0.
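Again, this matches what VBA prints:
Rich (BB code):
Debug.Print 2 And 4 ' 0: the two bit patterns have no 1 bits in common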
Lastly
If we apply our NOT truth table to each bit we get the following:
Rich (BB code):
Decimal Binary
2 0000 0000 0000 0010
=
? 1111 1111 1111 1101
If you look back at the integer summary table you can see that the output ? = -3.
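And once more, the Immediate window agrees:
Rich (BB code):
Debug.Print Not 2      '-3
Debug.Print Hex(Not 2) 'FFFD = 1111 1111 1111 1101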
Hope that explains the mystery!