A computer uses the binary numbering system, which has only two digits, 0 and 1. Any number can be represented by a string of these digits, known as bits (a contraction of ‘binary digit’).
For example, the decimal number 5 is equal to the binary number 101.
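To make the correspondence concrete, here is a minimal sketch in Python (the language is chosen purely for illustration; any language with binary literals would do):

    # Decimal 5 and binary 101 are the same number.
    print(bin(5))         # '0b101' -- binary representation of decimal 5
    print(int("101", 2))  # 5      -- parse the binary string back to decimal
    print(0b101 == 5)     # True   -- a binary literal compares equal to 5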
As a bit can have only two values, it can be represented by a voltage that is either on (1) or off (0), also known as logical 1 and logical 0. Typical values used in a computer are 0 V for logical 0 and +5 V for logical 1, although it could also be the other way around, i.e. 0 V for 1 and +5 V for 0. A string of eight bits is called a ‘byte’ (or octet) and can hold values ranging from 0 (binary 0000 0000) to 255 (binary 1111 1111). Computers generally manipulate data in bytes or multiples of bytes.
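A quick Python check of the byte range described above:

    # An 8-bit byte spans 0 (all bits off) to 255 (all bits on).
    print(0b00000000)  # 0
    print(0b11111111)  # 255, the largest value one byte can hold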
Programmers use ‘hexadecimal’ notation because it is a more convenient way of defining and dealing with bytes. The hexadecimal numbering system has 16 digits (0–9 and A–F), each of which corresponds to four bits. A byte can therefore be written as two hexadecimal digits.
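For example, again as an illustrative Python sketch:

    # One hex digit corresponds to four bits, so a byte is two hex digits.
    print(hex(255))            # '0xff'
    print(0xFF == 0b11111111)  # True -- FF hex is 1111 1111 binary
    print(f"{200:02X}")        # 'C8' -- format a byte as two hex digits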
A ‘character’ is a symbol that can be printed. The letters of the alphabet, both upper and lower case, the numerals, punctuation marks and symbols such as ‘*’ and ‘&’ are all characters. A computer needs to express these characters in a form that other computers and devices can understand. The most common code for achieving this is the American Standard Code for Information Interchange (ASCII).
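In Python, the built-in ord() and chr() functions expose these character codes directly:

    # ord() returns a character's code; chr() maps a code back to a character.
    print(ord('A'))  # 65 -- ASCII code for upper-case A
    print(ord('a'))  # 97 -- lower case has a different code
    print(ord('*'))  # 42
    print(chr(65))   # 'A'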