SCERT Class 11 Computer Application: Explain the method of representing a floating point number in a 32-bit computer.
Answers
Explanation:
Computer Representation of Numbers
Computers are designed to use binary digits to represent numbers and other information. Computer memory is organized into strings of bits, called words, of the same length. Decimal numbers are first converted into their binary equivalents and then represented in either integer or floating point form.
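As a quick illustration (a sketch I am adding, not part of the textbook answer), the snippet below converts a decimal number to its binary equivalent; the value 17 is chosen only because it is used in the worked example further down.

```python
# Minimal sketch (assumed, not from the original answer): converting a
# decimal number to its binary equivalent.
n = 17
binary = format(n, 'b')   # binary digits of n, without a '0b' prefix
print(binary)             # prints: 10001
```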
Integer Representation
The largest decimal number that can be represented, in binary form, in a computer depends on its word length. An n-bit word computer can handle a number as large as $2^{n}-1$. For instance, a 16-bit word machine can represent numbers as large as $2^{16}-1 = 65535$. How do we represent negative numbers? Negative numbers are stored using 2's complement. This is obtained by taking the 1's complement of the binary representation of the positive number and then adding 1 to it.
For example, let us represent $-17$ in binary form.
$(17)_{10} = (10001)_{2}$

$+17 = 010001$ (a sign bit of 0 is appended on the left)

1's complement $= 101110$

Add 1: $101110 + 000001 = 101111$

$\therefore \quad -17 = 101111$
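The worked example above can be checked with a short sketch. The 6-bit width is taken from the example itself and is only assumed here for illustration.

```python
# Minimal sketch: the 6-bit two's-complement pattern of -17,
# following the same steps as the worked example.
BITS = 6
positive = 0b010001                              # +17 with a leading sign bit of 0
ones_complement = positive ^ ((1 << BITS) - 1)   # flip all 6 bits -> 101110
twos_complement = ones_complement + 1            # add 1           -> 101111
print(format(twos_complement, '06b'))            # prints: 101111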
Here an extra 0 is appended to the left of the binary number to indicate that it is positive. If this extra leftmost binary digit is set to 1, it indicates that the binary number is negative. So the general convention for storing signed numbers is to append a binary digit 0 or 1 to the left of the binary number, depending on whether the number is positive or negative. So in an n-bit word computer, as one bit is reserved for the sign, one can use at most $(n-1)$ bits to store a signed number. Thus the largest signed number a 16-bit word can represent is $2^{n-1}-1 = 2^{15}-1 = 32767$. On this machine, since zero is defined as $0000000000000000$, it is redundant to use the pattern $1000000000000000$ to define a "minus zero". It is usually employed to represent an additional negative number, i.e. $-2^{15} = -32768$, and hence the range of signed numbers that can be represented on a 16-bit word machine is from $-32768$ to $+32767$.
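A small sketch of the 16-bit case described above; the helper name as_signed is hypothetical and used only for this illustration.

```python
# Minimal sketch (assumed, not part of the original answer): the signed
# range of a 16-bit word, and reading a bit pattern as a signed number.
BITS = 16
largest  = 2**(BITS - 1) - 1        # 0111111111111111 ->  32767
smallest = -2**(BITS - 1)           # 1000000000000000 -> -32768
print(largest, smallest)            # prints: 32767 -32768

def as_signed(pattern, bits=BITS):
    """Interpret a bit pattern as a two's-complement signed number."""
    return pattern - (1 << bits) if pattern & (1 << (bits - 1)) else pattern

print(as_signed(0b1000000000000000))   # prints: -32768
print(as_signed(0b1111111111111111))   # prints: -1
```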
I hope this helps you.