Define bit. What are the symbols used to represent a bit?

Answers

Answered by sriya60
The bit (a portmanteau of binary digit)[1] is a basic unit of information used in computing and digital communications. A binary digit can have only one of two values, and may be physically represented with a two-state device. These state values are most commonly represented as either a 0 or a 1.

The two values of a binary digit can also be interpreted as logical values (true/false, yes/no), algebraic signs (+/−), activation states (on/off), or any other two-valued attribute. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. The length of a binary number may be referred to as its bit-length.
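As a rough illustration (not part of the original answer), here is a short Python sketch: the state names "low_voltage" and "high_voltage" are hypothetical labels for a two-state device, and int.bit_length() is Python's built-in way to get the bit-length of an integer.

    # A bit has exactly two values; which physical state means 0 and which means 1 is a convention.
    state_to_bit = {"low_voltage": 0, "high_voltage": 1}   # hypothetical device states
    bit = state_to_bit["high_voltage"]
    print(bit)                    # 1

    # The bit-length of a binary number: 13 is 0b1101, which needs 4 bits.
    print(bin(13))                # 0b1101
    print((13).bit_length())      # 4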

In information theory, one bit is typically defined as the information entropy of a binary random variable that is 0 or 1 with equal probability,[2] or the information that is gained when the value of such a variable becomes known.[3][4]
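For a concrete check of that definition, here is a minimal Python sketch (the binary_entropy helper is my own name, not from the original answer) using the standard binary-entropy formula H(p) = -p*log2(p) - (1-p)*log2(1-p):

    from math import log2

    def binary_entropy(p):
        # Entropy, in bits, of a binary variable that is 1 with probability p.
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    print(binary_entropy(0.5))    # 1.0 -> a fair 0/1 variable carries exactly one bit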

Confusion often arises because the words bit and binary digit are used interchangeably. But, within Shannon's information theory, a bit and a binary digit are fundamentally different types of entities. A binary digit is a number that can adopt one of two possible values (0 or 1), whereas a bit is the maximum amount of information that can be conveyed by a binary digit (when averaged over both of its states). By analogy, just as a pint-sized bottle can contain between zero and one pint, so a binary digit can convey between zero and one bit of information. Less ambiguous terminology refers to this unit of information as the shannon (see below).

In quantum computing, a quantum bit or qubit is a quantum system that can exist in a superposition of the two classical (i.e., non-quantum) bit values.
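As a rough sketch of that idea (an illustration, not a full quantum-computing treatment), a qubit can be modelled as a normalized pair of amplitudes, with measurement giving 0 or 1 with probabilities equal to the squared magnitudes:

    import numpy as np

    # Equal superposition of the two classical bit values 0 and 1.
    qubit = np.array([1.0, 1.0]) / np.sqrt(2)    # amplitudes for |0> and |1>

    probabilities = np.abs(qubit) ** 2           # Born rule: |amplitude|^2
    print(probabilities)                         # [0.5 0.5]
    print(np.isclose(probabilities.sum(), 1.0))  # True -> state is normalized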

The symbol for binary digit is either simply bit (recommended by the IEC 80000-13:2008 standard) or lowercase b (recommended by the IEEE 1541-2002 and IEEE Std 260.1-2004 standards). A group of eight binary digits is commonly called one byte, but historically the size of the byte is not strictly defined.
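A quick Python check of the (now conventional) eight-bit byte:

    # One 8-bit byte can hold 2**8 = 256 distinct values.
    print(2 ** 8)                      # 256

    # The 8 binary digits of a single byte value.
    value = 0b10100101                 # 165 in decimal
    print(format(value, "08b"))        # 10100101
    print(len(format(value, "08b")))   # 8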

As a unit of information in information theory, the bit has alternatively been called a shannon,[5] named after Claude Shannon, the founder of the field of information theory. This usage distinguishes the quantity of information from the form of the state variables used to represent it. When the logical values are not equally probable or when a signal is not conveyed perfectly through a communication system, a binary digit in the representation of the information will convey less than one bit of information. However, the shannon unit terminology is uncommon in practice.
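To see this numerically, here is a self-contained variant of the earlier entropy sketch: a fair binary digit carries one bit (one shannon), while a biased one carries less.

    from math import log2

    # Same binary-entropy formula as in the earlier sketch.
    h = lambda p: -p * log2(p) - (1 - p) * log2(1 - p)

    print(h(0.5))    # 1.0    -> equally probable values: one full bit per binary digit
    print(h(0.9))    # ~0.469 -> heavily biased values: well under one bit per binary digit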

