Linear Codes P. Danziger
1 Hamming Distance
Throughout this document F means the binary field F2.
In F2 we could define the dot product, magnitude and distance in analogy with R^n, but in this case every vector would have length 0 or 1, which is not very interesting. Instead we use a different definition of magnitude and distance, which is much more useful in this case.
Definition 1 (Hamming distance) Given two vectors u, v ∈ F^n we define the Hamming distance between u and v, d(u, v), to be the number of places where u and v differ.
Thus the Hamming distance between two vectors is the number of bits we must change to change
one into the other.
Example Find the distance between the vectors 01101010 and 11011011.
01101010
11011011
They differ in four places, so the Hamming distance d(01101010, 11011011) = 4.
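The bit-by-bit comparison above is easy to sketch in code; here is a minimal Python version (the function name is my own, not from the notes):

```python
def hamming_distance(u, v):
    """Number of positions where the bit strings u and v differ."""
    assert len(u) == len(v), "vectors must have the same length"
    return sum(a != b for a, b in zip(u, v))

print(hamming_distance("01101010", "11011011"))  # 4
```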
Definition 2 (Weight) The weight of a vector u ∈ F^n is w(u) = d(u, 0), the distance of u to the zero vector.
The weight of a vector is equal to the number of 1’s in it. The weight may be thought of as the
magnitude of the vector.
Example Find the weight of 11011011.
11011011 contains 6 ones, so w(11011011) = 6.
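Since the weight is just the number of 1's, it can be computed directly; a small Python sketch (names are my own):

```python
def weight(u):
    """Weight w(u) = d(u, 0), i.e. the number of 1s in u."""
    return u.count("1")

print(weight("11011011"))  # 6
```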
2 Error Correcting Codes
Error correcting codes are used in many places, wherever there is the possibility of errors during
transmission. Some examples are NASA probes (Galileo), CD players and the Ethernet transmission
protocol.
We assume that the original message consists of a series of bits, which can be split into equal size blocks, and that each block is of length n, i.e. a member of F^n.
The usual process is as follows: the original block x ∈ F^n is encoded by some encoding function to u ∈ F^(n+k), which is then sent across some (noisy) channel. At the other end the received value v ∈ F^(n+k) is decoded by means of the corresponding decoding function to some y ∈ F^n.
x —Encode→ u —Transmit→ v —Decode→ y
If there are no errors in the channel, then u = v and x = y.
Definition 3 (Code) A code is a set C ⊂ F^m, where m = n + k, together with a 1-1 encoding transformation T : F^n → F^m with Ran(T) = C, and an onto decoding transformation D : C → F^n.
In practice the domain of D is often larger than C to allow for corrections.
Let d be the smallest Hamming distance between two distinct codewords in a code C, d = min{d(u, v) : u, v ∈ C, u ≠ v}.
Thus to change one codeword to another requires at least d bit changes. Then C can detect up to
d − 1 errors, since any d − 1 transmission errors cannot change one codeword to another.
A code is characterized by the three numbers:
n - the original message length (bits),
k - the number of bits added in encoding, and
d - the minimum distance between codewords.
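The minimum distance d can be found by checking every pair of distinct codewords; a minimal Python sketch (function names are my own), using the code C = {000, 111} discussed below:

```python
from itertools import combinations

def hamming_distance(u, v):
    """Number of positions where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def minimum_distance(C):
    """Smallest Hamming distance between two distinct codewords of C."""
    return min(hamming_distance(u, v) for u, v in combinations(C, 2))

C = {"000", "111"}
d = minimum_distance(C)
print(d)  # 3, so this code detects up to d - 1 = 2 errors
```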
Suppose that u was sent and v was received, and d(u, v) ≤ (d − 1)/2, i.e. at most (d − 1)/2 errors occurred. Then the distance of v to any codeword other than u, say w ∈ C, is greater than (d − 1)/2: since d(u, w) ≥ d by the definition of d, the triangle inequality gives d(v, w) ≥ d(u, w) − d(u, v) ≥ d − (d − 1)/2 > (d − 1)/2. Thus u is the nearest codeword to v; the number of bit changes required to get from u to v (the number of errors in the channel) is less than the number required to get from any other codeword to v. We correct v to u, so C can correct up to t = ⌊(d − 1)/2⌋ errors.
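Nearest-codeword correction can be sketched directly from this argument: pick the codeword closest to the received word (a minimal Python illustration, with names of my own choosing):

```python
def hamming_distance(u, v):
    """Number of positions where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def correct(v, C):
    """Correct received word v to the nearest codeword in C.
    Reliable when at most t = (d - 1) // 2 errors occurred."""
    return min(C, key=lambda u: hamming_distance(u, v))

C = {"000", "111"}  # d = 3, so t = 1
print(correct("010", C))  # "000": a single bit flip is corrected
```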
Example (3× Repetition Code)
n = 1, each bit is a block so a message is either 0 or 1. k = 2, so m = 3, C = {000, 111}.
Encode: (T)
0 → 000
1 → 111.
Decode: (D)
001, 010, 100, 000 → 0
011, 101, 110, 111 → 1.
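The repetition code's encoding and decoding amount to repeating each bit and taking a majority vote; a minimal Python sketch (function names are my own):

```python
def encode(bit):
    """T: repeat each message bit three times."""
    return bit * 3

def decode(block):
    """D: majority vote over the three received bits."""
    return "1" if block.count("1") >= 2 else "0"

print(encode("1"))    # "111"
print(decode("101"))  # "1": the single flipped bit is corrected
```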