Rules of addition and multiplication of matrices, with examples
Rules for Matrix Arithmetic
The examples in the preceding section should make clear that matrix multiplication is not completely like multiplication of numbers. In particular, it is important to remember that matrix multiplication is in general not commutative. Even if $ A$ and $ B$ are both square ( $ n \times n$) matrices, so the products $ AB$ and $ BA$ are both defined and have the same dimensions ( $ n \times n$), it will usually be the case that $ AB \neq BA$.
Another way in which matrix multiplication differs from multiplication of numbers, which we have already seen, is the following: It is possible for some non-zero matrices $ A$ to have $ AB = AC$ but $ B \neq C$, or $ BA = CA$ but $ B \neq C$. In other words, we can't ``cancel out'' $ A$ in the equation $ AB = AC$.
Also, it is possible for neither $ A$ nor $ B$ to be zero, but for $ AB$ to be zero.
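Each of these failures can be seen already with small $ 2 \times 2$ matrices. The following is a minimal sketch in Python using NumPy (the library and the particular matrices are illustrative choices, not part of the original discussion):

```python
import numpy as np

# Non-commutativity: AB and BA are both 2x2, yet they differ.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
print(A @ B)   # [[2 1], [4 3]]
print(B @ A)   # [[3 4], [1 2]]  -- a different matrix

# Failure of cancellation: AB = AC although B != C.
A = np.array([[1, 0],
              [0, 0]])
B = np.array([[1, 1],
              [0, 0]])
C = np.array([[1, 1],
              [5, 7]])
print(np.array_equal(A @ B, A @ C))   # True, even though B != C

# A zero product from non-zero factors: AB = 0 with A != 0 and B != 0.
A = np.array([[1, 0],
              [0, 0]])
B = np.array([[0, 0],
              [0, 1]])
print(A @ B)   # [[0 0], [0 0]]
```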
There's some good news as well. Many of the familiar laws for arithmetic of numbers do hold in the case of matrix arithmetic. In order to say what these are, we will first define two other operations on matrices, which we have not used so far but which will be useful in many contexts.
The sum of two matrices $ A$ and $ B$ is defined when $ A$ and $ B$ have the same dimensions. To add $ A$ and $ B$, add their corresponding entries:
$\displaystyle \left(\begin{array}{cc} 1 & 2 \\ 3 & 4 \end{array}\right) + \left(\begin{array}{cc} 1 & 1 \\ -1 & -2 \end{array}\right) = \left(\begin{array}{cc} 2 & 3 \\ 2 & 2 \end{array}\right). $
The product of a scalar (number) $ c$ and a matrix $ A$ is computed by multiplying each entry of $ A$ by $ c$:
$\displaystyle 3 \left(\begin{array}{cc} 1 & 2 \\ -1 & 0 \end{array}\right) = \left(\begin{array}{cc} 3\cdot 1 & 3\cdot 2 \\ 3\cdot(-1) & 3\cdot 0 \end{array}\right) = \left(\begin{array}{cc} 3 & 6 \\ -3 & 0 \end{array}\right). $
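Both operations are easy to reproduce in code. A short NumPy sketch matching the two examples above (the use of NumPy is an illustrative choice, not part of the original text); `+` adds corresponding entries and `*` scales every entry:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[1, 1],
              [-1, -2]])

# Matrix addition: corresponding entries are added.
print(A + B)    # [[2 3], [2 2]]

C = np.array([[1, 2],
              [-1, 0]])

# Scalar multiplication: every entry is multiplied by 3.
print(3 * C)    # [[3 6], [-3 0]]
```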
These operations on matrices are just like the corresponding operations on vectors and obey the same sorts of rules:
$\displaystyle A+B = B+A$
$\displaystyle (A+B)+C = A+(B+C)$
$\displaystyle c(A+B) = (cA)+(cB)$
$\displaystyle c(dA) = (cd)A$
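These identities can be spot-checked numerically for particular matrices and scalars. A brief NumPy sketch (the matrices and scalars are arbitrary illustrations, and a numerical check is of course not a proof):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, -1], [0, 5]])
c, d = 3, -2

print(np.array_equal(A + B, B + A))                # A+B = B+A
print(np.array_equal((A + B) + C, A + (B + C)))    # (A+B)+C = A+(B+C)
print(np.array_equal(c * (A + B), c * A + c * B))  # c(A+B) = (cA)+(cB)
print(np.array_equal(c * (d * A), (c * d) * A))    # c(dA) = (cd)A
```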
Other rules apply to their interaction with matrix multiplication:
$\displaystyle A(B+C) = (AB) + (AC)$
$\displaystyle (B+C)A = (BA) + (CA)$
$\displaystyle c(AB) = (cA)B = A(cB)$
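The same style of spot check works for these rules as well (again with arbitrary illustrative matrices; `@` is NumPy's matrix product):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, -1], [0, 5]])
c = 3

print(np.array_equal(A @ (B + C), A @ B + A @ C))  # A(B+C) = (AB)+(AC)
print(np.array_equal((B + C) @ A, B @ A + C @ A))  # (B+C)A = (BA)+(CA)
print(np.array_equal(c * (A @ B), (c * A) @ B))    # c(AB) = (cA)B
print(np.array_equal(c * (A @ B), A @ (c * B)))    # c(AB) = A(cB)
```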
Finally, an important property of matrix multiplication: Matrix multiplication is associative:
$\displaystyle A(BC) = (AB)C$
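A quick numerical spot check of associativity, with the same kind of illustrative matrices as above:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, -1], [0, 5]])

print(np.array_equal(A @ (B @ C), (A @ B) @ C))    # A(BC) = (AB)C -> True
```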
In fact, the definition of matrix multiplication is designed to make sure that matrix multiplication is associative. This is because the mathematicians who came up with this definition were interested in linear functions of the sort
$\displaystyle L_A(X) = AX.$
Suppose we have two such linear functions, given by two matrices $ A$ and $ B$. Let's look now at the composition,
$\displaystyle (L_A \circ L_B)(X) = L_A\big(L_B(X)\big) = L_A(BX) = A(BX).$
Given that matrix multiplication is associative, we can continue with
$\displaystyle (L_A \circ L_B)(X) = A(BX) = (AB)X = L_{AB}(X).$
In other words, the composition of linear functions is another linear function, and its matrix is the product of the matrices of the original functions.
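Here is a small sketch of this composition fact in NumPy; the function names `L_A` and `L_B` simply mirror the notation above, and the matrices and vector are illustrative:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
X = np.array([5, -7])

def L_A(v):
    # The linear function X |-> AX.
    return A @ v

def L_B(v):
    # The linear function X |-> BX.
    return B @ v

# The composition L_A(L_B(X)) agrees with the single matrix AB applied to X.
print(L_A(L_B(X)))   # A(BX)
print((A @ B) @ X)   # (AB)X -- the same vector
```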
If you study multivariable calculus, you will see that this gives a nice ``chain rule'' for derivatives of functions from $ n$-dimensional space to $ m$-dimensional space.