What are the advantages of matrix transpose?
Answers
Recall that for two vectors u and v, the product uᵀv is equal to the dot product of u and v:
u⋅v = ∥u∥∥v∥cosθ,

where θ is of course the angle between u and v, and ∥u∥ is the length of u (and likewise for v). The squared length of a vector (e.g. u) is therefore given by u⋅u = uᵀu.
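As a quick illustration (a minimal Python sketch, with example vectors chosen here, not taken from the answer above), the dot product gives us both lengths and the angle:

```python
import math

u = [3.0, 0.0]
v = [3.0, 4.0]

# dot product u.v
dot = sum(ui * vi for ui, vi in zip(u, v))

# lengths: the norm of u is sqrt(u.u), likewise for v
len_u = math.sqrt(sum(ui * ui for ui in u))
len_v = math.sqrt(sum(vi * vi for vi in v))

# recover the angle from u.v = |u||v|cos(theta)
theta = math.acos(dot / (len_u * len_v))
```

Here dot is 9.0, the lengths are 3.0 and 5.0, and cos(theta) works out to 0.6, so theta ≈ 0.927 radians.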
Thus we see immediately that the transposition of vectors is critical for expressing sizes and angles. In fact, the reason linear algebra is so useful is that vectors are the simplest mathematical objects for which notions of size and angle, and thus similarity, can be defined.
Now, how does this relate to matrices? At this point it is important to consider what a matrix is, for which I present two perspectives:
1. It is a bunch of (column) vectors stacked side by side, OR
2. It represents a linear function acting on a vector (e.g. y = Ax).
If we use the first perspective, suppose that A = [a_1 a_2 ⋯ a_m], B = [b_1 b_2 ⋯ b_k], and P = AᵀB. Let the (i,j)-th entry of P be given by p_ij. Then it shouldn't be too hard to see that

p_ij = a_i ⋅ b_j,
so the product AᵀB is nothing more than a table of how the vectors stacked in A and B relate to each other through size and angle!
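To make this concrete (a small Python sketch with made-up columns, building P = AᵀB entry by entry from the column dot products):

```python
# columns of A and B as plain lists
a1, a2 = [1.0, 0.0], [0.0, 2.0]   # A = [a1 a2]
b1, b2 = [3.0, 4.0], [1.0, 1.0]   # B = [b1 b2]

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(u, v))

# P = A^T B: the (i,j)-th entry is a_i . b_j
P = [[dot(a, b) for b in (b1, b2)] for a in (a1, a2)]
```

Each entry of P really is one dot product: for example P[0][0] = a1⋅b1 = 3.0 and P[1][0] = a2⋅b1 = 8.0, exactly the "table" described above.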
But let's try to apply this to the second perspective, by setting P = y, M = Aᵀ, and B = x. Since P and B are now just vectors, we can see that each entry of y = [y_1 y_2 ⋯ y_m]ᵀ is just the dot product between a column of A and x. This means that every linear function taking a vector x as an argument essentially just takes dot products between a collection of vectors and x (since every linear function on a vector can be written as a matrix product)!
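The same idea in a short Python sketch (again with illustrative numbers of my own choosing): computing y = Aᵀx one dot product at a time.

```python
# columns of A, and the input vector x
a1, a2 = [1.0, 2.0], [3.0, -1.0]
x = [2.0, 1.0]

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(p * q for p, q in zip(u, v))

# y = A^T x: entry i is the dot product of column a_i with x
y = [dot(a, x) for a in (a1, a2)]
```

So y comes out as [4.0, 5.0]: applying the linear map Aᵀ to x is nothing but dotting each column of A against x.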
Hopefully these examples have helped illustrate a few applications of the matrix transpose in expressing some very important mathematical ideas through the notions of size and angle, and have given a clue as to why these properties can be so incredibly beneficial.
Thanks.
Tripatby