• Every square matrix satisfies its own characteristic equation

Answer:

In linear algebra, the Cayley–Hamilton theorem (named after the mathematicians Arthur Cayley and William Rowan Hamilton) states that every square matrix over a commutative ring (such as the real or complex field) satisfies its own characteristic equation.

Explanation:
Arthur Cayley, F.R.S. (1821–1895) is widely regarded as Britain's leading pure mathematician of the 19th century. In 1848 Cayley went to Dublin to attend lectures on quaternions by Hamilton, their discoverer. Cayley later impressed him by being the second to publish work on quaternions. Cayley proved the theorem for matrices of dimension 3 and less, publishing a proof only for the two-dimensional case. As for n × n matrices, Cayley stated: “..., I have not thought it necessary to undertake the labor of a formal proof of the theorem in the general case of a matrix of any degree”.
William Rowan Hamilton (1805–1865), Irish physicist, astronomer, and mathematician, was the first foreign member of the American National Academy of Sciences. While maintaining opposing positions about how geometry should be studied, Hamilton always remained on the best terms with Cayley.
Hamilton proved that for a linear function of quaternions there exists a certain equation, depending on the linear function, that is satisfied by the linear function itself.
If A is a given n×n matrix and I_n is the n×n identity matrix, then the characteristic polynomial of A is defined as p(λ) = det(λI_n − A), where det is the determinant operation and λ is a variable for a scalar element of the base ring. Since the entries of the matrix λI_n − A are (linear or constant) polynomials in λ, the determinant is also an n-th order monic polynomial in λ,

p(λ) = λ^n + c_{n-1} λ^{n-1} + ⋯ + c_1 λ + c_0.
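As a quick numerical sketch (assuming NumPy is available; the matrix values below are made up for illustration), np.poly returns exactly these monic characteristic-polynomial coefficients [1, c_{n-1}, ..., c_1, c_0]:

```python
import numpy as np

# A hypothetical 2x2 example matrix
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# np.poly(A) returns the coefficients of det(lambda*I - A),
# highest power first: [1, c_1, c_0]
coeffs = np.poly(A)

# For this A: p(lambda) = lambda^2 - 5*lambda - 2
# (trace 5 gives c_1 = -5, determinant -2 gives c_0 = -(-2)... i.e. c_0 = -2)
print(coeffs)
```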
One can create an analogous polynomial p(A) in the matrix A instead of the scalar variable λ, defined as

p(A) = A^n + c_{n-1} A^{n-1} + ⋯ + c_1 A + c_0 I_n.
The Cayley–Hamilton theorem states that this polynomial results in the zero matrix, which is to say that p(A) = 0. The theorem allows A^n to be expressed as a linear combination of the lower matrix powers of A. When the ring is a field, the Cayley–Hamilton theorem is equivalent to the statement that the minimal polynomial of a square matrix divides its characteristic polynomial.
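The statement p(A) = 0 can be checked numerically. The sketch below (assuming NumPy; the matrix is a hypothetical example) evaluates p(A) by Horner's method using the matrix in place of λ and confirms the result is the zero matrix:

```python
import numpy as np

# Hypothetical example matrix
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = A.shape[0]

# Monic characteristic-polynomial coefficients [1, c_{n-1}, ..., c_0]
coeffs = np.poly(A)

# Horner evaluation of p(A) = A^n + c_{n-1} A^{n-1} + ... + c_1 A + c_0 I
pA = np.zeros_like(A)
for c in coeffs:
    pA = pA @ A + c * np.eye(n)

# By Cayley-Hamilton, pA should be (numerically) the zero matrix
print(np.allclose(pA, np.zeros((n, n))))  # True
```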
The theorem was first proved by Hamilton in 1853, in terms of inverses of linear functions of quaternions (a non-commutative ring). This corresponds to the special case of certain 4 × 4 real or 2 × 2 complex matrices. The theorem holds for general quaternionic matrices. Cayley in 1858 stated it for 3 × 3 and smaller matrices, but only published a proof for the 2 × 2 case. The general case was first proved by Frobenius in 1878.
Examples
1×1 matrices
For a 1×1 matrix A = (a₁,₁), the characteristic polynomial is given by p(λ) = λ − a₁,₁, and so p(A) = (a₁,₁) − a₁,₁I₁ = (0) is trivial.
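For a less trivial illustration of the remark that the theorem lets A^n be rewritten as a linear combination of lower powers, here is a sketch (assuming NumPy; the matrix values are hypothetical): for A = [[1, 2], [3, 4]], the characteristic equation A² − 5A − 2I = 0 rearranges to A² = 5A + 2I.

```python
import numpy as np

# Hypothetical matrix with characteristic polynomial
# p(lambda) = lambda^2 - 5*lambda - 2
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Cayley-Hamilton: A^2 - 5A - 2I = 0, so A^2 = 5A + 2I
lhs = A @ A                     # direct computation of A^2
rhs = 5 * A + 2 * np.eye(2)     # linear combination of A and I

print(np.allclose(lhs, rhs))  # True
```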
Hope it helps you ✌️