We are given two $n \times n$ matrices $E$ and $A$. Assume that the matrix $E$ is elementary. We claim that $\det(EA) = \det(E)\det(A)$.
Case 1. $E$ is obtained from $I_n$ by interchanging two rows. In this case, $\det(E) = -1$.
Since multiplying $A$ by $E$ on the left will result in interchanging the associated rows of $A$, we get $\det(EA) = -\det(A) = \det(E)\det(A)$.
Case 2. $E$ is obtained from $I_n$ by adding a constant multiple of a row to another. In this case, $\det(E) = 1$.
Since multiplying $A$ by $E$ on the left will result in the corresponding elementary row operation, $\det(EA) = \det(A)$.
But this means $\det(EA) = \det(E)\det(A)$.
Case 3. $E$ is obtained from $I_n$ by multiplying a row by a nonzero real number $c$. Here, $\det(E) = c$.
Similar to the previous cases, $\det(EA) = c\,\det(A) = \det(E)\det(A)$.
As we see, when $E$ is an elementary matrix, then $\det(EA) = \det(E)\det(A)$.
This argument can be extended to products of elementary matrices: $\det(E_1 E_2 \cdots E_k A) = \det(E_1)\det(E_2)\cdots\det(E_k)\det(A)$.
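The identity for an elementary $E$ can be checked numerically. The following NumPy sketch builds one elementary matrix of each of the three types and verifies $\det(EA) = \det(E)\det(A)$; the particular matrix `A` and the row operations chosen are arbitrary examples of ours, not taken from the text.

```python
import numpy as np

# An arbitrary 3x3 matrix for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])

# Type 1: interchange rows 0 and 1 of the identity (det = -1).
E_swap = np.eye(3)
E_swap[[0, 1]] = E_swap[[1, 0]]

# Type 2: add 5 times row 2 to row 0 (det = 1).
E_add = np.eye(3)
E_add[0, 2] = 5.0

# Type 3: multiply row 1 by the nonzero scalar c = -3 (det = c).
E_scale = np.eye(3)
E_scale[1, 1] = -3.0

for E in (E_swap, E_add, E_scale):
    # det(EA) should equal det(E) * det(A) for every elementary E.
    assert np.isclose(np.linalg.det(E @ A),
                      np.linalg.det(E) * np.linalg.det(A))
```

The same loop passes for any square `A` of matching size, since the identity does not depend on the entries of `A`.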
Suppose two $n \times n$ matrices $A$ and $B$ are given. There are two cases:
Case 1. $A$ is nonsingular. Then $A$ can be written as a product of elementary matrices, say $A = E_1 E_2 \cdots E_k$. By Observation 1, $\det(AB) = \det(E_1 E_2 \cdots E_k B) = \det(E_1)\det(E_2)\cdots\det(E_k)\det(B) = \det(A)\det(B)$.
Case 2. $A$ is singular. Then use elementary matrices to reduce $A$ until you get a matrix $C = E_k \cdots E_1 A$ with one row completely zero. Observe that multiplication of $C$ and any other matrix will result in one row or column of zeros. Therefore $\det(CB) = 0$, and since $\det(CB) = \det(E_k)\cdots\det(E_1)\det(AB)$ with each $\det(E_i) \neq 0$, we get $\det(AB) = 0$. Using the fact that $A$ is singular, we know $\det(A) = 0$, and therefore $\det(AB) = 0 = \det(A)\det(B)$.
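Both cases of the product rule $\det(AB) = \det(A)\det(B)$ can be illustrated with a short NumPy check; the matrices below are arbitrary examples of ours (in the singular one, the third row is the sum of the first two).

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))

# Case 1: a nonsingular A, so det(AB) = det(A) det(B) with both sides nonzero.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Case 2: a singular A (row 3 = row 1 + row 2), so both sides are zero.
A_sing = np.array([[1.0, 2.0, 0.0],
                   [0.0, 1.0, 3.0],
                   [1.0, 3.0, 3.0]])
assert np.isclose(np.linalg.det(A_sing), 0.0)
assert np.isclose(np.linalg.det(A_sing @ B), 0.0)
```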
Suppose an $n \times n$ matrix $A$ is given. The following statements are equivalent to one another:
1. $\det(A) = 0$.
2. $A$ is singular.
We assume $A$ is singular. To obtain a contradiction, we assume that $\det(A) \neq 0$. Then use Gauss-Jordan elimination to reduce $A$ to obtain, say, $R = E_k \cdots E_1 A$. Either $R$ is the identity matrix or it has one zero row. Since $A$ is singular, $R \neq I_n$. Therefore $R$ has one zero row. But this says that $\det(R) = 0$; since $\det(R) = \det(E_k)\cdots\det(E_1)\det(A)$ and each $\det(E_i) \neq 0$, as a result $\det(A) = 0$. This is a contradiction.
We assume $\det(A) = 0$. To obtain a contradiction, we assume that $A$ is nonsingular, that is, $A^{-1}$ exists. Then $1 = \det(I_n) = \det(AA^{-1}) = \det(A)\det(A^{-1}) = 0 \cdot \det(A^{-1}) = 0$.
This is the desired contradiction.
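The computation $\det(A)\det(A^{-1}) = \det(AA^{-1}) = \det(I_n) = 1$ used in the second direction can be checked numerically; the $2 \times 2$ matrix below is an arbitrary example of ours.

```python
import numpy as np

# A nonsingular 2x2 example: det(A) = 2*3 - 1*5 = 1.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
A_inv = np.linalg.inv(A)

# det(A) det(A^{-1}) = det(A A^{-1}) = det(I_2) = 1, which is why
# a nonsingular matrix can never have determinant zero.
assert np.isclose(np.linalg.det(A) * np.linalg.det(A_inv), 1.0)
```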
Suppose we are given a system $A\mathbf{x} = \mathbf{b}$, where $A$ is an $n \times n$ nonsingular matrix.
Then it is an easy exercise to show $\mathbf{x} = A^{-1}\mathbf{b} = \frac{1}{\det(A)}\operatorname{adj}(A)\,\mathbf{b}$.
Note that this can be written componentwise as $x_i = \frac{\det(A_i)}{\det(A)}$, where $A_i$ is the matrix obtained from $A$ by replacing its $i$-th column with $\mathbf{b}$.
This is the well-known Cramer's Rule. Note that when solving for any unknown variable, the associated column of $A$ is replaced by the right-hand-side vector $\mathbf{b}$ in the determinant in the numerator; in the denominator, the determinant of $A$ itself remains. We will use Cramer's Rule to solve the next system.
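The rule can be sketched as a short function; this is a minimal NumPy implementation of ours (the name `cramer_solve` and the worked $2 \times 2$ system are our own illustration, not from the text).

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's Rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("Cramer's Rule requires a nonsingular matrix")
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b          # replace column i with b
        x[i] = np.linalg.det(A_i) / d
    return x

# Example: 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3.
x = cramer_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
assert np.allclose(x, [1.0, 3.0])
```

Note that this computes $n + 1$ determinants, so for large systems Gaussian elimination is far cheaper; Cramer's Rule is mainly of theoretical and small-system interest.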
Suppose we are given the matrix equation
We would like to use Cramer's Rule. So we start by computing the determinant of the coefficient matrix.