
More on Determinants

Observation 1.

 

We are given two n \times n matrices E and B, and we assume that E is an elementary matrix.

 

Case 1. E is obtained from I by interchanging two rows. In this case,

(1)   \begin{equation*}\det E = -\det I = -1 \end{equation*}

Since multiplying B by E on the left interchanges the same two rows of B,

(2)   \begin{equation*}\det \left (EB\right ) = -\det B \end{equation*}

and therefore

(3)   \begin{equation*}\det \left (EB\right ) = -\det B =\det E\det B \end{equation*}
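Identities (1)–(3) are easy to check numerically. Below is a minimal sketch using NumPy; the matrix B and the choice of rows to swap are arbitrary illustrations, not taken from the text above.

import numpy as np

# E swaps rows 0 and 1 of the 3 x 3 identity; B is an arbitrary example.
B = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
E = np.eye(3)
E[[0, 1]] = E[[1, 0]]                 # interchange two rows of I

print(np.linalg.det(E))               # -1.0, i.e. det E = -det I
print(np.linalg.det(E @ B))           # det(EB)
print(-np.linalg.det(B))              # -det B: matches det(EB)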

 

Case 2. E is obtained from I by adding a constant multiple of a row to another. In this case,

(4)   \begin{equation*}\det E =\det I =1 \end{equation*}

Since multiplying B by E on the left performs the same elementary row operation on B,

(5)   \begin{equation*}\det \left (EB\right ) =\det B \end{equation*}

But this means

(6)   \begin{equation*}\det \left (EB\right ) =\det E\det B \end{equation*}

 

Case 3. E is obtained from I by multiplying a row by a nonzero real number k . Here,

(7)   \begin{equation*}\det E =k\det I =k \end{equation*}

As in the previous cases,

(8)   \begin{equation*}\det \left (EB\right ) =k\det B =\det E\det B \end{equation*}
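Cases 2 and 3 can be checked the same way; again, the specific matrix and the constants 5 and 7 are illustrative only.

import numpy as np

B = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# Case 2: add 5 times row 0 to row 2; det E_add = 1.
E_add = np.eye(3)
E_add[2, 0] = 5.0
print(np.linalg.det(E_add @ B), np.linalg.det(B))          # equal

# Case 3: multiply row 1 by k = 7; det E_scale = 7.
E_scale = np.eye(3)
E_scale[1, 1] = 7.0
print(np.linalg.det(E_scale @ B), 7 * np.linalg.det(B))    # equal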

 

As we have seen, whenever E is an elementary matrix,

(9)   \begin{equation*}\det \left (EB\right ) =\det E\det B \end{equation*}

Applying (9) repeatedly, this extends to

(10)   \begin{equation*}\det \left (E_{n}E_{n -1}\cdots E_{2}E_{1}B\right ) =\det E_{n}\det E_{n -1}\cdots\det E_{1}\det B \end{equation*}
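A quick self-contained check of (10) with a product of three elementary matrices, one of each kind (all values below are arbitrary illustrations):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))

E1 = np.eye(3); E1[[0, 2]] = E1[[2, 0]]    # interchange rows 0 and 2
E2 = np.eye(3); E2[1, 0] = -4.0            # add -4 times row 0 to row 1
E3 = np.eye(3); E3[2, 2] = 0.5             # multiply row 2 by 1/2

lhs = np.linalg.det(E3 @ E2 @ E1 @ B)
rhs = (np.linalg.det(E3) * np.linalg.det(E2)
       * np.linalg.det(E1) * np.linalg.det(B))
print(np.isclose(lhs, rhs))                # True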

 

Observation 2.

 

Suppose two n \times n matrices A and B are given. There are two cases:

 

Case 1. A is nonsingular. Then A can be written as a product of elementary matrices, say A =E_{n}E_{n -1}\cdots E_{2}E_{1} . By Observation 1,

(11)   \begin{equation*}\det \left (AB\right ) =\det A\det B \end{equation*}

 

Case 2. A is singular. Then we can use elementary row operations to reduce A to a matrix A^{ \prime } =E_{k}\cdots E_{1}A that has one row consisting entirely of zeros. Since A^{ \prime } has a zero row, so does the product A^{ \prime }B , and therefore \det \left (A^{ \prime }B\right ) =0. By Observation 1, \det \left (A^{ \prime }B\right ) =\det E_{k}\cdots\det E_{1}\det \left (AB\right ) , and each \det E_{i} is nonzero, so \det \left (AB\right ) =0. Since A is singular, we also know \det A =0, and therefore

(12)   \begin{equation*}\det \left (AB\right ) =\det A\det B \end{equation*}
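Both cases of Observation 2 can be verified numerically. The matrices below are arbitrary examples; in A_sing the second row is twice the first, so A_sing is singular.

import numpy as np

B = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
A_nonsing = np.array([[1.0, 2.0, 0.0],
                      [0.0, 1.0, 1.0],
                      [1.0, 0.0, 3.0]])
A_sing = np.array([[1.0, 2.0, 3.0],
                   [2.0, 4.0, 6.0],    # twice the first row: det = 0
                   [0.0, 1.0, 1.0]])

for A in (A_nonsing, A_sing):
    lhs = np.linalg.det(A @ B)
    rhs = np.linalg.det(A) * np.linalg.det(B)
    print(np.isclose(lhs, rhs))        # True in both cases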

Observation 3.

 

Suppose an n \times n matrix A is given. The following statements are equivalent to one another:

 

1. \det A =0

 

2. A is singular.

 

(2 \Rightarrow 1)

We assume A is singular. To obtain a contradiction, we assume that \det A \neq 0. Use Gauss-Jordan elimination to reduce A to a matrix, say A^{ \prime } . Either A^{ \prime } is the identity matrix or it has at least one zero row. Since A is singular, A^{ \prime } \neq I . Therefore A^{ \prime } has a zero row, and so \det A^{ \prime } =0. But by Observation 1, \det A^{ \prime } is a nonzero multiple of \det A , so \det A =0. This is a contradiction.

 

(1 \Rightarrow 2)
We assume \det A =0. To obtain a contradiction, we assume that A is nonsingular, that is, A^{ -1} exists. Then

(13)   \begin{equation*}1 =\det I =\det \left (AA^{ -1}\right ) =\det A\det A^{ -1} =0\det A^{ -1} =0 \end{equation*}

This is the desired contradiction.
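A useful byproduct of the computation in (13): for a nonsingular A we get \det A^{ -1} =1/\det A . A quick numerical sanity check, using an arbitrary invertible matrix for illustration:

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])

# det(A) * det(A^{-1}) = det(A A^{-1}) = det I = 1
print(np.isclose(np.linalg.det(A) * np.linalg.det(np.linalg.inv(A)), 1.0))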

 

Observation 4.

 

Suppose we are given a system

(14)   \begin{align*}ax +by =k \\ cx +dy =p\end{align*}

Then, provided ad -bc \neq 0, it is a straightforward exercise to show

(15)   \begin{equation*}x =\frac{kd -bp}{ad -bc} ,y =\frac{ap -kc}{ad -bc} \end{equation*}

Note that this can be written as

(16)   \begin{equation*}x =\frac{kd-bp}{ad -bc} =\frac{\det \left (\begin{array}{cc}k & b \\ p & d\end{array}\right )}{\det \left (\begin{array}{cc}a & b \\ c & d\end{array}\right )} \end{equation*}

(17)   \begin{equation*}y =\frac{ap -kc}{ad -bc} =\frac{\det \left (\begin{array}{cc}a & k \\ c & p\end{array}\right )}{\det \left (\begin{array}{cc}a & b \\ c & d\end{array}\right )} \end{equation*}

This is the well-known Cramer’s Rule. Note that when solving for an unknown, the numerator is the determinant of the coefficient matrix with the column associated with that unknown replaced by the right-hand-side vector, while the denominator is the determinant of the coefficient matrix itself. We will use Cramer’s Rule to solve the system in the next example.
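Before turning to the example, here is a minimal sketch of the rule for a general n \times n system, written in Python with NumPy. The function name cramer_solve is our own (hypothetical), and the sketch assumes \det A \neq 0.

import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's Rule (assumes det A != 0)."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b              # replace column i with the right-hand side
        x[i] = np.linalg.det(A_i) / det_A
    return x

Computing n + 1 determinants this way is far more expensive than Gaussian elimination for large systems; the sketch is meant only to mirror the formulas above.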

Example 1.

 

Suppose we are given the matrix equation

(18)   \begin{equation*}\left (\begin{array}{ccc} -1 & 3 & 0 \\ 1 & 2 & 1 \\ 1 & -2 & 8\end{array}\right )\left (\begin{array}{c}x \\ y \\ z\end{array}\right ) =\left (\begin{array}{c}1 \\ -1 \\ 0\end{array}\right ) \end{equation*}

We would like to use Cramer’s Rule, so we start by computing

(19)   \begin{align*}\det \left (\begin{array}{ccc}1 & 3 & 0 \\ -1 & 2 & 1 \\ 0 & -2 & 8\end{array}\right ) =1\left (16 +2\right ) -3\left ( -8\right ) =18 +24 =42 \\ \det \left (\begin{array}{ccc} -1 & 1 & 0 \\ 1 & -1 & 1 \\ 1 & 0 & 8\end{array}\right ) = -1\left ( -8\right ) -1\left (8 -1\right ) =8 -7 =1 \\ \det \left (\begin{array}{ccc} -1 & 3 & 1 \\ 1 & 2 & -1 \\ 1 & -2 & 0\end{array}\right ) =2 -3 -4 = -5 \\ \det \left (\begin{array}{ccc} -1 & 3 & 0 \\ 1 & 2 & 1 \\ 1 & -2 & 8\end{array}\right ) = -1\left (16 +2\right ) -3\left (8 -1\right ) = -18 -21 = -39\end{align*}

And therefore

(20)   \begin{equation*}x =\frac{42}{ -39} = -\frac{14}{13} ,\quad y =\frac{1}{ -39} = -\frac{1}{39} ,\quad z =\frac{ -5}{ -39} =\frac{5}{39} \end{equation*}
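As a sanity check, NumPy’s built-in solver (or the hypothetical cramer_solve sketch above) reproduces these values:

import numpy as np

A = np.array([[-1.0,  3.0, 0.0],
              [ 1.0,  2.0, 1.0],
              [ 1.0, -2.0, 8.0]])
b = np.array([1.0, -1.0, 0.0])

print(np.linalg.solve(A, b))   # [-1.0769..., -0.0256..., 0.1282...]
                               # i.e. x = -14/13, y = -1/39, z = 5/39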
