Cramer's rule
Cramer's rule is a way of solving a system of linear equations using determinants. Consider the following system of equations:
The above system of equations can be written in matrix form as Ax = b, where A is the coefficient matrix (the matrix made up of the coefficients of the variables on the left-hand sides of the equations), x is the vector of variables in the system, and b is the vector of values on the right-hand sides of the equations:
Given the above, Cramer's rule states that the solution to the system of equations can be found as:
xi = det(Ai)/det(A),
where Ai is a new matrix formed by replacing the ith column of A with the b vector. Referencing matrix A above, A1, A2, and A3 are formed by replacing the first, second, and third columns of A, respectively, with b.
Computing the determinant of each of these matrices makes it possible to find the solution for the desired variable. For example, to find x1, compute the determinant of A1, then divide it by the determinant of A. There are a number of different ways to compute the determinants of square matrices; refer to the determinant page if necessary. For this example, we will use cofactor expansion to find the determinants:
Thus:
The determinants of A2 and A3 can be calculated in the same manner:
The solutions for x2 and x3 can then be calculated as:
We can then test the solutions by substituting them back into each of the equations in the system and confirming that each equation holds.
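To make the procedure concrete in code, here is a minimal sketch of the steps above, using cofactor expansion for the determinants. The 3×3 system below is a hypothetical stand-in, not the example worked through above; any system with a nonsingular coefficient matrix would do.

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        # Minor: delete row 0 and column j, then apply the (-1)^j cofactor sign.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: xi = det(Ai) / det(A)."""
    d = det(A)
    if d == 0:
        raise ValueError("det(A) = 0; Cramer's rule does not apply")
    x = []
    for i in range(len(A)):
        # Ai: copy of A with its ith column replaced by b.
        Ai = [row[:i] + [b[r]] + row[i + 1:] for r, row in enumerate(A)]
        x.append(det(Ai) / d)
    return x

# Hypothetical system (not the article's example).
A = [[2.0, 1.0, 1.0],
     [1.0, 3.0, 2.0],
     [1.0, 0.0, 0.0]]
b = [4.0, 5.0, 6.0]

x = cramer(A, b)
print(x)  # [6.0, 15.0, -23.0]

# Test the solutions with each equation in the system, as above.
for row, rhs in zip(A, b):
    assert abs(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) - rhs) < 1e-9
```

The final loop mirrors the check described above: each solution is substituted back into its equation to confirm it holds.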
The general form of Cramer's rule, using matrix notation, can be written as follows. A system of linear equations,
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
⋮
an1x1 + an2x2 + ... + annxn = bn,
can be written in matrix form, Ax = b, as
[a11 a12 ... a1n] [x1]   [b1]
[a21 a22 ... a2n] [x2] = [b2]
[ ⋮    ⋮      ⋮ ] [ ⋮ ]   [ ⋮ ]
[an1 an2 ... ann] [xn]   [bn],
and Cramer's rule, as stated above, is
xi = det(Ai)/det(A),
where Ai is the new matrix formed by replacing the ith column of A with the b column vector, and vi represents the ith column vector of A:
A = [v1 v2 ... vn],   Ai = [v1 ... vi-1 b vi+1 ... vn].
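As a quick illustration of the general form, the following sketch builds each Ai by replacing the ith column of A with b and then divides determinants. It assumes NumPy is available, and the coefficients are made up for the example.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via xi = det(Ai) / det(A), where Ai is A with its
    ith column replaced by the b column vector."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    x = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        Ai = A.copy()
        Ai[:, i] = b                      # replace the ith column with b
        x[i] = np.linalg.det(Ai) / det_A
    return x

# Made-up 2x2 system for illustration.
A = [[3.0, -2.0],
     [1.0,  4.0]]
b = [5.0, 9.0]
print(cramer_solve(A, b))                 # matches np.linalg.solve(A, b)
```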
Proof of Cramer's rule
We reinterpret the matrix-vector equation Ax = b as
x1v1 + x2v2 + ... + xnvn = b.
In other words, b = x1v1 + ... + xnvn, where each vi is the ith column of matrix A (see Matrix Multiplication). If we plug this expression for b into Ai, the matrix made by replacing the ith column of A with b, we get:
Ai = [v1 ... vi-1 (x1v1 + ... + xnvn) vi+1 ... vn].
From the properties of determinants, we can perform column operations of the type (i) + k(j) → (i), where k is a scalar, without changing the determinant. Therefore we can use the columns containing v1, ..., vi-1, vi+1, ..., vn to subtract out every term of x1v1 + ... + xnvn in the ith column except for xivi. In other words,
det(Ai) = det([v1 ... vi-1 xivi vi+1 ... vn]).
From another property of determinants, a column operation of the type k(i) → (i) has the effect of multiplying the determinant by k. Therefore we can pull the scalar factor xi out of the ith column, which contains xivi. In other words,
det(Ai) = xi det([v1 ... vi-1 vi vi+1 ... vn]).
Since
det([v1 ... vi-1 vi vi+1 ... vn]) = det(A),
we can combine the above to give us det(Ai) = xi det(A). Dividing by det(A) gives us xi = det(Ai)/det(A), which is the original statement of Cramer's rule.
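The key identity in the proof, det(Ai) = xi det(A), is easy to spot-check numerically. The matrix and solution vector below are made up for illustration; b is constructed as Ax so that x is known in advance.

```python
import numpy as np

# Made-up matrix and solution; b is built as Ax so x is known exactly.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 4.0]])
x = np.array([1.0, -2.0, 3.0])
b = A @ x

det_A = np.linalg.det(A)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                          # Ai: ith column of A replaced by b
    # det(Ai) and xi * det(A) should agree up to floating-point rounding.
    print(np.isclose(np.linalg.det(Ai), x[i] * det_A))
```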
Limitations of Cramer's rule
- Because we are dividing by det(A) to get xi = det(Ai)/det(A), Cramer's rule only works if det(A) ≠ 0. If det(A) = 0, Cramer's rule cannot be used because a unique solution does not exist: there are either infinitely many solutions or no solution at all (see the sketch after this list).
- Cramer's rule is slow because we have to evaluate a determinant for each xi, in addition to det(A). If each determinant is evaluated by Gaussian elimination, we end up performing elimination n + 1 times. In comparison, if we were to use the augmented matrix [A|b], we would only need to perform Gaussian elimination once to solve Ax = b.
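The first limitation can be seen directly in a small sketch: for a singular coefficient matrix the denominator det(A) vanishes, and a general-purpose solver (here NumPy, assumed available) is the practical alternative. The matrix below is made up for illustration.

```python
import numpy as np

# Singular coefficient matrix: the second row is a multiple of the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])   # consistent, so there are infinitely many solutions

print(np.linalg.det(A))    # ~0.0, so every xi = det(Ai)/det(A) is undefined

# A direct solver refuses the singular system outright ...
try:
    np.linalg.solve(A, b)
except np.linalg.LinAlgError as err:
    print("np.linalg.solve:", err)

# ... while a least-squares routine still returns one of the many solutions.
print(np.linalg.lstsq(A, b, rcond=None)[0])
```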