Chapter 6 | Direct Methods for Solving Linear Systems
Linear Systems of Equations
Gaussian Elimination with Backward Substitution
Let \(A^{(1)} = A, \mathbf{b}^{(1)} = \mathbf{b}\).
Elimination
For step \(k\ (1\le k \le n-1)\), if the pivot element \(a^{(k)}_{kk}\ne0\), compute the multipliers

\[
m_{ik} = \dfrac{a^{(k)}_{ik}}{a^{(k)}_{kk}}, \quad i = k+1, \dots, n,
\]

and perform \((E_i - m_{ik}E_k) \to (E_i)\), i.e.

\[
a^{(k+1)}_{ij} = a^{(k)}_{ij} - m_{ik}a^{(k)}_{kj}, \qquad b^{(k+1)}_{i} = b^{(k)}_{i} - m_{ik}b^{(k)}_{k}, \qquad i, j = k+1, \dots, n.
\]

After \(n-1\) steps, we obtain the equivalent upper-triangular system

\[
A^{(n)}\mathbf{x} = \mathbf{b}^{(n)}, \qquad
A^{(n)} =
\begin{pmatrix}
a^{(1)}_{11} & a^{(1)}_{12} & \cdots & a^{(1)}_{1n} \\
& a^{(2)}_{22} & \cdots & a^{(2)}_{2n} \\
& & \ddots & \vdots \\
& & & a^{(n)}_{nn}
\end{pmatrix}.
\]
Backward Substitution

Then,

\[
x_n = \dfrac{b^{(n)}_{n}}{a^{(n)}_{nn}}, \qquad
x_i = \dfrac{b^{(i)}_{i} - \sum\limits_{j=i+1}^{n} a^{(i)}_{ij}x_j}{a^{(i)}_{ii}}, \quad i = n-1, n-2, \dots, 1.
\]
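Below is a minimal NumPy sketch of the two phases (the function name `gaussian_elimination` and the sample system are illustrative, not from the text); it assumes every pivot \(a^{(k)}_{kk}\) turns out nonzero.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by naive Gaussian elimination with backward substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Elimination: zero out column k below the pivot, for k = 1, ..., n-1
    # (0-based here, so k = 0, ..., n-2).
    for k in range(n - 1):
        if A[k, k] == 0.0:
            raise ZeroDivisionError("zero pivot encountered; pivoting required")
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier m_ik
            A[i, k:] -= m * A[k, k:]     # E_i <- E_i - m_ik * E_k
            b[i] -= m * b[k]

    # Backward substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gaussian_elimination(A, b))  # [ 2.  3. -1.]
```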
Complexity
Recap that we have \(n-1\) elimination steps followed by backward substitution.

For elimination,

- Multiplications/divisions: \(\dfrac{n^3}{3} + \dfrac{n^2}{2} - \dfrac{5n}{6}\)
- Additions/subtractions: \(\dfrac{n^3}{3} - \dfrac{n}{3}\)

For backward substitution,

- Multiplications/divisions: \(\dfrac{n^2}{2} + \dfrac{n}{2}\)
- Additions/subtractions: \(\dfrac{n^2}{2} - \dfrac{n}{2}\)

In total,

- Multiplications/divisions: \(\dfrac{n^3}{3} + n^2 - \dfrac{n}{3}\)
- Additions/subtractions: \(\dfrac{n^3}{3} + \dfrac{n^2}{2} - \dfrac{5n}{6}\)

Both are \(O\left(\dfrac{n^3}{3}\right)\).
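As a sanity check, the elimination count can be verified by brute force; this small Python snippet (illustrative only) tallies one division per multiplier and \(n-k+1\) multiplications per updated row, then compares against the closed form written over a common denominator.

```python
def elimination_muldiv(n):
    """Brute-force count of multiplications/divisions in the elimination phase."""
    count = 0
    for k in range(1, n):                # steps k = 1, ..., n-1
        for i in range(k + 1, n + 1):    # rows below the pivot
            count += 1                   # one division for m_ik
            count += (n - k) + 1         # n-k entries of row i, plus b_i
    return count

# n^3/3 + n^2/2 - 5n/6 = (2n^3 + 3n^2 - 5n) / 6
for n in (5, 10, 50):
    assert elimination_muldiv(n) == (2 * n**3 + 3 * n**2 - 5 * n) // 6
```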
Pivoting Strategies
Motivation: in Gaussian elimination with backward substitution, if the pivot element \(a_{kk}^{(k)}\) is small in magnitude compared to some \(a_{ik}^{(k)}\) below it, the multiplier \(m_{ik}\) is large, and the roundoff error it introduces is magnified. Pivoting strategies interchange rows (and possibly columns) to choose a larger pivot and improve accuracy.
Partial Pivoting (a.k.a. Maximal Column Pivoting)
Determine the smallest \(p \ge k\) such that

\[
\left|a^{(k)}_{pk}\right| = \max_{k \le i \le n} \left|a^{(k)}_{ik}\right|,
\]

and interchange row \(p\) and row \(k\).
Requires \(O(N^2)\) additional comparisons.
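A sketch of the search in NumPy (the helper name `partial_pivot` is hypothetical); `np.argmax` returns the first index attaining the maximum, which matches choosing the smallest such \(p\).

```python
import numpy as np

def partial_pivot(A, k):
    """Smallest row index p >= k maximizing |a_pk| in column k."""
    p = k + int(np.argmax(np.abs(A[k:, k])))
    if A[p, k] == 0.0:
        raise ValueError("no nonzero pivot: matrix is singular")
    return p

# In the elimination loop one would then swap rows k and p:
#   A[[k, p]] = A[[p, k]];  b[[k, p]] = b[[p, k]]
```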
Scaled Partial Pivoting (a.k.a. Scaled-Column Pivoting)
Determine the smallest \(p \ge k\) such that

\[
\dfrac{\left|a^{(k)}_{pk}\right|}{s_p} = \max_{k \le i \le n} \dfrac{\left|a^{(k)}_{ik}\right|}{s_i},
\]

and interchange row \(p\) and row \(k\), where the scale factors \(s_i = \max\limits_{1 \le j \le n} \left|a_{ij}\right|\) are computed once from the original matrix (and are interchanged along with the rows).
(Simply put, place the element in the pivot position that is largest relative to the entries in its row.)
Requires \(O(N^2)\) additional comparisons and \(O(N^2)\) divisions.
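A sketch of the scaled search, with the same hypothetical naming; the classic two-by-two example below shows it picking the second row, whose leading entry is small in absolute terms but large relative to the rest of its row.

```python
import numpy as np

def scaled_pivot(A, s, k):
    """Smallest row index p >= k maximizing |a_ik| / s_i."""
    return k + int(np.argmax(np.abs(A[k:, k]) / s[k:]))

A = np.array([[30.0, 591400.0],
              [5.291, -6.130]])
s = np.abs(A).max(axis=1)      # scale factors, computed once from A
print(scaled_pivot(A, s, 0))   # 1: |5.291|/6.130 beats |30|/591400
```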
Complete Pivoting (a.k.a. Maximal Pivoting)
Search all the entries \(a_{ij}\) for \(i,j = k, \dots,n\) to find the entry with the largest magnitude. Both row and column interchanges are performed to bring this entry to the pivot position.
Requires \(O\left(\dfrac{1}{3}N^3\right)\) additional comparisons.
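The corresponding search scans the whole trailing submatrix (hypothetical helper, as above):

```python
import numpy as np

def complete_pivot(A, k):
    """Indices (p, q), both >= k, of the largest |a_ij| in the trailing submatrix."""
    sub = np.abs(A[k:, k:])
    p, q = np.unravel_index(int(np.argmax(sub)), sub.shape)
    return k + p, k + q

# Row k <-> row p and column k <-> column q are then interchanged; the column
# swap permutes the unknowns, which must be undone after substitution.
```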
LU Factorization
Considering the matrix form of Gaussian elimination, for the total \(n-1\) steps we have

\[
A^{(n)} = M^{(n-1)}M^{(n-2)}\cdots M^{(1)}A,
\]

where \(M^{(k)}\) is the \(k\)-th Gaussian transformation matrix, obtained from the identity by placing the negated multipliers in column \(k\) below the diagonal:

\[
M^{(k)} =
\begin{pmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & 1 & & & \\
& & -m_{k+1,k} & 1 & & \\
& & \vdots & & \ddots & \\
& & -m_{n,k} & & & 1
\end{pmatrix}.
\]

It's simple to compute that

\[
L^{(k)} := \left[M^{(k)}\right]^{-1} =
\begin{pmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & 1 & & & \\
& & m_{k+1,k} & 1 & & \\
& & \vdots & & \ddots & \\
& & m_{n,k} & & & 1
\end{pmatrix},
\]

i.e. inverting \(M^{(k)}\) simply flips the signs of the multipliers. Thus we let

\[
U = A^{(n)}
\]

and

\[
L = L^{(1)}L^{(2)}\cdots L^{(n-1)} =
\begin{pmatrix}
1 & & & \\
m_{21} & 1 & & \\
\vdots & \ddots & \ddots & \\
m_{n1} & \cdots & m_{n,n-1} & 1
\end{pmatrix}.
\]

Then we get

\[
A = \left[M^{(1)}\right]^{-1}\left[M^{(2)}\right]^{-1}\cdots\left[M^{(n-1)}\right]^{-1}A^{(n)} = LU.
\]
Theorem 6.0
If Gaussian elimination can be performed on the linear system \(A\mathbf{x} = \mathbf{b}\) without row interchanges, then the matrix \(A\) can be factored into the product of a lower-triangular matrix \(L\) and an upper-triangular matrix \(U\) .
If \(L\) is required to be unit lower triangular (i.e., \(l_{ii} = 1\) for all \(i\)), then the factorization is unique.
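A sketch of the factorization with \(L\) unit lower triangular (Doolittle form), assuming no row interchanges are needed: it runs the elimination on \(U\) while recording the multipliers \(m_{ik}\) in \(L\).

```python
import numpy as np

def lu_doolittle(A):
    """Factor A = LU, L unit lower triangular, assuming nonzero pivots."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # record m_ik
            U[i, k:] -= L[i, k] * U[k, k:]   # eliminate as before
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_doolittle(A)
print(np.allclose(L @ U, A))  # True
```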
Special Types of Matrices
Strictly Diagonally Dominant Matrix
Definition 6.0
The \(n \times n\) matrix \(A\) is said to be strictly diagonally dominant when

\[
|a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}|, \quad \text{for each } i = 1, 2, \dots, n.
\]
Theorem 6.1
A strictly diagonally dominant matrix is nonsingular. Moreover, Gaussian elimination can be performed on any linear system with a strictly diagonally dominant coefficient matrix without row or column interchanges, and the computations are stable with respect to the growth of roundoff errors.
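The definition is easy to test directly; a small NumPy check (illustrative helper name):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum of |a_ij|, j != i, for every row i."""
    diag = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off))

print(is_strictly_diagonally_dominant(
    np.array([[7.0, 2.0, 0.0],
              [3.0, 5.0, -1.0],
              [0.0, 5.0, -6.0]])))  # True: 7 > 2, 5 > 4, 6 > 5
```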
Positive Definite Matrix
Definition 6.1 (Recap)
A matrix \(A\) is positive definite if it is symmetric and \(\mathbf{x}^tA\mathbf{x} > 0\) for every nonzero \(\mathbf{x} \in \mathbb{R}^n\).
Theorem 6.2
If \(A\) is an \(n \times n\) positive definite matrix, then
- \(A\) is nonsingular;
- \(a_{ii} > 0\), for each \(i = 1, 2, \dots, n\);
- \(\max\limits_{1 \le k, j \le n}|a_{kj}| \le \max\limits_{1 \le i \le n}|a_{ii}|\);
- \((a_{ij})^2 < a_{ii}a_{jj}\), for each \(i \ne j\).
Choleski's Method (\(LDL^t\) Factorization)
Starting from \(A = LU\) with \(L\) unit lower triangular, further decompose \(U\) into \(D\tilde U\), where \(D = \operatorname{diag}(u_{11}, u_{22}, \dots, u_{nn})\) collects the pivots and \(\tilde U = D^{-1}U\) is unit upper triangular, so that \(A = LD\tilde U\).

\(A\) is symmetric \(\Rightarrow\) \(L = \tilde U^t\).

Thus

\[
A = LDL^t.
\]

If \(A\) is moreover positive definite, the diagonal entries \(d_i\) of \(D\) are positive. Let

\[
D^{1/2} = \operatorname{diag}\left(\sqrt{d_1}, \sqrt{d_2}, \dots, \sqrt{d_n}\right)
\]

and \(\widetilde{L} = LD^{1/2}\).

Then

\[
A = \widetilde{L}\widetilde{L}^t.
\]
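A sketch of computing \(\widetilde{L}\) column by column (a standard Cholesky loop, not spelled out in the text); it assumes \(A\) is symmetric positive definite, otherwise a square root of a nonpositive number occurs.

```python
import numpy as np

def choleski(A):
    """Factor a symmetric positive definite A as L @ L.T, L lower triangular."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Diagonal: l_jj = sqrt(a_jj - sum_{k<j} l_jk^2)
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            # Below diagonal: l_ij = (a_ij - sum_{k<j} l_ik * l_jk) / l_jj
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

A = np.array([[4.0, -1.0, 1.0],
              [-1.0, 4.25, 2.75],
              [1.0, 2.75, 3.5]])
L = choleski(A)
print(np.allclose(L @ L.T, A))  # True
```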
Tridiagonal Linear System
Definition 6.2
An \(n \times n\) matrix \(A\) is called a band matrix if \(\exists\ p, q\) \((1 < p, q < n)\), s.t. whenever \(i + p \le j\) or \(j + q \le i\), \(a_{ij} = 0\). And \(w = p + q - 1\) is called the bandwidth.
In particular, if \(p = q = 2\), then \(A\) is called tridiagonal, with the following form:

\[
A =
\begin{pmatrix}
a_{11} & a_{12} & & & \\
a_{21} & a_{22} & a_{23} & & \\
& a_{32} & a_{33} & \ddots & \\
& & \ddots & \ddots & a_{n-1,n} \\
& & & a_{n,n-1} & a_{nn}
\end{pmatrix}.
\]
Crout Factorization
For a tridiagonal system, the Crout factorization \(A = LU\) takes \(L\) lower bidiagonal and \(U\) unit upper bidiagonal; since only the three bands are ever computed or stored, the time complexity is \(O(N)\).
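A sketch of the \(O(N)\) solve; the band layout (`sub`, `diag`, `sup` for the sub-, main, and superdiagonal) is my convention, not the text's.

```python
import numpy as np

def crout_tridiagonal(sub, diag, sup, b):
    """Solve a tridiagonal system in O(n): A = LU with L lower bidiagonal and
    U unit upper bidiagonal. sub[i] multiplies x[i-1], sup[i] multiplies x[i+1]."""
    n = len(diag)
    l = np.zeros(n)   # diagonal of L
    u = np.zeros(n)   # superdiagonal of U (unit diagonal implied)
    z = np.zeros(n)   # solution of L z = b

    l[0] = diag[0]
    u[0] = sup[0] / l[0]
    z[0] = b[0] / l[0]
    for i in range(1, n):
        l[i] = diag[i] - sub[i] * u[i - 1]
        if i < n - 1:
            u[i] = sup[i] / l[i]
        z[i] = (b[i] - sub[i] * z[i - 1]) / l[i]

    x = np.zeros(n)   # backward substitution for U x = z
    x[-1] = z[-1]
    for i in range(n - 2, -1, -1):
        x[i] = z[i] - u[i] * x[i + 1]
    return x

# 2x1 - x2 = 1,  -x1 + 2x2 - x3 = 0,  -x2 + 2x3 = 1  =>  x = (1, 1, 1)
sub = np.array([0.0, -1.0, -1.0])
diag = np.array([2.0, 2.0, 2.0])
sup = np.array([-1.0, -1.0, 0.0])
print(crout_tridiagonal(sub, diag, sup, np.array([1.0, 0.0, 1.0])))
```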