John Travis
Mississippi College
Attempting to solve the linear system $Ax=b$ exactly will virtually always lead to an accumulation of roundoff errors, even in the best circumstances. Hence, any computed solution $x$ will inevitably be only approximately correct. Further, the problem that this linear system models will generally rest on simplifying assumptions, so the equations themselves already carry a bias akin to truncation error.
So the question remains...why solve an inexact model, using inexact arithmetic, EXACTLY? One answer is that an exact solution procedure at least avoids introducing additional truncation error along the way. However, why not attempt an iterative method for solving $Ax=b$, in a manner similar to what was done when finding roots of the scalar problem $f(x) = 0$? Indeed, this generalization from scalars to linear systems can provide not only an answer but one that, for certain well-behaved systems, is suitably accurate while using significantly fewer calculations.
So, consider the linear equations from an $n \times n$ system $Ax=b$ with
$ a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + ... + a_{1n}x_n = b_1$
$ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + ... + a_{2n}x_n = b_2$
$ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + ... + a_{3n}x_n = b_3$
...
$ a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + ... + a_{nn}x_n = b_n$
and solve the $k$th equation for $x_k$ to yield
$ x_1 = b_1/a_{11} - (a_{12}x_2/a_{11} + a_{13}x_3/a_{11} + ... + a_{1n}x_n/a_{11}) $
$ x_2 = b_2/a_{22} - (a_{21}x_1/a_{22} + a_{23}x_3/a_{22} + ... + a_{2n}x_n/a_{22}) $
$ x_3 = b_3/a_{33} - (a_{31}x_1/a_{33} + a_{32}x_2/a_{33} + ... + a_{3n}x_n/a_{33}) $
...
$ x_n = b_n/a_{nn} - (a_{n1}x_1/a_{nn} + a_{n2}x_2/a_{nn} + ... + a_{n,n-1}x_{n-1}/a_{nn}) $
Writing $A = D - L - U$, where $D$ is the diagonal of $A$ and $L$ and $U$ are the negatives of the strictly lower and strictly upper triangular parts of $A$, this new system can be expressed in general as
$(D-L-U)x = b$
or
$Dx = (L+U)x + b$
or
$x = D^{-1}(L+U)x + D^{-1}b$.
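As a concrete check, the splitting can be carried out directly. The following is an illustrative Python sketch (not from the text), using the $3 \times 3$ matrix from the worked example below; here `L` and `U` hold the negatives of the strictly lower and strictly upper triangular parts, matching the signs in $A = D - L - U$:

```python
# Split A into D - L - U: D is the diagonal, L and U are the NEGATIVES
# of the strictly lower and strictly upper triangular parts of A.
A = [[4.0, 1.0, -1.0],
     [-1.0, 3.0, 1.0],
     [2.0, 2.0, 5.0]]
n = len(A)

D = [[A[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
L = [[-A[i][j] if i > j else 0.0 for j in range(n)] for i in range(n)]
U = [[-A[i][j] if i < j else 0.0 for j in range(n)] for i in range(n)]

# Verify the splitting entrywise: A[i][j] == D[i][j] - L[i][j] - U[i][j]
assert all(A[i][j] == D[i][j] - L[i][j] - U[i][j]
           for i in range(n) for j in range(n))
```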
Let's just rename and let
$T = D^{-1}(L+U)$
and
$c = D^{-1}b$
so that the system takes the fixed-point form
$x^{(k)} = Tx^{(k-1)} + c$
where $T$ is the matrix with zero diagonal and off-diagonal entries of the form $-a_{kj}/a_{kk}$, and $c$ is the vector with entries $b_k/a_{kk}$. We will ponder an iterative approach where we start with a reasonable guess $x^{(0)}$ and proceed forward, generating successive values for $k = 1, 2, \ldots$ until the algorithm converges. This is known as the Jacobi Method.
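The iteration above can be sketched in a few lines of pure Python. This is an illustration only; the function name `jacobi`, the tolerance, and the iteration cap are our own choices, not from the text:

```python
def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration for Ax = b; A given as a list of rows."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        # Every component of the new iterate uses only the OLD iterate:
        #   x_k = (b_k - sum_{j != k} a_{kj} x_j) / a_{kk}
        x_new = [(b[k] - sum(A[k][j] * x[j] for j in range(n) if j != k)) / A[k][k]
                 for k in range(n)]
        # Stop once successive iterates agree to within tol
        if max(abs(x_new[k] - x[k]) for k in range(n)) < tol:
            return x_new
        x = x_new
    return x
```

Applied to the $3 \times 3$ system in the run below, the iterates approach the exact solution $(1.44776, -0.835821, -0.0447761)$; the matrix is strictly diagonally dominant, which guarantees convergence.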
A =
   4   1  -1
  -1   3   1
   2   2   5

b =
   5
  -4
   1

exact = 1.44776 -0.835821 -0.0447761

T =
   0         -0.25       0.25
   0.333333   0         -0.333333
  -0.4       -0.4        0

c = 1.25 -1.33333 0.2

x0 = 0        0          0
x1 = 1.25    -1.33333    0.2
x2 = 1.63333 -0.983333   0.233333
x3 = 1.55417 -0.866667  -0.06
x4 = 1.45167 -0.795278  -0.075
x5 = 1.43007 -0.824444  -0.0625556
x6 = 1.44047 -0.835792  -0.04225
x7 = 1.44839 -0.839093  -0.0418722

error = -0.000624223 0.0032717 -0.0029039
You might notice that for a full system, this appears to take longer to converge than it would take to just solve the problem directly using Gaussian Elimination, and that often is the case. However, for systems that are "sparse"...that is, that contain a lot of zero entries...this approach can prove to be quite a lot faster and easier to implement.
n = 20
exact = 0.67508 0.53154 0.36193 0.41980 0.56398 0.66889 1.00206 0.78442 0.48220 0.29163 0.36481 0.83704 0.68706 0.79476 0.74476 0.59417 0.73339 0.54546 0.63037 0.60639

x = 8 6 4 2 4 4 9 6 1 3 3 5 7 9 8 3 9 9 8 9

x = 0.67513 0.53164 0.36225 0.42073 0.56500 0.67014 1.00302 0.78582 0.48315 0.29216 0.36502 0.83734 0.68720 0.79480 0.74485 0.59423 0.73346 0.54547 0.63038 0.60640

error = -0.0000548347 -0.0000975555 -0.0003203840 -0.0009303011 -0.0010255324 -0.0012547167 -0.0009619485 -0.0013928977 -0.0009481656 -0.0005275699 -0.0002102662 -0.0002919856 -0.0001362957 -0.0000419588 -0.0000863708 -0.0000544016 -0.0000698108 -0.0000053514 -0.0000079794 -0.0000023996

This error vector has norm

ans = 0.0027955
In general, the rate of convergence of the Jacobi Method can be somewhat slow. Notice, however, that in the setup above each new x-value is computed from the old x-values line-by-line, independently. Once $x_1$ has been updated in the first equation, that value could already be used as an improved guess in the remaining equations. When $x_2$ is updated, it can likewise be utilized as an improved guess in the equations below that.
Symbolically, this means to again use the splitting $ A = D - L - U$ but now use
$Ax = b$
becomes
$(D - L)x = Ux + b$
and setting this up for iteration gives the recursion
$(D - L)x^{(k)} = Ux^{(k-1)} + b$
which can be solved for $x^{(k)}$ using Forward Substitution.
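This is the Gauss-Seidel Method. Sweeping through the rows in order and overwriting each component in place is exactly the forward substitution applied to $(D-L)x^{(k)} = Ux^{(k-1)} + b$. A minimal pure-Python sketch (assumed names, not from the text):

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration for Ax = b; A given as a list of rows."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for k in range(n):
            # Components 0..k-1 of x already hold NEW values;
            # components k+1..n-1 still hold the previous iterate.
            s = sum(A[k][j] * x[j] for j in range(n) if j != k)
            new_xk = (b[k] - s) / A[k][k]
            diff = max(diff, abs(new_xk - x[k]))
            x[k] = new_xk          # use the fresh value immediately
        if diff < tol:
            return x
    return x
```

For the $3 \times 3$ example above, this typically reaches a given accuracy in fewer sweeps than Jacobi, since each sweep already folds in the newest information.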
Create functions for Jacobi and Gauss-Seidel that accept as input a matrix $A$ and vector $b$ for solving a general system $Ax=b$, and have them build the various parts that comprise each method. Attempt to economize the calculations: for example, is there a better way to apply $D^{-1}$ than explicitly forming the inverse?
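On that last point: since $D$ is diagonal, $D^{-1}$ never needs to be formed as a matrix at all; applying it amounts to dividing row $k$ by $a_{kk}$. A sketch of building $T$ and $c$ this way, in illustrative Python using the $3 \times 3$ example from earlier:

```python
A = [[4.0, 1.0, -1.0],
     [-1.0, 3.0, 1.0],
     [2.0, 2.0, 5.0]]
b = [5.0, -4.0, 1.0]
n = len(A)

d = [A[k][k] for k in range(n)]   # just the diagonal entries of A

# c = D^{-1} b: elementwise division, no matrix inverse needed
c = [b[k] / d[k] for k in range(n)]

# T = D^{-1}(L + U): row k of A, negated off the diagonal, scaled by 1/a_kk
T = [[-A[k][j] / d[k] if j != k else 0.0 for j in range(n)]
     for k in range(n)]
```

This reproduces the `T` and `c` shown in the Jacobi run above while doing only $O(n^2)$ divisions instead of an $O(n^3)$ inversion.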
tstart = 1701380076.98946
tend = 1701380077.00098
ans = 0.011525