
Friday, July 23, 2010

GAUSS-SEIDEL METHOD

Two important facts about the Gauss-Seidel method should be noted. First, the computations appear to be serial: since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi method.

Second, the new iterate x(k) depends upon the order in which the equations are examined. The Gauss-Seidel method is sometimes called the method of successive displacements to indicate the dependence of the iterates on this ordering. If the ordering is changed, the components of the new iterate (and not just their order) will also change.

The Gauss-Seidel method typically converges faster than the Jacobi method because it uses the most recently available approximations of the elements of the iteration vector. The other advantage of the Gauss-Seidel algorithm is that it can be implemented using only one iteration vector, which is important for large linear equation systems, where storage of a single iteration vector alone may require 10GB or more. However, a consequence of using the most recently available solution approximation is that the method is inherently sequential: it does not possess natural parallelism. The Gauss-Seidel method has nevertheless been used for parallel solutions of Markov chains.
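
A minimal sketch of one way to implement this in Python with NumPy (the function name, tolerance and iteration limit are illustrative choices, and convergence is assumed rather than checked; it is guaranteed, for example, for strictly diagonally dominant matrices):

    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
        # Solve Ax = b by sweeping through the equations, using each
        # freshly computed component immediately (the "successive
        # displacements" described above).
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float)
        for k in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                # Components x[:i] are already updated in this sweep;
                # this is what distinguishes Gauss-Seidel from Jacobi.
                s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                return x, k + 1        # converged after k+1 sweeps
        return x, max_iter

    # Example: a strictly diagonally dominant system (convergence assured)
    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([2.0, 4.0, 10.0])
    x, sweeps = gauss_seidel(A, b)

Note that only the single vector x is stored and updated in place, which is the one-iteration-vector advantage mentioned above.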

The successive over-relaxation (SOR) method extends the Gauss-Seidel method using a relaxation factor ω, analogous to the JOR method discussed below. For a good choice of ω, SOR can have considerably better convergence behaviour than Gauss-Seidel. However, a priori computation of an optimal value for ω is not feasible.
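
A sketch of the SOR sweep under the same assumptions as the Gauss-Seidel sketch above; omega stands in for the relaxation factor ω:

    import numpy as np

    def sor(A, b, omega, x0=None, tol=1e-10, max_iter=500):
        # Same sweep as Gauss-Seidel, but each component moves only a
        # fraction omega of the way toward the Gauss-Seidel value.
        # omega = 1 recovers plain Gauss-Seidel; 1 < omega < 2
        # over-relaxes, 0 < omega < 1 under-relaxes.
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float)
        for k in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
                gs_value = (b[i] - s) / A[i, i]
                x[i] = x[i] + omega * (gs_value - x[i])
            if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                return x, k + 1
        return x, max_iter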


Gauss Seidel Method

JACOBI METHOD


The Jacobi method is easily derived by examining each of the equations in the linear system Ax = b in isolation.

The Jacobi method is based on solving for every variable locally with respect to the
other variables; one iteration of the method corresponds to solving for every variable once. The resulting method is easy to understand and implement, but convergence is slow.

The Jacobi method belongs to the category of so-called stationary iterative methods. These methods can be expressed in the simple form x(k) = F x(k-1) + c, where x(k) is the approximation to the solution vector at the k-th iteration and neither F nor c depends on k.
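
A minimal sketch of this stationary form in Python with NumPy, assuming the diagonal of A is nonzero; here F = -D⁻¹(A - D) and c = D⁻¹b, with D the diagonal part of A (names and tolerances are illustrative):

    import numpy as np

    def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
        # Stationary form x(k) = F x(k-1) + c with D = diag(A),
        # F = -inv(D) (A - D) and c = inv(D) b.
        d = np.diag(A)
        F = -(A - np.diag(d)) / d[:, None]
        c = b / d
        x = np.zeros_like(c) if x0 is None else x0.astype(float)
        for k in range(max_iter):
            x_new = F @ x + c      # every component updated at once
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new, k + 1
            x = x_new
        return x, max_iter

Unlike Gauss-Seidel, all components of x_new are computed from the previous iterate, so the updates are independent and could be done in parallel.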

The Jacobi method does not converge for all linear equation systems. In such cases, Jacobi may be made to converge by introducing an under-relaxation parameter into the standard Jacobi iteration. Furthermore, it may also be possible to accelerate the convergence of the standard Jacobi method by using an over-relaxation parameter. The resulting method is known as the Jacobi over-relaxation (JOR) method.
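
A JOR sketch under the same assumptions, where the new iterate is a weighted average of the plain Jacobi update and the previous one:

    import numpy as np

    def jor(A, b, omega, x0=None, tol=1e-10, max_iter=500):
        # Weighted Jacobi: blend the plain Jacobi update with the
        # previous iterate. omega < 1 under-relaxes (can restore
        # convergence), omega > 1 over-relaxes (can accelerate it).
        d = np.diag(A)
        x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
        for k in range(max_iter):
            x_jacobi = (b - (A - np.diag(d)) @ x) / d
            x_new = (1.0 - omega) * x + omega * x_jacobi
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new, k + 1
            x = x_new
        return x, max_iter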


Jacobi Method

ITERATIVE METHODS FOR THE SOLUTION OF SYSTEMS OF LINEAR EQUATIONS


Direct methods are generally employed to solve problems of the first category, while the iterative methods discussed in chapter 3 are preferred for problems of the second category. The iterative methods to be discussed in this project are the Jacobi method, the Gauss-Seidel method, and successive over-relaxation (SOR).

ITERATIVE METHODS




Approximate methods for solving systems of linear equations make it possible to obtain the values of the roots of the system to a specified accuracy, as the limit of a sequence of vectors. The process of constructing such a sequence is known as iteration. The three closely related methods studied in this work are all iterative in nature. Unlike the direct methods, which attempt to calculate an exact solution in a finite number of operations, these methods start with an initial approximation and generate successively improved approximations in an infinite sequence whose limit is the exact solution. In practical terms this can be an advantage, because a direct solution will in any case be subject to rounding errors. The procedures involved in the various methods are described as follows:


THE JACOBI METHOD
THE GAUSS-SEIDEL METHOD



BIBLIOGRAPHY

http://www.bioline.org.br/request?ja08066
http://www.netlib.org/templates/templates.pdf
http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-650.pdf

INVERSE OF A MATRIX




For a square matrix A, the inverse is written A⁻¹. When A is multiplied by A⁻¹ the result is the identity matrix I. Non-square matrices do not have inverses.

Note: Not all square matrices have inverses. A square matrix which has an inverse is called invertible or nonsingular, and a square matrix without an inverse is called noninvertible or singular.


AA⁻¹ = A⁻¹A = I
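
A quick NumPy check of this identity (the matrix below is an arbitrary invertible example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [5.0, 3.0]])            # det(A) = 1, so A is invertible
    A_inv = np.linalg.inv(A)

    # Both products give the identity matrix, up to rounding error.
    print(np.allclose(A @ A_inv, np.eye(2)))   # True
    print(np.allclose(A_inv @ A, np.eye(2)))   # True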




Inverse of a Matrix

LU DECOMPOSITION

In linear algebra, the LU decomposition is a matrix decomposition which writes a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. This decomposition is used in numerical analysis to solve systems of linear equations or to calculate the determinant.

Solving linear equations

Given a matrix equation Ax = LUx = b, we want to solve for x given A and b. The solution proceeds in two logical steps:

1. First, we solve the equation Ly = b for y.
2. Second, we solve the equation Ux = y for x.
Note that in both cases we have triangular matrices (lower and upper) which can be solved directly using forward and backward substitution, without using the Gaussian elimination process (however, we need this process or an equivalent to compute the LU decomposition itself). Thus the LU decomposition is computationally efficient only when we have to solve a matrix equation multiple times for different b; in this case it is faster to do an LU decomposition of the matrix A once and then solve the triangular systems for the different b than to use Gaussian elimination each time.
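
A sketch of the two-step solve in Python with SciPy; note that scipy.linalg.lu returns the factorization in the form A = PLU with a permutation matrix P, so the forward substitution is applied to Pᵀb (the matrices below are illustrative):

    import numpy as np
    from scipy.linalg import lu, solve_triangular

    A = np.array([[4.0, 3.0],
                  [6.0, 3.0]])
    b = np.array([10.0, 12.0])

    # Factor once: scipy returns A = P @ L @ U, P a permutation matrix.
    P, L, U = lu(A)

    y = solve_triangular(L, P.T @ b, lower=True)   # 1. forward:  Ly = P^T b
    x = solve_triangular(U, y)                     # 2. backward: Ux = y

    print(np.allclose(A @ x, b))   # True; L and U can be reused for other b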




LU Decomposition

GAUSS JORDAN METHOD




This is a variation of Gaussian elimination. Gaussian elimination gives us tools to solve large linear systems numerically. It is done by manipulating the given matrix using the elementary row operations to put the matrix into row echelon form. To be in row echelon form, a matrix must conform to the following criteria:

1. If a row does not consist entirely of zeros, then the first nonzero number in the row is a 1.

2. If there are any rows entirely made up of zeros, then they are grouped at the bottom of the matrix.

3. In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row.
From this form, the solution is (relatively) easily derived. The variation made in the Gauss-Jordan method is called back substitution. Back substitution consists of taking a row echelon matrix and operating on it in reverse order. Normally the matrix is simplified from top to bottom to achieve row echelon form. When Gauss-Jordan has finished, all that remains in the matrix is a main diagonal of ones and the augmentation; the matrix is now in reduced row echelon form. For a matrix to be in reduced row echelon form, it must be in row echelon form and satisfy one additional criterion:

Each column that contains a leading 1 has zeros everywhere else.

Since the matrix represents the coefficients of the given variables in the system, the augmentation now represents the values of each of those variables. The solution to the system can now be found by inspection, and no additional work is required. Consider the following example:
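
The following minimal Python sketch plays the role of such an example: it reduces an augmented matrix [A | b] to reduced row echelon form, with partial pivoting added for numerical stability (the particular system is an illustrative choice):

    import numpy as np

    def gauss_jordan(aug):
        # Reduce an augmented matrix [A | b] to reduced row echelon
        # form; the last column then holds the solution directly.
        m = aug.astype(float)
        rows = m.shape[0]
        for i in range(rows):
            # Partial pivoting: swap in the largest pivot for stability.
            p = i + np.argmax(np.abs(m[i:, i]))
            m[[i, p]] = m[[p, i]]
            m[i] /= m[i, i]                   # leading entry becomes 1
            for r in range(rows):
                if r != i:
                    m[r] -= m[r, i] * m[i]    # zeros elsewhere in column i
        return m

    aug = np.array([[ 2.0,  1.0, -1.0,   8.0],
                    [-3.0, -1.0,  2.0, -11.0],
                    [-2.0,  1.0,  2.0,  -3.0]])
    print(gauss_jordan(aug)[:, -1])   # solution by inspection: [ 2.  3. -1.]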







Gauss Jordan Method





BIBLIOGRAPHY

http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT-KANPUR/mathematics-2/img739.png

http://fedc.wiwi.hu-berlin.de/xplore/ebooks/html/csa/node37.html

http://www.krellinst.org/AiS/textbook/unit/example_projects/starter/math/matrix/gauss.html

DIRECT METHODS FOR SOLVING SYSTEMS OF LINEAR EQUATIONS



A system of linear equations can be written in matrix notation as Ax = b, where A denotes the m×n coefficient matrix, b is the right-hand side, and x represents the solution vector we search for. The system Ax = b has a solution if and only if b belongs to the vector space spanned by the columns of A.

If m = n but A does not have full rank (which means that some equations are linear combinations of the others), the system is underdetermined and there is either no solution at all or infinitely many of them. In the latter case, any solution can be written as the sum of a particular solution and a vector from the nullspace of A. Finding the solution space can involve the SVD decomposition.

If m > n and the matrix A has full rank, that is, if the number of equations is greater than the number of unknown variables, there is generally no solution and the system is overdetermined. One can then search for some x such that the distance between Ax and b is minimized, which leads to the linear least-squares problem if the distance is measured by the L2 norm.

If m = n and the matrix A is nonsingular, the system Ax = b has a unique solution.
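
A brief NumPy illustration of the square and overdetermined cases (the matrices are illustrative choices):

    import numpy as np

    # m = n, A nonsingular: a unique solution, found directly.
    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([9.0, 8.0])
    x = np.linalg.solve(A, b)                  # x = [2., 3.]

    # m > n, full rank: overdetermined, so minimize ||Ax - b|| instead
    # (the linear least-squares problem in the L2 norm).
    A_over = np.array([[1.0, 1.0],
                       [1.0, 2.0],
                       [1.0, 3.0]])
    b_over = np.array([1.0, 2.0, 2.0])
    x_ls, res, rank, sv = np.linalg.lstsq(A_over, b_over, rcond=None)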

From here on, we concentrate on systems of equations with unique solutions.

There are two basic classes of methods for solving the system Ax = b. The first class is represented by direct methods, which theoretically give an exact solution in a (predictable) finite number of steps. Unfortunately, this does not have to be true in computational practice, due to rounding errors: an error made in one step propagates through all following steps. Classical direct methods are discussed in this section; moreover, solving an equation system by means of matrix decompositions can be classified as a direct method as well. The second class consists of iterative methods, which construct a series of solution approximations that (under some assumptions) converges to the solution of the system.

First, even if a unique solution exists, numerical methods can fail to find it: if the number of unknown variables is large, rounding errors can accumulate and result in a wrong solution. The same applies very much to systems with a nearly singular coefficient matrix. One alternative is to use iterative methods, which are less sensitive to these problems. Another approach is to use the QR or SVD decompositions, which can transform some nearly singular problems into nonsingular ones. Second, very large problems involving hundreds or thousands of equations and unknown variables may be very time-demanding to solve by standard direct methods. On the other hand, their coefficient matrices are often sparse, that is, most of their elements are zeros.