
Friday, July 23, 2010

GAUSS-SEIDEL METHOD

Two important facts about the Gauss-Seidel method should be noted. First, the computations appear to be serial: since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi method.

Second, the new iterate x(k) depends upon the order in which the equations are examined. The Gauss-Seidel method is sometimes called the method of successive displacements to indicate the dependence of the iterates on the order. If this ordering is changed, the components of the new iterate (and not just their order) will also change.

The Gauss-Seidel method typically converges faster than the Jacobi method because it uses the most recently available approximations of the elements of the iteration vector. The other advantage of the Gauss-Seidel algorithm is that it can be implemented using only one iteration vector, which is important for large linear equation systems where storage of a single iteration vector alone may require 10GB or more. However, a consequence of using the most recently available solution approximation is that the method is inherently sequential: it does not possess natural parallelism. The Gauss-Seidel method has nevertheless been used for parallel solutions of Markov chain problems.

The successive over-relaxation (SOR) method extends the Gauss-Seidel method using a relaxation factor ω, analogous to the JOR method discussed above. For a good choice of ω, SOR can have considerably better convergence behaviour than Gauss-Seidel. However, a priori computation of an optimal value for ω is not feasible.
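A minimal Python sketch of both methods on a small dense system (stored as nested lists; the example system is ours, not from the text): with ω = 1 the SOR update reduces to plain Gauss-Seidel, and the single vector x is overwritten in place, illustrating the one-iteration-vector storage advantage.

```python
def sor_solve(A, b, omega=1.0, tol=1e-10, max_iter=500):
    """Successive over-relaxation; omega = 1.0 reduces to plain Gauss-Seidel.

    The vector x is updated in place, always reusing the most recently
    computed components, so only one iteration vector is ever stored."""
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            # For j < i this sum already uses the updated components.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(new_xi - x[i]))
            x[i] = new_xi
        if max_change < tol:
            break
    return x

# Diagonally dominant example: 4x + y = 6, x + 3y = 7  ->  x = 1, y = 2
x = sor_solve([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0], omega=1.1)
```

Because each new component overwrites the old one immediately, the inner loop cannot be parallelized, which is exactly the sequential behaviour described above.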


Gauss Seidel Method

JACOBI METHOD


The Jacobi method is easily derived by examining each of the equations in the linear system Ax = b in isolation.

The Jacobi method is based on solving for every variable locally with respect to the
other variables; one iteration of the method corresponds to solving for every variable once. The resulting method is easy to understand and implement, but convergence is slow.

The Jacobi method belongs to the category of so-called stationary iterative methods. These methods can be expressed in the simple form x(k) = Fx(k−1) + c, where
x(k) is the approximation to the solution vector at the k-th iteration and neither F nor c depend on k.

The Jacobi method does not converge for all linear equation systems. In such cases, Jacobi may be made to converge by introducing an under-relaxation parameter into the standard iteration. Furthermore, it may also be possible to accelerate the convergence of the standard Jacobi method by using an over-relaxation parameter. The resulting method is known as the Jacobi over-relaxation (JOR) method.
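A minimal Python sketch of one Jacobi solve (the example system is ours): every update reads only the previous iterate x, never the partially updated one, which is what gives the method its natural parallelism at the cost of a second iteration vector.

```python
def jacobi_solve(A, b, tol=1e-10, max_iter=500):
    """Plain Jacobi iteration: each sweep reads only the previous iterate,
    so all components of x_new could be computed in parallel."""
    n = len(A)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new  # two vectors needed: the old and the new iterate
    return x

# Same diagonally dominant system: 4x + y = 6, x + 3y = 7  ->  x = 1, y = 2
x = jacobi_solve([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0])
```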


Jacobi Method

ITERATIVE METHODS FOR SOLUTION OF SYSTEMS OF LINEAR EQUATIONS


Direct methods are generally employed to solve problems of the first category, while the iterative methods to be discussed in Chapter 3 are preferred for problems of the second category. The iterative methods to be discussed in this project are the Jacobi, Gauss-Seidel, and SOR methods.

ITERATIVE METHODS




The approximate methods for solving systems of linear equations make it possible to obtain the values of the roots of the system with a specified accuracy, as the limit of a sequence of vectors. The process of constructing such a sequence is known as iteration. The three closely related methods studied in this work are all iterative in nature. Unlike the direct methods, which attempt to calculate an exact solution in a finite number of operations, these methods start with an initial approximation and generate successively improved approximations in an infinite sequence whose limit is the exact solution. In practical terms this can be an advantage, because the direct solution will be subject to rounding errors. The procedures involved
in the various methods are described as follows:


THE JACOBI METHOD
THE GAUSS-SEIDEL METHOD



BIBLIOGRAPHY

http://www.bioline.org.br/request?ja08066
http://www.netlib.org/templates/templates.pdf
http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-650.pdf

INVERSE OF A MATRIX




For a square matrix A, the inverse is written A⁻¹. When A is multiplied by A⁻¹ the result is the identity matrix I. Non-square matrices do not have inverses.

Note: Not all square matrices have inverses. A square matrix which has an inverse is called invertible or nonsingular, and a square matrix without an inverse is called noninvertible or singular.


AA⁻¹ = A⁻¹A = I
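As a small illustration of the defining property AA⁻¹ = I, here is a Python sketch for the 2×2 case using the adjugate formula (the function names and the example matrix are ours):

```python
def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the adjugate formula.
    Raises ValueError for a singular matrix (det = 0), which has no inverse."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix has no inverse")
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul_2x2(A, B):
    """Plain 2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4.0, 7.0], [2.0, 6.0]]
I = matmul_2x2(A, inverse_2x2(A))   # should be the 2x2 identity matrix
```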




Inverse of a Matrix

LU DECOMPOSITION

In linear algebra, the LU decomposition is a matrix decomposition which writes a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. This decomposition is used in numerical analysis to solve systems of linear equations or to calculate the determinant.

Solving linear equations

Given a matrix equation Ax=LUx=b we want to solve the equation for a given A and b. In this case the solution is done in two logical steps:

1. First, we solve the equation Ly = b for y.
2. Second, we solve the equation Ux = y for x.
Note that in both cases we have triangular matrices (lower and upper) which can be solved directly using forward and backward substitution, without using the Gaussian elimination process (however, we need this process or an equivalent to compute the LU decomposition itself). Thus the LU decomposition is computationally efficient only when we have to solve a matrix equation multiple times for different b; in this case it is faster to do an LU decomposition of the matrix A once and then solve the triangular systems for the different b, than to use Gaussian elimination each time.
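The two steps above can be sketched in Python as follows. This is a Doolittle-style factorization without pivoting, so it assumes no zero pivots arise; production codes add a permutation matrix for exactly that reason.

```python
def lu_decompose(A):
    """Doolittle LU (no pivoting): A = L U with unit diagonal in L.
    Assumes no zero pivots are encountered."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # Step 1: forward substitution solves L y = b.
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # Step 2: backward substitution solves U x = y.
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
x = lu_solve(L, U, [10.0, 12.0])   # solves 4a + 3b = 10, 6a + 3b = 12
```

Once L and U are computed, `lu_solve` can be called again for any new right-hand side b at only quadratic cost, which is the efficiency argument made above.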




LU Decomposition




GAUSS JORDAN METHOD




This is a variation of Gaussian elimination. Gaussian elimination gives us tools to solve large linear systems numerically. It is done by manipulating the given matrix using the elementary row operations to put the matrix into row echelon form. To be in row echelon form, a matrix must conform to the following criteria:
If a row does not consist entirely of zeros, then the first nonzero number in the row is a 1.

If there are any rows entirely made up of zeros, then they are grouped at the bottom of the matrix.

In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower row occurs farther to the right than the leading 1 in the higher row.
From this form, the solution is (relatively) easily derived. The variation made in the Gauss-Jordan method is called back substitution. Back substitution consists of taking a row echelon matrix and operating on it in reverse order. Normally the matrix is simplified from top to bottom to achieve row echelon form. When Gauss-Jordan has finished, all that remains in the matrix is a main diagonal of ones and the augmentation; this matrix is now in reduced row echelon form. For a matrix to be in reduced row echelon form, it must be in row echelon form and satisfy one additional criterion:

Each column that contains a leading 1 has zeros everywhere else.

Since the matrix is representing the coefficients of the given variables in the system, the augmentation now represents the values of each of those variables. The solution to the system can now be found by inspection and no additional work is required. Consider the following example:
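A compact Python sketch of the procedure on an augmented matrix [A | b] (the example system is ours; partial pivoting, a row swap onto the largest pivot, is added for numerical stability even though the text does not discuss it):

```python
def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form
    and return the solution read off the augmentation column."""
    n = len(aug)
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so its leading entry is 1.
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        # Eliminate the column everywhere else, both above and below the pivot.
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [rv - factor * cv for rv, cv in zip(aug[r], aug[col])]
    return [row[-1] for row in aug]

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
sol = gauss_jordan([[2.0, 1.0, 5.0], [1.0, 3.0, 10.0]])
```

After the loop the coefficient part of the matrix is the identity, so the augmentation column is the solution by inspection, exactly as described above.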







Gauss Jordan Method





BIBLIOGRAPHY

http://nptel.iitm.ac.in/courses/Webcourse-contents/IIT-KANPUR/mathematics-2/img739.png

http://fedc.wiwi.hu-berlin.de/xplore/ebooks/html/csa/node37.html

http://www.krellinst.org/AiS/textbook/unit/example_projects/starter/math/matrix/gauss.html

DIRECT METHODS FOR SOLVING LINEAR EQUATIONS SYSTEMS



A system of linear equations can be written in matrix notation as Ax = b, where A denotes the coefficient matrix, b is the right-hand side, and x represents the solution vector we search for. The system Ax = b has a solution if and only if b belongs to the vector space spanned by the columns of A.

If m = n but A does not have full rank (which means that some equations are linear combinations of the other ones), the system is underdetermined and there is either no solution at all or there are infinitely many of them. In the latter case, any solution can be written as the sum of a particular solution and a vector from the nullspace of A. Finding the solution space can involve the SVD decomposition.

If m > n and the matrix A has full rank, that is, if the number of equations is greater than the number of unknown variables, there is generally no solution and the system is overdetermined. One can search for some x such that the distance between Ax and b is minimized, which leads to the linear least-squares problem if distance is measured by the L2 (Euclidean) norm.
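For the overdetermined case, the least-squares solution can be obtained from the normal equations (AᵀA)c = Aᵀb. A minimal Python sketch for fitting a line y = a + bx to data points (the helper name and data are ours):

```python
def lstsq_line(xs, ys):
    """Least-squares fit of y ~ a + b*x via the normal equations
    (A^T A) c = A^T y for the overdetermined system A c = y."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # determinant of the 2x2 normal matrix
    a = (sy * sxx - sx * sxy) / det  # intercept
    b = (n * sxy - sx * sy) / det    # slope
    return a, b

# Points lying exactly on y = 1 + 2x: the residual-minimising fit recovers it.
a, b = lstsq_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```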

If m=n and the matrix A is nonsingular, the system Ax=b has a unique solution.

From here on, we concentrate on systems of equations with unique solutions.

There are two basic classes of methods for solving the system Ax = b. The first class is represented by direct methods. They theoretically give an exact solution in a (predictable) finite number of steps. Unfortunately, this does not have to be true in computational practice due to rounding errors: an error made in one step spreads to all following steps. Classical direct methods are discussed in this section. Moreover, solving an equation system by means of matrix decompositions can be classified as a direct method as well. The second class is called iterative methods, which construct a series of solution approximations that (under some assumptions) converges to the solution of the system.

First, even if a unique solution exists, numerical methods can fail to find it: if the number of unknown variables is large, rounding errors can accumulate and result in a wrong solution. The same applies to systems with a nearly singular coefficient matrix. One alternative is to use iterative methods, which are less sensitive to these problems. Another approach is to use the QR or SVD decompositions, which can transform some nearly singular problems into nonsingular ones. Second, very large problems involving hundreds or thousands of equations and unknown variables may be very time-consuming to solve by standard direct methods. On the other hand, their coefficient matrices are often sparse, that is, most of their elements are zeros.

Thursday, May 6, 2010

ROOTS OF EQUATIONS






Given a continuous function f(x), find the value x0 of x for which f(x0) = 0. We assume that both x and f(x) are real, although some of the algorithms we will see are also valid for analytic functions of a complex variable. The values x0 that satisfy f(x0) = 0 are called roots of the equation.



CLOSED METHODS

1. Graphic Methods


A simple method to obtain an approximation to the root of the equation f(x) = 0 is to plot the function and see where it crosses the x-axis.

Example: f(x) = x^3 + x^2 − 3x + 5


The graph shows the existence of some roots, including a double root near x = 4.2; there are roots at approximately x = 4.23 and x = 4.26.

Graphical Method



2. The Bisection Method



In general, if f(x) is real and continuous in the interval from Xl to Xu, and f(Xl) and f(Xu) have opposite signs, that is

f(Xl) f(Xu) < 0

then there is at least one real root between Xl and Xu.
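A Python sketch of the bisection iteration under exactly this sign-change assumption (the example function x² − 2 is ours):

```python
def bisect(f, xl, xu, tol=1e-10, max_iter=200):
    """Bisection: requires f(xl) * f(xu) < 0, so a root lies in [xl, xu].
    Each step halves the bracketing interval."""
    if f(xl) * f(xu) >= 0:
        raise ValueError("f must change sign on [xl, xu]")
    for _ in range(max_iter):
        xm = (xl + xu) / 2.0
        if f(xl) * f(xm) < 0:
            xu = xm          # the sign change, hence the root, is in the left half
        else:
            xl = xm          # otherwise it is in the right half (or xm is the root)
        if xu - xl < tol:
            break
    return (xl + xu) / 2.0

root = bisect(lambda x: x * x - 2.0, 1.0, 2.0)   # approximates sqrt(2)
```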

Closed Methods


OPEN METHODS



3. Simple Fixed-point Iteration
Open methods employ a formula to predict the root. Such a formula can be developed for simple fixed-point iteration by rearranging the function f(x) = 0 so that x is on the left-hand side of the equation:

x=g(x)






Figure 2.1: Graphical depiction of simple fixed-point method.
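The iteration x(k+1) = g(x(k)) can be sketched as follows. The rearrangement used in the example is ours: f(x) = x² − 2x − 3 = 0 is rewritten as x = √(2x + 3), which converges to the fixed point x = 3 because |g'(x)| < 1 there.

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Simple fixed-point iteration x_{k+1} = g(x_k).
    Converges when |g'(x)| < 1 in a neighbourhood of the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# x^2 - 2x - 3 = 0 rearranged as x = sqrt(2x + 3); iterates toward x = 3.
root = fixed_point(lambda x: math.sqrt(2 * x + 3), 4.0)
```

Note that a different rearrangement of the same f(x) = 0 (for example x = (x² − 3)/2) may diverge, so the choice of g matters.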



4. The Newton-Raphson Method


If the initial guess at the root is Xi, a tangent can be extended from the point (Xi, f(Xi)). The point where this tangent crosses the x-axis usually represents an improved estimate of the root.
The Newton-Raphson formula is

Xi+1 = Xi − f(Xi)/f'(Xi)
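A direct Python transcription of the formula (the example function x² − 2 and its derivative are ours):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson iteration: x_{i+1} = x_i - f(x_i) / f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)   # step to where the tangent crosses the x-axis
    return x

# f(x) = x^2 - 2, f'(x) = 2x: converges quadratically to sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```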



http://www.uv.es/diazj/cn_tema2.pdf



Open Methods



ROOTS OF POLYNOMIALS


Every polynomial P in x corresponds to a function, ƒ(x) = P (where the occurrences of x in P are interpreted as the argument of ƒ), called the polynomial function of P; the equation in x setting f(x) = 0 is the polynomial equation corresponding to P. The solutions of this equation are called the roots of the polynomial; they are the zeroes of the function ƒ (corresponding to the points where the graph of ƒ meets the x-axis). A number a is a root of P if and only if the polynomial x − a (of degree one in x) divides P. It may happen that x − a divides P more than once: some power (x − a)m divides P; the largest such power m is called the multiplicity of the root a in P. When P is the zero polynomial, the corresponding polynomial equation is trivial, and this case is usually excluded when considering roots: with the above definitions every number would be a root of the zero polynomial, with undefined (or infinite) multiplicity. With this exception made, the number of roots of P, even counted with their respective multiplicities, cannot exceed the degree of P.

A polynomial function in one real variable can be represented by a graph.

The graph of the zero polynomial
f(x) = 0
is the x-axis.
The graph of a degree 0 polynomial
f(x) = a0, where a0 ≠ 0,
is a horizontal line with y-intercept a0
The graph of a degree 1 polynomial (or linear function)
f(x) = a0 + a1x , where a1 ≠ 0,
is an oblique line with y-intercept a0 and slope a1.
The graph of a degree 2 polynomial
f(x) = a0 + a1x + a2x^2, where a2 ≠ 0
is a parabola.



The graph of a degree 3 polynomial
f(x) = a0 + a1x + a2x^2 + a3x^3, where a3 ≠ 0
is a cubic curve.



The graph of any polynomial with degree 2 or greater
f(x) = a0 + a1x + a2x^2 + ... + anx^n, where an ≠ 0 and n ≥ 2
is a continuous non-linear curve.
The graph of a non-constant (univariate) polynomial always tends to infinity when the variable increases indefinitely (in absolute value).

Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior.
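Numerically, the roots of a univariate polynomial are commonly computed as the eigenvalues of its companion matrix; NumPy exposes this as numpy.roots. A short sketch, assuming NumPy is available (the example cubic is ours):

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
coeffs = [1, -6, 11, -6]       # coefficients, highest degree first
roots = np.roots(coeffs)       # eigenvalues of the companion matrix
roots = sorted(r.real for r in roots)
```

Here all three roots are real with multiplicity one; a repeated factor (x − a)^m would appear as a root of multiplicity m, matching the definition above.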

The illustrations below show graphs of polynomials.





Roots of Polynomials


Wednesday, May 5, 2010

NUMERICAL APPROXIMATION


Approximation usually occurs when an exact form or an exact numerical value is unknown or difficult to obtain. However, some known form may exist that can represent the real form closely enough that no significant deviation is found. Approximation is also used when a number is not rational, such as the number π, which is often shortened to 3.14, or √7, approximated as 2.65. Numerical approximations sometimes result from using a small number of significant digits.

Significant Digits
Significant digits give an indication of the accuracy of a number. A digit which is 0 is significant if it is not a place holder.

Accuracy and Precision
Accuracy
refers to the number of significant digits in a number.
Precision refers to the decimal position of the last significant digit.

Example:
Comparing the two numbers 0.041 and 7.673, we see that 7.673 is more accurate because it has four significant digits, whereas 0.041 has only two.
The numbers have the same precision, as the last significant digit is in the thousandths position for both.

Mathematical error

In applied mathematics, error is the difference between a true value and an estimate, or approximation, of that value. In statistics, a common example is the difference between the mean of an entire population and the mean of a sample drawn from that population. In numerical analysis, round-off error is exemplified by the difference between the true value of the irrational number π and the value of rational expressions such as 22/7, 355/113, 3.14, or 3.14159. Truncation error results from ignoring all but a finite number of terms of an infinite series.
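The round-off errors of the rational approximations to π quoted above are easy to check numerically:

```python
import math

# Absolute round-off error of common rational approximations to pi.
approximations = {"22/7": 22 / 7, "355/113": 355 / 113, "3.14": 3.14}
errors = {name: abs(value - math.pi) for name, value in approximations.items()}
```

355/113 turns out to be far more accurate (error on the order of 10⁻⁷) than either 22/7 or 3.14 (errors on the order of 10⁻³).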

http://www.intmath.com/Numbers/5_Approximate-numbers.php
http://en.wikipedia.org/wiki/Approximation

MODELING


Mathematical model

A mathematical model is an abstract model that uses mathematical language to describe the behaviour of a system. It is defined as "a representation of the essential aspects of an existing system, which presents knowledge of that system in usable form".
Mathematical models are used particularly in the natural sciences and engineering disciplines; they can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game-theoretic models.
These models can overlap, with a given model involving a variety of abstract structures. There are six basic groups of variables: decision variables, input variables, state variables, exogenous variables, random variables, and output variables. Since there can be many variables of each type, the variables are generally represented by vectors.

A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. The values of the variables can be practically anything: real or integer numbers, Boolean values, or strings.

Classifying mathematical models
1. Linear vs. nonlinear
2. Deterministic vs. probabilistic (stochastic)
3. Static vs. Dynamic

Example of mathematical models

Model of a particle in a potential field. In this model we consider a particle as a point of mass m which describes a trajectory in space, modeled by a function x : R → R3 giving its coordinates as a function of time. The potential field is given by a function V : R3 → R, and the trajectory is a solution of the differential equation

m (d^2/dt^2) x(t) = −grad(V)(x(t))

Darcy's law


In fluid dynamics and hydrology, Darcy's law is a phenomenologically derived constitutive equation that describes the flow of a fluid through a porous medium.
Darcy's law is a simple proportional relationship between the instantaneous discharge rate through a porous medium, the viscosity of the fluid and the pressure drop over a given distance.

Q = −κA (Pb − Pa) / (μL)

The total discharge, Q (units of volume per time, e.g., ft³/s or m³/s), is equal to the product of the permeability of the medium (κ, units of area, e.g. m²), the cross-sectional area A available to flow, and the pressure drop (Pb − Pa), all divided by the dynamic viscosity μ (in SI units, e.g. kg/(m·s) or Pa·s) and the length L over which the pressure drop takes place. The negative sign is needed because fluids flow from high pressure to low pressure.
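A worked example of the formula in Python, with entirely hypothetical values (every number below is illustrative only, not from the text):

```python
# Hypothetical inputs for Darcy's law (illustrative values only).
kappa = 1e-12             # permeability of the medium, m^2
area = 2.0                # cross-sectional area available to flow, m^2
p_b, p_a = 3.0e5, 1.0e5   # pressures at the two ends, Pa
mu = 1.0e-3               # dynamic viscosity (roughly that of water), Pa*s
length = 10.0             # length over which the pressure drop occurs, m

# Q = -kappa * A * (Pb - Pa) / (mu * L); the minus sign encodes flow
# from high pressure toward low pressure.
Q = -kappa * area * (p_b - p_a) / (mu * length)   # total discharge, m^3/s
```

With these inputs Q is negative, indicating flow in the direction of decreasing pressure.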


http://en.wikipedia.org/wiki/Modelos matematicos
http://www.unesco.org.uy/phi/libros/obrashidraul/Cap7a.html



Taylor Series

THIS VIDEO IS INTERESTING...