In numerical mathematics, input data is given to an algorithm, which then produces output data. The algorithm is a finite sequence of precise instructions that require specific input data and, executed in a given order, determine the output data. Typically, the input data contains errors, either because it is not known precisely or because it cannot be expressed exactly as finite-digit numbers. The algorithm typically introduces further errors. These can be methodological errors or round-off errors that arise from representing the state of the system at every step by finite-digit numbers. When working with a computer, it is of the utmost importance to keep track of all possible errors so that the precision of the results can be estimated. Further discussion of errors in numerical methods can be found in [Dorn, 1972] and [Björck, 1972].
In this example, equivalent code in C++ and FORTRAN is provided. To use this code, it must be copied into a program that can compile and execute it; only the JavaScript version will execute directly in the web page.
The results of the machine-precision check algorithm, obtained on an IBM PC, are shown in the table below.
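The check itself is not reproduced in this excerpt. As a minimal self-contained sketch, a common variant of such a check in C++ halves a candidate epsilon until adding it to 1 no longer changes the result (all names here are illustrative):

#include <iostream>
#include <iomanip>

int main() {
    // Halve eps until 1 + eps/2 is indistinguishable from 1 in double
    // precision; the surviving eps is the machine precision.
    double eps = 1.0;
    while (1.0 + eps / 2.0 > 1.0)
        eps /= 2.0;
    // On IEEE-754 hardware this prints approximately 2.22e-16
    // (a volatile temporary may be needed on older x87 hardware,
    // where intermediate results are kept in extended precision).
    std::cout << std::setprecision(3)
              << "machine epsilon = " << eps << "\n";
    return 0;
}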
We assume that a is determined by an experiment, and is thus affected by uncertainty:

a = 25.503 ± 0.001
It is easy to verify that the exact solution of this system for a = 25.503 is x = 2.0 and y = 3.0.
The table below shows how the values of x and y change if the last digit of a changes by one unit:
Obviously, this set of equations constitutes an extremely ill-conditioned problem! The results, given the experimental uncertainty on a, are completely meaningless!
Because of round-off errors, the result is 4.999999999999998. The reason is that the number 0.1, which is exactly representable in the decimal system, becomes an infinitely repeating pattern of bits in binary.
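The computation producing this value is not shown in this excerpt; the result is consistent with accumulating 0.1 fifty times in double precision, as in the following C++ sketch:

#include <iostream>
#include <iomanip>

int main() {
    // The exact answer is 5, but 0.1 cannot be represented exactly
    // in binary, so a small error enters at every addition.
    double sum = 0.0;
    for (int i = 0; i < 50; ++i)
        sum += 0.1;
    // With IEEE-754 doubles this prints 4.999999999999998.
    std::cout << std::setprecision(16) << sum << "\n";
    return 0;
}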