Euler Home

Interval Arithmetic

Interval arithmetic uses worst-case estimates to get an inclusion of the result. This can be essential if the source data are not precisely known to begin with. But even if they are, the computation itself may spoil the result. Interval arithmetic can also use fixed-point theorems to guarantee that there is a solution in a computed interval.

This notebook introduces the interval arithmetic of Euler. For a reference to intervals and algorithms using intervals, have a look at the following pages.

Euler Core
Interval Functions

The closed interval [2,3] is entered with the following notation.

>~2,3~
Alternatively, you can use plus-minus. If you do not have this key, press F8.

>2.5±0.5
Plus-Minus is also available for output.


To set this output format permanently, use ipmformat(true). To reset to the normal format, call ipmformat(false).

>ipmformat(true); ~1,2~, ipmformat(false);

Basic Rule

Operators between intervals yield an interval result.

The basic rule is: the result contains all possible values of the operator applied to any combination of arguments from the given intervals.

>~2,3~ * ~4,7~

Note that the following two expressions are different.

>~-1,1~^2, ~-1,1~*~-1,1~,

The first one contains all squares of elements in [-1,1]; the second contains all products s*t, where -1 <= s,t <= 1.
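These rules are easy to mimic outside of Euler. The following Python sketch (the helper names imul and isqr are our own, not Euler functions) implements the basic rule for multiplication and the tighter rule for squaring:

```python
# Interval multiplication by the basic rule: the result must contain
# a*b for every a in [a1,a2] and b in [b1,b2].
def imul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

# Interval squaring: tighter than imul(a, a), because x^2 uses the
# same x twice, while imul(a, a) allows two independent values.
def isqr(a):
    lo = 0.0 if a[0] <= 0 <= a[1] else min(a[0]**2, a[1]**2)
    return (lo, max(a[0]**2, a[1]**2))

print(imul((2, 3), (4, 7)))    # (8, 21), as in the example above
print(isqr((-1, 1)))           # (0.0, 1)
print(imul((-1, 1), (-1, 1)))  # (-1, 1)
```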

The interval ~s~ is a small interval around s.

>~2,2~, ~2~,

For a simple example, we compute the height of a mountain at a distance of 20.2 km, which appears at an angle of 3.2° above the current level. We assume that all values are known only to the digits given.

>d=20.2km±0.05km; a=3.2°±0.05°; d*sin(a)

Indeed, if we take the extremal values of our errors, we get a large interval of possible results.

>20.15km*sin(3.15°), 20.25km*sin(3.25°),
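The same worst-case bounds can be checked directly in Python: since sin is increasing for these angles and the distance is positive, the extreme values of d*sin(a) occur at the interval endpoints (units are km and degrees):

```python
import math

# Height d*sin(a) for d = 20.2 +/- 0.05 km, a = 3.2 +/- 0.05 degrees.
def height(d_km, a_deg):
    return d_km * math.sin(math.radians(a_deg))

lo = height(20.15, 3.15)   # both factors minimal
hi = height(20.25, 3.25)   # both factors maximal
print(lo, hi)              # about 1.107 ... 1.148 km
```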

Error Control

Evaluating a longer expression in interval arithmetic can lead to error propagation. Assume we have the following function.


Let us evaluate the function in an interval of length 0.2.

>x:=1.2±0.1; expr(x)

As we will see, this is not a good inclusion of the result. One of the main reasons is that the interval evaluation takes a different x from the interval each time the variable x occurs, and includes all these results.

The function "mxmieval" uses Maxima to compute the derivative of the expression, and with the derivative, we get a much closer inclusion.
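The idea behind a derivative-based evaluation can be sketched with the mean-value form f(X) ⊆ f(m) + f'(X)·(X - m), where m is the midpoint of X. The example function x^2 - x below is our own choice, not the expr of this notebook, and rounding is ignored:

```python
# Interval helpers (no outward rounding in this sketch)
def imul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])

X = (1.1, 1.3)
# Naive evaluation of f(x) = x^2 - x: every occurrence of x varies
# independently, so the result is far too wide.
naive = isub(imul(X, X), X)

# Mean-value form with f'(x) = 2x - 1 evaluated on all of X.
m = 1.2
fm = m*m - m
dfX = (2*X[0] - 1, 2*X[1] - 1)
mv = iadd((fm, fm), imul(dfX, (X[0] - m, X[1] - m)))

print(naive, mv)  # the mean-value form is much tighter
```

The naive result is roughly [-0.09, 0.59], while the mean-value form gives about [0.08, 0.40]; the true range is [0.11, 0.39].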


Sub-dividing the interval yields an even better inclusion.


A simple subdivision may lead to better results too, but with more effort. However, we do not have to compute the derivative, so this method also works for functions, not only for expressions.


If we plot the function, we see that the maximal value on the interval is attained at 1.1, and the minimal value is somewhere inside the interval.


08 - Intervals

We can find the location of the minimum with fmin, and then evaluate the expression there to get the minimal value. Thus we get the true limits of the image of [1.1,1.3] under this mapping.

>expr(fmin(expr,1.1,1.3)), expr(1.1)

Plotting Intervals

We can make a plot of an interval function. To demonstrate this, we disturb the x values from -1 to 2 by ±0.01 and evaluate our expression. We print the first three values only.

>x:=-1:0.01:2; y:=expr(x±0.01); y[1:3]
[ ~-0.07,0.07~,  ~-0.04,0.098~,  ~-0.01,0.13~ ]

The plot2d function will plot the y-ranges as a filled curve.

>plot2d(x,y,style="\/"); plot2d(x,expr(x),thickness=2,add=1):



The 10th partial sum of the Taylor series of exp is the following polynomial.

>p=1/fac(0:10); fracprint(p')

Check with Maxima.

          10        9       8       7     6     5     4    3    2
         x         x       x       x     x     x     x    x    x
       ------- + ------ + ----- + ---- + --- + --- + -- + -- + -- + x
       3628800   362880   40320   5040   720   120   24   6    2
                                                                   + 1

We evaluate this polynomial at -2.

>longformat; evalpoly(-2,p)

Compare to the correct result for the infinite series.


We can get an inclusion of the true value from the Taylor expansion with the remainder term in interval notation. The error of the series expansion is less than or equal to the next term of the series, since the series is alternating.

Note that we have to use ~p~, because p is not exactly representable in the computer. But we can use the one-point interval [-2,-2].


Some more terms. Note that 21! is exactly representable in the computer.

>n=20; evalpoly(-2,~1/fac(0:n)~)+~-1,1~*2^(n+1)/fac(n+1)
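The same estimate is easy to verify in Python: for the alternating series of exp(-2), the truncation error of the partial sum is bounded by the first omitted term 2^(n+1)/(n+1)!.

```python
import math

# Partial sum of exp(x) up to degree n, plus the alternating-series
# remainder bound 2^(n+1)/(n+1)! for x = -2.
n = 20
x = -2.0
partial = sum(x**k / math.factorial(k) for k in range(n + 1))
bound = 2**(n + 1) / math.factorial(n + 1)

print(partial, math.exp(-2), bound)
```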

Inclusions of Solutions

Interval arithmetic provides guaranteed inclusions of zeros of functions using the interval Newton method.

For this, we need a function f and its derivative f1.


Maxima can compute the derivative for us.


Compare with the ordinary Newton method.

>longestformat; newton("cos(x)-x","-sin(x)-1",1)
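A hedged sketch of the one-dimensional interval Newton step for f(x) = cos(x) - x in Python (no outward rounding, so only an illustration of the idea): N(X) = m - f(m)/f'(X), intersected with X. On [0,1], f'(x) = -sin(x) - 1 is negative and decreasing, so its range is easy to enclose.

```python
import math

def newton_step(X):
    # midpoint and its residual
    m = 0.5 * (X[0] + X[1])
    fm = math.cos(m) - m
    # enclosure of f'(x) = -sin(x) - 1 on X (sin increasing on [0, pi/2])
    d = (-math.sin(X[1]) - 1.0, -math.sin(X[0]) - 1.0)
    q = (fm / d[0], fm / d[1])                 # interval quotient, 0 not in d
    N = (m - max(q), m - min(q))
    return (max(N[0], X[0]), min(N[1], X[1]))  # intersect with X

X = (0.0, 1.0)
for _ in range(6):
    X = newton_step(X)
print(X)  # a tiny interval around the fixed point 0.739085...
```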

There is also a very simple function, ibisect, which uses a bisection method controlled by interval computation.


One famous example occurs if we imagine a rope around the earth, which is 1 m longer than the perimeter of the earth. We lift that rope at one point. How high can we lift it?

If we denote the angle of lifting by 2*alpha, the problem is essentially equivalent to the solution of

    tan(alpha) - alpha = 1/(2*r)

Let us try this.

>r=rEarth$; alpha=bisect("tan(x)-x",0,0.1,y=1/(2*r))

From this, we get the height with elementary trigonometric computations.
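The whole computation can be reproduced in Python with an ordinary bisection, assuming r = 6371 km for the earth's radius (the notebook uses its built-in constant rEarth):

```python
import math

r = 6.371e6                 # earth radius in meters (assumed value)
target = 1.0 / (2.0 * r)    # right-hand side of tan(a) - a = 1/(2r)

lo, hi = 1e-6, 0.1          # tan(x) - x is increasing on this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if math.tan(mid) - mid < target:
        lo = mid
    else:
        hi = mid

alpha = 0.5 * (lo + hi)
h = r * (1.0 / math.cos(alpha) - 1.0)   # height = r*(sec(alpha) - 1)
print(alpha, h)                          # h is roughly 121 m
```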


We have no idea how good this solution is. So we check with an interval solver, and we find that the answer is quite reliable.

>mxminewton("tan(x)-x",alpha,y=1/(2*r)); r*(sec(%)-1)

For larger values of r, we get less reliable results.

>r=1.2e15; ibisect("tan(x)-x",0,0.1,y=1/(2*r)); r*(sec(%)-1)

Systems of Equations

Let us try a multidimensional example. We solve

    x^2 + y^2 = 1,   x^2/2 + 2*y^2 = 1

First, we need a function.

>function f([x,y]) &= [x^2+y^2-1,x^2/2+2*y^2-1]
                                           2
                       2    2         2   x
                     [y  + x  - 1, 2 y  + -- - 1]
                                           2

Next, we plot the contour lines of the first and the second component of f, i.e., the solutions of f_1(v)=0 and f_2(v)=0. The intersections of these contour lines are the solutions of f(v)=0.

>plot2d("x^2+y^2",r=2,level=1); ...


For the Newton method, we need a derivative matrix.

>function df([x,y]) &= jacobian(f(x,y),[x,y])
                             [ 2 x  2 y ]
                             [          ]
                             [  x   4 y ]

Now we are ready to get an inclusion of the intersection points (the zeros of f) using the two-dimensional interval Newton method, which yields a guaranteed inclusion of one of the solutions.

[ ~0.8164965809277252,0.816496580927727~,
~0.5773502691896251,0.5773502691896264~ ]

The usual Newton method yields the solution as a limit point of the Newton iteration.

[0.816496580927726,  0.5773502691896257]

The Broyden method does the same, but it does not deliver a guaranteed inclusion. On the other hand, it does not need the derivative.

[0.816496580927726,  0.5773502691896257]
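The ordinary two-dimensional Newton iteration can be reproduced in a few lines of Python, solving the 2×2 linear system by Cramer's rule; the exact solution is x = sqrt(2/3), y = sqrt(1/3):

```python
# Newton iteration for f(x,y) = (x^2 + y^2 - 1, x^2/2 + 2y^2 - 1)
# with the Jacobian [[2x, 2y], [x, 4y]] from the notebook.
def newton2(x, y, steps=20):
    for _ in range(steps):
        f1 = x*x + y*y - 1.0
        f2 = x*x/2.0 + 2.0*y*y - 1.0
        det = 2.0*x*4.0*y - 2.0*y*x        # det of the Jacobian = 6xy
        dx = (f1*4.0*y - 2.0*y*f2) / det   # Cramer's rule
        dy = (2.0*x*f2 - x*f1) / det
        x, y = x - dx, y - dy
    return x, y

x, y = newton2(1.0, 1.0)
print(x, y)  # 0.816496..., 0.577350...
```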

Differential Equations

There are also interval solvers for differential equations. We try

    y' = y/x^2,   y(1) = 1,   for x in [1,5]

The first, very simple interval solver does not use derivatives and is very coarse. We get a very rough inclusion with the step size 0.1.

>x=1:0.1:5; y=idgl("y/x^2",x,1); plot2d(x,y,style="|"):


With a finer step size of 0.01, the inclusion is better.

>x=1:0.01:5; y=idgl("y/x^2",x,1); plot2d(x,y,style="|"):
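The flavor of such a coarse enclosure can be sketched in Python with our own crude scheme (not the algorithm of idgl): since y' = y/x^2 is positive, on each step the slope is bounded below using values at the right endpoint and bounded above implicitly; the exact solution is y = exp(1 - 1/x).

```python
import math

def enclose(h=0.01, x_end=5.0):
    # y' = y/x^2, y(1) = 1; ylo and yhi bracket the true solution
    x, ylo, yhi = 1.0, 1.0, 1.0
    steps = round((x_end - 1.0) / h)
    for _ in range(steps):
        ylo = ylo + h * ylo / (x + h)**2    # underestimates the slope
        yhi = yhi / (1.0 - h / x**2)        # implicit overestimate
        x += h
    return ylo, yhi

lo, hi = enclose()
exact = math.exp(1.0 - 1.0/5.0)
print(lo, exact, hi)  # lo <= exact <= hi
```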


The following solver uses a high order method and Maxima. We get a very good inclusion.

>y=mxmidgl("y/x^2",x,1); y[-1], plot2d(x,left(y)_right(y));

Of course, the default ode solver gets a good result too, and very quickly.


Example: Eigenvalues

Let us demonstrate the use of interval methods for the inclusion of an eigenvalue.

We first generate a random positive definite matrix.

>A=random(20,20); A=A'.A;

Then we compute its eigenvalues in the usual way.


We take the first eigenvalue.

>longformat; l=l[1]

Compute an eigenvector.


We normalize it in a special way: the first coordinate is scaled to 1.


We now set up the function

    f(x) = (A - x[1]*I).(1,x[2],...,x[n])'

where x[1] contains the eigenvalue and x[2],...,x[n] the remaining coordinates of the eigenvector.

>function f(x,A) ...
 n=cols(x); return ((A-x[1]*id(n)).(1|x[2:n])')';

The following is the starting point of our algorithm.


We could use the Broyden method to determine the zero of this function.

>yb=broyden("f",lx';A); yb[1],
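For a small concrete instance, the same setup can be written in Python: a symmetric 2×2 matrix (our own example, not the random matrix above), the eigenvector normalized to first coordinate 1, and a plain (non-interval) Newton iteration on F(lam, v2) = (A - lam*I).(1, v2)':

```python
# A = [[2,1],[1,3]] has eigenvalues (5 +/- sqrt(5))/2.
A = [[2.0, 1.0], [1.0, 3.0]]

def F(lam, v2):
    return ((A[0][0] - lam) * 1.0 + A[0][1] * v2,
            A[1][0] * 1.0 + (A[1][1] - lam) * v2)

def newton(lam, v2, steps=30):
    for _ in range(steps):
        f1, f2 = F(lam, v2)
        # Jacobian rows: (dF1/dlam, dF1/dv2) and (dF2/dlam, dF2/dv2)
        a, b = -1.0, A[0][1]
        c, d = -v2, A[1][1] - lam
        det = a*d - b*c
        lam -= (f1*d - b*f2) / det   # Cramer's rule
        v2 -= (a*f2 - c*f1) / det
    return lam, v2

lam, v2 = newton(1.4, -0.6)
print(lam, v2)  # smaller eigenvalue (5 - sqrt(5))/2 = 1.3819...
```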

For the interval Newton method, we define the Jacobian f1 of f. Its first column, the derivative with respect to x[1], is -(1,x[2],...,x[n])'; the remaining columns are the columns 2 to n of A - x[1]*I.


>function f1(x,A) ...
 return B;

Let us expand lx a bit, so that it probably contains a solution of f(x)=0.


The interval Newton method now yields a guaranteed inclusion of the eigenvalue.

>{y,v}=inewton2("f","f1",ilx';A); y[1]

v=1 means that the inclusion is verified.



We try to compute (1+1/n)^n for a large value of n.

1/n is much smaller than 1. While 1+1/n is correct to 16 digits, we lose many digits of 1/n when it is added to 1.

>longestformat; n=12346789123; 1+1/n

Thus the following result is inaccurate if we compare it to the correct result. This is called cancellation.

>(1+2/n)-(1+1/n), 1/n

Using interval arithmetic, we can see that our result is not good.


As a consequence, the following approximation of E is much worse than it should be.

>(1+1/n)^n, E

The correct error is of the order E/n, about 1e-10, but we only achieve about 1e-6. We can compare with a slow long-precision computation in Maxima.

>&fpprec:30; &:float((1+1b0/@n)^@n) // (1+1/n)^n with 30 digits

However, with a binomial expansion and an estimate for the tail of the sum, we get a much more precise evaluation of (1+1/n)^n.

>kn:=15; k=~1:kn~; 2+sum(cumprod((1-k/n)/k)/(k+1))+~0,1/kn!~
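The cancellation is easy to reproduce in Python; the standard remedy there is math.log1p, which computes log(1+x) without forming 1+x first:

```python
import math

n = 12346789123
naive = (1.0 + 1.0/n)**n                  # loses digits: 1 + 1/n is rounded
accurate = math.exp(n * math.log1p(1.0/n))

print(naive)
print(accurate)   # differs from e only by about e/(2n), i.e. about 1e-10
print(math.e)
```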

We can design a function for log(1+x) which works even for very small x. Depending on the size of x, we use a Taylor expansion of log(1+x) or the usual evaluation. The Taylor expansion can be computed by Maxima at compile time.

>function map logh (x:scalar) ...
   ## Compute log(1+x) accurately, even for very small x
   if abs(x)<0.001 then
     res=x-x^2/2+x^3/3-x^4/4+x^5/5-x^6/6+x^7/7;
     if isinterval(res) then res=res+~-1,1~*abs(x)^8/8; endif;
     return res;
   endif;
   return log(1+x);
   endfunction
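A Python analogue of logh (our own sketch, without the interval branch): for small x, an alternating Taylor expansion whose truncation error is far below double precision, otherwise the plain logarithm.

```python
import math

def logh(x):
    """log(1+x), accurate even for very small x."""
    if abs(x) < 0.001:
        # Taylor series of log(1+x); for |x| < 0.001 the first omitted
        # term x^8/8 is below 1.25e-25, far below double precision
        return x - x**2/2 + x**3/3 - x**4/4 + x**5/5 - x**6/6 + x**7/7
    return math.log(1.0 + x)

x = 1.0 / 12346789123
print(logh(x), math.log1p(x))  # agree to full precision
```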

Now we get a good evaluation of (1+1/n)^n.


And for intervals we get an exact inclusion of the value.

