(Or, Why Computers Don't (Always) Add Correctly.)
Introduction, Exact numbers, Floating-point numbers, Overflow, Underflow, Round-off, Adding numbers of widely different magnitude, Tradeoffs
Read and work through the following (mostly on your own), to understand how you can find a list of three numbers which, when the computer adds them, gives a sum that is incorrect by as much as you want.
What is a trillion plus 17?
My niece can answer this question correctly (isn't she precocious?). However, many languages and calculators can't -- their built-in "addition" functions return a trillion as the answer. Similarly, many computer languages have division functions which claim that one divided by three is 0.3333333333333. Not far off, although they are then put in a bit of a conundrum -- is 0.3333333333333 + 0.3333333333333 + 0.3333333333333 exactly 1? Scheme happens to handle these two cases correctly, but it too can give wrong answers: for instance, my brilliant niece outsmarts both Scheme and my calculator when asked what the square of the square root of 3 is.
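You can see all three of these in Scheme directly (the printed values here are typical; your version may format them slightly differently):

(+ 1000000000000 17)  ; = 1000000000017 -- exact, no digits lost
(+ 1/3 1/3 1/3)       ; = 1, exactly
(sqr (sqrt 3))        ; = something like #i2.9999999999999996 -- not 3!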
For fun: evaluate
(/ 1 183)
and click on the result's ellipses a few times.
(This behavior is the default in the student languages;
in the Language... dialog you can show options
to decide how to display rationals.)
Is this some inherent difficulty that computers can't handle? Of course not -- Scheme demonstrates that computers can handle rational numbers exactly. Further, some languages, like Mathematica, can even handle irrationals exactly.
So, why even tolerate inexact numbers? After all, if you fly on one of the new airplanes -- which are largely controlled by a computer -- you might have a particularly vested interest in knowing that the computer really gets the correct answer to 185 * (1/185). It is an accuracy-speed tradeoff: Scheme represents all rational numbers exactly, actually keeping track of the numerator and denominator. However, for irrational numbers, it gives up and just keeps track of an approximation -- up to (say) ten decimal places. (Again, this is not any sort of inevitable necessity of dumb computers -- it's just a by-product of language designers pre-emptively optimizing our calculations for us, introducing errors.)
However, occasionally mathematical programs in Scheme can be unacceptably slow because they calculate exact answers. For example, consider finding the integral of a curve using the standard beginning calculus method, adding the areas of several hundred billion small trapezoids under the curve. Scheme would accurately calculate these areas in a result like 3141592653589793238462643383279/2718281828000000000000000000000. Working with such fractions can be slow, so Scheme can instead use floating-point arithmetic to return an approximate result, like #i1.155727350137, much more quickly. (The #i stands for inexact.) (A partial explanation: Consider multiplying two fractions. This really means multiplying twice, once each for the numerators and denominators, plus also simplifying the result.) The idea is that the answer might be inexact, but hopefully the error is small. Today's lab discusses this trade-off between efficiency and accuracy, and also demonstrates how sometimes computers give answers that are just flat-out wrong.
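For instance, here is the hidden work behind one exact multiplication (the particular fractions are just made-up examples):

(* 22/7 355/113)    ; = 7810/791: two integer multiplies (22*355 and
                    ; 7*113), plus a gcd computation to simplify
(* #i3.143 #i3.142) ; one hardware multiply -- nothing to simplify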
Keep in mind that actually, for most real-world cases, exact arithmetic doesn't cause a slowdown. I once wanted to do some calculations about probabilities in a card game with two decks, and knew that my naive math formulas would generate intermediate fractions with denominators of 104!, and be doing lots of addition with these fractions, and I didn't want this to slow down my program. (After all, 104! is a number that literally puts the term "astronomical" to shame.) So, I wrote my program in a smart way to do cancellations early, so that it would only add numbers smaller than the width of the universe as measured in proton-widths. After an hour of programming, I had all the functions debugged, happily computed my answer (which came back immediately), and could get back to playing my card game. Out of curiosity, I also wrote the straightforward program in 5 minutes, and ran that -- and after running for about a half second, that program confirmed my answer! The upshot is, by prematurely worrying about optimization, I wasted 54 minutes and 59.5 seconds of my time, on a 5 minute problem. (Had I wanted to know about problems involving seven decks, my "fast" program might have been necessary, but alas I wasn't interested in that problem.)
As mentioned before, Don't Pre-emptively try to Optimize. Write your program in the clearest, simplest, most direct way first; only if you know it isn't fast enough should you profile it to see where the bottleneck is, and take action.
While there is no bound on how big or tiny a mathematical number can be, if a computer decides to represent numbers as a fixed number of digits, there will be a bound on how big or tiny a number it can represent that way.
You have probably already noticed that Scheme uses "arbitrary precision arithmetic": It can represent very large integers accurately. For instance, ten to the eightieth power plus seventeen can be typed directly into Scheme, 100000000000000000000000000000000000000000000000000000000000000000000000000000017, and the value stored exactly. The only limit to such numbers in Scheme is the actual amount of memory of the computer. To do: Try it!
In order to represent these large, exact-precision integers, Scheme is essentially keeping a list of the digits. When it adds or multiplies two exact-precision numbers together, Scheme uses the grade-school algorithms.
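Here is a sketch of that idea (an illustration only, not Scheme's actual internal code): a number is a list of decimal digits, least-significant first, and addition walks down both lists, carrying as it goes.

; add-digits : (listof digit) (listof digit) digit -> (listof digit)
; Adds two numbers represented as digit lists (least-significant
; digit first), plus a carry digit, grade-school style.
(define (add-digits ds1 ds2 carry)
  (cond
    [(and (empty? ds1) (empty? ds2))
     (cond [(zero? carry) empty]
           [else (list carry)])]
    [(empty? ds1) (add-digits (list 0) ds2 carry)]
    [(empty? ds2) (add-digits ds1 (list 0) carry)]
    [else
     (cons (remainder (+ (first ds1) (first ds2) carry) 10)
           (add-digits (rest ds1) (rest ds2)
                       (quotient (+ (first ds1) (first ds2) carry) 10)))]))
; Example: (add-digits (list 9 9) (list 2) 0) = (list 1 0 1),
; i.e. 99 + 2 = 101 (digits listed least-significant first).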
There is an alternate way to represent numbers: one can assume that all integers have (say) 9 digits. Knowing the size of all numbers, we can use better algorithms for addition and multiplication, and implement these algorithms in hardware. This makes addition and multiplication of numbers simpler and faster, and might be sufficient for most programmers' needs. Of course the drawback is that only about 10^9 numbers can be expressed in this manner.
This fixed-size representation doesn't apply only to integers,
but can be applied to fractional numbers.
Most computers nowadays represent numbers by using
about 15 significant digits.
In order to express fractional amounts,
scientific notation is used. For instance,
#i6.022e23
is 6.022 with the decimal place shifted
23 places to the right (i.e., 602.2 sextillion).
Just as the mantissa (the "6.022" part) is restricted to about 15 digits,
the exponent (the "23" part) is also restricted in size; we'll see how
big it can be in a moment.
This method of representing numbers is called
floating-point representation,
since the decimal point "floats" among the
15 significant digits, depending on the exponent.
Notice that inside the computer, each floating-point number needs
the same, fixed number of digits to be stored. This convenience
sometimes makes up for the fact that such numbers might only be
approximations to the numbers we're actually interested in.
Note: The actual size restrictions are on the internal binary number representation, not on the printed decimal number representation.
In Scheme, floating-point numbers are called "inexact" numbers because they aren't exact (duh!). In some languages, they are referred to as "reals", because they approximate the real numbers, or as "floats", short for floating-point.
In addition to arbitrary-precision rational numbers,
Scheme also uses floating-point numbers, whenever a
number is typed in with a decimal point.
To do:
Evaluate (/ 2 37)
and compare it to (/ #i2.0 #i37.0).
(If you really need them,
there are also functions exact->inexact
and
inexact->exact to convert between the two formats.)
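(You should find that the first yields the exact rational 2/37, while the second yields an inexact decimal approximation, something like #i0.0540540540540540... -- the exact digits printed depend on your system.)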
Using fixed-width numbers up to a billion
might be fine for most everyday calculations, but what
happens when you try to add a couple large 9-digit numbers and
get a 10-digit number?
The result can no longer be stored as an exact integer;
this is called overflow.
What happens next depends on the particular system.
The program might halt; it might proceed with
the result tagged as a special, anomalous value,
perhaps written +Infinity;
or (most frightful yet)
it might proceed with some totally spurious number without
even informing anybody that all subsequent calculations are garbage.
Just as with integers, it is possible to think of numbers whose exponent is larger than can be stored in a computer's floating-point number; a calculation whose answer exceeds the largest expressible number is again said to overflow.
For any number x
, if we double x
and then
divide by two, hopefully we get x
back.
But here's where overflow can get in the way:
If x
is large enough that
doubling it causes an overflow, then
doubling x
followed by halving the result will likely not
give us x
back.
Let's put on our safety goggles and do some experiments
to find out roughly how big x
needs to be
for this to happen.
To do: Let's take a number, and keep doubling it until (using this approach) we find an overflow:

(define (find-huge x)
  (cond [(= x (/ (* x 2) 2)) (find-huge (* x 2))]
        [else x]))
(define huge (find-huge #i1.0))

What is the value of huge? What is the representation of (* 2 huge), which can't be represented accurately?
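(If your machine uses the standard IEEE double-precision format -- most do -- huge should come out around #i9e+307, i.e. 2^1023, and (* 2 huge) should print as the special value +inf.0.)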
Underflow occurs when a number is too tiny to be expressed as a floating-point number; i.e., when a calculation produces a non-zero number which is too close to zero to be represented. Q: What number do you get when you type in the number #i1e-400? Underflow is not the same as "overflow in the negative direction", that is, obtaining a very large negative number that cannot be represented.
To do: Write a function find-tiny which finds a number which is within a factor of two of the smallest (positive) representable floating-point number. This should be similar to find-huge.
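If you get stuck, here is one possible shape for it (just a sketch -- yours may well differ):

(define (find-tiny x)
  (cond [(zero? (/ x 2)) x]
        [else (find-tiny (/ x 2))]))
(define tiny (find-tiny #i1.0))
; tiny is the last value whose half rounds all the way down to zero.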
There is an IEEE floating-point standard which also incorporates gradual underflow, a scheme to underflow "softly" from a non-zero value to zero (in some situations this difference is a qualitative change, as well as a quantitative one).
Sometimes, when dividing one number with 10 decimal places by another with 10 decimal places, the result requires 11 or more decimal places to store the answer. No problem: you round the last digit, and you have an answer that's pretty darn close to exact. How do calculators know that 3 * (1/3) is exactly one, yet punching in 3 * 0.333333333 yields 0.99999999? Why don't they goof up? It turns out, they are actually storing an extra digit or two, which they never display (nor let people type in). However, a digit or two isn't always enough -- as this example will show. The problem is, round-off error can accumulate.
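In Scheme you can see both sides of this (the second result assumes the usual floating-point behavior):

(* 3 1/3)                ; = 1, exactly -- exact rational arithmetic
(* #i3.0 #i0.333333333)  ; = roughly #i0.999999999 -- the error survives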
To do:
The following function adds n copies of m:

; add-copies : natnum num -> num
; Returns the sum of n copies of m.
(define (add-copies n m)
  (cond [(zero? n) 0]
        [else (+ (add-copies (sub1 n) m) m)]))

What is the difference between the following, for example?

(add-copies 185 1/185)
(add-copies 185 (/ #i1 #i185))
Write a similar function to subtract n copies of m from 1 and try similar examples. What are the results?
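One possible version (the name sub-copies is just a suggestion):

; sub-copies : natnum num -> num
; Returns 1 minus n copies of m.
(define (sub-copies n m)
  (cond [(zero? n) 1]
        [else (- (sub-copies (sub1 n) m) m)]))
; Expect (sub-copies 185 1/185) to be exactly 0, but
; (sub-copies 185 (/ #i1 #i185)) to leave a tiny non-zero residue.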
If we are using only 15 significant digits, then we run
into problems when adding numbers which vary by more than a
factor of 10^16:
#i1.0e16 + 1
should be #i1.0000000000000001e16,
but if you only have 15 digits the closest answer is #i1.0e16,
which is what you started with!
You might think this error is reasonable; being wrong by one part
in 10^16 (ten million billion)
is admittedly close enough for government work.
However, small errors can accumulate.
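You can check the claim above directly; on machines with the usual 15-to-16-digit floats this should evaluate to true:

(= #i1e16 (+ #i1e16 #i1.0))  ; adding 1 changed nothing at all!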
To do/Q:
Copy/paste the contents of
/home/comp210/public_html/Labs/Lab14/sum.ss
into DrScheme.
Use the functions sum-r
and sum-l
(defined in sum.ss) to
add some example lists of numbers:
list1, list2, and list3.
These are all different arrangements of the same numbers, so you
would expect the same result for each. Is that what you get?
(A guess at what sum-l and sum-r might look like appears after the janus example below.)
Use the function order (defined in sum.ss) to order the numbers in increasing magnitude. (E.g., (order list1). Since the lists contain the same numbers, it doesn't matter which you use.) What's the sum of this list?
Does ordering elements by magnitude always solve the problem?
Compute sums of the following list, named janus after
the two-faced Roman god:

(define janus (list 99 #i3.2e+270 -99 #i-2.4e+270 1234 4 #i-8e+269 -4))
; (sum-r janus)         = ??
; (sum-l janus)         = ??
; (sum-r (order janus)) = ??
; (sum-l (order janus)) = ??
; The correct sum       = ??

What's going on? Do you still trust your computer to give you reasonably correct answers as much as you did 30 seconds ago?
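For reference, here is a guess at what sum.ss contains -- an assumption for illustration only; the real definitions may differ:

; sum-l : (listof num) -> num
; Adds the numbers front-to-back (left-to-right), via an accumulator.
(define (sum-l-acc alon acc)
  (cond [(empty? alon) acc]
        [else (sum-l-acc (rest alon) (+ acc (first alon)))]))
(define (sum-l alon) (sum-l-acc alon 0))

; sum-r : (listof num) -> num
; Adds the numbers back-to-front (right-to-left): the deepest,
; first-performed addition involves the last elements.
(define (sum-r alon)
  (cond [(empty? alon) 0]
        [else (+ (first alon) (sum-r (rest alon)))]))

; (order is presumably just a sort by increasing absolute value.)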
For the curious... Can you modify the small-magnitudes-first rule of thumb to better handle all cases?
For the curious... There are some subtleties with rounding. For example, if the answer is halfway between the two choices, which way do you round? E.g., should -2.5 be rounded to -2 or -3? Methods such as "always round down" or "always round away from 0" suffer from the flaw that all round-off errors, though individually tiny, may accumulate in the same direction, leading to results that drift away from the true answer. One solution is to round such ties to the nearest even number. Hopefully (though not guaranteed) this allows two round-off errors to cancel each other half the time, and therefore the total round-off error grows much more slowly. (Will it still accumulate at all, on average?)
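In fact, Scheme's built-in round uses exactly this round-to-even rule on halfway cases:

(round #i2.5)   ; = #i2.0 -- rounds to the even neighbor
(round #i3.5)   ; = #i4.0 -- again, to the even neighbor
(round #i-2.5)  ; = #i-2.0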
So if Scheme has arbitrary precision integers and rationals, why do we ever bother with floating-point arithmetic? Like any interesting choice, there is a trade-off between the two alternatives (otherwise, it wouldn't be interesting). Using arbitrary precision arithmetic can sometimes slow us down, and often the added accuracy isn't enlightening.
To do:
Consider writing our own version of expt
to raise
one number to another, where the second number is a nice, non-negative
integer. You should be able to whip this out in no time:

; my-expt : number natnum -> number
; Computes n to the given power.
(define (my-expt n power)
  (cond [(zero? power) 1]
        [else (* n (my-expt n (sub1 power)))]))
Now, we try our function on two versions of the same number, one inexact and one exact:
(my-expt (+ 1 #i1e-12) 30)        ; 1e-12 is inexact (floating-point)
(my-expt (+ 1 (expt 10 -12)) 30)  ; (expt 10 -12) is exact

Which answer is more useful to you? (Check out the interesting symmetry of the numerator in this second version.)
Try similar examples with a much larger power. Is the result familiar?
Computers must give consistent results for even bizarre math questions like 1/0.
To do:
Try examples like (/ 1 0), (/ #i1.0 #i0.0), (/ #i-1.0 #i0.0), and (/ #i1.0 #i-0.0). Note that we can tell the difference between #i0.0 and #i-0.0!
What about (/ 0 0) and (/ #i0.0 #i0.0)? The latter returns something read as "Not a Number".
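For the record, here is what you should typically see (assuming IEEE-style behavior, as in DrScheme):

; (/ 1 0)           signals an error -- exact division by zero
; (/ #i1.0 #i0.0)   = +inf.0
; (/ #i-1.0 #i0.0)  = -inf.0
; (/ #i1.0 #i-0.0)  = -inf.0
; (/ #i0.0 #i0.0)   = +nan.0, read "not a number"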
In summary -- as you fly home for the holiday, you now know that your airplane is using fixed-size arithmetic to control the plane, and is blindly hoping that the round-off error never becomes significant.
To understand more about floating-point numbers, take Comp 320 or Elec 320. To understand more about the consequences of imprecise arithmetic, and how to still compute valid results, take Caam 353.