Comp 210 Lab 7: Generative recursion, Homework hints

A Funny Function, Mergesort, Finding a root of a function, Homework hints

Generative recursion is simply a catch-all term for recursion that doesn't necessarily follow from the data definition. When our functions do follow the data definition, life is good: half our work is done by simply writing the template, plus we are guaranteed that our function will terminate (by the mathematical principle of induction -- see Comp 280). But sometimes life (and programming) is more complicated.


A Funny Function

Consider the function f : positive -> positive,

    f(n) = 1      if n = 1
           n/2    if n is even
           3n+1   if n is odd
While it may look like a pretty mundane function, try iterating it (applying it over and over to the previous result) and see how long it takes to reach 1 (if ever). This is known as the Collatz function, and we've seen it before as threes:

; threes : positive -> positive
(define threes
   (lambda (n)
     (if (= n 1)
         1
         (threes (if (even? n)
                     (/ n 2)
                     (+ 1 (* n 3)))))))

An example: if we start with (say) 5, we find that

    5 -> 16 -> 8 -> 4 -> 2 -> 1

So starting with an initial value of 5, iterating f leads to 1 in five steps. "Sure," you say, "that starting point of 5 happens to lead to 1 after a few steps, just lucky!" What if we start iterating from 3? From 17?

Write a program stops?, which takes a number n and returns #t if n, f(n), f(f(n)), ... ever reaches the number 1, i.e., if (threes n) stops. (What does your program do if this doesn't happen?) It should not call threes, but it will resemble it. This function is an example of generative recursion: the answer to (stops? n) might depend on generating a new instance, 3n+1, and recursing on that. Notice that unlike with structural recursion, there is no guarantee that our generative recursion will terminate.
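If it helps to see the overall shape such a function can take, here is one possible sketch that mirrors threes (your own version may well differ):

; stops? : positive -> boolean
; One possible sketch, mirroring threes: returns #t once 1 is reached.
; (Like threes, it may run forever if 1 is never reached.)
(define stops?
   (lambda (n)
     (if (= n 1)
         #t
         (stops? (if (even? n)
                     (/ n 2)
                     (+ 1 (* n 3)))))))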

Why do we choose to stop at f(1)?

Does threes stop for all inputs? (Don't feel bad if you can't answer this. No one else has yet.)

Now, write a modified function time-til-stop (or tts if you hate typing) which, instead of returning #t, returns the number of steps until 1 is reached. That is, (time-til-stop 5) = 5.

(This function can be written without using an accumulator. If you want, you could also use an accumulator for it. Really, you should have enough knowledge to write it either way.)
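For reference, here is one way the non-accumulator version could look (again, only a sketch; an accumulator version is equally valid):

; time-til-stop : positive -> natural
; Sketch without an accumulator: add one step per application of f.
(define time-til-stop
   (lambda (n)
     (if (= n 1)
         0
         (+ 1 (time-til-stop (if (even? n)
                                 (/ n 2)
                                 (+ 1 (* n 3))))))))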

Test time-til-stop on some large numbers, say 1001, 2349872349247990, and 2349872349247991.


Mergesort

Here's a more standard version of mergesort than the one you have seen:

; mergesort : list-of-numbers -> list-of-numbers
(define mergesort
  (lambda (lon)
    (if (has-zero-or-one-elements? lon)
        lon
        (local [(define len (length lon))
                (define len-half (truncate (/ len 2)))]
          (merge (mergesort (take lon len-half))
                 (mergesort (drop lon len-half)))))))
(The full code is in ~comp210/public_html/Labs/lab07/mergesort.ss.)
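If you are curious what the helpers might look like, here are sketches of take, drop, and merge (the versions in mergesort.ss may differ in detail):

; take : list-of-numbers natural -> list-of-numbers
; Sketch: the first n elements of lon (assumes n <= (length lon)).
(define take
  (lambda (lon n)
    (if (= n 0)
        empty
        (cons (first lon) (take (rest lon) (- n 1))))))

; drop : list-of-numbers natural -> list-of-numbers
; Sketch: lon without its first n elements.
(define drop
  (lambda (lon n)
    (if (= n 0)
        lon
        (drop (rest lon) (- n 1)))))

; merge : list-of-numbers list-of-numbers -> list-of-numbers
; Sketch: combines two already-sorted lists into one sorted list.
(define merge
  (lambda (a b)
    (cond ((empty? a) b)
          ((empty? b) a)
          ((< (first a) (first b)) (cons (first a) (merge (rest a) b)))
          (else (cons (first b) (merge a (rest b)))))))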

As mergesort recurs, clearly the length of the argument lon decreases, and we eventually get to our base case. That is how we convince ourselves that this function always terminates.

For the curious...

To be more accurate, we have two ways of describing what is decreasing:

  1. the base two logarithm of the amount of data: we test if it is 0 in the base case, and it decreases by one in each recursive call, or
  2. the length of the list: we test if it is 0 or 1 in the base case, and it decreases by about half in each recursive call.
Why is this distinction interesting? In Comp 280 you'll see two flavors of induction: one where the inductive case depends only on the immediately previous case (e.g., subtracting 1), and one ("strong" induction) where the inductive case may depend on any smaller case. Induction is just the mathematical/logical equivalent of recursion.


Finding a root of a function

(Labbies: Omit this if you don't have time.)

As a different example of generative recursion, consider using Newton's Method to find a root of a function--that is, where the graph crosses the x-axis.

Newton's method will be described by the TA on the board. Roughly, you start with an initial guess x0 as to where a function f crosses the x-axis. The guess is probably wrong, so generate a next guess by the following method:

  1. Calculate the slope of the function f at x0.
  2. Extend the tangent line from (x0,f(x0)) and see where that line crosses the x-axis.
  3. Use that crossing point as an estimate of where the complicated function f crosses the x-axis, making that new point x1.
  4. Repeat this process using x1, x2, etc., until your solution is "close enough".
A picture is worth a thousand words (but don't worry about all the words on that link unless you want to).

Newton's method often converges very quickly to a root, but you can also think of cases where it is foiled: it might divide by 0, it might loop or diverge, and even if it gives you a root, it might not be the root you want.
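In code, the whole process could look something like the sketch below. The standard Newton step is x1 = x0 - f(x0)/f'(x0); here the slope is approximated numerically and epsilon is an arbitrary "close enough" tolerance. The names slope, find-root, and epsilon are just this sketch's choices, not part of the assignment.

; epsilon : the (arbitrary) tolerance for "close enough"
(define epsilon 0.0001)

; slope : (number -> number) number -> number
; Sketch: numerically approximate the slope of f at x.
(define slope
  (lambda (f x)
    (/ (- (f (+ x epsilon)) (f (- x epsilon)))
       (* 2 epsilon))))

; find-root : (number -> number) number -> number
; Sketch of Newton's method: the next guess is where the tangent line
; at the current guess crosses the x-axis.
(define find-root
  (lambda (f guess)
    (if (< (abs (f guess)) epsilon)
        guess
        (find-root f (- guess (/ (f guess) (slope f guess)))))))

For example, (find-root (lambda (x) (- (* x x) 2)) 1) should converge toward the square root of 2.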

A hint and test data are available.


Homework hints

In lab, we will discuss some ideas about what states are for the Missionaries & Cannibals problem. The following are some generalities to consider:

  1. The first step is to think about what pieces of information are important to know. For this problem, what do we need to know about the missionaries, cannibals, boat, and sides of the river?
  2. The second step is to choose how to represent that data (one possible sketch follows this list).
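As a purely illustrative example (not the required representation), a state might be captured with a structure like this:

; A state records how many missionaries and cannibals are on the left
; bank, and which bank the boat is on ('left or 'right).  This is only
; one possibility.
(define-struct state (missionaries-left cannibals-left boat-side))

; e.g., a starting state with three missionaries and three cannibals
; on the left bank: (make-state 3 3 'left)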

Your M&C program needs to build up a list of moves leading to the solution. That sounds similar to our in-class example that built up a list of 'mother and 'father for a path in a family tree.