Below are my reminders for lecture.
A more fleshed-out version of most of these
topics can be found in
previous semesters' notes:
(pdf)
Look at the code for each of the programs we wrote.
Is there a pattern? A template, to help guide you?
Well, barely:
- base case (when are we trivially done?)
- else,
  + subproblems (what smaller problems would help us?)
  + combine (how to combine results from sub-problems)
- NEW: provide Termination argument (when possible! -- see threes, below)
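As a sketch, the pattern might be instantiated like this, using Euclid's gcd
as a stand-in example (not one of our class programs) since it shows all the
pieces in a few lines:

```racket
;; Generative-recursion template, instantiated with Euclid's gcd
;; (a stand-in example -- not one of the programs from class):
;; - base case: (= b 0) -- trivially done, answer is a
;; - subproblem: (my-gcd b (remainder a b)) -- a *generated* smaller problem
;; - combine: nothing extra to do; the subproblem's answer IS the answer
;; - termination: (remainder a b) < b, so the 2nd argument strictly
;;   shrinks toward 0, and a non-negative integer can only shrink
;;   finitely often
(define (my-gcd a b)
  (cond [(= b 0) a]                           ; base case
        [else (my-gcd b (remainder a b))]))   ; generate subproblem, recur
```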
For each of the programs we wrote,
what is the base case? subprobs? combination?
Termination arg?:
+ hilo-v2: Argument already provided in the code for hilo.
  Recall that it was tricky; the handwaving "we always recur on
  a smaller interval" wasn't really true for our first version!
+ sierpinski argument of termination:
  (a) The function too-small? will end the recursion when any of a
      triangle's sides is smaller than some fixed positive number.
      [from code for too-small?, and sierpinski's base case.]
  (b) On every recursive call, the size of each side of the triangle
      becomes half the size [based on a high-school geometry th'm].
  (c) Repeated halving will get smaller than any fixed positive number.
      [from math class]
  Thus, every recursive call will terminate.
  [Note that if too-small? stops only on size-zero triangles,
  then this argument fails; indeed the algorithm also fails to terminate.]
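The heart of step (c) can be checked concretely; halvings-until-too-small is
an illustrative helper (not from the sierpinski code) that counts how deep
the recursion the argument describes can go:

```racket
;; Illustrative helper (not from the sierpinski code): how many times can
;; a side of length size be halved before dropping below cutoff?
;; Because cutoff > 0, repeated halving must eventually get below it,
;; so this function -- like sierpinski -- terminates.
(define (halvings-until-too-small size cutoff)
  (cond [(< size cutoff) 0]                               ; the too-small? case
        [else (add1 (halvings-until-too-small (/ size 2)  ; each call halves
                                              cutoff))]))
```

Note that calling it with cutoff 0 would loop forever -- exactly the failure
described in the bracketed note above.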
+ For mergesort the argument is similar to hilo's:
  (a) We terminate on lists of size 0 or 1 [by the code's base case].
  (b) On each recursive call, the list is at least 1 element shorter.
      [We only recur if the length is >= 2, so unzip returns
      two lists each with length >= 1, so each half is missing at least
      one element of the original (it's in the other half).]
      Note: this wouldn't hold if we tried unzip'ing a list of length 1!
  (c) Therefore the recursive calls always terminate;
      the only extra work we do is merge (which also terminates
      [it is structural induction (template)]), so we also terminate.
  Be careful not to assert that mergesort recurs on lists that
  are <= half the original size, because that's not always true!
  (Whenever the original list has odd length.)
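A minimal version matching the argument above (a sketch; the in-class code
may differ in details):

```racket
;; unzip : list -> (list list list)
;; Deal the elements alternately into two lists.
(define (unzip lst)
  (cond [(empty? lst) (list empty empty)]
        [else (let ([rest-unzipped (unzip (rest lst))])
                (list (cons (first lst) (second rest-unzipped))
                      (first rest-unzipped)))]))

;; merge : sorted-list sorted-list -> sorted-list
;; Structural recursion on both lists, so it terminates.
(define (merge as bs)
  (cond [(empty? as) bs]
        [(empty? bs) as]
        [(<= (first as) (first bs)) (cons (first as) (merge (rest as) bs))]
        [else (cons (first bs) (merge as (rest bs)))]))

;; mergesort : list-of-numbers -> sorted list
(define (mergesort lst)
  (cond [(<= (length lst) 1) lst]              ; (a) base case: size 0 or 1
        [else (let ([halves (unzip lst)])      ; (b) only reached when length >= 2,
                (merge (mergesort (first halves))   ; so each half is strictly shorter
                       (mergesort (second halves))))]))
```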
Here's threes:
(define (threes n)
  (cond [(= n 1) true]
        [else (threes (if (even? n)
                          (/ n 2)
                          (+ (* 3 n) 1)))]))
; example hand-evaluation:
(threes 3)
= (threes 10)
= (threes 5)
= (threes 16)
= (threes 8)
= (threes 4)
= (threes 2)
= (threes 1)
= true
Does this fit the pattern for the template (structural recursion)?
For generative recursion?
What about an argument of termination?
For threes, termination is difficult to show -- in fact,
it's a longstanding open question in mathematics (the Collatz conjecture)!
It happens to terminate for each number anybody's tried
(and waited for an answer for ...), but that's not a proof.
To do:
Modify threes to return the number
of iterations until 1 is reached.
(Hint: while it's generative recursion,
and matches the gen. recur. template, the rules for base case
and combining subproblems can still be simple
(and still in the flavor of the structural-recursion templates).)
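One possible solution sketch (a spoiler for the to-do): base case returns 0,
and we combine by adding 1 to the subproblem's answer -- the structural-style
combine the hint promises, even though the subproblem is still generated:

```racket
;; threes-count : positive-integer -> natural
;; Same generative recursion as threes, but the base case returns 0
;; and each step adds 1 to the subproblem's count.
(define (threes-count n)
  (cond [(= n 1) 0]
        [else (add1 (threes-count (if (even? n)
                                      (/ n 2)
                                      (+ (* 3 n) 1))))]))
```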
New problem:
quicksort
Invented by Tony Hoare around 1960; he'd been a classics major at Oxford.
- first, just filter > and <=
[note to prof:
if base case is length <= 1,
then this buggy version will still work on *some* inputs!
(A termination condition of empty will work later,
but would always expose the bug immediately at this point.)
Should probably start with length<=1, show a falsely reassuring
test case, then show bug.
In the very final version, show that base-case empty also works.]
- then filter < and >, and include pivot manually
- (Mention trying: filter <=,
and place pivot to assure both partitions smaller)
- finally, filter =s.
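The final three-filter version might look like this (a sketch; the in-class
code may differ). Duplicates of the pivot land in the =s group, so we never
recur on them:

```racket
;; quicksort : list-of-numbers -> sorted list
;; Final version: three filters, so every copy of the pivot goes in the
;; middle group and both recursive calls are on strictly shorter lists.
(define (quicksort nums)
  (cond [(empty? nums) empty]
        [else (let ([pivot (first nums)])
                (append (quicksort (filter (lambda (n) (< n pivot)) nums))
                        (filter (lambda (n) (= n pivot)) nums)
                        (quicksort (filter (lambda (n) (> n pivot)) nums))))]))
```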
Termination argument (similar to mergesort's):
(a) We don't recur on empty lists [by the code]
(b) When we recur, the only other work we do is
append and some filters; these terminate [by structural recursion].
(c) When recurring, we always recur on a shorter list:
in particular, we never recur on a list containing
the "pivot" value (first nums).
Thus the recursive call will terminate, and so will this call.
Notes: - we saw in lab some inputs for which quicksort does poorly;
(there is a fix, though sometimes it will still do poorly)
- in-depth studies actually use quicksort until the list is ~ length 10,
and then switch over to insertion sort.
- remember that correctness is more important than efficiency;
we only dwell on efficiency when it's a frequently-called subroutine.
(Even experts can't reliably predict which subroutines are frequently called.)
- append still takes some time;
there is a different way of storing numbers (arrays)
where quicksort is clever and gets the append for free.
So, lists aren't quicksort's forte (though it still does reasonably well).
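To show the array idea concretely, here's an illustrative sketch with Racket
vectors (not code from class): partitioning swaps elements in place, so no
append is ever needed.

```racket
;; quicksort-vec! : vector-of-numbers -> sorted vector (same vector, mutated)
;; Illustration of the array idea: partition in place around a pivot,
;; then recur on the two index ranges -- no append.
(define (quicksort-vec! v)
  (define (swap! i j)
    (define tmp (vector-ref v i))
    (vector-set! v i (vector-ref v j))
    (vector-set! v j tmp))
  ;; Lomuto-style partition of v[lo..hi]; returns the pivot's final index.
  (define (partition! lo hi)
    (define pivot (vector-ref v hi))
    (define i lo)
    (for ([j (in-range lo hi)])
      (when (< (vector-ref v j) pivot)
        (swap! i j)
        (set! i (add1 i))))
    (swap! i hi)
    i)
  (define (sort! lo hi)
    (when (< lo hi)
      (define p (partition! lo hi))
      (sort! lo (sub1 p))        ; elements < pivot
      (sort! (add1 p) hi)))      ; elements >= pivot (pivot itself excluded)
  (sort! 0 (sub1 (vector-length v)))
  v)
```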