
Sets

"set", "element-of" are undefined operations (just as point/line/plane in geometry; in fact "incident on a line" is really same as "element-of"); it all comes down to being able to answer, is a particular element in the set?

Two ways of writing sets: list out the elements explicitly, or give a rule (a predicate) saying which items are in.

Examples:
the empty set, ∅.
Standard names: B (booleans), N, Z, Q, ℜ (reals); strings: Σ*.

Code considerations: implementing these two views (a sketch follows).
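One possible sketch in Scheme, with invented helper names (evens-to-ten, element-of-list?, element-of-pred?): a set given extensionally, as a list of its elements, versus intensionally, as an indicator function.

  ;; View 1: a (finite) set represented as the list of its elements.
  (define evens-to-ten (list 0 2 4 6 8 10))

  (define (element-of-list? x S)        ; S: a list of elements
    (cond ((null? S) #f)
          ((equal? x (car S)) #t)
          (else (element-of-list? x (cdr S)))))

  ;; View 2: a set represented by its indicator function:
  ;; a predicate answering "is x in the set?".  This view handles infinite sets.
  (define (even-natural? x)
    (and (integer? x) (>= x 0) (even? x)))

  (define (element-of-pred? x S)        ; S: a predicate
    (S x))

  ;; Either way, the one question a set must answer is membership:
  ;; (element-of-list? 4 evens-to-ten)   => #t
  ;; (element-of-pred? 4 even-natural?)  => #t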

Note that integer?, boolean?, etc. just define sets — in programming languages, "types" are just certain sets. (Imagine a programming language where a type is the set of primes, or the set {'black, 'white, 'blue, 'red, 'green, 'orange}. In fact, various languages allow for some such extensions.)
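For instance, using the predicate view above, such a color "type" is nothing more than a membership predicate (color? is an invented name here, not a built-in):

  ;; A "type" as a set: just its membership predicate.
  (define (color? x)
    (if (member x '(black white blue red green orange)) #t #f))

  ;; (color? 'blue)  => #t
  ;; (color? 17)     => #f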

Subsets: A ⊆ B means: for all x, x ∈ A implies x ∈ B.
Notation reminiscent of "≤".

Equal sets (two equivalent characterizations):

  (i)  x ∈ A iff x ∈ B.
  (ii) A ⊆ B, and B ⊆ A.
The first is clearly the correct def'n. The second seems correct, and has the advantage that it translates directly into code:
  (define (set=? A B)
     (and (subset? A B) (subset? B A)))
(note how the first characterization doesn't translate immediately into code, w/o some looping over all elements x.)

BUT… while (i) and (ii) really do seem equivalent after a moment's reflection, it used to seem clear that 90 ≠ 100. Can we prove that (i) and (ii) are equivalent?

Not in text:
Theorem: statements (i) and (ii) are equivalent. Proof: We have to show two things: that (i) implies (ii), and that (ii) implies (i).

Okay, this was written out in great detail, with constant reminders of what was being assumed and what was to be shown. But step back and observe the structure of the proof: showing two parts of "(i) equivalent to (ii)" (each of which in turn required an "if" and an "only if" direction).

Granted, this is not an earthshaking fact (unlike 90=100, which not many people realize before seeing that proof). However, there is a tangible difference between the two definitions: If we try to write code for def'n (i), it's not clear how to test over all x. But the second translates immediately into two calls to subset? (sketched below).
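For concreteness, one way subset? might go for the list-of-elements representation (a sketch; it uses the built-in member as the element test, and it only makes sense for finite sets, since it loops over every element of A):

  ;; (subset? A B): is every element of A also an element of B?
  ;; Only sensible for the finite, list-of-elements representation.
  (define (subset? A B)
    (cond ((null? A) #t)                          ; nothing left to check
          ((member (car A) B) (subset? (cdr A) B))
          (else #f)))

With this in hand, the set=? above runs as written on that representation.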

Def'n: proper subset: a subset that is not equal to the whole set.
Def'n: nontrivial subset: a non-empty proper subset.

With our two implementations of sets:

How can we restrict our functions, to stay tractable? By and large: to finite sets. (We leave predicates like even? and prime? as predicates, not as sets we can call union and intersect on.)
Special cases:

Set-constructor operations

Making new sets out of existing ones.
If we do use our original data def'n and support infinite sets (at the expense of set=?, etc.), then write union, intersect, cross-prod, power-set. (Leave as exercise? A partial sketch follows.)
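One way union, intersect, and cross-prod could go for the indicator-function view (a sketch; power-set is genuinely harder and left aside, and the example predicates natural? and even-int? are invented here):

  ;; Sets as indicator functions: combining sets is just combining predicates.
  (define (union A B)     (lambda (x) (or  (A x) (B x))))
  (define (intersect A B) (lambda (x) (and (A x) (B x))))

  ;; Cross product: its elements are pairs (x . y) with x in A and y in B.
  (define (cross-prod A B)
    (lambda (p) (and (pair? p) (A (car p)) (B (cdr p)))))

  ;; Example with two infinite sets (exactly why set=? is now out of reach):
  (define (natural? x)  (and (integer? x) (>= x 0)))
  (define (even-int? x) (and (integer? x) (even? x)))

  ;; ((union natural? even-int?) -4)      => #t   (-4 is an even integer)
  ;; ((intersect natural? even-int?) -4)  => #f   (-4 is not a natural)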

Our naive set theory:

Despite implementation problems of inf. sets, mathematically easy: as long as you can tell if a given item is in the set or not (whether or not it's convenient to compute it), you completely understand the set. Knowing the set's "indicator function" is tantamount to knowing the set. ("isomorphic")

Not in text:
Consider the set whose indicator function is the constant function true — (λ (elt) true).
What set is this? We'll call it SuperU (for "universe"?). Note that not only are 4 and my car elements of SuperU, but so is the entire set ℜ, as well as the three-item set { {}, ℜ, ℜ∪Z }. (It's fine to have sets that contain sets, just as lists can contain lists.)
Somewhat disturbingly though, SuperU is itself contained in SuperU! A set containing itself?! A bit dizzying, but we'll let it in the door.

Uh-oh, logician/philosopher Bertrand Russell (around 1900) sees SuperU, and senses he can do something wicked.

First, he suggests SuperB ("bertrand"), the set of all sets which contain themselves. The indicator function is even easy to code up: (define (SuperB elt) (element-of? elt elt)) For instance, SuperU is one of the elements of SuperB. (Think of "a book which lists all the books which mention their own title".) This discomfits us, but we don't kick Bertrand out; maybe if we just ignore him he'll go away and stop giving us a headache. [Interesting: does SuperB contain itself? Either answer is consistent, but it tips us off that our set's "easy condition" doesn't seem to fully specify the set.]

Not so lucky: encouraged by this success, Bertie cranks it up a notch, and suggests the set SuperR, of all sets which don't contain themselves. Again the indicator function is easy: (define (SuperR elt) (not (element-of? elt elt))) (Think of "a book which lists all the books which don't mention their own title".) SuperR seems to include relatively normal elements and sets, but our head is pounding more than ever. It's too late, for Bertrand jumps up and cackles as he asks:

Does SuperR contain itself?
"Well", we say, flustered: "Either it contains itself or it doesn't — after all that's all there is to sets: they either contain a given item or not. So, let's see…" "Aaaahhh!" And we are carted away to the asylum, as Bertrand sits in our favorite easy chair, drumming his fingers together, saying "Excellent…"

Indeed, this was very disturbing news to mathematicians around 1900: the concept of a set, which underlies all of mathematics, has a paradox! Within mathematics, there could be no worse possible catastrophe. This began a program to re-vamp set theory.

Russell's own approach: a "tiered" classification of sets:
sets of atomic elts, sets containing sets of elts, …
It can be finagled to work, but is very unsatisfying.

Used today, when a rigorous foundation is needed: a formalization, ``axiomatic set theory''. But for our purposes, we'll just use ``naive set theory'' —

Thus we disallow SuperU, even though it has the easiest indicator function of all.

Yes, this is deeply connected to the halting problem (though the two were discovered some 30 years apart).

Reading: §§ 1.6 (sets), 1.7 (set ops), 1.8 (functions)
