Carnival of Mathematics 180

In which this blog hosts the traveling carnival, founded by the fine folks at aperiodical.com

This is number 180 in the series, which I suppose means this has been going on for some 15 years. Tradition has it that I should give some interesting facts about this number. The prime factorization is \(2^2\cdot 3^2\cdot 5\), and so our hero has more divisors (18 of them) than any smaller positive integer (Wikipedia tells me such numbers are called highly composite). It fits neatly between the twin primes 179 and 181. It is the number of degrees in half a circle, the sum of the angles of a triangle (in the Euclidean plane, of course), the sum of two squares (144 and 36), the sum of six consecutive primes (19+23+29+31+37+41), and a 61-gonal number.
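
If you don't trust Wikipedia, a brute-force check is quick; the sketch below (mine, not part of the carnival entry) verifies that no smaller positive integer has as many divisors.

```python
# Verify that 180 is highly composite: more divisors than any smaller number.
def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

d = num_divisors(180)                              # (2+1)(2+1)(1+1) = 18
assert all(num_divisors(k) < d for k in range(1, 180))
print("180 has", d, "divisors, more than any smaller positive integer")
```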

As I write this, the world is consumed by the COVID-19 pandemic. My university, like most others in the United States, has moved its classes online for the remainder of the spring term. My local government (Alachua County, Florida) has issued a shelter-in-place order, designed to minimize social contact. I am working from home, as is my wife, and our son is home from his North Carolina university, completing his courses online. I have purchased plenty of dry goods (rice and beans, mostly), and we’re all trying to make the best of it. I hope that, wherever you are reading this, you and yours are safe and healthy.

If you’re anything like me, you’ve been reading lots of articles about modeling the spread of the coronavirus. Some of these are rather grim, but very informative. This one, by Tomas Pueyo, made the rounds on Twitter; it’s worth a read. One of my colleagues, a mathematical biologist, shared this one by Alvin Powell, about the idea of on-again/off-again social distancing as a strategy. Jay Daigle wrote a nice explainer of the SIR model for epidemics.

But enough of that. I assume you read the news and have seen the predictions. How about some non-pandemic math? I received several submissions:

  1. We all know the apocryphal story of a young Gauss correctly finding the sum of the first 100 positive integers in a few seconds. Tom Edgar and Enrique Trevino shared a collection of proofs of the formula for the sum of the first \(n\) positive integers.

  2. Benjamin Leis shares a good activity relating Pascal’s triangle to Vandermonde’s Identity.

  3. From Games4Life, a connection between tessellations and the Fibonacci sequence.

  4. The Klein bottle, via JA Sites.

  5. Dynamical systems are fun. Here’s an introduction to stability by Ari Rubinsztejn.

  6. And, I want to plug this one, by my podcast co-conspirator Evelyn Lamb (which brings to mind this great song).

  7. Quanta Magazine had some nice articles this past month. Anyone who has ever visited the Mathematisches Forschungsinstitut Oberwolfach knows what a special place it is; this article by Kevin Hartnett sums it up nicely. Susan D’Agostino had a nice interview with Ronald Rivest. Erica Klarreich speculated about the geometry of the universe.

If you’re into audio and video, may we suggest the following?

  1. (ahem) My Favorite Theorem featured Ben Orlin in March (sorry for the shameless self-promotion).

  2. Grant Sanderson (3blue1brown) released an excellent video about COVID-19.

  3. Rob Ghrist released the first part of his video series on Applied Dynamical Systems.

That’s it for this edition of the Carnival. Carnival 181 will be hosted by Ben at Math Off the Grid; submission instructions available at the link to The Aperiodical above. In the meantime, stay home, wash your hands, and keep you and yours safe. Be well, everyone.

In memoriam: Peter Fletcher (1939-2019)

In 1987 I went off to Blacksburg, VA, to major in mathematics at Virginia Tech. My goal: to go on to a PhD and become a college professor. I had no idea that this was pretty ambitious for a first-generation student; I merely had the supreme confidence of an 18-year-old who knew he loved math and figured it would all work out. When people at home would ask what it took to get a PhD in math I naively answered, “write a calculus book, I guess.” (See, even then calculus was held up as the end-all-be-all of mathematics to school kids; what could possibly lie beyond the “most advanced math” there was?)

My freshman year was complicated by the fact that I was on an Air Force ROTC scholarship and was therefore a member of the Virginia Tech Corps of Cadets. The Corps is a military-school environment embedded within the larger campus; we wore a common uniform to class every day, rose early for formation, and engaged in physical training and military drill in the afternoons. I didn’t like it very much, but one benefit was the enforced quiet hours in the dorms from 7:00 to 11:00 pm, which meant I had no excuses for not keeping up with my homework. That year I took the honors second-year math sequence—multivariable calculus, linear algebra, differential equations—with Bill Floyd (a student of Thurston). I did really well in that class (Bill is an amazing teacher).

Sophomore year was a bit rougher. Foundations of mathematics (set theory, logic, and beginning group theory) introduced me to what it was like to do more advanced math. I worked really hard for that B+. Vector calculus wasn’t so bad. Honors advanced calculus was brutal. Groups and rings: I liked that.

And then, in the fall of 1989, I walked into Introduction to Topology, taught by Professor Peter Fletcher, and my life was changed forever. We spent quite a while on some serious set-theoretic stuff, proving the equivalence of the Axiom of Choice and Zorn’s Lemma and the Well-Ordering Principle (and probably something else, it was 30 years ago, after all). Separation axioms and examples of \( T_3 \) spaces that aren’t \( T_4 \) (don’t ask me to produce one). Compact spaces and the Tychonoff Product Theorem and ultrafilters (oh my!). Metric spaces at the end. A pretty standard first topology course, but not in the Munkres style.

From this class alone I got a couple of things that I’ve never forgotten. The first is something Peter said that I still pull out from time to time: “Topology is analysis done right.” This is extremely glib of course, but it’s fun to say. I mean, the Intermediate Value Theorem is nothing but the easily proved assertion that the continuous image of a connected space is connected, but there’s a good bit of work buried in proving that intervals on the real line are connected. The other was Peter’s inimitable style of proving things by contradiction. He would always begin these like this (on the board): Proof: Suppose (ha!) that the conclusion is false (or whatever). It’s that “ha!” that I can still picture in his chalk scrawl (I probably got that from him, too).

Thanks to the magic of AP credits I finished my BS in three years, but I had to stick around to complete ROTC training and so I enrolled in the master’s program in math. To be able to finish this off in only a year I had to pursue the thesis option, and I approached Peter about supervising it. It turned out that he had been thinking about something called “pointless topology” (don’t say it) and suggested I think about it, too. Here’s the idea: a frame is a distributive lattice with a unique minimal element 0 and a unique maximal element 1, in which finite meets exist (given elements \( a, b\), there is a greatest element \( a\wedge b\) less than or equal to both), arbitrary joins exist (given an arbitrary collection of elements \( a_\alpha \), there is a least element \(\bigvee a_\alpha\) greater than or equal to all of them), and finite meets distribute over arbitrary joins. Example: the lattice of open sets of a topological space. So now let’s see what we can prove about these things themselves, without thinking about them as open sets in a topological space (hence “pointless” topology, get it?). Most of the basic notions from topology can be phrased in these terms--compactness and other covering properties, separation axioms, etc. So I proved and wrote up a bunch of standard theorems in this context. For example, a regular Lindelöf frame is normal. Stuff like that. I learned a lot, and then went off to pursue my PhD when I was done.
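
To make the definition concrete, here is a toy check of my own (not from the thesis): the open sets of a small finite space, with intersection as meet and union as join, satisfy the frame distributive law.

```python
# The open-set lattice of a small topological space is a frame:
# check meet(a, join(B)) == join of meet(a, b) over b in B, for all B.
from itertools import chain, combinations

X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]  # a topology on X

def join(family):
    out = frozenset()
    for s in family:
        out |= s
    return out

def families(sets):
    """All subfamilies of the (finite) collection of open sets."""
    return chain.from_iterable(combinations(sets, r) for r in range(len(sets) + 1))

for a in opens:
    for B in families(opens):
        assert a & join(B) == join(a & b for b in B)
print("frame law checked; 0 is", set(), "and 1 is", set(X))
```

The law holds automatically here because meets and joins of open sets are just intersections and unions, which is the point: open-set lattices are the motivating examples of frames.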

That Peter would get interested in frames was no surprise. He was one of the world’s experts in quasi-uniform spaces. We all know about metric spaces, and get used to working with the defining properties of a metric on a space: reflexivity, symmetry, and the triangle inequality. Here’s a question: can you capture this same information just using open sets instead of a distance function? That’s roughly what a uniformity is on a space, and a metric determines such a structure (but not conversely; there are uniform spaces that aren’t metrizable). In life, we know that distances aren’t always symmetric; indeed, one-way streets can really affect the distance between two points on a map, depending on which direction you’re traveling. So we have the corresponding notion of quasi-metric, and therefore quasi-uniformity. Peter developed a lot of the theory of these objects, beginning with his dissertation at UNC-Chapel Hill in the mid-1960s.

I left Virginia Tech in 1991 and Peter took the state up on its offer of an early retirement package. He was in his early 50s at the time, and he had a lot of math left in him. Over the next couple of decades he started doing some work in number theory, a pretty radical switch but one that yielded some dividends.

Peter was a kind and generous man, always willing to talk math or anything else. He played the guitar and could sing pretty well while doing it. He introduced me to Brazilian food, an exotic thing for me at the time. I could always count on Peter for advice and good humor. He passed away at the end of July after a long battle with heart disease and kidney disease.

An infinite sum formula for pi

My fabulous podcast cohost, Evelyn Lamb, is competing in something called the Big Internet Math-Off. Her first entry is about the Wallis sieve and you can read it here. The summary is the following. Take a square of side length 1. Divide each side in thirds, forming a \(3\times 3\) grid of nine squares, and remove the middle square. If you wanted to construct the Sierpinski carpet, you would iterate this procedure, dividing each of the remaining 8 squares into ninths and removing the middle square of each. The Wallis sieve does something different. After removing the middle square, divide each of the remaining 8 squares into 25 pieces (a \(5\times 5\) grid) and remove the middle square of each of those. Then divide each of the remaining squares into 49 pieces (a \(7\times 7\) grid) and remove the middle squares. And so on. What remains is the Wallis sieve.

What is the area of this object? When you build the Sierpinski carpet, it turns out that you remove everything (well not really, but the area is 0). That's not true with the Wallis sieve. Thanks to the Wallis product formula we can see that the area of the sieve is \(\pi/4\). Cool.

But I want to look at this from a different point of view: let's add up the areas of the pieces we remove. Since the area of the Wallis sieve is \(\pi/4\) and we started with a square of area 1, the areas of the pieces we removed must add up to \(1-\pi/4 \approx 0.21460183\dots\) What does this infinite series look like? The first term is \(1/9\), the area of the small square in the center. Then for each of the 8 remaining squares we remove a square of area \(1/(3\cdot 5)^2 = 1/225\). Then each of those 8 squares contains 24 remaining smaller squares; we divide each of these into a \(7\times 7\) grid and remove its center square, so the next term in the series is \(8\cdot 24/(3\cdot 5\cdot 7)^2\). Do you see the pattern? The denominators are the squares of the double factorials \((2n+1)!! = 3\cdot 5\cdot 7\cdots (2n+1)\). The numerators are \(1\cdot 8\cdot 24\cdot 48 \cdots 4n(n-1)\) (you might need to work that out on a piece of paper). There are lots of factors that can be rearranged in that numerator. I'll leave it to you to work out the details, but the series we end up with is \[\sum_{n=1}^\infty \frac{4^{n-1}n!(n-1)!}{[(2n+1)!!]^2} = 1-\frac{\pi}{4}.\] 

Cool! This is the sort of series I might put on a quiz in Calculus II as a question about the ratio test. All those factorials just scream for using this test to determine convergence. For this series, though, the test is inconclusive, yielding a ratio of 1. None of the other standard tests we teach our students will show that this series converges, but thanks to Evelyn's post, we know that in fact it does converge and we even know its sum.

Given that this series hangs on the edge of the ratio test, one might wonder how quickly the series converges. Well, I started adding up terms. The individual terms do go to 0 in the end, but they're not in much of a hurry to do so. After 12 terms the partial sum is up to \(0.19935565\dots\) I would have gone further, but I was doing this on my phone and the OEIS list for the double factorials stops at that stage (and yes, I know I could have calculated more, but you get it). Even at that stage, the terms are still greater than \(0.001\), so convergence will be pretty slow, I think. At that rate I would anticipate needing another 10 terms or so just to get the answer correct to two significant digits. 
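
If you'd rather not punch double factorials into a phone, here is a little script (mine; it uses the consecutive-term ratio \(t_{n+1}/t_n = 4n(n+1)/(2n+3)^2\), which you can work out from the formula above) that reproduces that 12-term partial sum:

```python
# Partial sums of  sum_{n>=1} 4^(n-1) n! (n-1)! / ((2n+1)!!)^2  =  1 - pi/4.
from math import pi

def partial_sums(N):
    t, s = 1 / 9, 0.0                             # t_1 = 1/9
    for n in range(1, N + 1):
        s += t
        yield n, s, t
        t *= 4 * n * (n + 1) / (2 * n + 3) ** 2   # ratio t_{n+1} / t_n

for n, s, t in partial_sums(1000):
    if n in (12, 100, 1000):
        print(f"n = {n:4d}: partial sum {s:.8f}, term just added {t:.1e}")
print("target: 1 - pi/4 =", 1 - pi / 4)
```

The terms decay roughly like \(1/n^2\), so the tail after \(n\) terms shrinks only like \(1/n\); even a thousand terms leaves an error of a couple of parts in ten thousand.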

Anyway, good luck to the participants in this fun event. You can probably guess where my sympathies lie, but all the entries so far have been pretty cool.

I learned a new word and it reminded me of Morse theory

Thalweg. Obviously German, built from thal, an outdated word meaning valley or dale, and weg, meaning way. So a thalweg is a "valley way," whatever that might mean.

But if you've ever taken a hike then you know exactly what it means: the path along the lowest part of the valley, which in principle should be the easiest path to take.

I came across this word while reading How to Read Water, by Tristan Gooley. He wasn't talking about valleys but rather about the other use of the word thalweg: the lowest part of the bed of a river, usually used as the official border between two states lying on either side of the river. If you don't think about it carefully, you might assume that the deepest part of a riverbed is in the center. If you are looking at a straight section you're probably right, but what happens in a bend of the river? The water is flowing because of gravity (otherwise you'd have a lake), and in a bend the faster current on the outside of the curve scours the bed there while sediment settles on the slower inside. The result is that the thalweg runs closer to the outside bank in a bend; the river bed is steeper on that side and more gently sloped on the inside of the curve. Again, if you've ever looked closely at a clear, shallow river where you can see the bottom you might have noticed this. Here's a picture:

the dotted line is the thalweg.

There's math here, and the math I'm thinking of is Morse theory. Specifically, I'm thinking about parametrized families of smooth functions, which are well-understood thanks to a theorem of Cerf from 1970. That's a lot of words, so let me explain.

A smooth function \(f:M\to {\mathbb R}\) on a manifold \(M\) is Morse if all its critical points are nondegenerate. This means that the matrix of second partials is nonsingular at each critical point. Moreover, Sylvester's Law implies that the number of negative entries in any diagonalization of this symmetric matrix is the same; we call this number the index of the critical point. The prototypical examples of these are given by the functions \({\mathbb R}^2\to {\mathbb R}\) defined by \[ (x,y)\mapsto x^2+y^2 \quad (x,y) \mapsto -x^2+y^2 \quad (x,y)\mapsto -x^2-y^2\] 

The index of these maps is, respectively, 0, 1, and 2; geometrically they are a minimum, saddle point, and maximum, respectively. The Morse Lemma says that critical points of Morse functions all look like this; that is, there is a coordinate system centered at the critical point \(p\) of index \(i\) where the function has the form \(f(x) = f(p) - x_1^2-x_2^2-\cdots -x_i^2 + x_{i+1}^2+\cdots +x_n^2\). The existence of a Morse function on \(M\) (and there are lots of them) implies a lot about the topology of \(M\); this is a fascinating story, but not the one I have in mind here.
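
If you like, you can watch the index computation happen symbolically; here's a small sketch (the function is an example of my choosing) that classifies critical points by counting negative Hessian eigenvalues.

```python
# Classify critical points of a Morse function by index (number of
# negative Hessian eigenvalues, counted with multiplicity).
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - 3*x + y**2                  # critical points at (1, 0) and (-1, 0)

grad = [sp.diff(f, v) for v in (x, y)]
H = sp.hessian(f, (x, y))

for p in sp.solve(grad, (x, y), dict=True):
    eigs = H.subs(p).eigenvals()       # {eigenvalue: multiplicity}
    index = sum(m for e, m in eigs.items() if e < 0)
    kind = {0: "minimum", 1: "saddle", 2: "maximum"}[index]
    print(p, "has index", index, "->", kind)
```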

Say you have a smooth function \(f:M\to {\mathbb R}\) that isn't Morse. How not-Morse can it be? In isolation it can be pretty bad, but what Cerf was interested in was what can happen in a family of smooth maps on \(M\). Hopefully, you've come up with your favorite non-Morse function by now. It's \(f:{\mathbb R}\to {\mathbb R}\) defined by \(f(x) = x^3\), right? You can generalize this to any Euclidean space by taking this map in the first coordinate and then a sum of quadratics in the others: \(f(x) = \pm x_1^3 - x_2^2 - \cdots -x_i^2 + x_{i+1}^2 + \cdots + x_n^2\). The critical point at the origin is degenerate, but only because we've cubed the first coordinate.

So now let's think about a family of smooth maps \(F:M\times [0,1]\to {\mathbb R}\). This means that (a) each \(F(-,t)\) is a smooth map on \(M\), and (b) the assignment \(F\) is a smooth map on the manifold (with boundary) \(M\times [0,1]\). There are lots of questions we might ask. The first is whether it is possible for each \(F(-,t)\) to be a Morse function. This is certainly possible: take the constant family \(F(x,t) = f(x)\) determined by a single Morse function \(f:M\to {\mathbb R}\). This is not very interesting. However, Morse functions are generic on a given manifold. That is, given any smooth map on \(M\), there is a Morse function arbitrarily close by. So we might then turn to the more interesting question of just how not-Morse the functions in the family can be. And this is where Cerf's work comes in.

The executive summary is this: Suppose you have a family of smooth functions \(F(-,t)\) where \(F(-,0)\) and \(F(-,1)\) are Morse. Then Cerf proved that there is another family \(G(-,t)\), arbitrarily close to \(F(-,t)\), such that each \(G(-,t)\) is Morse except for finitely many values of \(t\in [0,1]\). At those values, there is a single degenerate critical point \(p\) in \(M\), and there are coordinates around it so that \(G(x,t) = c + x_1^3 +\epsilon_2 x_2^2 +\cdots + \epsilon_n x_n^2\), where \(\epsilon_j\in\{\pm 1\}\). So, if you're willing to wiggle your family just a little bit, you get Morse functions almost everywhere, and where you don't, you just get a cubic singularity. The whole story is richer than this, of course, and involves birth-death points and bifurcation diagrams. That's another post, though.
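
In one dimension you can see the local model directly. Here's a sketch (my example, not Cerf's) of the family \(f_t(x) = x^3 - tx\), in which a max/min pair is born as \(t\) crosses 0:

```python
# Critical points of f_t(x) = x^3 - t*x, where f_t'(x) = 3x^2 - t:
# none for t < 0, one degenerate (cubic) point at t = 0, and two
# nondegenerate points for t > 0 -- the birth of a max/min pair.
import numpy as np

def critical_points(t):
    if t < 0:
        return []                      # 3x^2 = t has no real roots
    if t == 0:
        return [0.0]                   # degenerate: f''(0) = 0
    r = float(np.sqrt(t / 3))
    return [-r, r]                     # f''(-r) < 0 (max), f''(r) > 0 (min)

for t in (-1.0, 0.0, 1.0):
    print(f"t = {t:+.1f}:", critical_points(t))
```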

What does all this have to do with the thalweg? You have to take the right point of view first. Thinking of the riverbed as the depth function over some 2-dimensional patch of the earth is not especially illuminating. Most of the time you won't have any critical points since the riverbed gently slopes downstream. Sure, there are pools that form in rivers where there are depressions in the bed, but those are not part of the thalweg as a rule. No, the proper thing to do here is to consider cross-sections of the riverbed as graphs of a function on an interval (the width of the river). Like this:

a cross-section of the river

Here we have a function on some interval and using standard calculus we can find its local extrema. We then think of the riverbed as the graph of a family of these cross-sections \(F:I\times L\to {\mathbb R}\), where \(I\) is an interval of length equal to the maximum width of the river and \(L\) is an interval of the form \([0,\ell]\) where \(\ell\) is the length of the river. For each \(t\in L\), let \(X_t\) denote the finite collection of local minima for \(F(-,t)\). The locus of points \(\{(x,t): x\in X_t, t\in L\}\) will have possibly many connected components, but it will contain one component \(T\) extending the length of the river; \(T\) is the thalweg. The other components will correspond to places where a ridge may arise in the riverbed, leading to a stretch of local minima that eventually merge back with the minima in \(T\). This corresponds to what goes on in Cerf theory--new critical points may be born, some may die.
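
Here's a toy numerical version of this (every detail of the riverbed below is invented for illustration): sample the cross-sections, collect the local minima of each, and watch a second family of minima appear and disappear alongside the persistent one.

```python
# Track local minima of riverbed cross-sections F(x, t): x runs across the
# width, t runs along the river.  The long-lived channel is the thalweg.
import numpy as np

def local_minima(profile):
    """Indices of interior local minima of a 1-D array."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]]

xs = np.linspace(0.0, 1.0, 201)
for t in np.linspace(0.0, 1.0, 5):
    center = 0.5 + 0.3 * np.sin(2 * np.pi * t)       # wandering main channel
    bed = -np.exp(-((xs - center) / 0.1) ** 2)       # deep channel
    if 0.3 < t < 0.7:                                # temporary side channel
        bed -= 0.4 * np.exp(-((xs - 0.15) / 0.05) ** 2)
    minima = [round(float(xs[i]), 2) for i in local_minima(bed)]
    print(f"t = {t:.2f}: minima at x =", minima)
```

The persistent channel of minima plays the role of \(T\); the temporary one is a component that is born and dies along the way.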

This is all just an approximation, of course. While most of these cross-sections will be the graphs of smooth maps, there will be some that aren't. And riverbeds shift all the time, so it's not like the thalweg is a static thing. Indeed, it would be interesting to let these cross-sections vary in time and track the evolution of the thalweg. Floods and seismic events can certainly move it around.

Anyway, I like this word, thalweg, and I really like the math it made me think of. 

Another MVT sighting

This blog has lain fallow some three years now while I took a detour into writing for a commercial source. Now that that's done (it was great, but it was time to move on) it's time to plant some new seeds in my own backyard.

It's summer and I'm teaching Introduction to Complex Variables, a course I like very much. This week, after introducing the idea of an analytic function I set out to prove the following fact: if \( f\) is analytic on a domain \( D\) and if \( f'(z) = 0\) everywhere on \(D\), then \( f \) is constant.

In the calculus of a single real variable the analogous statement is a consequence of the Mean Value Theorem, but of course we don't have that in the plane. Well, we sort of do, but it's not one of those things that gets taught as a rule (I don't remember it from undergraduate analysis, but then I'm a topologist and so I've let a lot of that fade from memory). Anyway, it's not obvious how we should proceed to prove what seems to be an obvious fact--the only functions with 0 derivative are the constants--so we have to think a little bit.

I have a long-running joke (with myself, mostly) that one day I'm going to write an advanced calculus text called The Real Fundamental Theorem of Calculus, by which I mean the Mean Value Theorem. It will be a Where's Waldo-style sort of thing where the reader will need to spot the MVT hiding in the text. Maybe I'll design a little cartoon representation of it to play the role of Waldo. My argument is that when you really get down to proving things about calculus you almost always need the Mean Value Theorem. In particular, the Fundamental Theorem (students' favorite part about how to evaluate definite integrals in terms of antiderivatives) is a pretty easy consequence of the MVT, hence my assertion that we should call it the Real Fundamental Theorem.

So, in complex analysis we're dealing with functions of a single complex variable. We can think of these as functions of two real variables, of course, but the real and imaginary parts are not arbitrary. A consequence of the definition of differentiability for complex functions is the Cauchy-Riemann equations: if \(f(z) = u(x,y) + i v(x,y)\) is differentiable at \(x_0 + iy_0\) then the partial derivatives must satisfy \(u_x = v_y\) and \(u_y = -v_x\). This is a serious restriction. In fact you can use them to show that the rather innocuous-looking function \( f(z) = \overline{z}\) is not differentiable anywhere even though the real and imaginary parts have partial derivatives of all orders. Another consequence of these equations is that if \( f\) is analytic then the derivative may be computed as \(f'(z) = u_x + iv_x\). 
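
A quick symbolic check (with examples of my choosing): \(z^2\) satisfies the Cauchy-Riemann equations and \(\overline{z}\) doesn't.

```python
# Check u_x = v_y and u_y = -v_x identically for f = u + iv.
import sympy as sp

x, y = sp.symbols('x y', real=True)

def cauchy_riemann(u, v):
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0 and
            sp.simplify(sp.diff(u, y) + sp.diff(v, x)) == 0)

# f(z) = z^2 = (x^2 - y^2) + i(2xy): analytic
print(cauchy_riemann(x**2 - y**2, 2*x*y))   # True

# f(z) = conj(z) = x + i(-y): smooth real and imaginary parts, but
# u_x = 1 while v_y = -1, so the equations fail everywhere
print(cauchy_riemann(x, -y))                # False
```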

Now we can prove the theorem I claimed above. If \(f'(z) = 0\) everywhere in the connected open set \(D\), then all the partial derivatives \(u_x, u_y, v_x, v_y\) vanish. Suppose \(P\) and \(P'\) are two points in a small disc contained in \(D\) and let \(L\) be the line segment from \(P\) to \(P'\). Let \(\vec{w}\) be a unit vector in the direction of \(L\) and let \(s\) be the distance along \(L\) from \(P\). The function \(u(x,y)\) may be restricted to \(L\) using the parameter \(s\). We then compute \[\frac{du}{ds} = (u_x, u_y)\cdot \vec{w} = 0.\]  But now notice that this is a function of the real variable \(s\) and so by the Mean Value Theorem we have that \(u(x,y)\) is constant on \(L\). Now, given any two points in \(D\), we can join them by a polygonal path lying entirely inside \(D\) and this argument shows that \(u\) is constant on each segment and hence on the whole path. Thus \(u(x,y)\) is constant on \(D\). A similar argument applies to \(v(x,y)\) and so \(f(z)\) is constant on \(D\).

Cool. So the MVT was hiding in there after all, just as I suspected. You should hop on my bandwagon now.

Gant, Caulfield, Wolfe, Salinger

When I was in sixth grade my class took an overnight field trip to Asheville, NC.  This would have been the winter of 1980-81 and it included the obligatory visit to the Biltmore House, and, for some reason, a stop at a K-Mart near the hotel (I think my group's chaperone needed shaving cream or something).  I think I bought a poster there, although I don't remember of what or why I thought it would be a good idea to spend some of my limited funds on it. 

Anyway, the trip also included a visit to the Thomas Wolfe House.  I remember being told that Wolfe was North Carolina's most famous writer and that this home was an important piece of American history.  Here's a picture:

the aforementioned Wolfe home, photo via random internet site

I don't really remember much about the place except that it was kind of dark in there and that it was full of period furniture.  Maybe the stairs were steep.  This was 35 years ago, after all.

These days hardly anyone reads or remembers Wolfe, and North Carolina's most famous author is probably Nicholas Sparks (alas, and he's from Nebraska).  Here's the thing, though:  growing up there, you'd think I would have read one of Wolfe's novels at school.  I mean everyone agreed that Wolfe was amazing and the state's greatest writer, etc., etc., but none of his books ever appeared on a reading list.  To be fair, sixth grade was probably too young for it (although my teacher, Mr. Grubbs, tried to get us to read A Tale of Two Cities, a slog at any age), but you'd think that maybe in high school they would have squeezed one in between Hawthorne and Shakespeare.

I bring this up because I finally decided to rectify the situation and read Wolfe's most famous book, Look Homeward, Angel.  Subtitled A Story of the Buried Life, it is clearly autobiographical.  The book is set in the fictional town of Altamont (clearly Asheville), where the young Eugene Gant lives with his mother in her boarding house, Dixieland (clearly the house run by Wolfe's mother in real life).  As I plodded through all 500+ pages, I kept asking myself if I liked this book.  In the beginning, I certainly did not.  I mean, we get a narrative in which the novel's protagonist has a rich inner monologue as a toddler; since this is really Wolfe himself we get the sense that he thinks he's pretty special and smart and all that (as if the subtitle didn't clue us in).  He used the word phthisic waaaaay too much (isn't once too much?).  As Gant gets older we see how the schoolmasters think he's special, his father wants him to go into law and politics, and his mother "pshaws" him constantly.  He is prone to outbursts in which he tells his family they're all just haters (not in so many words, of course).  Frankly, he comes off as a whiny brat, which would be ok if his family actually did something to make him feel bad.  Except they don't, really.  So, no, I didn't like ole 'Gene and didn't care for the story much as a result.  And when he goes to the state college in, get this, Pulpit Hill (groan), I just had to decide to ride it out.

Some two decades later, J.D. Salinger published The Catcher in the Rye, with America's most famous whiny brat protagonist, Holden Caulfield.  As I read Angel, I couldn't help thinking about Holden.  I could all but hear 'Gene calling everyone around him phonies.  Pining for girls who won't give him the time of day.  Blah, blah, blah.

Here's the problem with books like this:  you can only really identify with them when you're a teenaged boy (maybe girls can, too, dunno).  When you read them as an adult, perhaps with a teenager of your own, you have no patience for them.  I re-read Catcher a few years back and it annoyed me to no end; well, Holden annoyed me.  The book itself is well-written.

Which is what I'll say for Wolfe.  He crafts beautiful prose (when he isn't overusing obscure words).  So I think I understand why everyone went nuts over his work; as an example of how to write floridly it's great, but as a novel it falls flat.  And this latter point makes me understand why it never appeared on my high school reading lists--Salinger did it better and shorter.

But that's how it goes, I guess.  What one generation thinks is great is often slowly forgotten.  Maybe I should tackle Trollope next.

Embrace the Mystery

The final "text" for the course: the Coen Brothers' A Serious Man, the story of Larry Gopnick, a physics professor in 1967 Minnesota.  It's pretty much the Book of Job for modern times--a series of misfortunes befalls Larry and he seeks answers from his rabbis.  There are none, although the second rabbi's story about the goy's teeth is illuminating if you think about it correctly ("helping people couldn't hurt"). 

I actually don't have much to say about this film that hasn't been discussed in other contexts.  There isn't much new mathematics here. There is the obvious connection to the uncertainty principle (literally since Larry teaches quantum physics, but also figuratively as the plot unfolds).  Probability plays a role that we haven't explicitly seen before, but it's fairly minor.  Larry's brother Arthur, a (closeted) homosexual living with them, has written the Mentaculus, a probability map of the universe. 

a spread from Arthur's Mentaculus

Since this is the end of the course, I thought I'd just write about my general feelings about it, rather than hammer away at the film (we've had enough epistemania).  I taught my first class in the spring of 1991.  I was 21 years old and when I went in to give my first lecture, I was so nervous my hands were shaking as I opened the box of chalk.  I was younger than a few of my students (the ones who had put off the class, their last graduation requirement, until the final semester).  It was rough, but I got better and now I don't worry much about walking into a room of 600 to deliver a lecture.  When I set out to earn my Ph.D., my goal was to be a college professor.  Sure, I love mathematics and research, but I always pictured myself lecturing about the subject I've loved since my first grade class cheered me on when I solved a difficult problem correctly at the overhead projector (I was able to write the number 8 with tally marks).  I never tire of teaching calculus, one of the most significant intellectual achievements of the last 400 years.  Get me started talking about topology and I won't shut up.

But this class.  This has been the most rewarding and intellectually stimulating teaching experience I've ever had.  For that I have to thank my co-conspirator, Eric Kligerman, and our remarkably thoughtful, brilliant students.  I was on research leave this year, working on a book and some other projects, but I taught this class anyway because I thought it would be so fun.  It didn't even feel like work.  I love to read, of course, but this class "forced" me to read things I probably never would have picked up (Woolf's To the Lighthouse, for example).  Looking for mathematics embedded in the structure of texts got me to think deeply and critically.  I found a Cantor set in Kafka's The Great Wall of China; I'm even working on a paper about it.  I finally understand the precise mathematical statement of the Uncertainty Principle (well, sort of; if nothing else I have embraced the mystery). 

Isn't this what we all imagine when we think of a university class?  A small group of engaged individuals tackling tough material.  Conversation so stimulating you hardly notice that three hours have gone by.  A bit of sadness when the last session is over.

So, what does the future hold for us and this course?  Unclear.  The Honors Program director has asked if we'd be interested in doing it again next spring.  We are willing, provided we can work it out with our departments.  In these days of efficiency, we may be needed elsewhere.  But I can assure you I will always keep it in the back of my mind, looking for connections and new pieces of literature to view through a mathematical lens.

For now, summer school looms.  Thanks for reading the chronicles of our little experiment.

But is it literature?

I once saw a video installation at an art gallery (full disclosure:  I do not care for "video as art" so know that before reading on) which showed a fox running around a London art museum after hours.  Naturally, the poor animal was confused and slunk cautiously along the walls, often curling up under a bench to hide.  Now, is this art?  Is it Art? 

I don't know (well, I have an opinion, but you know what they say about those).  The accompanying text panel written by the artist, though, made a case.  You see, the fox represents the immigrant in a strange land, trying to find his way in an unfamiliar and often inhospitable environment.  He lives on the fringes and hides in the shadows.  Some other art speak followed.  (Aside:  if you want to generate your own artist's statement, visit artybollocks.com.)

Which brings me to OULIPO (Ouvroir de Littérature Potentielle--Workshop of Potential Literature).  This is a French literary movement, dating back to 1960, which deals with certain formal, algorithmic methods of creating literature.  Examples:  write a novel without using the letter e; write a snowball poem in which each line consists of a single word with one more letter than the previous line; take 10 sonnets, one to a page, and cut each page into 14 strips to create an exquisite corpse containing \(10^{14}\) distinct sonnets. 

Or, as we discussed in class, try the \(N+7\) method:  take a piece of writing and for each noun, look it up in a dictionary and replace it with the seventh noun following it in the dictionary.  Sounds like a lot of work, right?  Luckily, there is software to do it for you, like this site.
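
For the curious, here's a bare-bones sketch of the algorithm (the noun list and the sample sentence are my own toy inputs; a real N+7 engine needs a part-of-speech tagger and an actual dictionary):

```python
# N+7: replace each noun with the noun seven entries later in a dictionary.
import re

NOUNS = sorted([
    "angel", "bell", "book", "boot", "bread", "carnival", "chutney",
    "city", "cloud", "dog", "door", "dream", "face", "faction", "fox",
    "garden", "house", "king", "lake", "library", "moon", "mountain",
    "night", "ocean", "pastor", "pate", "river", "stone", "street",
    "teacher", "tree", "valley", "window", "wolf",
])

def n_plus_7(text, nouns=NOUNS, shift=7):
    def swap(match):
        word = match.group(0)
        if word.lower() in nouns:                    # crude "is it a noun?"
            repl = nouns[(nouns.index(word.lower()) + shift) % len(nouns)]
            return repl.capitalize() if word[0].isupper() else repl
        return word
    return re.sub(r"[A-Za-z']+", swap, text)

print(n_plus_7("The pastor walked up the street toward the river."))
# -> "The valley walked up the bell toward the wolf."
```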

Let's do an example.  Here's a paragraph from the book I'm reading now, Thomas Wolfe's Look Homeward, Angel.

White-vested, a trifle paunchy, with large broad feet, a shaven moon of red face, and abundant taffy-colored hair, the Reverend John Smallwood, pastor of the First Baptist Church, walked heavily up the street, greeting his parishioners warmly, and hoping to see his Pilot face to face. Instead, however, he encountered the Honorable William Jennings Bryan, who was coming slowly out of the bookstore. The two close friends greeted each other affectionately, and, with a firm friendly laying on of hands, gave each to each the Christian aid of a benevolent exorcism.

Most paragraphs in this book are like this, by the way.  I'm still forming an opinion of it (but it's not so high right now--Eugene Gant is not the most likeable protagonist you'll ever meet).  And, since it mentions William Jennings Bryan, I feel compelled to link to this video.

Now, let's run this through the \(N+7\) generator and see what we get.

White-vested, a trim paunchy, with large brogue footmen, a shaven mop of red faction, and abundant taffy-colored hairpiece, the Reverend John Smallwood, pate of the Fissure Baptist Chutney, walked heavily up the stretcher-bearer, grief his parliaments warmly, and hoping to see his Pinch faction to faction. Instead, however, he encountered the Honorable William Jennings Bryan, who was commencement slowly out of the boot. The two close fringes greeted each other affectionately, and, with a fishmonger frippery laying on of handfuls, gave each to each the Chuckle airbrick of a benevolent expedition.

Some of these passages actually make sense, or at least they are not ungrammatical (Orwell spins in his grave).  I rather like the phrase "pate of the Fissure Baptist Chutney" and the transformation of "Christian aid" to "Chuckle airbrick" is amusing enough.  The algorithm is not perfect, though.  Notice that the program read "greeting" as a noun, replacing it with "grief," and also replacing "coming" with "commencement."  These are pretty minor, though, and can be caught easily.

But is it literature?  Is it Literature?  It's certainly an interesting exercise, and sometimes leads to new passages that could be interpreted in a literary manner, but if we are going to generate things almost at random, is it reasonable to expect meaning to emerge?  There is the old saw about a room full of monkeys eventually typing Shakespeare, and Borges teaches us that all of these passages are in an unfathomable number of books in his Library of Babel.  But does that mean that anything we write down has meaning, even if we can make some grammatical sense of it?

Or, as one student asked, "why?"  Bear in mind that this did arise in 1960s France, ground zero for postmodernist thought.  On that level, then, it is unsurprising that someone thought to perform this experiment.  And, one reason to do it is that there is "potential literature" out there, waiting to be discovered.  Do writers create or discover?  I doubt anyone seriously thinks the latter, but in mathematics this is a real argument--do we create mathematics, or is it already out there waiting for us to find it? 

How many almost-great novels have been written that are just shifted versions of some great novel?  How many great novels are waiting out there to be found by shifting some banal passages?  What if we take a paragraph from this post and transform it:

But is it livelihood? Is it Livelihood? It’s certainly an interesting exile, and sometimes leads to new pastas that could be interpreted in a literary mantel, but if we are going to generate thistles almost at random, is it reasonable to expect mechanic to emerge? There is the old saw about a rosary full of monorails eventually typing Shakespeare, and Borges teamsters us that all of these pastas are in an unfathomable nursery of bookmarks in his Lick of Babel. But doglegs that mean that anything we write dowse has mechanic, even if we can make some grammatical sentry of it?

Nope.  Not great literature.  Ah well.  I guess it's the potential that counts.

Mr. Heisenberg Goes to Copenhagen

A 1941 meeting between Werner Heisenberg and Niels Bohr is the subject of Michael Frayn's Copenhagen. The link takes you to a PBS production of the play, starring James Bond Daniel Craig as Heisenberg. The central question is why? Why did Heisenberg go to Copenhagen to meet Bohr?

The historical context is that Denmark was under Nazi occupation at the time.  Heisenberg was in charge of the nascent German nuclear program (well, everyone's nuclear program was nascent then) and naturally he would want Bohr's opinion.  Since the Gestapo was escorting Heisenberg and Bohr's home was surely wired, they took a walk.  What was said?  No one knows.  In the play, Heisenberg asks "does a physicist have a moral right to work on fission?"  Bohr responds by refusing to answer and walking away. 

Oh, I forgot to mention that this is being told via flashback; you see, the only three characters in the play are Heisenberg, Bohr, and Bohr's wife Margrethe and they are dead.  Their ghosts are having a conversation about the conversation.  Memory is a funny thing and they can't quite agree on what happened.  And why didn't Heisenberg succeed in building a bomb?  That's the really interesting aspect and he comes off as a rather sympathetic character.  In reality, other physicists refused to even shake Heisenberg's hand after the war since they assumed he had tried to build a bomb.  Did he? Frayn leads us to believe that his failure was intentional.

So, where's the math here?  Two things.  First, of course, is Heisenberg's Uncertainty Principle.  This isn't math as much as it is physics, but there is a precise mathematical statement which is fairly easy to understand.  Suppose a particle is moving along a path.  Its position \(X\) is a random variable whose probability density function is \( f(x)\) as \(x\) varies over some interval.  The momentum of the particle is another random variable \(P\).  The statement of the uncertainty principle is then \[\sigma_X\sigma_P \ge \frac{\hslash}{2},\] where \(\sigma_X\) and \(\sigma_P\) are the standard deviations of the random variables \(X\) and \(P\) and \(\hslash\) is the reduced Planck constant.  This is a very small number (about \(1.054\times 10^{-34}\) joule-seconds), but it is positive.  What this means is that if we want to increase the precision of one of the measurements (shrink its deviation), we necessarily lose precision of the other (its deviation increases). 
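
To get a feel for the trade-off, here's a one-line consequence (the numbers are mine, chosen for illustration): pinning a particle's position down to about an atom's width puts a hard floor under the spread in its momentum.

```python
# sigma_X * sigma_P >= hbar/2  implies  sigma_P >= hbar / (2 * sigma_X).
hbar = 1.054571817e-34    # reduced Planck constant, joule-seconds
sigma_X = 1e-10           # assumed position spread: ~1 angstrom, in meters
print("sigma_P >=", hbar / (2 * sigma_X), "kg m/s")   # about 5.3e-25
```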

Of course, this only applies at the quantum scale.  On a macroscopic level, I can obviously look out my window, see my car parked in the driveway, and know its precise position and momentum (zero mo, of course).  This quantum uncertainty, where everything is expressed as probabilities, takes some getting used to, but once it sinks in it becomes a natural way of thinking.  Einstein rather famously did not like this idea at first, leading him to quip that "God does not play dice." 

The other interesting bit of math in the play is an instance of the Prisoner's Dilemma.  During one scene, Heisenberg asks Bohr if the Allies have a nuclear program and, if so, how far along they are.  Bohr claims he doesn't know (no reason not to believe him--he was in occupied Denmark, after all).  Here is the dilemma:  if the Allies aren't working on a bomb, then perhaps Germany has no need to (Heisenberg hints), but of course if the Allies are building one then Germany should as well.  This is the classic Cold War MAD theory (Mutual Assured Destruction) in its infancy.  Here's the payoff matrix:

                        Germany doesn't build    Germany builds
Allies don't build      no risk                  Germany dominates
Allies build            Allies dominate          tense stalemate

The lower right corner, which is what happened ultimately, is a Nash equilibrium; that is, if either party changes strategy unilaterally it results in a worse payoff.  The best joint outcome is the upper left corner, but purely rational actors will choose the Nash equilibrium. 
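
To make "no profitable unilateral deviation" concrete, here's a sketch with made-up numeric payoffs consistent with the qualitative table (dominate > no risk > stalemate > be dominated):

```python
# Find pure-strategy Nash equilibria of the 2x2 build/don't-build game.
payoffs = {                           # (Allies' payoff, Germany's payoff)
    ("don't", "don't"): (2, 2),       # no risk
    ("don't", "build"): (0, 3),       # Germany dominates
    ("build", "don't"): (3, 0),       # Allies dominate
    ("build", "build"): (1, 1),       # tense stalemate
}
strategies = ("don't", "build")

def is_nash(a, g):
    ua, ug = payoffs[(a, g)]
    return (all(payoffs[(a2, g)][0] <= ua for a2 in strategies) and
            all(payoffs[(a, g2)][1] <= ug for g2 in strategies))

for a in strategies:
    for g in strategies:
        if is_nash(a, g):
            print("Nash equilibrium:", (a, g), "payoffs", payoffs[(a, g)])
```

With these numbers the game is exactly a prisoner's dilemma, and the only pure-strategy equilibrium the script finds is (build, build), the tense stalemate.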

Rational has a fairly precise mathematical meaning that isn't exactly how real people operate.  Like all mathematical models, two-person games are a simplification of reality, useful on some level but not the whole story.  Copenhagen is much the same: we don't know the whole story and we never will, but it gives us a lens through which to examine history, uncertain as it is.

Möbius Metaphor

A couple of hours before class last Thursday, I got a text from Eric asking if I could talk about the Möbius strip.  He had this idea, not completely worked out at the time (seriously, like two hours before class), that the structure of Aronofsky's \(\Pi\): Faith in Chaos could be modeled by a Möbius strip in some way.  OK, I said, and quickly made one out of a strip of paper right before I left for class (second week in a row that I couldn't get a spot in my "secret" parking lot; I guess it's not so secret anymore).

The film is jarring in many ways, one of which is the repetition of Max's routines.  When he feels a headache coming on his thumb twitches and he begins to panic and then he pops some pills and maybe takes an intravenous injection of some medication; all of this is edited together in rapid succession, heightening  the tension.  The background score throbs, making the viewer edgier still.  Then come the hallucinations (a brain in the sink with ants crawling on it--ewww) until we get a bright flash and then Max wakes up on the floor with a bloody nose.  Add the physical troubles to his relentless drive to find a pattern in the stock market and it's no wonder he's starting to lose grip of his sanity. 

This repetition is what led Eric to think of the Möbius strip as a metaphor for the structure, but it's not quite clear at first that it's the right one.  In case you don't remember, the Möbius strip is the simplest example of a nonorientable surface--it has only one side.  You can make one yourself by taking a strip of paper, giving one end a half-twist and then taping the ends together.  Here's a picture:

from a cylinder to a Möbius strip to a twisted cylinder--two sides to one and back to two. image from https://plus.maths.org/issue26/features/mathart/Twist.gif.

If you look closely at the arrows (on the orange side), you see that in the beginning the cylinder has two sides.  By cutting it apart, adding a half-twist, and taping it back together, we see that if we begin at a point on the cut line and move along a horizontal curve through the middle, then when we get back to the point where we started (remember, this is a two-dimensional object; it has no thickness), the arrows point in the "wrong" direction.  This is the essence of nonorientability:  choose an outward-pointing normal vector and follow it along a closed loop; if you always get back to arrows pointing in the same direction the surface is orientable, but if not the surface is nonorientable.  If we go around again, then we are truly back where we started with everything pointing in the right direction.  Note also that if we put another twist in the strip, we get something orientable--the arrows line up and it's two-sided again.

How is this idea manifested in \(\Pi\)?  Well, one of our brilliant students had an idea: In the beginning of the film, Max knows nothing (well, that's not exactly true, but let's go with it).  As we move along in time, he discovers a lot--a mystical \(216\)-digit number which the Hasidic Jews in the film believe is the true name of God; he can make predictions about stock prices (or can he?).  This knowledge drives him mad, however.  His headaches get worse until finally he decides not to take the medication and uses a drill to take out the portion of his brain that is torturing him (again, ewww).  He then is back where he started--he knows nothing.  See?  Möbius strip!

Well, maybe it's a bit of a stretch.  In any case, I asked the question:  Is this movie even about mathematics?  I'm not convinced.  It's a device, certainly, but it's really about unknowability and the madness that can cause.  More than anything, the film is about obsession and the idea that if you believe something is important you'll see it everywhere (Max's former Ph.D. advisor, Sol, tells him as much).  Numerology plays a big role here, and in the end that's what Max's work devolves into. 

Serious mathematicians have fallen into this trap.  In the late 1990s we got The Bible Code, in which we are told that God encrypted lots of messages into the Torah via skip codes.  The biggest, most prophetic example in it is that Yitzhak Rabin's name is crossed by the phrase "assassin that will assassinate;" this did come to pass, of course, so voila, God must be trying to tell us something.  But you can play all kinds of games like this.  Consider the following passage from the Declaration of Independence (H/T to Pat Ballew's blog for this):

When in the course of human events,
it becomes necessary for one people to
dissolve the political bands which have
connected them with another, and to
assume among the powers of the Earth,
the separate and equal station to which
the Laws of Nature and
of Nature’s God entitle them

Begin in the first row.  Choose any word you like.  Say you choose "course."  That word has six letters, so count to the sixth word following it; you land on "necessary."  This has nine letters, so count off nine words to get to "which" in the third line.  Lather, rinse, repeat.  Where do you land when you can't continue this process?  In this case you land on "God" in the last line.  Go ahead and try a few others.  I bet you always land on "God."  So, if I wanted to interpret this as proof that the Founders intended the United States to be a Christian nation, I could certainly do so.  I mean, this can't just be a coincidence, right?
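
Here's a quick way to play the game exhaustively (a sketch; as we'll see in a moment, this is an instance of the Kruskal count):

```python
# From a starting word, hop forward by the current word's letter count;
# report the word you're on when the next hop would run off the end.
import re

TEXT = """When in the course of human events, it becomes necessary for one
people to dissolve the political bands which have connected them with
another, and to assume among the powers of the Earth, the separate and
equal station to which the Laws of Nature and of Nature's God entitle them"""

words = re.findall(r"[A-Za-z']+", TEXT)

def land(start):
    i = start
    while True:
        step = sum(c.isalpha() for c in words[i])   # count letters only
        if i + step >= len(words):
            return words[i]
        i += step

for i in range(8):                                  # first eight starting words
    print(f"start at {words[i]!r:>11} -> land on {land(i)!r}")
```

All eight starting points funnel into the same chain well before the end of the passage; that early merging is the phenomenon at work.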

Well, yes it can.  And the Bible Code is just a coincidence, too.  In fact, many mathematicians wrote solid refutations of the Bible Code.  For example, you can take Moby-Dick and do the same thing; you get lots of interesting "prophetic" sentences. It's all a consequence of something called the Kruskal count, discovered by the physicist Martin Kruskal in the mid-70s.  The link takes you to a discussion of a really good card trick based on the idea.  The point is that if you begin at some point and then have some algorithm for generating a sequence in your set, then no matter where you start, the sequences all coincide after a while (with high probability).  So it shouldn't be at all surprising that we can find "hidden messages" in texts, just as Max should have known that the "patterns" he was seeing were likely coincidental.  Just now, as I'm writing this in a coffeehouse, Teenage Lobotomy is playing over the speakers.  Coincidence, or is God telling me something?  I mean, I'm writing about a movie in which the main character lobotomizes himself and this song comes on.  That can't be a coincidence.

But this is what we do as humans.  We can't deal with randomness so we look for patterns or assign divine causes to random events.  The truth of course is that the universe is a random place.  God really does play dice.

One final note about the film.  The title is \(\Pi\): Faith in Chaos.  I asked the question: does Max have faith in chaos, or is he looking for faith in chaos?  I don't know.  Talk amongst yourselves.

Drills and Needles

I swear it was a coincidence.  We really didn't set out to show Darren Aronofsky's first film, \(\Pi\): Faith in Chaos so close to Pi Day; it just happened that way.  If you've never seen it, you should.  It's available on Netflix and on Amazon Prime, and on VHS (!) in the UF Library.  Remarkably, campus classrooms are equipped with dual VHS/DVD players so we went with that instead of risking buffering problems.  Side note:  the previews (remember those?) included Dee Snider's Strangeland, and a promo for the DVD version of \(\Pi\) (the format of the future!). 

I'll not editorialize about Pi Day.  Well, ok, I will a bit.  Some mathematicians despise it.  Vi Hart, internet math video maker extraordinaire (seriously, spend a few days of your time watching her stuff) has a rant about it.  Here at UF the fine folks at the science library, in conjunction with some engineering student groups, had a Pi Day celebration, complete with faculty taking pies in the face and contests for who could recite the most digits of \(\pi\).  I don't hate it, but I don't love it, either.  I tend to fall in the "there's no such thing as bad publicity" camp, but I wouldn't mind a bit more substance.  There are lots of interesting places \(\pi\) shows up, and I wish people knew more about them instead of trying to get the first \(1,000\) digits (or whatever).  I only know \(\pi\) up to \(3.141592653\), which is waaaaayyyyy more precision than you'd ever need for any practical calculation.  Hell, engineers are perfectly happy with \(22/7\) or even \(3\) for a back-of-the-envelope calculation.  The legislature of Indiana once introduced a bill that implied that \(\pi\) equals \(3.2\); luckily it didn't pass. 

Anyway, the movie.  It's a jarring film, shot in high-contrast black and white with some rapid editing and off-kilter camera angles.  It's the story of Max Cohen, a mathematician living in New York's Chinatown, who is trying to find patterns in the stock market.  His computer, Euclid, develops a bug and right before it crashes it spits out a couple of stock picks and a \(216\)-digit number.  At first glance, the stock prices seem completely implausible, but they later turn out to be correct (gasp!).  The number is another story.  We get taken on a ride into Jewish numerology via Lenny, who Max meets in a diner, and into the seamy underside of Wall Street finance via Marcy, who is hounding Max to come work for her firm and even offers him a classified processing chip to help him along.  I'll save the analysis for the next post since we were all a bit wiped out by the end of the film and needed some time to process it before having a thoughtful discussion.

After a break, I talked about \(\pi\) a bit.  We all know it's defined as the ratio of a circle's circumference to its diameter (or twice the radius); it's also equal to the ratio of a circle's area to the square of its radius.  The latter definition is actually better in some ways as it's possible to prove the area formula for a circle via simple geometry (Euclid did it in his Elements) while the circumference formula is a bit trickier (and, if we're being honest, really requires the idea of limit, which Archimedes didn't have but which he almost invented).  As for the calculation of \(\pi\), Archimedes trapped it between \(223/71\approx 3.1408\) and \(22/7\approx 3.1429\) by the method of inscribing and circumscribing polygons on the circle and calculating the resulting perimeters.

But here's an interesting way to calculate \(\pi\), using toothpicks and a piece of posterboard.  Mark off parallel lines on the board at distances equal to the length of a toothpick.  Now ask yourself the following question: if I drop a toothpick onto the board, what is the probability that it crosses a line?  Here's a picture:

now here we go, droppin' science, droppin' it all over...

I had the class come up and drop some toothpicks.  We had \(15\) people drop \(10\) toothpicks each.  We got \(105\) hits in the \(150\) attempts for a probability of \(0.70\).  Of course, if we dropped more we would get a better estimate of the probability.  In fact, the real answer is about \(0.6366\), which you can figure out by doing a lot of simulations.  Here's a web app that will do that for you. 

Now, I'm going to do something to that number:  first, I'll invert it to get \(1.5708451\dots\); then if I multiply that by \(2\) I get \(3.14169\dots\).  That looks an awful lot like \(\pi\), which raises the question:  why would \(\pi\) show up in this context?  I mean, I don't see circles anywhere and \(\pi\) means circles, right?

But if you think about it for a minute, it shouldn't be that surprising.  Here's a schematic:

simplified schematic

The toothpick has length \(1\) unit, which is the distance between the lines.  Let \(d\) be the distance from the midpoint of the toothpick to the nearest line (\(0\le d\le 1/2\)) and let \(\theta\) be the angle it makes with the horizontal (\(0\le\theta\le\pi\)).  See that \(\pi\)?  Anyway, we get a hit exactly when \(d\le (1/2)\sin\theta\).  That corresponds to the blue region in the picture below.

keep it blue

So the probability of a hit is then \[p = \frac{\text{area of blue region}}{\text{area of rectangle}} = \frac{\int_0^\pi 0.5\sin\theta\,d\theta}{0.5\pi} = \frac{1}{0.5\pi} = \frac{2}{\pi}.\]  I'll let you get out your calculator and check that this equals \(0.6366\dots\).
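
If you don't have toothpicks handy, a simulation gets you the same number (a quick sketch under the same assumptions as the derivation: unit-length toothpick, unit line spacing):

```python
# Monte Carlo estimate of the toothpick-drop hit probability 2/pi.
import math
import random

def buffon(drops):
    hits = 0
    for _ in range(drops):
        d = random.uniform(0.0, 0.5)           # midpoint to nearest line
        theta = random.uniform(0.0, math.pi)   # angle with the horizontal
        if d <= 0.5 * math.sin(theta):
            hits += 1
    return hits / drops

p = buffon(1_000_000)
print("estimated hit probability:", p)         # ~0.6366
print("estimate of pi from 2/p:  ", 2 / p)     # ~3.14
```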

This is certainly not the only place \(\pi\) shows up unexpectedly, nor is it the most efficient way to calculate \(\pi\).  Archimedes' method of exhaustion is, well, exhausting to carry out in practice and until a couple hundred years ago it was the way to go.  The discovery of infinite series that sum to things involving \(\pi\) has made the calculation of \(\pi\) much more tractable.  For example \[\frac{\pi}{4} = \sum_{n=0}^\infty \frac{(-1)^n}{2n+1}.\]  Or \[\frac{\pi^2}{6} = \sum_{n=1}^\infty \frac{1}{n^2}.\]  Or, (thanks Ramanujan) \[\frac{1}{\pi} = \frac{2\sqrt{2}}{9801} \sum_{n=0}^\infty\frac{(4n)!(1103+26390n)}{(n!)^4 396^{4n}}.\]
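
Here's a rough comparison of how quickly these three pin down \(\pi\) (my framing; the series are the three just displayed):

```python
# Compare convergence of three series for pi.
from math import pi, sqrt, factorial

def leibniz(N):        # pi/4 = sum_{n>=0} (-1)^n / (2n+1)
    return 4 * sum((-1) ** n / (2 * n + 1) for n in range(N))

def basel(N):          # pi^2/6 = sum_{n>=1} 1/n^2
    return sqrt(6 * sum(1 / n ** 2 for n in range(1, N + 1)))

def ramanujan(N):      # 1/pi = (2*sqrt(2)/9801) * sum_{n>=0} ...
    s = sum(factorial(4 * n) * (1103 + 26390 * n) /
            (factorial(n) ** 4 * 396 ** (4 * n)) for n in range(N))
    return 1 / (2 * sqrt(2) / 9801 * s)

for name, approx in (("Leibniz, 1000 terms", leibniz(1000)),
                     ("Basel, 1000 terms  ", basel(1000)),
                     ("Ramanujan, 2 terms ", ramanujan(2))):
    print(f"{name}  {approx:.12f}  error {abs(approx - pi):.1e}")
```

A thousand terms of the Leibniz series still leave an error of about \(10^{-3}\), while two terms of Ramanujan's already exhaust double precision.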

OK.  That's a lot of formulas for computing a number that is only special because it's related to circles.  There are plenty of interesting numbers \(e,\sqrt{2},\dots\) that are just as fascinating as \(\pi\) (more so, even?) but which don't get the same slavish devotion.  Why?  Probably just because of the circle thing--it's defined as a ratio but it's an irrational number (transcendental, even).  But sometimes, as the movie implies, this devotion pushes dangerously close to insanity.  It at least often devolves into numerology.  Superstition.  Finding patterns when they aren't there.  Something wicked this way comes... 

Epistemania

I remember sitting in eleventh grade English class one morning, second period after a late night flipping burgers at work, half-asleep with my head against the wall, discussing poetry.  This would have been American literature, and I have no idea what poem we were discussing, but at one point my teacher asked what the meaning of the poem was, and I, in full 16-year-old jackassery, said something like, "Who cares?  Maybe he didn't mean anything.  Maybe he just wrote it."

"Nice attitude, Kevin."

Yeah, well, I was 16.  But I think we can sometimes be guilty of "beating it with a hose to find out what it really means" (as former poet laureate Billy Collins put it).  And as we delved further into Borges this past week I began to wonder if we weren't doing just that.  I love Borges and his application of mathematics, but after a few hours of unraveling his use of the infinite many of us had a glazed look.  You know that 1,000 yard stare you get after flying from Seoul to Atlanta?  Not quite that bad, but close enough.

So, let's talk a bit more about The Library of Babel, and then maybe a little about The Aleph, and then move on to other things.  Putting aside the structure of the Library, which we never did settle on, and the number of distinct books in it, which is easy to calculate but impossible to comprehend, it remains to ask what it all means.  Even then, it is easy to get lost in infinite mathematical loops.  For example, there is talk of The Book, a catalog of all the books in the Library.  Let's denote this book by \({\mathbb B}\).  Here's a question: is \({\mathbb B}\) listed in \({\mathbb B}\)?  If \({\mathbb B}\) is a complete catalog of the books, and if \({\mathbb B}\) is in the library, then it must be listed in it. But there are too many books in the Library to be listed in a single book; that is, even if each book were represented by a single character in \({\mathbb B}\), it would follow that \({\mathbb B}\) must be broken into almost as many volumes as there are books in the library.  Meaning, almost every book in the library is part of The Book, and so what's the point of \({\mathbb B}\)?  This smacks of Russell's Paradox, which led to the development of the set of axioms we now use for standard set theory.  

So maybe \({\mathbb B}\) isn't in the Library, but then who can access it?  The first sentence tells us that the Library is the Universe, so is \({\mathbb B}\) God?  Can we ever find it?  How would we know?  At this point I'm reminded of the following passage from Kafka's Great Wall of China:   

Try with all your powers to understand the orders of the leadership, but only up to a certain limit—then stop thinking about them.
— Franz Kafka, The Great Wall of China

I will take Franz's advice and stop thinking about \({\mathbb B}\).  One final remark about The Library of Babel:  we really only need one book.  In fact, we only need this blog post, for it is every possible book in some language.  We may not know these languages because no one speaks them, but in some strange tongue this blog post is Moby-Dick, and in another it is The Hunt for Red October.  So perhaps we should give up our epistemania and simply take things for what they are.

As for The Aleph, the other Borges story we discussed, we see the same theme:  infinite regress as a subject of confusion.  The Aleph is a point in a Buenos Aires basement that contains all other points in the universe.  But then it also contains The Aleph which contains the universe which contains The Aleph which contains...  You get it. For me this story is more one of melancholy: the narrator (whose name is Borges) was in love with Beatriz, who died, and the narrative is more a reflection on how his memory of her is fading.  Personally, I think Beatriz is The Aleph.  Haven't we all seen the whole universe in another?  Isn't that the hope, anyway?  Melancholy gives way to hope gives way to melancholy gives way to... 

Borges y yo (y tú también)

Jorge Luis Borges, perhaps more than any other writer of his stature, weaves mathematics into the structure of his stories so completely that it can take an immense amount of analysis to unravel them.  I'm not entirely sure that Borges thought this was worthwhile; indeed, during interviews he often took gentle jabs at literary analysts who spent so much time wringing their hands over his work.  But it's so difficult to resist.  I mean, I dare you to read The Library of Babel and not get sucked into trying to figure out the structure of the thing.  At the beginning of class last week, I asked the students to spend a few minutes sketching what they thought the Library looked like.  Here are a few of their renditions (click on them to scroll).

You see lots of hexagons because Borges spends some time telling us the structure of the rooms in the Library:  each gallery is hexagonal, bookshelves line four walls, there are two free walls.  The following passage in the story says a lot, but leaves open plenty of room for interpretation:  One of the hexagon's free sides opens onto a narrow sort of vestibule, which in turn opens onto another gallery, identical to the first--identical in fact to all.  To the left and the right of the vestibule are two tiny compartments.  One is for sleeping, upright; the other is for satisfying one's fecal necessities.  Through this space, too, there passes a spiral staircase, which winds upward and downward into the remotest distance.

At first read, then, I immediately conclude that each of the two free sides opens to another hexagon.  Even this was disputed by some students.  Maybe there's nothing on the other free side, or maybe there's a bench for sitting to read, and all the hexagons wrap around the staircase, forming a sort of Tower of Babel shaped library.  Maybe.  If this is indeed the case, then each floor of the library would contain only finitely many cells, and I don't really think this is what Borges had in mind (or maybe he did--you never know). The sentence in Spanish isn't any clearer: Una de las caras libres da a un angosto zaguán, que desemboca en otra galería, idéntica a la primera y a todas. 

Even if you accept the premise that each of the free sides leads to another gallery, there's still a lot of ambiguity.  Just how "identical" are these cells?  If we mean the free sides are always opposite each other, then we get a particular structure:  the cells line up, extending infinitely along a line in each floor and then these rows stack on each other vertically.  Maybe.  But what if there is a staircase in only one of the corridors joining two cells?  Note that Borges isn't clear on this point--una de las caras libres...  He doesn't say solamente una, which would mean exactly one staircase.  If there is a staircase in each passage, then the geometry of the Library is fixed--each floor must look like all the other floors.  But if there is only one staircase for each pair of cells, then more interesting things can happen--we could have a different layout for each floor.

Also, if we don't insist that the free sides are always in the same positions in each cell, then we can get all sorts of labyrinthine structures on each floor.  And, these labyrinths can be so elaborate that two cells that share a wall can be arbitrarily far apart in the sense that a librarian would have to walk through a huge number of galleries to get from one to the other (here, "huge" means that for any positive integer \( n\), there are adjacent galleries which require a librarian to pass through at least \( n\) cells to get from one to the other).  This can be ok if we are in the one staircase to a pair model because we may then be able to go up or down a few floors to make our way to an adjacent cell, thereby skipping the labyrinth on a particular floor.

Wait a minute.  We haven't even begun to discuss what this story is about.  We are arguing about the structure of the damn Library.  I later read this on Twitter from one of the students in the class:

when you spend over an hour talking about hexagons in a class and it turns into a heated discussion...
— @studentin2+2=5

Yeah, we did just that.  Hexagons \(\Longrightarrow\) intense discussion.  Could a mathematician ask for more?

We'll get around to the meaning of this story in class this week.  For now, let's think about how many books are in the Library.  Borges tells us that each book has \(410\) pages, each of which has \(40\) lines of \(80\) characters.  He also tells us that the alphabet consists of \(22\) characters along with a space, period, and comma.  That makes \(25\) orthographic characters.  We are told the Library is complete; that is, every possible book is in it.  This is a finite number.  In fact, each book consists of \(410\times 40\times 80 = 1,312,000\) characters, and since each of these may be any of the \(25\) possibilities, there are \[N=25^{1,312,000} \approx 10^{1,834,097}\] distinct books.  This is an enormous number (although, next to infinity it is effectively zero).  To give you some perspective on how large \(N\) is, if the known universe were filled with nothing but protons (and nothing else, no blank space) it would only contain about \(10^{126}\) of them. So the Library can't exist in our universe; there just isn't room.
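You don't have to take my word for the arithmetic; a couple of lines of Python (a sketch, with the book dimensions hard-coded from the story) confirm the count:

```python
import math

chars_per_book = 410 * 40 * 80    # 1,312,000 characters in each book
alphabet_size = 25                # 22 letters plus space, period, comma

# number of decimal digits of N = 25**1,312,000, via logarithms
exponent = chars_per_book * math.log10(alphabet_size)
print(f"N is about 10^{exponent:,.0f}")    # N is about 10^1,834,097

# for scale, against the proton-packed universe mentioned above
print(f"compare 10^126: N dwarfs it by a factor of about 10^{exponent - 126:,.0f}")
```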

There are all sorts of odd books in the Library.  There is a completely blank book.  There is a book that is blank except for a period in the middle of page 193.  There is a book consisting of nothing but the letter x.  There are \(1,312,000\) books that are blank except for a single letter x, one for each position it could occupy.  The tweet quoted above appears exactly as it is in \(25^{1,311,898}\) of the books in the Library.  This blog post appears in a huge number of the books (if we write out the numbers and ignore the improper punctuation), in every language spoken on Earth (if transcribed into the alphabet), and in any language spoken on any other planet (do you really think we're alone in the universe?).

Question: how would you find a particular book in the Library?  Is there any hope?  Maybe it's enough to know it's there, just like mathematicians are often satisfied with existence proofs.  In any case, it's not hard to see that a given librarian may not be able to reach a particular book in his lifetime, even if he knows where it is.  Is this cause for despair?

I'll save the philosophy for next time.  For now, one final remark.  William Goldbloom Bloch has written a wonderful book, The Unimaginable Mathematics of Borges' Library of Babel, that talks about a lot of this mathematics in far greater detail.  I suggest picking it up if you are so inclined.  Or you can walk the Library for yourself, seeking out its meaning.

Franz and Georg

As far as I know, Kafka and Cantor never met, and there is no reason to believe they did.  Still, I can't help wondering if Franz knew about Georg's work, even though he claimed to have great difficulties with all things scientific and mathematical.  Here's why:  Kafka's Great Wall of China, which in typical Kafka fashion is about all sorts of things and kind of goes nowhere, has elements that immediately make me think of Cantor's work, particularly the so-called Cantor set.

The Cantor set \(C\) is one of those mathematical curiosities that we like to trot out to blow our students' minds.  It is constructed as follows.  Start with the closed unit interval \([0,1]\).  First remove the open middle third \( (1/3,2/3)\).  Then remove the open middle thirds from the remaining two intervals: \((1/9,2/9)\) and \((7/9,8/9)\).  Then remove the open middle thirds from the remaining four intervals.  Iterate this process, at the \(n\)th stage removing \(2^{n-1}\) intervals of length \(1/3^n\).  The set \(C\) is what remains at the "end." 
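If you want to watch the middle thirds disappear, here's a small sketch of the construction using exact rational arithmetic:

```python
from fractions import Fraction

def remove_middle_thirds(intervals):
    """One stage of the construction: split each closed interval in three
    and keep the outer two pieces."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))
        out.append((b - third, b))
    return out

stage = [(Fraction(0), Fraction(1))]
for n in range(3):
    stage = remove_middle_thirds(stage)
    print(f"stage {n + 1}: {len(stage)} intervals of length {stage[0][1] - stage[0][0]}")
# stage 1: 2 intervals of length 1/3
# stage 2: 4 intervals of length 1/9
# stage 3: 8 intervals of length 1/27
```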

The first claim about \(C\) is that it is, remarkably, uncountable.  The way to prove this is to use Cantor's diagonal argument (I wrote about this in the previous entry).  Here goes:  let's first abandon decimal notation and instead represent each number \(x\) in the interval \([0,1]\) using its ternary expansion:  \[ x=\frac{a_1}{3} + \frac{a_2}{3^2} + \frac{a_3}{3^3} + \cdots\] where each \(a_i = 0,1,\,\text{or}\, 2\). Now, observe that the elements of \(C\) are precisely those real numbers in the interval \([0,1]\) whose ternary expansions have all \(a_i = 0\,\text{or}\, 2\). (Aside: note that \(1/3\) is in \(C\).  Its ternary expansion is \(0.1000\dots\), so you might think that I've told you a lie.  But note that we also have \(1/3 = 0.02222\dots\), just like in decimal notation \( 0.9999999\dots = 1\).) If we have a bijection \(f:{\mathbb N}\to C\), then we construct a number \(x\) by taking the \(i\)th digit of \(x\) to be \(2\) if the \(i\)th digit of \(f(i)\) is \(0\) and \(0\) if the \(i\)th digit is \(2\).  Then \(x\) isn't in the image of \(f\), contradiction.

But, in typical Cantorian fashion, \(C\) has another weird property.  Let's add up the lengths of the intervals we remove from \([0,1]\) to get \(C\): \[\frac{1}{3} + \frac{2}{9} + \frac{4}{27} +\cdots +\frac{2^{n-1}}{3^n} +\cdots = \frac{1/3}{1-(2/3)} = 1.\]  You read that correctly:  we've removed "everything" yet what remains is an uncountable dust scattered throughout the unit interval.  

Compare this with Kafka's description of how the Great Wall was built:

The Great Wall of China was finished at its northernmost location. The construction work moved up from the south-east and south-west and joined at this point. The system of building in sections was also followed on a small scale within the two great armies of workers, the eastern and western. It was carried out in the following manner: groups of about twenty workers were formed, each of which had to take on a section of the wall, about five hundred metres. A neighbouring group then built a wall of similar length to meet it. But afterwards, when the sections were fully joined, construction was not continued on any further at the end of this thousand-metre section. Instead the groups of workers were shipped off again to build the wall in completely different regions. Naturally, with this method many large gaps arose, which were filled in only gradually and slowly, many of them not until after it had already been reported that the building of the wall was complete. In fact, there are said to be gaps which have never been built in at all, although that’s merely an assertion which probably belongs among the many legends which have arisen about the structure and which, for individual people at least, are impossible to prove with their own eyes and according to their own standards, because the structure is so immense.

Imagine then, how this would look from space (you can see the Wall from there, or so the fraudsters at NASA would have us believe).  In the early days of construction, you wouldn't be able to see it at all--it would be scattered, barely-visible segments, much like the Cantor set.  In fact, it's possible to build the Cantor set in this way, via the following process. Define two functions \(F_0\) and \(F_1\) on the unit interval \([0,1]\) by \[ F_0(x) = \frac{1}{3}x\] \[F_1(x) = \frac{1}{3}x + \frac{2}{3}.\]  Now, start with a number \(x_0\) in the open interval \((0,1)\) and iteratively apply one of the functions \(F_0\) or \(F_1\) randomly.  The map \(F_0\) takes a point two-thirds of the way toward \(0\) and \(F_1\) takes a point two-thirds of the way toward \(1\).  So, if a point is in, say \((1/3,2/3)\), then both \(F_0\) and \(F_1\) take it into one of the complementary intervals.  If a point is in \((1/9,2/9)\) then it maps to either \((1/27,2/27)\) or \((19/27,20/27)\), and so on.  No matter what we do, by iterating these maps indefinitely we end up at a point in \(C\). 
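This random iteration is sometimes called the chaos game, and it's easy to play on a computer.  A sketch: since \(F_0\) prepends a \(0\) and \(F_1\) prepends a \(2\) to a point's ternary expansion, the leading digits of the iterate settle into \(0\)s and \(2\)s, which is exactly membership in \(C\).

```python
import random

def f0(x): return x / 3             # two-thirds of the way toward 0
def f1(x): return x / 3 + 2 / 3     # two-thirds of the way toward 1

def ternary(x, digits=12):
    """Leading ternary digits of x in [0, 1)."""
    out = []
    for _ in range(digits):
        x *= 3
        out.append(int(x))
        x -= int(x)
    return "0." + "".join(map(str, out))

x = random.random()
for _ in range(40):                 # iterate F0/F1 at random
    x = random.choice((f0, f1))(x)
print(ternary(x))                   # only 0s and 2s: we've landed (nearly) in C
```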

Now, this isn't really the right metaphor since the Wall is getting filled in, while the Cantor set is built by chipping away, but it sure feels like the same idea.  The workers getting shipped from one location to another, seemingly at random, to build this thing that no one individual can verify the existence of; points moving around the interval until they settle at points in \(C\).  Beautiful and unimaginable all at once.

There Is An Infinite Amount of Hope, Just Not For Us

The fact is that every writer creates his own precursors.
— Jorge Luis Borges, in Kafka and His Precursors

Borges points out that Zeno's parable of Achilles and the tortoise, which neatly encapsulates his (Zeno's) paradox, is Kafkaesque.  That is, without Kafka, there is no Zeno.  Even the title of this post, a direct quote from Kafka, is Kafkaesque.  Does he mean that there is an infinite amount of "hope" in the world, but we can't have it?  Or does he mean that there's plenty of hope in the world, but that we are hopelessly doomed as a species?  Or both? Or neither?

Kafka's work has also been referred to as the "poetics of non-arrival."  Many of his characters fail to reach a destination (or wake up as giant cockroaches and get starved by their families--same diff).  In class this week we read Before the Law, whose very title launches a series of questions.  "Before" in what sense?  Temporally? Spatially? Both? Neither? And which "law" does he mean?  Religious? Secular? Moral? Scientific? All? None?  You get the point. 

In case you haven't read the story, and you should since it's only a page long, here's a summary.  A man comes from the country to see the law.  He encounters a gatekeeper who tells him he can't enter at this time but it might be possible later.  He also implies that even if the man does pass through this door that there is a succession of doors and gatekeepers, each more terrible than the last, so that in some sense he may as well not bother.  Well, the man just sits there for years.  He tries bribes.  He asks the gatekeeper lots of questions.  He grows old and even resorts to imploring the fleas in the gatekeeper's fur collar to answer his pleas.  In the end, he dies having never passed the first door (and there's a version of Zeno's paradox that goes this way--being unable to take the first step).  Non-arrival, indeed.

This class is about mathematics as metaphor in literature, and many of Kafka's works make use of the infinite in one form or another (more on that next week).  But what about literature as metaphor for mathematics?  If ever there were a Kafkaesque branch of math it would have to be Cantor's work on the infinite.  Before Cantor, everyone more or less assumed that infinity is infinity; that is, there is only one level of infinity, or more accurately that all infinite sets have the same cardinality.  Cantor demonstrated rather dramatically that this is false.  In fact, a consequence of his work is that there is an infinity of infinities, each larger than the last.

If you've never thought about this before it can be really counterintuitive and difficult to accept, but I imagine Kafka, who claimed to have great difficulties with all things scientific, would have appreciated the mathematical abyss Cantor opened up for us.  I do not use the term abyss lightly--Cantor was attacked and mocked by his contemporaries, often viciously, and this fueled his depression and ultimately led to multiple hospitalizations for treatment; he died poor and malnourished in a sanatorium in 1918.  Poincaré referred to Cantor's work as a "grave disease" infecting mathematics; Wittgenstein dismissed it as "utter nonsense."  But Cantor's ideas survived and are considered fundamental to mathematics today.

So just how weird are we talking here?  First a question: What is an infinite set? A set that isn't finite, right?  OK.  Definition 1. A function \(f:A\to B\) is a bijection if the following two conditions hold: (a) \( f\) is injective; that is, if \(a_1\) and \(a_2\) are distinct elements of \(A\) then \(f(a_1)\ne f(a_2)\); and (b) \(f\) is surjective; that is, if \(b\in B\) there is some \(a\in A\) with \(f(a)=b\).  Definition 2.  A set \(S\) is finite if there is a bijection \(f:S\to\{1,2,\dots ,n\}\) for some \(n\ge 0\).  In this case, we say that \(n\) is the cardinality of \(S\).  This notion of size is well-defined, but that requires (a simple) proof.

Now, let's denote by \({\mathbb N}\) the set of natural numbers, \(\{0,1,2,3,\dots\}\), where the ... means go on forever.  These are the numbers we use to count and we know there are infinitely many.  In some sense, this is the simplest infinite set there is.  Definition 3.  A set \(S\) is countably infinite if there is a bijection \(f:S\to {\mathbb N}\).  You might think that every infinite set is countable, because, you know, infinity is infinity, but you'd be wrong (more on that below).  For now, here are some examples of countably infinite sets.  The whole set of integers \({\mathbb Z} = \{\dots ,-2,-1,0,1,2,\dots \}\) is countable.  Now, wait, you say, there are clearly more integers than natural numbers, twice as many in fact.  But all I have to do is produce a bijection.  Here's one:  \(f:{\mathbb Z}\to {\mathbb N}\) defined by \(f(n) = 2n\) for \(n\ge 0\) and \(f(n) = -2n-1\) for \(n<0\).  You can check that this works.  The set \(E\) of even natural numbers is countable:  take \(f(n) = n/2\) for \(n\ge 0\).  Huh?  There are only half as many even numbers as there are all numbers.  So, we already see that infinity can be weird.

It gets weirder.  Let \({\mathbb Q}\) be the set of rational numbers; that is, fractions of the form \(a/b\) where \(a,b\in {\mathbb Z}\), \(b\ne 0\).  Of course, there are duplicates when we write them in this form, but we could insist that \(a\) and \(b\) are relatively prime.  This set is countable, too. There are clever diagrams that prove this (try looking here, for example), but I will simply list the rationals:  \[0,1,-1,\frac{1}{2},-\frac{1}{2},2,-2,\frac{1}{3},-\frac{1}{3},\frac{2}{3},-\frac{2}{3},3,-3,\dots\]  It should be reasonably clear how to continue this pattern in such a way that every rational number ends up on the list, and so this is a bijection between \({\mathbb Q}\) and \({\mathbb N}\).  Weirder still:  the set of algebraic numbers is countable.  These are the numbers which are solutions to polynomial equations with integer coefficients.  You might think there are a lot of these (well, yeah, there are infinitely many), but they're countable.
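If the diagrams don't convince you, a short program might.  Here's a sketch of one such enumeration; it orders the fractions by the height \(|a|+b\), which differs slightly from my list above, but every rational appears exactly once:

```python
from fractions import Fraction
from itertools import islice
from math import gcd

def rationals():
    """Yield every rational exactly once: 0 first, then all a/b in lowest
    terms grouped by height h = |a| + b, positive before negative."""
    yield Fraction(0)
    h = 2
    while True:
        for a in range(1, h):
            b = h - a
            if gcd(a, b) == 1:
                yield Fraction(a, b)
                yield Fraction(-a, b)
        h += 1

print([str(q) for q in islice(rationals(), 13)])
# ['0', '1', '-1', '1/2', '-1/2', '2', '-2', '1/3', '-1/3', '3', '-3', '1/4', '-1/4']
```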

OK.  So, what about an uncountable set?  I claim the set of real numbers \({\mathbb R}\) is uncountable.  To prove this, I will show (a) \({\mathbb R}\) has the same cardinality as the open interval \( (0,1)\), and (b) \( (0,1)\) is uncountable.  The first one is easy; here is a bijection between \( (0,1)\) and \({\mathbb R}\): \[f(x) = \tan\biggl(\pi\biggl(x-\frac{1}{2}\biggr)\biggr).\]  To prove that the interval \((0,1)\) is uncountable, we use Cantor's Diagonalization Argument.

Suppose we had a bijection \(f:{\mathbb N}\to (0,1)\) (we can run our bijections in either direction).  That would mean we could put the numbers in \((0,1)\) in a list (using decimal expansions of the numbers): \[0.a_1a_2a_3a_4\dots \] \[0.b_1b_2b_3b_4\dots \] \[0.c_1c_2c_3c_4\dots \] \[0.d_1d_2d_3d_4\dots \] \[\vdots\]  Consider the following number \( x\):  the \(i\)th digit of \(x\) is \(1 + \text{the}\, i\text{th digit of}\, f(i)\) (here if the \(i\)th digit of \(f(i)\) is \(9\), this means \(0\)).  Now, ask yourself:  is \(x\) on this list?  It can't be the first number since it differs in the first digit; it can't be the second number since it differs in the second digit; it can't be the third or the fourth or the \(i\)th for any \(i\) since it differs from \(f(i)\) in at least the \(i\)th spot.  So \(x\) is not on the list; that is, our function \(f\) is not surjective, a contradiction.  So no such bijection exists and \((0,1)\) is uncountable.  (One small wrinkle: some reals have two decimal expansions, like \(0.4999\dots = 0.5\), so to be completely careful one usually chooses the new digits to avoid \(0\) and \(9\); the argument is otherwise the same.)

Now, you might say, well, we can fix that.  Just bump everything on the list down one spot and add \(x\) at the beginning.  But then we could just do it again to construct a new number that isn't on the list.  And so on, and so on, and so on.  So there's an infinite amount of hope (to solve this), just not for us.

Cantor constructed all sorts of weird stuff, and I'll say more about that next week in relation to Kafka's Building the Great Wall of China.  For now, though, let me end by showing how there is an infinity of infinities.  This idea has been around for a long time: recall the Hindu story of the earth being held up by an elephant who is standing on a turtle.  But what's the turtle standing on?  Well, it's turtles all the way down.  Or Bertrand Russell's arguments against the existence of God:  a standard logical argument is that everything that exists has a cause; the earth exists so it has a cause; that cause is God.  But Russell pointed out that God would then have to have a cause, a meta-God of sorts, which would also have a cause (a meta-meta-God) and so on, producing an infinite string of \(\text{meta}^n\)-Gods, each more powerful than the last (Kafka squeals with delight).  The trick for producing ever larger sets is the power set construction.  It goes like this:  let \(A\) be any set.  Denote by \(P(A)\) the set of all subsets of \(A\).  It is clear that the cardinality of \(P(A)\) is at least that of \(A\) since we may find an injection of \(A\) into \(P(A)\) (the function \(f(a) = \{a\}\) will do).  But any such map cannot be a surjection.  The trick is to assume you have a bijection \(f:A\to P(A)\) and then build a subset of \(A\) which can't be in the image of \(f\), just like Cantor's Diagonalization Argument.  Since I've assigned this as a homework problem, I won't divulge the answer here, but I will say there is some relation to Russell's Paradox.
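For finite sets you can watch the power set pull ahead; here's a quick sketch (which gives nothing of the homework away, since the interesting case is the infinite one):

```python
from itertools import chain, combinations

def power_set(a):
    """All subsets of a finite set A: there are 2^|A| of them."""
    s = list(a)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

for n in range(5):
    print(n, len(power_set(range(n))))   # 1, 2, 4, 8, 16 -- P(A) always wins
```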

Anyway, assuming this, we now see how we can get bigger and bigger infinite sets.  Start with the natural numbers \({\mathbb N}\) and then iterate the power set construction.  The set \(P({\mathbb N})\) must be uncountable and the set \(P(P({\mathbb N}))\) is larger still.  This leads to the whole area of transfinite arithmetic, which I don't know much about and won't try to explain, but I think you'd agree must be pretty wild. 

If Borges is right that each writer creates his precursors, then I think we have to count Cantor among them.


Zeno, Limits, and Arguing About Numbers

One of my favorite things about mathematics is that it's its own insular world in many ways (note the correct it's-its usage there; as an aside I think passing an it's/its, your/you're, and their/they're/there test should be a high school graduation requirement, but I digress).  As I mentioned to my colleague and fabulous co-instructor Eric, we make choices in mathematics all the time.  They are not arbitrary, but we do make them, and we try to do so in a way that's as intuitive and clear as possible.  The first example is Euclid's axioms for plane geometry, which we've already seen and which we know cause some trouble once you try to use the parallel postulate.  It just gets more exotic from there, but at all times it is important to remember that mathematics is based on axioms and definitions.  Once we define a concept, we then try to prove things about it.  Then we might worry about whether it has a practical application or not (I said might; G.H. Hardy  famously abhorred applications of mathematics).

Zeno's Paradox, the one where you can never get where you're going because you first have to go halfway and then half the remaining distance and then half that remaining distance and so on forever, is evident in all sorts of literary works.  Woolf's To the Lighthouse has it embedded in there a bit--will they ever get to the lighthouse? Will Lily finish her painting? (Yes, and yes, as it turns out.)  But it's more blatant in Kafka's Before the Law, which we read last week in class.  The man comes from the country to see the "law," whatever that is.  There is a gatekeeper who will not let him pass at the moment, but he informs the man that beyond the gate there is another, with its own gatekeeper, and that beyond that gate is another whose gatekeeper is so fearsome that even he (the gatekeeper) cannot bear to look at him.  So, we are led to conclude that there are infinitely many gates and gatekeepers, each more powerful than his predecessor.  What would such a setup look like?  An infinite string of gates like this?

an infinite string of doors?

Or maybe it's more like an infinite collection of concentric circles:

bullseye!

Question:  can we ever reach the law?  Which law are we even talking about?  Does it even exist?  Of course, the man never even gets past the first gate (this is Kafka, after all) and dies waiting, so we never discover the structure of the building which houses the law.

So where's the math here?  Well, it's all in the question of how to resolve Zeno's Paradox.  This leads to the idea of limit, developed by Bolzano, Cauchy, and Weierstrass over the course of the nineteenth century.  Finding the limit of a sequence \(a_1,a_2,a_3,\dots\) amounts to playing the following adversarial game:  I claim the sequence converges to some number \(L\).  You then tell me how close to \(L\) you need the terms of the sequence to get.  Then I find a positive integer \(N\) so that if I go beyond the \(N\)th term of the sequence I'm within your tolerance.  In math: \[ \lim_{n\to\infty} a_n = L\] if for every \(\varepsilon >0\) there exists an \(N\) so that if \(n\ge N\), we have \[ |a_n-L| < \varepsilon.\]  If you imagine plotting the values of the sequence (after all, a sequence is just a real-valued function with domain the set of natural numbers), then this definition says that if I go far enough out, all the plotted points live inside the horizontal strip \(L-\varepsilon < y < L+\varepsilon\).

But we still haven't gotten to Zeno (will we get there?).  What we are trying to do there is add up an infinite string of numbers \[\frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n}+\cdots\] and the problem is that we don't know how to do that.  Can we?  This is where the mathematician gets to make a choice.  Here's how we deal with infinite sums:  we can definitely add up a finite collection of numbers,  so given an infinite sum \(a_1+a_2+\cdots +a_n+\cdots\) we define the \(k\)th partial sum to be \[s_k = a_1+a_2+\cdots + a_k\] and then say \[ \sum_{n=1}^\infty a_n = S\quad \text{if} \quad \lim_{k\to\infty} s_k = S.\]  So, in the case of Zeno's sum, we have \[s_k = \frac{1}{2} + \frac{1}{4}+\cdots + \frac{1}{2^k} = 1-\frac{1}{2^k}\] (the last equality should be pretty obvious to you--think about how far you are from the end if you've gone \(k\) steps).  This sequence clearly has limit \(1\), et voilà, we've resolved the paradox.

Or have we?  Our students weren't so sure.  What we've really done is define the paradox away.  That is, by defining what we mean by an infinite sum, we are able to demonstrate that it makes sense to add these powers of \( 2\) and that the answer is \(1\).  But we haven't really resolved it philosophically, have we?  Alas.

But that's not what mathematicians do.  The definition above is extremely useful and allows us to make sense of all sorts of interesting things like the natural exponential function, trig functions, Fourier series, etc.  We'll trade philosophical quandaries for useful mathematics any day.
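In that spirit, here's the adversarial game played out in code on Zeno's partial sums, a quick sketch: you hand over a tolerance \(\varepsilon\) and get back an \(N\) that works.

```python
def s(k):
    """Zeno's k-th partial sum: 1/2 + 1/4 + ... + 1/2^k = 1 - 1/2^k."""
    return sum(1 / 2 ** n for n in range(1, k + 1))

def n_for(eps):
    """Smallest N with |s(k) - 1| < eps for every k >= N."""
    n = 1
    while 2.0 ** (-n) >= eps:
        n += 1
    return n

for k in (1, 4, 10, 30):
    print(k, s(k))            # creeping up on 1
print(n_for(1e-6))            # 20, since 1/2^20 < 10^-6 <= 1/2^19
```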

But here's one more fun thing to talk about, one which invariably spawns arguments.  A geometric series is an infinite series of the form \[ a+ ar +ar^2 + \cdots + ar^n +\cdots = \sum_{n=1}^\infty ar^{n-1}.\]  The number \(r\) is called the ratio of the series.  We can actually find a formula for the sum of such a series.  The trick is to consider the \(k\)th partial sum \(s_k = a+ar+\cdots +ar^{k-1}\), then multiply it by \(r\) to get \(rs_k = ar+ar^2+\cdots +ar^{k-1} + ar^k\).  Subtracting the latter from the former and then dividing by \(1-r\) (so we're assuming \(r\ne 1\)) we get \[s_k = \frac{a(1-r^k)}{1-r}.\]  Now, if \(|r|>1\), this sequence has no limit since the term \(r^k\) goes off to infinity.  If \(r=1\) the series clearly diverges since I'm just adding \(a\) to itself infinitely many times (assume \(a\ne 0\)), and if \(r=-1\) the partial sums just bounce between \(a\) and \(0\), so again there is no limit.  But, if \(|r|<1\), the term \(r^k\to 0\) and so we get the formula \[\sum_{n=1}^\infty ar^{n-1} = \frac{a}{1-r}.\] We'll come back to this later in the course when we talk more about infinity and Cantor's work, but for now, let's have an argument.

What number is this: \[0.99999999\dots\]  Note that any repeating decimal represents a geometric series.  In this case, we have \[0.99999999\dots = \frac{9}{10} + \frac{9}{10^2} +\cdots +\frac{9}{10^n} +\cdots\] and this is a geometric series with first term \(9/10\) and \(r=1/10\).  The sum is then \[\frac{9/10}{1-1/10} = \frac{9/10}{9/10} = 1.\]  Thus, we see that \[0.99999999\dots = 1.\]
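If you'd like to watch this with exact arithmetic rather than trust my algebra, here's a sketch using Python's rational numbers; the gap between the \(k\)th partial sum and \(1\) is exactly \(1/10^k\), and the full sum closes it.

```python
from fractions import Fraction

a, r = Fraction(9, 10), Fraction(1, 10)

def partial_sum(k):
    """s_k = a(1 - r^k)/(1 - r), computed exactly."""
    return a * (1 - r ** k) / (1 - r)

for k in (1, 3, 6):
    s = partial_sum(k)
    print(k, s, 1 - s)        # gaps: 1/10, 1/1000, 1/1000000
print(a / (1 - r))            # the sum of the whole series: exactly 1
```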

Wait.  How can that be?  This is where the fight begins, and if you think about it, this is just a rephrasing of Zeno's paradox, where instead of going half the distance at each step, we go \(9/10\) the distance (same difference, just different sized steps).  Well, I just proved to you that the infinite sum is \(1\).  But wait, you say, that's just in the limit; it never actually equals \(1\).  But, I say, that's the definition of an infinite sum and the calculation is correct.  But, you say, that number has to be less than \(1\).  And round and round we go.  OK, I say, here's another proof.  Let \(x=0.9999999\dots\). Then \(10x = 9.99999999\dots\) and then we see that \[9x= 10x - x = 9.999999999\dots - 0.999999999\dots = 9\] from which it follows that \(x=1\).  You can't really argue with this logic.  I didn't use limits or the definition of an infinite sum.  I just did some algebra.  I don't know, you say, something still seems fishy...

Well, ok, how about this one, which I learned from my high school math teacher, Mrs. Ruth Helton. Note the following pattern \[\frac{1}{9} = 0.111111111\dots \]   \[\frac{2}{9} = 0.222222222\dots \] \[\vdots\] \[\frac{8}{9} = 0.8888888888\dots\]  So we must have \[\frac{9}{9} = 0.9999999999\dots,\] right?  I'm being facetious, but you have to admit that it's a good heuristic.

These two numbers really are the same, but it comes down to what we mean by "number."  We all understand what a natural number is because we use them to count.  It's then not too hard to get to rational numbers because we understand how to divide wholes up into parts.  We understand negative numbers because we have all owed someone money at some point.  But then we reach the question of what an arbitrary real number is, say \(\sqrt{2}\).  It is not a rational number (a fact which allegedly got its discoverer killed by the Pythagoreans), yet we know it exists since we can construct an isosceles right triangle with legs of length \(1\), whose hypotenuse has length \(\sqrt{2}\).  More generally, how do we define the real numbers?  That's a rather complicated question, one which we won't discuss here, but which more or less comes down to approximating any number by a sequence of rationals (truncate the decimal expansion of the number at each place; these truncations are all rational).

So, that's that for this week.  Up next, more Kafka and more infinity.


"Women Can't Write; Women Can't Paint"

How many times do you think Virginia Woolf heard that? Sexism was rampant enough in the early 20th century (luckily, we're past all that now, right?) that it was difficult for a woman to have a career as a novelist.  Add in the modernist style she used and it's a wonder that Woolf's work saw the light of day.

First, a confession.  Before picking up To the Lighthouse, I had never read any of Woolf's novels and, frankly, I was never a fan of modernist literature (Joyce, Faulkner, etc.).  I've read Dubliners, and in a fit of youthful bravado tried to read Ulysses once (I think I finished 20 pages).  About ten years ago I gave The Sound and the Fury a shot (read the first chapter, I think).  So my track record here is spotty at best and my initial impression as I waded through the first few pages of Lighthouse was one of, let's say, skepticism.  The nonlinear narrative, the near stream-of-consciousness language, the lack of action--where's the story? 

Which leads us to the question of what the point of literature is.  And by "literature" I don't mean mere fiction.  The point of, say, a Tom Clancy novel is entertainment.  It's fine to read as a way to pass time on airplanes, but we don't really learn anything about the human condition from it.  Capital-L literature, however, reveals deep truths about humanity and its place in the world.  As such, it demands more from its readers.  As I slogged through the opening scene--Mrs. Ramsay knitting socks for the lighthouse keeper's son, Lily Briscoe working on her painting, Mr. Ramsay lost in thought and grumpy as usual--I found myself drifting.  Losing my place.  Working hard to see what was even happening (answer:  not much).  Will they go to the lighthouse tomorrow?  No, says Mr. Ramsay, the weather will be no good and the sea will be rough.  James, sitting at his mother's knee disappointed, wanting to put a knife in his father's back.  Lily getting her painting critiqued by Mr. Bankes, who uses his penknife to point at things on the canvas condescendingly.  Andrew and Minta: where are they?  Why haven't they come back?  Then, hey, here they are.  But they're late for dinner, which we get through Mrs. Ramsay's view, with idle conversation and a lot of talk about the bowl of fruit on the table.  And man, Mr. Ramsay is quite the needy sensitive academic, isn't he?

But wait.  Maybe I'm a bit like Mr. Ramsay.  Not in the needing people to tell me how important my work is, and not in the obsessed with leaving a legacy way, but in the hyper-aware of mortality, taking myself too seriously way.  And then I see that, yes, this is Capital-L literature and I am learning something about the human condition, and I've spent days just like this one, at the sea even, with my wife's family and not much happening but yet it's everything that life is; the children playing in the surf; the adults sitting on the porch reading, watching the waves, playing the guitar; and at night after dinner watching the moon rise on the horizon, drinking a cold beer; running with the dog in the sand.  No lighthouse, but maybe tomorrow we will go to the inlet to look for shells and shark teeth.

So, at some point I decided that I do like this book.  In class, feelings were mixed.  One student hated it and said so.  Others were tepid at best.  Before class I overheard a student saying that she had heard that this book is better when you're older, and I can see that.  I'm not sure how much I would have liked or understood To the Lighthouse when I was 21.  Or 25.  Or 35, even.  Which leads to another question:  do we have to read it all when we're so young?  I didn't read Moby-Dick until I was past 40, and maybe that's right.

Anyway, this is supposed to be a class about mathematics and literature, so let's get to that.  Obviously, there's a lot of nonlinearity and chaos in this book's narrative structure.  There's the uncertainty of measurement--Mrs. Ramsay is constantly checking the length of the sock she's knitting, for example.  Lily's painting will embody some of this eventually; by the end, it has gone from a fairly standard impressionist landscape to a cubist work in which Mrs. Ramsay is a blurry triangle.  There's also the trip to the lighthouse as a metaphor for the infinite, a sort of Zeno's Paradox made concrete.  But what we spent most of the math time on was the Principle of Mathematical Induction (PMI). 

Question: Can you knock down an infinite row of dominoes?  In essence, this is what the PMI is about.  There are all sorts of philosophical problems with the question, but induction is a useful proof technique when one wants to show that a statement holds for all positive integers.  After telling the class the (probably apocryphal) story about Gauss and adding up the first hundred positive integers (answer: \( 5050 \)), I gave an induction proof for the formula for adding up the first \( n \) squares:  \[ 1^2 + 2^2 +\cdots + n^2 = \frac{n(n+1)(2n+1)}{6}.\] Induction works like this:  first prove that your proposed statement holds in some base case, usually \( n=1\) but it could be any integer; then, assuming the result is true for \( n\), prove it holds for \( n+1\).  In the domino analogy: you show that you can knock down the first domino, and that whenever the \(n\)th domino falls it knocks down the \((n+1)\)st.  You may then conclude that the result is true for all positive integers; that is, all the dominoes fall.
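A computer check is no substitute for the induction proof, of course, but it's a nice sanity test of the formula; a quick sketch:

```python
def sum_of_squares(n):
    return sum(i * i for i in range(1, n + 1))

def closed_form(n):
    return n * (n + 1) * (2 * n + 1) // 6

# not a proof (induction does that), but reassuring all the same
assert all(sum_of_squares(n) == closed_form(n) for n in range(1, 1001))
print(closed_form(100))   # 338350, the sum of the first hundred squares
```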

Why bring up induction?  Well, Mr. Ramsay is a philosopher and there is a stretch in the narrative where he is thinking about his accomplishments. 

For if thought is like the keyboard of a piano, divided into so many notes, or like the alphabet is ranged in twenty-six letters all in order, then his splendid mind had no sort of difficulty in running over those letters one by one, firmly and accurately, until it had reached, say, the letter Q. He reached Q. Very few people in the whole of England ever reach Q. Here, stopping for one moment by the stone urn which held the geraniums, he saw, but now far, far away, like children picking up shells, divinely innocent and occupied with little trifles at their feet and somehow entirely defenceless against a doom which he perceived, his wife and son, together, in the window. They needed his protection; he gave it them. But after Q? What comes next? After Q there are a number of letters the last of which is scarcely visible to mortal eyes, but glimmers red in the distance. Z is only reached once by one man in a generation. Still, if he could reach R it would be something. Here at least was Q. He dug his heels in at Q. Q he was sure of. Q he could demonstrate. If Q then is Q—R—. Here he knocked his pipe out, with two or three resonant taps on the handle of the urn, and proceeded. “Then R ...” He braced himself. He clenched himself.

Qualities that would have saved a ship’s company exposed on a broiling sea with six biscuits and a flask of water—endurance and justice, foresight, devotion, skill, came to his help. R is then—what is R?

A shutter, like the leathern eyelid of a lizard, flickered over the intensity of his gaze and obscured the letter R. In that flash of darkness he heard people saying—he was a failure—that R was beyond him. He would never reach R. On to R, once more. R—

So, he's trying to knock down dominoes, and he can't get to the \(18\)th (Hebrew numerology fact pointed out by a student in the class:  R is the eighteenth letter of the alphabet, and \(18\) means "life"; why did Woolf choose "R"? Ramsay? Reality?).  This also opened up a discussion of symbolic logic and how these systems are built.  I even drew a truth table on the board.  Good stuff.

But, we're not done.  More discussion of To the Lighthouse in the next installment.

Et in Arcadia Ego

Et in Arcadia Ego, by Nicolas Poussin

Tom Stoppard's Arcadia is a play that alternates between 1809 and the present (well, the 1993 present).  It begins with a mention of Fermat's Last Theorem (which had not yet been proved--Wiles finally got it soon after), ends as a metaphor for the Second Law of Thermodynamics, and has a structure that can itself be modeled (loosely) as a discrete dynamical system.  It skewers academia.  It is a postmodernist work that jabs at postmodernism.  There's sex, Romantic poetry, tortoises, waltzing.  So, yeah, lots to talk about.

Eric and I really geeked out on this one. The more you read it, the more you find, and the more interesting it becomes.  The story is actually not that complicated, but the structure of the play can make it seem that way.  Arcadia opens in the English countryside in 1809 at the home of the Earl and Lady Croom (we never meet the Earl).  The garden is being completely redesigned in the new Romantic style by a Mr. Noakes, who is using the only Improved Newcomen steam engine in England to drain the pond. All the action in the play takes place in the drawing room of the home; the table in the center contains an assortment of objects that gets more cluttered as the play progresses.  The Croom daughter Thomasina is being tutored by one Septimus Hodge, a friend (acquaintance) of Lord Byron who is quite the Lothario, having seduced one of the house guests, Mrs. Chater.  Mr. Chater is a poet (we are led to believe) whose first major poem was skewered in the Piccadilly Review by an anonymous reviewer (but guess who it is) and whose recent work, Couch of Eros, is being read by Septimus in the opening scene.  Thomasina is quite gifted at mathematics and Septimus has given her an assignment for the morning:  prove Fermat's Last Theorem.  Of course she cannot, but she begins doodling in her notebook by iterating a certain function (we don't know which).  This is an explicit reference to discrete dynamical systems, which were not at all understood (or even much thought about) then, and even if they had been there was not enough (any?) computing power available to run thousands of iterates.  Note that when Stoppard was writing the play, "chaos" and all the pretty pictures had seized the popular imagination thanks to the caffeine and nicotine-fueled work of Benoit B. Mandelbrot (math joke:  what does the B. in Benoit B. Mandelbrot stand for?  answer: Benoit B. Mandelbrot.)

Scene 2 takes place in the modern era.  We meet Hannah, who is writing a book about the transformation of the garden at the Croom estate.  I forgot to mention that part of Mr. Noakes's plan included a hermitage.  Lady Croom wants to know who the hermit will be; after all, Mr. Noakes should supply one.  Hannah has a theory about it, which proves to be correct in the final page of the play.  We also meet Bernard Nightingale, an English scholar always on the lookout for fame and academic bragging rights.  In conversation with Hannah, he deduces, via some of the materials in the library, that (a) Lord Byron had been at the estate; (b) had seduced Mrs. Chater; and (c) had killed Mr. Chater in a duel, prompting him to flee England for the continent.  We also meet Valentine, who is trying to understand the grouse population on the estate.  The records of how many grouse were shot are extensive, stretching back more than 200 years, but he can't find the pattern ("There's too much noise in the system.  The noise!").  Of course there isn't much of a pattern, as we know from studying the logistic equation--populations can exhibit chaotic behavior, even when the inputs are known completely.  Upon stumbling on Thomasina's notebook on the shelf, though, he is astounded to find that she was experimenting with just such an equation; at first he dismisses it--"She couldn't have discovered it."  Academic snobbery at its finest.
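Valentine's predicament is easy to reproduce.  Here's a sketch iterating the logistic map \(x\mapsto rx(1-x)\) in its chaotic regime: two populations that start nearly identically diverge completely, so even perfect knowledge of the rule leaves no pattern to eyeball in the data.

```python
def logistic(r, x):
    return r * x * (1 - x)

x, y = 0.2, 0.2000001        # two nearly identical starting populations
for _ in range(40):
    x, y = logistic(4.0, x), logistic(4.0, y)
print(x, y)                  # after 40 seasons they bear no resemblance
```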

Scenes 3 and 4 are in the past and present, respectively.  Act Two, whose first scene is Scene 5, begins in the modern period, then moves to the past in Scene 6.  Scene 7 is where all hell breaks loose; more on that below.  So, here's how the play is modeled like a discrete dynamical system:  the end of each scene provides the foundation for the beginning of the next.  That is, we learn something at the end of a scene and this gives the impetus for how the next scene begins.  Back and forth in time, this iteration proceeds as we move along.  Bernard makes a lot of assumptions, which may or may not be reasonable, and writes a paper claiming that Byron engaged in a duel, killing Chater.  When we go back in time, we find out the truth: Chater was really a botanist who died after being bitten by a monkey on an expedition in Martinique; his wife then marries Lady Croom's brother, the captain of the expedition, who had brought the Chaters along only because he was in love with Mrs. Chater.  Hannah discovers the truth and tells Bernard that she will expose him as a fraud, humiliating him.  Back in the past, the final scene shows that Thomasina is in love with Septimus (and he tries to pretend he does not feel the same way).  We know that she dies in a fire that very night; the play ends with the two of them dancing a waltz on stage at the same time as Hannah and Gus, who are waltzing clumsily as well.  (Gus, whom I haven't mentioned until now, never speaks, but he is the one who finds the documents that disprove Bernard's theory and prove Hannah's theory about the hermit correct.)

The table in the center becomes cluttered with objects--increasing entropy.  And since objects from both eras are mixing on the same table, the total entropy is greater than the sum of the two individual entropies (think of the entropy of mixing, when two different gases are allowed to mingle).  The Second Law of Thermodynamics predicts the so-called "heat death" of the universe--everything eventually relaxes into a completely disordered mass at one uniform temperature.  The action of the play behaves this way a bit, but there are also explicit references to heat death; Thomasina literally dies from heat.

Because I couldn't help myself, here's a plot of the play as it bounces back and forth in time.  There's not really a time scale to measure, but generally, the scenes have varying length, tending to get shorter as the play goes on (as if the function being iterated were converging on some fixed point).  Scene 7 is chaotic in nature, fluctuating wildly between the past and present, with some dialogue lasting only one or two lines in each time period before bouncing back to the other.  It's difficult to visualize this, but the graph below is one attempt.

A rough graph of the action.

There is really too much mathematics and satire to summarize, so I'll stop here.  Up next, Virginia Woolf's To the Lighthouse.

2+2=5: Reframing Literature through Mathematics

Yes, I'm on sabbatical, and yes, I'm teaching a class anyway.  UF's Center for the Humanities and the Public Sphere has a team-teaching initiative.  My friend and colleague Eric Kligerman and I submitted a proposal a year ago for a course with the above title; the selection committee liked it, and here we are.  The course title references Orwell's 1984 and Winston Smith's final submission to the state, but it also refers to this great Radiohead song.  My plan is to blog about this weekly; maybe we'll turn it into an article.  Maybe not.

Our first class was Thursday, January 8.  We meet once a week for three hours.  That's intense and I'm not used to it (math is usually done in smaller chunks).  The class is not just about instances of mathematics in literature (like the coin flipping in Rosencrantz and Guildenstern Are Dead), although we will point them out as they arise.  The real focus is on various authors' use of mathematics as metaphor and structure in their works.  Up first:  Book VII of Plato's Republic, which contains the famous Allegory of the Cave.  This is also the book in which Socrates is discussing which subjects are suitable for the education of his philosopher kings.  The first subject, after gymnastics, is arithmetic.  Socrates points out that Agamemnon was a horrible general, mostly because he didn't know his figures, but there's a bigger reason he's interested in it.  Namely, he argues that rulers need to understand the higher logical functions that come along with learning about numbers (he argues for geometry after arithmetic).  Indeed, there's a reason we still teach plane geometry in high school--it's not just its utility in describing things, but it's the first introduction to a rigorous logic system.  The skills learned in geometry apply to other fields and make the king fit to rule (once he reaches 50, of course).

To the Greeks, "geometry" meant Euclidean geometry and so we spent some time discussing this.  We introduced Euclid's five postulates, the first four of which are entirely obvious.  The fifth, often called the Parallel Postulate, was the subject of some controversy, even to Euclid.  Indeed, he avoided using it in proofs in the Elements until Proposition XXIX, which you can probably recite in its modern form: when parallel lines are cut by a transversal, alternate interior angles are congruent.  For 2,000 years, mathematicians tried to prove that the Parallel Postulate is a consequence of the others, to no avail.  It wasn't until the 1800s that someone asked the question of what happens if you negate it. (More accurately, it's easier to work with Playfair's Axiom, which is equivalent.) It turns out that it is possible to construct interesting, naturally occurring geometries in which the Parallel Postulate does not hold.  The first of these should have been obvious, even to Euclid, since the Greeks knew the Earth is a sphere.  On the surface of a sphere, given any "line" \(\ell\) and a point \(P\) not on the line, every line through \(P\) intersects \(\ell\).  Of course, "line" here means a great circle (think of longitudes) since they are the shortest paths between points on the surface of a sphere. (Ever wonder why flights to Europe pass over Newfoundland and then swing by Iceland? They're following a great circle, more or less.)  But let's be honest, it's a bit unfair to use our 21st Century hindsight to criticize the ancients for missing this one.
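Since we're talking about great circles: the shortest-path distance between two points on a sphere comes from exactly this geometry, and the standard haversine formula computes it.  A sketch (the coordinates below are rough values for Gainesville and London, just for illustration):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance on a sphere via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# roughly Gainesville, FL to London -- the shortest route arcs well north
print(round(great_circle_km(29.65, -82.32, 51.51, -0.13)))  # about 7000 km
```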

The other interesting non-Euclidean geometry is the hyperbolic plane.  In hyperbolic space, there are infinitely many lines through \(P\) that miss \(\ell\).  A model for this is the unit disc in the plane (not including the boundary circle) where "lines" are circular arcs orthogonal to the boundary circle, along with diameters.  Here's a picture of a point and infinitely many lines missing another line:

Got this from wikipedia: http://commons.wikimedia.org/wiki/File:Poincare_disc_hyperbolic_parallel_lines.svg

You've seen this before.  M.C. Escher famously used the hyperbolic plane to make pieces like this:

Got this from this site: http://euler.slu.edu/escher/upload/thumb/0/06/Circle-limit-IV.jpg/300px-Circle-limit-IV.jpg

And, if you've ever eaten green leaf lettuce, then you've digested hyperbolic space thoroughly.  In fact, hyperbolic structures show up in nature when an object needs to pack a lot of surface area into a small space--the ruffled edge of a lettuce leaf carries far more length than a flat edge could.  Coral reefs behave this way, for example.

So, with some non-Euclidean ideas in hand we're ready to proceed.  We ended class with this passage from Dostoyevsky's Brothers Karamazov:

My task is to explain to you as quickly as possible my essence, that is, what sort of man I am, what I believe in, and what I hope for, is that right? And therefore I declare that I accept God pure and simple. But this, however, needs to be noted: if God exists and if he indeed created the earth, then, as we know perfectly well, he created it in accordance with Euclidean geometry, and he created human reason with a conception of only three dimensions of space. At the same time there were and are even now geometers and philosophers, even some of the most outstanding among them, who doubt that the whole universe, or, even more broadly, the whole of being, was created purely in accordance with Euclidean geometry; they even dare to dream that two parallel lines, which according to Euclid cannot possibly meet on earth, may perhaps meet somewhere in infinity. I, my dear, have come to the conclusion that if I cannot understand even that, then it is not for me to understand about God. I humbly confess that I do not have any ability to resolve such questions, I have a Euclidean mind, an earthly mind, and therefore it is not for us to resolve things that are not of this world. And I advise you never to think about it, Alyosha my friend, and most especially about whether God exists or not. All such questions are completely unsuitable to a mind created with a concept of only three dimensions. And so, I accept God, not only willingly, but moreover I also accept his wisdom and his purpose, which are completely unknown to us; I believe in order, in the meaning of life, I believe in eternal harmony, in which we are all supposed to merge, I believe in the Word for whom the universe is yearning, and who himself was ‘with God,’ who himself is God, and so on and so forth, to infinity. Many words have been invented on the subject. It seems I’m already on a good path, eh? And now imagine that in the final outcome I do not accept this world of God’s, created by God, that I do not accept and cannot agree to accept. With one reservation: I have a childlike conviction that the sufferings will be healed and smoothed over, that the whole offensive comedy of human contradictions will disappear like a pitiful mirage, a vile concoction of man’s Euclidean mind, feeble and puny as an atom, and that ultimately, at the world’s finale, in the moment of eternal harmony, there will occur and be revealed something so precious that it will suffice for all hearts, to allay all indignation, to redeem all human villainy, all bloodshed; it will suffice not only to make forgiveness possible, but also to justify everything that has happened with men—let this, let all of this come true and be revealed, but I do not accept it and do not want to accept it! Let the parallel lines even meet before my own eyes: I shall look and say, yes, they meet, and still I will not accept it.

I'll leave it to you to decide whether or not this argument is valid.

Up next: Tom Stoppard's Arcadia, which includes references to discrete dynamical systems, Fermat's Last Theorem, and the second law of thermodynamics.  Tune in next time.