Gant, Caulfield, Wolfe, Salinger

When I was in sixth grade my class took an overnight field trip to Asheville, NC.  This would have been the winter of 1980-81 and it included the obligatory visit to the Biltmore House, and, for some reason, a stop at a K-Mart near the hotel (I think my group's chaperone needed shaving cream or something).  I think I bought a poster there, although I don't remember of what or why I thought it would be a good idea to spend some of my limited funds on it. 

Anyway, the trip also included a visit to the Thomas Wolfe House.  I remember being told that Wolfe was North Carolina's most famous writer and that this home was an important piece of American history.  Here's a picture:

the aforementioned Wolfe home, photo via random internet site

I don't really remember much about the place except that it was kind of dark in there and that it was full of period furniture.  Maybe the stairs were steep.  This was 35 years ago, after all.

These days hardly anyone reads or remembers Wolfe, and North Carolina's most famous author is probably Nicholas Sparks (alas, and he's from Nebraska).  Here's the thing, though:  growing up there, you'd think I would have read one of Wolfe's novels at school.  I mean everyone agreed that Wolfe was amazing and the state's greatest writer, etc., etc., but none of his books ever appeared on a reading list.  To be fair, sixth grade was probably too young for it (although my teacher, Mr. Grubbs, tried to get us to read A Tale of Two Cities, a slog at any age), but you'd think that maybe in high school they would have squeezed one in between Hawthorne and Shakespeare.

I bring this up because I finally decided to rectify the situation and read Wolfe's most famous book, Look Homeward, Angel.  Subtitled A Story of the Buried Life, it is clearly autobiographical.  The book is set in the fictional town of Altamont (clearly Asheville), where the young Eugene Gant lives with his mother in her boarding house, Dixieland (clearly the house run by Wolfe's mother in real life).  As I plodded through all 500+ pages, I kept asking myself if I liked this book.  In the beginning, I certainly did not.  I mean, we get a narrative in which the novel's protagonist has a rich inner monologue as a toddler; since this is really Wolfe himself we get the sense that he thinks he's pretty special and smart and all that (as if the subtitle didn't clue us in).  He uses the word phthisic waaaaay too much (isn't once too much?).  As Gant gets older we see how the schoolmasters think he's special, his father wants him to go into law and politics, and his mother "pshaws" him constantly.  He is prone to outbursts in which he tells his family they're all just haters (not in so many words, of course).  Frankly, he comes off as a whiny brat, which would be OK if his family actually did something to make him feel bad.  Except they don't, really.  So, no, I didn't like ole 'Gene and didn't care for the story much as a result.  And when he goes to the state college in, get this, Pulpit Hill (groan), I just decided to ride it out.

Some twenty years later, J.D. Salinger published The Catcher in the Rye, with America's most famous whiny brat protagonist, Holden Caulfield.  As I read Angel, I couldn't help thinking about Holden.  I could all but hear 'Gene calling everyone around him phonies.  Pining for girls who won't give him the time of day.  Blah, blah, blah.

Here's the problem with books like this:  you can only really identify with them when you're a teenaged boy (maybe girls can, too, dunno).  When you read them as an adult, perhaps with a teenager of your own, you have no patience for them.  I re-read Catcher a few years back and it annoyed me to no end; well, Holden annoyed me.  The book itself is well-written.

Which is what I'll say for Wolfe.  He crafts beautiful prose (when he isn't overusing obscure words).  So I think I understand why everyone went nuts over his work; as an example of how to write floridly it's great, but as a novel it falls flat.  And this latter point makes me understand why it never appeared on my high school reading lists--Salinger did it better and shorter.

But that's how it goes, I guess.  What one generation thinks is great is often slowly forgotten.  Maybe I should tackle Trollope next.



Embrace the Mystery

The final "text" for the course: the Coen Brothers' A Serious Man, the story of Larry Gopnik, a physics professor in 1967 Minnesota.  It's pretty much the Book of Job for modern times--a series of misfortunes befalls Larry and he seeks answers from his rabbis.  There are none, although the second rabbi's story about the goy's teeth is illuminating if you think about it correctly ("helping people couldn't hurt").

I actually don't have much to say about this film that hasn't been discussed in other contexts.  There isn't much new mathematics here. There is the obvious connection to the uncertainty principle (literally since Larry teaches quantum physics, but also figuratively as the plot unfolds).  Probability plays a role that we haven't explicitly seen before, but it's fairly minor.  Larry's brother Arthur, a (closeted) homosexual living with them, has written the Mentaculus, a probability map of the universe. 

a spread from Arthur's Mentaculus

Since this is the end of the course, I thought I'd just write about my general feelings about it, rather than hammer away at the film (we've had enough epistemania).  I taught my first class in the spring of 1991.  I was 21 years old and when I went in to give my first lecture, I was so nervous my hands were shaking as I opened the box of chalk.  I was younger than a few of my students (the ones who had put off the class, their last graduation requirement, until the final semester).  It was rough, but I got better and now I don't worry much about walking into a room of 600 to deliver a lecture.  When I set out to earn my Ph.D., my goal was to be a college professor.  Sure, I love mathematics and research, but I always pictured myself lecturing about the subject I've loved since my first grade class cheered me on when I solved a difficult problem correctly at the overhead projector (I was able to write the number 8 with tally marks).  I never tire of teaching calculus, one of the most significant intellectual achievements of the last 400 years.  Get me started talking about topology and I won't shut up.

But this class.  This has been the most rewarding and intellectually stimulating teaching experience I've ever had.  For that I have to thank my co-conspirator, Eric Kligerman, and our remarkably thoughtful, brilliant students.  I was on research leave this year, working on a book and some other projects, but I taught this class anyway because I thought it would be so fun.  It didn't even feel like work.  I love to read, of course, but this class "forced" me to read things I probably never would have picked up (Woolf's To the Lighthouse, for example).  Looking for mathematics embedded in the structure of texts got me to think deeply and critically.  I found a Cantor set in Kafka's The Great Wall of China; I'm even working on a paper about it.  I finally understand the precise mathematical statement of the Uncertainty Principle (well, sort of; if nothing else I have embraced the mystery). 

Isn't this what we all imagine when we think of a university class?  A small group of engaged individuals tackling tough material.  Conversation so stimulating you hardly notice that three hours have gone by.  A bit of sadness when the last session is over.

So, what does the future hold for us and this course?  Unclear.  The Honors Program director has asked if we'd be interested in doing it again next spring.  We are willing, provided we can work it out with our departments.  In these days of efficiency, we may be needed elsewhere.  But I can assure you I will always keep it in the back of my mind, looking for connections and new pieces of literature to view through a mathematical lens.

For now, summer school looms.  Thanks for reading the chronicles of our little experiment.

But is it literature?

I once saw a video installation at an art gallery (full disclosure:  I do not care for "video as art" so know that before reading on) which showed a fox running around a London art museum after hours.  Naturally, the poor animal was confused and slunk cautiously along the walls, often curling up under a bench to hide.  Now, is this art?  Is it Art? 

I don't know (well, I have an opinion, but you know what they say about those).  The accompanying text panel written by the artist, though, made a case.  You see, the fox represents the immigrant in a strange land, trying to find his way in an unfamiliar and often inhospitable environment.  He lives on the fringes and hides in the shadows.  Some other art speak followed.  (Aside:  if you want to generate your own artist's statement, there are online generators for that.)

Which brings me to OULIPO (Ouvroir de Littérature Potentielle--Workshop of Potential Literature).  This is a French literary movement, dating back to 1960, which deals with certain formal, algorithmic methods of creating literature.  Examples:  write a novel without using the letter e; write a snowball poem in which each line consists of a single word with one more letter than the previous line; take 10 sonnets, one to a page, and cut each page into 14 strips to create an exquisite corpse containing \(10^{14}\) distinct sonnets. 

Or, as we discussed in class, try the \(N+7\) method:  take a piece of writing and for each noun, look it up in a dictionary and replace it with the seventh noun following it in the dictionary.  Sounds like a lot of work, right?  Luckily, there is software to do it for you, like this site.
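The substitution step itself is easy to code.  Here's a minimal Python sketch; the little noun list is my own invention, standing in for a real dictionary, and deciding which words are actually nouns is the genuinely hard part:

```python
import bisect

# a tiny stand-in "dictionary" of nouns (a real one would have thousands)
nouns = sorted(["angel", "book", "circle", "dictionary", "engine", "fox",
                "gallery", "house", "idea", "journey", "kitchen", "library",
                "mountain", "needle", "ocean", "poem", "river", "stone",
                "theory", "universe", "village", "window"])

def n_plus_7(word):
    # locate the word alphabetically, then step seven nouns forward, wrapping around
    i = bisect.bisect_left(nouns, word.lower())
    return nouns[(i + 7) % len(nouns)]

print(n_plus_7("book"))   # → idea
```

With a big enough dictionary the replacements get as strange as the ones below.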

Let's do an example.  Here's a paragraph from the book I'm reading now, Thomas Wolfe's Look Homeward, Angel.

White-vested, a trifle paunchy, with large broad feet, a shaven moon of red face, and abundant taffy-colored hair, the Reverend John Smallwood, pastor of the First Baptist Church, walked heavily up the street, greeting his parishioners warmly, and hoping to see his Pilot face to face. Instead, however, he encountered the Honorable William Jennings Bryan, who was coming slowly out of the bookstore. The two close friends greeted each other affectionately, and, with a firm friendly laying on of hands, gave each to each the Christian aid of a benevolent exorcism.

Most paragraphs in this book are like this, by the way.  I'm still forming an opinion of it (but it's not so high right now--Eugene Gant is not the most likeable protagonist you'll ever meet).  And, since it mentions William Jennings Bryan, I feel compelled to link to this video.

Now, let's run this through the \(N+7\) generator and see what we get.

White-vested, a trim paunchy, with large brogue footmen, a shaven mop of red faction, and abundant taffy-colored hairpiece, the Reverend John Smallwood, pate of the Fissure Baptist Chutney, walked heavily up the stretcher-bearer, grief his parliaments warmly, and hoping to see his Pinch faction to faction. Instead, however, he encountered the Honorable William Jennings Bryan, who was commencement slowly out of the boot. The two close fringes greeted each other affectionately, and, with a fishmonger frippery laying on of handfuls, gave each to each the Chuckle airbrick of a benevolent expedition.

Some of these passages actually make sense, or at least they are not ungrammatical (Orwell spins in his grave).  I rather like the phrase "pate of the Fissure Baptist Chutney" and the transformation of "Christian aid" to "Chuckle airbrick" is amusing enough.  The algorithm is not perfect, though.  Notice that the program read "greeting" as a noun, replacing it with "grief," and also replacing "coming" with "commencement."  These are pretty minor, though, and can be caught easily.

But is it literature?  Is it Literature?  It's certainly an interesting exercise, and sometimes leads to new passages that could be interpreted in a literary manner, but if we are going to generate things almost at random, is it reasonable to expect meaning to emerge?  There is the old saw about a room full of monkeys eventually typing Shakespeare, and Borges teaches us that all of these passages are in an unfathomable number of books in his Library of Babel.  But does that mean that anything we write down has meaning, even if we can make some grammatical sense of it?

Or, as one student asked, "why?"  Bear in mind that this did arise in 1960s France, ground zero for postmodernist thought.  On that level, then, it is unsurprising that someone thought to perform this experiment.  And, one reason to do it is that there is "potential literature" out there, waiting to be discovered.  Do writers create or discover?  I doubt anyone seriously thinks the latter, but in mathematics this is a real argument--do we create mathematics, or is it already out there waiting for us to find it? 

How many almost-great novels have been written that are just shifted versions of some great novel?  How many great novels are waiting out there to be found by shifting some banal passages?  What if we take a paragraph from this post and transform it:

But is it livelihood? Is it Livelihood? It’s certainly an interesting exile, and sometimes leads to new pastas that could be interpreted in a literary mantel, but if we are going to generate thistles almost at random, is it reasonable to expect mechanic to emerge? There is the old saw about a rosary full of monorails eventually typing Shakespeare, and Borges teamsters us that all of these pastas are in an unfathomable nursery of bookmarks in his Lick of Babel. But doglegs that mean that anything we write dowse has mechanic, even if we can make some grammatical sentry of it?

Nope.  Not great literature.  Ah well.  I guess it's the potential that counts.

Mr. Heisenberg Goes to Copenhagen

A 1941 meeting between Werner Heisenberg and Niels Bohr is the subject of Michael Frayn's Copenhagen. The link takes you to a PBS production of the play, starring James Bond Daniel Craig as Heisenberg. The central question is: why? Why did Heisenberg go to Copenhagen to meet Bohr?

The historical context is that Denmark was under Nazi occupation at the time.  Heisenberg was in charge of the nascent German nuclear program (well, everyone's nuclear program was nascent then) and naturally he would want Bohr's opinion.  Since the Gestapo was escorting Heisenberg and Bohr's home was surely wired, they took a walk.  What was said?  No one knows.  In the play, Heisenberg asks "does a physicist have a moral right to work on fission?"  Bohr responds by refusing to answer and walking away. 

Oh, I forgot to mention that this is being told via flashback; you see, the only three characters in the play are Heisenberg, Bohr, and Bohr's wife Margrethe, and all three are dead.  Their ghosts are having a conversation about the conversation.  Memory is a funny thing and they can't quite agree on what happened.  And why didn't Heisenberg succeed in building a bomb?  That's the really interesting aspect and he comes off as a rather sympathetic character.  In reality, other physicists refused to even shake Heisenberg's hand after the war since they assumed he had tried to build a bomb.  Did he?  Frayn leads us to believe that his failure was intentional.

So, where's the math here?  Two things.  First, of course, is Heisenberg's Uncertainty Principle.  This isn't math as much as it is physics, but there is a precise mathematical statement which is fairly easy to understand.  Suppose a particle is moving along a path.  Its position \(X\) is a random variable whose probability density function is \( f(x)\) as \(x\) varies over some interval.  The momentum of the particle is another random variable \(P\).  The statement of the uncertainty principle is then \[\sigma_X\sigma_P \ge \frac{\hslash}{2},\] where \(\sigma_X\) and \(\sigma_P\) are the standard deviations of the random variables \(X\) and \(P\) and \(\hslash\) is the reduced Planck constant.  This is a very small number (\(1.054\times 10^{-34}\) joule-seconds), but it is positive.  What this means is that if we want to increase the precision of one of the measurements (shrink its deviation), we necessarily lose precision in the other (its deviation increases).
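If you'd like to see the bound in action, here's a numerical sketch (in natural units with \(\hslash = 1\), my choice for convenience).  A Gaussian wavepacket, whose momentum density comes from the Fourier transform of the wavefunction, exactly saturates the inequality.

```python
import numpy as np

hbar = 1.0                      # natural units (a convenience assumption)
sigma = 0.7                     # position spread we build into the wavepacket
N, L = 4096, 40.0               # grid resolution and extent
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]

# normalized Gaussian wavefunction; |psi|^2 is the position density f(x)
psi = (2*np.pi*sigma**2)**(-0.25) * np.exp(-x**2 / (4*sigma**2))

fx = np.abs(psi)**2
sigma_X = np.sqrt(np.sum(x**2 * fx) * dx - (np.sum(x * fx) * dx)**2)

# the momentum density is the squared Fourier transform of psi, with p = hbar*k
k = 2*np.pi * np.fft.fftfreq(N, d=dx)
dk = 2*np.pi / L
phi = np.fft.fft(psi) * dx / np.sqrt(2*np.pi)
fp = np.abs(phi)**2
p_vals = hbar * k
sigma_P = np.sqrt(np.sum(p_vals**2 * fp) * dk - (np.sum(p_vals * fp) * dk)**2)

print(sigma_X * sigma_P)        # equals hbar/2 = 0.5, up to grid error
```

Make \(\sigma\) smaller (a sharper position) and \(\sigma_P\) grows to compensate; their product never drops below \(\hslash/2\).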

Of course, this only applies at the quantum scale.  On a macroscopic level, I can obviously look out my window, see my car parked in the driveway, and know its precise position and momentum (zero mo, of course).  This quantum uncertainty, where everything is expressed as probabilities, takes some getting used to, but once it sinks in it becomes a natural way of thinking.  Einstein rather famously did not like this idea at first, leading him to quip that "God does not play dice." 

The other interesting bit of math in the play is an instance of the Prisoner's Dilemma.  During one scene, Heisenberg asks Bohr if the Allies have a nuclear program and, if so, how far along they are.  Bohr claims he doesn't know (no reason not to believe him--he was in occupied Denmark, after all).  Here is the dilemma:  if the Allies aren't working on a bomb, then perhaps Germany has no need to (Heisenberg hints), but of course if the Allies are building one then Germany should as well.  This is the classic Cold War MAD theory (Mutual Assured Destruction) in its infancy.  Here's the payoff matrix:

                     Germany doesn't build   Germany builds
Allies don't build   no risk                 Germany dominates
Allies build         Allies dominate         tense stalemate

The lower right corner, which is what happened ultimately, is a Nash equilibrium; that is, if either party changes strategy unilaterally it results in a worse payoff.  The best joint outcome is the upper left corner, but purely rational actors will choose the Nash equilibrium.
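To make this concrete, attach some payoffs and check each cell for a profitable unilateral deviation.  The numbers below are my own ordinal stand-ins (higher is better), not anything from the play; any payoffs with the same ordering tell the same story.

```python
# Hypothetical ordinal payoffs (my invention, higher is better).
# Rows index the Allies' choice, columns Germany's: 0 = don't build, 1 = build.
allies  = [[3, 0],    # no risk         / Germany dominates
           [4, 1]]    # Allies dominate / tense stalemate
germany = [[3, 4],
           [0, 1]]

def is_nash(r, c):
    # Nash equilibrium: neither player gains by deviating unilaterally
    allies_ok  = all(allies[r][c]  >= allies[rr][c]  for rr in range(2))
    germany_ok = all(germany[r][c] >= germany[r][cc] for cc in range(2))
    return allies_ok and germany_ok

print([(r, c) for r in range(2) for c in range(2) if is_nash(r, c)])   # → [(1, 1)]
```

Only (build, build), the tense stalemate, survives the check, even though both sides prefer the mutual don't-build outcome to it.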

Rational has a fairly precise mathematical meaning that isn't exactly how real people operate.  Like all mathematical models, two-person games are a simplification of reality, useful on some level but not the whole story.  Copenhagen is much the same: we don't know the whole story and we never will, but it gives us a lens through which to examine history, uncertain as it is.

Möbius Metaphor

A couple of hours before class last Thursday, I got a text from Eric asking if I could talk about the Möbius strip.  He had this idea, not completely worked out at the time (seriously, like two hours before class), that the structure of Aronofsky's \(\Pi\): Faith in Chaos could be modeled by a Möbius strip in some way.  OK, I said, and quickly made one out of a strip of paper right before I left for class (second week in a row that I couldn't get a spot in my "secret" parking lot; I guess it's not so secret anymore).

The film is jarring in many ways, one of which is the repetition of Max's routines.  When he feels a headache coming on his thumb twitches and he begins to panic and then he pops some pills and maybe takes an intravenous injection of some medication; all of this is edited together in rapid succession, heightening the tension.  The background score throbs, making the viewer edgier still.  Then come the hallucinations (a brain in the sink with ants crawling on it--ewww) until we get a bright flash and then Max wakes up on the floor with a bloody nose.  Add the physical troubles to his relentless drive to find a pattern in the stock market and it's no wonder he's starting to lose his grip on sanity.

This repetition is what led Eric to think of the Möbius strip as a metaphor for the structure, but it's not quite clear at first that it's the right one.  In case you don't remember, the Möbius strip is the simplest example of a nonorientable surface--it has only one side.  You can make one yourself by taking a strip of paper, giving one end a half-twist and then taping the ends together.  Here's a picture:

from a cylinder to a Möbius strip to a twisted cylinder--two sides to one and back to two.  image from

Looking closely at the arrows (on the orange side), we see that in the beginning the cylinder has two sides.  By cutting it apart, adding a half-twist, and taping it back together, we see that if we begin at a point on the cut line and move along a horizontal curve through the middle, then when we get back to the point where we started (remember, this is a two-dimensional object; it has no thickness), the arrows point in the "wrong" direction.  This is the essence of nonorientability:  choose an outward pointing normal vector and follow it along a closed loop; if you always get back to arrows pointing in the same direction the surface is orientable, but if not the surface is nonorientable.  If we go around again, then we are truly back where we started with everything pointing in the right direction.  Note also that if we put another twist in the strip, we get something orientable--the arrows line up and it's two-sided again.
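You can double-check the flip numerically.  Below is a sketch using the standard parametrization of the Möbius strip (the parametrization and the finite-difference step size are my choices): compute the unit normal at a point on the center circle, go once around, and compute it again.

```python
import numpy as np

def mobius(u, v):
    # standard parametrization; u runs around the strip, v across it
    return np.array([(1 + v*np.cos(u/2)) * np.cos(u),
                     (1 + v*np.cos(u/2)) * np.sin(u),
                     v*np.sin(u/2)])

def unit_normal(u, v, h=1e-6):
    # cross product of numerical partial derivatives gives a normal vector
    ru = (mobius(u + h, v) - mobius(u - h, v)) / (2*h)
    rv = (mobius(u, v + h) - mobius(u, v - h)) / (2*h)
    n = np.cross(ru, rv)
    return n / np.linalg.norm(n)

n_start = unit_normal(0.0, 0.0)
n_loop = unit_normal(2*np.pi, 0.0)   # same point on the strip, one trip later
print(n_start, n_loop)               # the two normals point in opposite directions
```

Take \(u = 4\pi\) (two trips around) and the original normal comes back, exactly the go-around-twice behavior described above.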

How is this idea manifested in \(\Pi\)?  Well, one of our brilliant students had an idea: In the beginning of the film, Max knows nothing (well, that's not exactly true, but let's go with it).  As we move along in time, he discovers a lot--a mystical \(216\)-digit number which the Hasidic Jews in the film believe is the true name of God; he can make predictions about stock prices (or can he?).  This knowledge drives him mad, however.  His headaches get worse until finally he decides not to take the medication and uses a drill to take out the portion of his brain that is torturing him (again, ewww).  He then is back where he started--he knows nothing.  See?  Möbius strip!

Well, maybe it's a bit of a stretch.  In any case, I asked the question:  Is this movie even about mathematics?  I'm not convinced.  It's a device, certainly, but it's really about unknowability and the madness that can cause.  More than anything, the film is about obsession and the idea that if you believe something is important you'll see it everywhere (Max's former Ph.D. advisor, Sol, tells him as much).  Numerology plays a big role here, and in the end that's what Max's work devolves into. 

Serious mathematicians have fallen into this trap.  In the late 1990s we got The Bible Code, in which we are told that God encrypted lots of messages into the Torah via skip codes.  The biggest, most prophetic example in it is that Yitzhak Rabin's name is crossed by the phrase "assassin that will assassinate;" this did come to pass, of course, so voila, God must be trying to tell us something.  But you can play all kinds of games like this.  Consider the following passage from the Declaration of Independence (H/T to Pat Ballew's blog for this):

When in the course of human events,
it becomes necessary for one people to
dissolve the political bands which have
connected them with another, and to
assume among the powers of the Earth,
the separate and equal station to which
the Laws of Nature and
of Nature’s God entitle them

(Not sure why the quotation marks don't line up properly, but let's forge ahead.)  Begin in the first row.  Choose any word you like.  Say you choose "course."  That word has six letters, so count to the sixth word following it; you land on "necessary."  This has nine letters, so count off nine words to get to "which" in the third line.  Lather, rinse, repeat.  Where do you land when you can't continue this process?  In this case you land on "God" in the last line.  Go ahead and try a few others.  I bet you always land on "God."  So, if I wanted to interpret this as proof that the Founders intended the United States to be a Christian nation, I could certainly do so.  I mean, this can't just be a coincidence, right?
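If you don't believe me, here are a few lines of Python that run the hop-by-letter-count procedure from every possible starting word in the first line (a quick sketch; I'm counting letters only and ignoring punctuation):

```python
import re

# the Declaration passage, exactly as quoted above
passage = """When in the course of human events,
it becomes necessary for one people to
dissolve the political bands which have
connected them with another, and to
assume among the powers of the Earth,
the separate and equal station to which
the Laws of Nature and
of Nature's God entitle them"""

words = re.findall(r"[A-Za-z']+", passage)

def land(start):
    # hop forward by the letter count of the current word until we can't
    i = start
    while i + len(words[i].replace("'", "")) < len(words):
        i += len(words[i].replace("'", ""))
    return words[i]

# try every word in the first line as a starting point
print({land(i) for i in range(7)})   # → {'God'}
```

All seven starting points funnel into the same chain well before the end, which is the Kruskal count at work.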

Well, yes it can.  And the Bible Code is just a coincidence, too.  In fact, many mathematicians wrote solid refutations of the Bible Code.  For example, you can take Moby-Dick and do the same thing; you get lots of interesting "prophetic" sentences. It's all a consequence of something called the Kruskal count, discovered by the physicist Martin Kruskal in the mid-70s.  The link takes you to a discussion of a really good card trick based on the idea.  The point is that if you begin at some point and then have some algorithm for generating a sequence in your set, then no matter where you start, the sequences all coincide after a while (with high probability).  So it shouldn't be at all surprising that we can find "hidden messages" in texts, just as Max should have known that the "patterns" he was seeing were likely coincidental.  Just now, as I'm writing this in a coffeehouse, Teenage Lobotomy is playing over the speakers.  Coincidence, or is God telling me something?  I mean, I'm writing about a movie in which the main character lobotomizes himself and this song comes on.  That can't be a coincidence.

But this is what we do as humans.  We can't deal with randomness so we look for patterns or assign divine causes to random events.  The truth of course is that the universe is a random place.  God really does play dice.

One final note about the film.  The title is \(\Pi\): Faith in Chaos.  I asked the question: does Max have faith in chaos, or is he looking for faith in chaos?  I don't know.  Talk amongst yourselves.

Drills and Needles

I swear it was a coincidence.  We really didn't set out to show Darren Aronofsky's first film, \(\Pi\): Faith in Chaos, so close to Pi Day; it just happened that way.  If you've never seen it, you should.  It's available on Netflix and on Amazon Prime, and on VHS (!) in the UF Library.  Remarkably, campus classrooms are equipped with dual VHS/DVD players so we went with that instead of risking buffering problems.  Side note:  the previews (remember those?) included Dee Snider's Strangeland, and a promo for the DVD version of \(\Pi\) (the format of the future!).

I'll not editorialize about Pi Day.  Well, ok, I will a bit.  Some mathematicians despise it.  Vi Hart, internet math video maker extraordinaire (seriously, spend a few days of your time watching her stuff), has a rant about it.  Here at UF the fine folks at the science library, in conjunction with some engineering student groups, had a Pi Day celebration, complete with faculty taking pies in the face and contests for who could recite the most digits of \(\pi\).  I don't hate it, but I don't love it, either.  I tend to fall in the "there's no such thing as bad publicity" camp, but I wouldn't mind a bit more substance.  There are lots of interesting places \(\pi\) shows up, and I wish people knew more about them instead of trying to memorize the first \(1,000\) digits (or whatever).  I only know \(\pi\) up to \(3.141592653\), which is waaaaayyyyy more precision than you'd ever need for any practical calculation.  Hell, engineers are perfectly happy with \(22/7\) or even \(3\) for a back-of-the-envelope calculation.  The legislature of Indiana once introduced a bill that implied that \(\pi\) equals \(3.2\); luckily it didn't pass.

Anyway, the movie.  It's a jarring film, shot in high-contrast black and white with some rapid editing and off-kilter camera angles.  It's the story of Max Cohen, a mathematician living in New York's Chinatown, who is trying to find patterns in the stock market.  His computer, Euclid, develops a bug and right before it crashes it spits out a couple of stock picks and a \(216\)-digit number.  At first glance, the stock prices seem completely implausible, but they later turn out to be correct (gasp!).  The number is another story.  We get taken on a ride into Jewish numerology via Lenny, who Max meets in a diner, and into the seamy underside of Wall Street finance via Marcy, who is hounding Max to get him to work for them and even offers him a classified processing chip to help him along.  I'll save the analysis for the next post since we were all a bit wiped out by the end of the film and needed some time to process it before having a thoughtful discussion.

After a break, I talked about \(\pi\) a bit.  We all know it's defined as the ratio of a circle's circumference to its diameter (or twice the radius); it's also equal to the ratio of a circle's area to the square of its radius.  The latter definition is actually better in some ways as it's possible to prove the area formula for a circle via simple geometry (Euclid did it in his Elements) while the circumference formula is a bit trickier (and, if we're being honest, really requires the idea of limit, which Archimedes didn't have but which he almost invented).  As for the calculation of \(\pi\), Archimedes, by inscribing and circumscribing polygons on the circle (he got up to \(96\) sides) and calculating the resulting perimeters, showed that \(223/71 < \pi < 22/7\), pinning \(\pi\) down to about \(3.14\).
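Archimedes' doubling procedure is easy to replicate.  In one standard modern formulation (not his notation, obviously), start with the semiperimeters of the circumscribed and inscribed hexagons and repeatedly take a harmonic then a geometric mean:

```python
import math

# a = semiperimeter of the circumscribed polygon, b = inscribed; start with hexagons
a, b = 2 * math.sqrt(3), 3.0
for _ in range(10):               # doubling: 6 -> 12 -> ... -> 6144 sides
    a = 2 * a * b / (a + b)       # harmonic mean gives the next circumscribed value
    b = math.sqrt(a * b)          # geometric mean gives the next inscribed value
print(b, a)                       # pi is squeezed in between
```

Four doublings gets you to Archimedes' \(96\)-gons and bounds of roughly \(3.1410 < \pi < 3.1428\); ten doublings pins \(\pi\) down to six decimal places.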

But here's an interesting way to calculate \(\pi\), using toothpicks and a piece of posterboard.  Mark off parallel lines on the board at distances equal to the length of a toothpick.  Now ask yourself the following question: if I drop a toothpick onto the board, what is the probability that it crosses a line?  Here's a picture:

now here we go, droppin' science, droppin' it all over...

I had the class come up and drop some toothpicks.  We had \(15\) people drop \(10\) toothpicks each.  We got \(105\) hits in the \(150\) attempts for a probability of \(0.70\).  Of course, if we dropped more we would get a better estimate of the probability.  In fact, the real answer is about \(0.6366\), which you can figure out by doing a lot of simulations.  Here's a web app that will do that for you. 
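Or roll your own in a few lines of Python.  This sketch mirrors our classroom setup, with the toothpick length equal to the line spacing (both taken to be \(1\)):

```python
import math
import random

def drop_toothpicks(n, seed=1):
    """Simulate the toothpick drop: toothpick length = line spacing = 1."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        d = rng.uniform(0, 0.5)            # midpoint's distance to the nearest line
        theta = rng.uniform(0, math.pi)    # angle the toothpick makes with the lines
        if d <= 0.5 * math.sin(theta):     # the toothpick crosses a line
            hits += 1
    return hits / n

p = drop_toothpicks(1_000_000)
print(p)    # hovers around 0.6366
```

A million drops takes a second or two and lands much closer to the true probability than our \(150\) classroom tosses did.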

Now, I'm going to do something to that number:  first, I'll invert it to get \(1.5708451\dots\); then if I multiply that by \(2\) I get \(3.14169\dots\).  That looks an awful lot like \(\pi\), which raises the question:  why would \(\pi\) show up in this context?  I mean, I don't see circles anywhere and \(\pi\) means circles, right?

But if you think about it for a minute, it shouldn't be that surprising.  Here's a schematic:

simplified schematic

The toothpick has length \(1\) unit, which is the distance between the lines.  Let \(d\) be the distance from the midpoint of the toothpick to the nearest line (\(0\le d\le 1/2\)) and let \(\theta\) be the angle it makes with the horizontal (\(0\le\theta\le\pi\)).  See that \(\pi\)?  Anyway, we get a hit exactly when \(d\le (1/2)\sin\theta\).  That corresponds to the blue region in the picture below.

keep it blue

So the probability of a hit is then \[p = \frac{\text{area of blue region}}{\text{area of rectangle}} = \frac{\int_0^\pi 0.5\sin\theta\,d\theta}{0.5\pi} = \frac{1}{0.5\pi} = \frac{2}{\pi}.\]  I'll let you get out your calculator and check that this equals \(0.6366\dots\).

This is certainly not the only place \(\pi\) shows up unexpectedly, nor is it the most efficient way to calculate \(\pi\).  Archimedes' method of exhaustion is, well, exhausting to carry out in practice and until a couple hundred years ago it was the way to go.  The discovery of infinite series that sum to things involving \(\pi\) has made the calculation of \(\pi\) much more tractable.  For example \[\frac{\pi}{4} = \sum_{n=0}^\infty \frac{(-1)^n}{2n+1}.\]  Or \[\frac{\pi^2}{6} = \sum_{n=1}^\infty \frac{1}{n^2}.\]  Or, (thanks Ramanujan) \[\frac{1}{\pi} = \frac{2\sqrt{2}}{9801} \sum_{n=0}^\infty\frac{(4n)!(1103+26390n)}{(n!)^4 396^{4n}}.\]
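If you're curious how these stack up in practice, here's a quick numerical comparison; the partial-sum cutoffs are my arbitrary choices.

```python
import math

# Leibniz: pi/4 = sum (-1)^n/(2n+1) -- converges painfully slowly
leibniz = 4 * sum((-1)**n / (2*n + 1) for n in range(100_000))

# Basel: pi^2/6 = sum 1/n^2
basel = math.sqrt(6 * sum(1 / n**2 for n in range(1, 100_000)))

# Ramanujan: each term adds roughly 8 correct digits
s = sum(math.factorial(4*n) * (1103 + 26390*n) / (math.factorial(n)**4 * 396**(4*n))
        for n in range(3))
ramanujan = 1 / (2 * math.sqrt(2) / 9801 * s)

print(leibniz, basel, ramanujan)
```

A hundred thousand Leibniz terms buy you only four or five correct decimal places; three Ramanujan terms exhaust double precision.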

OK.  That's a lot of formulas for computing a number that is only special because it's related to circles.  There are plenty of interesting numbers \(e,\sqrt{2},\dots\) that are just as fascinating as \(\pi\) (more fascinating, even?) but which don't get the same slavish devotion.  Why?  Probably just because of the circle thing--it's defined as a ratio but it's an irrational number (transcendental, even).  But sometimes, as the movie implies, this devotion pushes dangerously close to insanity.  It at least often devolves into numerology.  Superstition.  Finding patterns when they aren't there.  Something wicked this way comes...


I remember sitting in eleventh grade English class one morning, second period after a late night flipping burgers at work, half-asleep with my head against the wall, discussing poetry.  This would have been American literature, and I have no idea what poem we were discussing, but at one point my teacher asked what the meaning of the poem was, and I, in full 16-year-old jackassery, said something like, "Who cares?  Maybe he didn't mean anything.  Maybe he just wrote it."

"Nice attitude, Kevin."

Yeah, well, I was 16.  But I think we can sometimes be guilty of "beating it with a hose to find out what it really means" (as former poet laureate Billy Collins put it).  And as we delved further into Borges this past week I began to wonder if we weren't doing just that.  I love Borges and his application of mathematics, but after a few hours of unraveling his use of the infinite many of us had a glazed look.  You know that 1,000 yard stare you get after flying from Seoul to Atlanta?  Not quite that bad, but close enough.

So, let's talk a bit more about The Library of Babel, and then maybe a little about The Aleph, and then move on to other things.  Putting aside the structure of the Library, which we never did settle on, and the number of distinct books in it, which is easy to calculate but impossible to comprehend, it remains to ask what it all means.  Even then, it is easy to get lost in infinite mathematical loops.  For example, there is talk of The Book, a catalog of all the books in the Library.  Let's denote this book by \({\mathbb B}\).  Here's a question: is \({\mathbb B}\) listed in \({\mathbb B}\)?  If \({\mathbb B}\) is a complete catalog of the books, and if \({\mathbb B}\) is in the library, then it must be listed in it. But there are too many books in the Library to be listed in a single book; that is, even if each book were represented by a single character in \({\mathbb B}\), it would follow that \({\mathbb B}\) must be broken into almost as many volumes as there are books in the library.  Meaning, almost every book in the library is part of The Book, and so what's the point of \({\mathbb B}\)?  This smacks of Russell's Paradox, which led to the development of the set of axioms we now use for standard set theory.  

So maybe \({\mathbb B}\) isn't in the Library, but then who can access it?  The first sentence tells us that the Library is the Universe, so is \({\mathbb B}\) God?  Can we ever find it?  How would we know?  At this point I'm reminded of the following passage from Kafka's Great Wall of China:   

Try with all your powers to understand the orders of the leadership, but only up to a certain limit—then stop thinking about them.
— Franz Kafka, The Great Wall of China

I will take Franz's advice and stop thinking about \({\mathbb B}\).  One final remark about The Library of Babel:  we really only need one book.  In fact, we only need this blog post, for it is every possible book in some language.  We may not know these languages because no one speaks them, but in some strange tongue this blog post is Moby-Dick, and in another it is The Hunt for Red October.  So perhaps we should give up our epistemania and simply take things for what they are.

As for The Aleph, the other Borges story we discussed, we see the same theme:  infinite regress as a subject of confusion.  The Aleph is a point in a Buenos Aires basement that contains all other points in the universe.  But then it also contains The Aleph which contains the universe which contains The Aleph which contains...  You get it. For me this story is more one of melancholy: the narrator (whose name is Borges) was in love with Beatriz, who died, and the narrative is more a reflection on how his memory of her is fading.  Personally, I think Beatriz is The Aleph.  Haven't we all seen the whole universe in another?  Isn't that the hope, anyway?  Melancholy gives way to hope gives way to melancholy gives way to... 

Borges y yo (y tú también)

Jorge Luis Borges, perhaps more than any other writer of his stature, weaves mathematics into the structure of his stories so completely that it can take an immense amount of analysis to unravel them.  I'm not entirely sure that Borges thought this was worthwhile; indeed, during interviews he often took gentle jabs at literary analysts who spent so much time and hand-wringing over his work.  But it's so difficult to resist.  I mean, I dare you to read The Library of Babel and not get sucked into trying to figure out the structure of the thing.  At the beginning of class last week, I asked the students to spend a few minutes sketching what they thought the Library looked like.  Here are a few of their renditions (click on them to scroll).

You see lots of hexagons because Borges spends some time telling us the structure of the rooms in the Library:  each gallery is hexagonal, bookshelves line four walls, there are two free walls.  The following passage in the story says a lot, but leaves open plenty of room for interpretation:  One of the hexagon's free sides opens onto a narrow sort of vestibule, which in turn opens onto another gallery, identical to the first--identical in fact to all.  To the left and the right of the vestibule are two tiny compartments.  One is for sleeping, upright; the other is for satisfying one's fecal necessities.  Through this space, too, there passes a spiral staircase, which winds upward and downward into the remotest distance.

At first read, then, I immediately conclude that each of the two free sides opens to another hexagon.  Even this was disputed by some students.  Maybe there's nothing on the other free side, or maybe there's a bench for sitting to read, and all the hexagons wrap around the staircase, forming a sort of Tower of Babel shaped library.  Maybe.  If this is indeed the case, then each floor of the library would contain only finitely many cells, and I don't really think this is what Borges had in mind (or maybe he did--you never know). The sentence in Spanish isn't any clearer: Una de las caras libres da a un angosto zaguán, que desemboca en otra galería, idéntica a la primera y a todas. 

Even if you accept the premise that each of the free sides leads to another gallery, there's still a lot of ambiguity.  Just how "identical" are these cells?  If we mean the free sides are always opposite each other, then we get a particular structure:  the cells line up, extending infinitely along a line in each floor and then these rows stack on each other vertically.  Maybe.  But what if there is a staircase in only one of the corridors joining two cells?  Note that Borges isn't clear on this point--una de las caras libres...  He doesn't say solamente una, which would mean exactly one staircase.  If there is a staircase in each passage, then the geometry of the Library is fixed--each floor must look like all the other floors.  But if there is only one staircase for each pair of cells, then more interesting things can happen--we could have a different layout for each floor.

Also, if we don't insist that the free sides are always in the same positions in each cell, then we can get all sorts of labyrinthine structures on each floor.  And, these labyrinths can be so elaborate that two cells that share a wall can be arbitrarily far apart in the sense that a librarian would have to walk through a huge number of galleries to get from one to the other (here, "huge" means that for any positive integer \( n\), there are adjacent galleries which require a librarian to pass through at least \( n\) cells to get from one to the other).  This can be ok if we are in the one staircase to a pair model because we may then be able to go up or down a few floors to make our way to an adjacent cell, thereby skipping the labyrinth on a particular floor.

Wait a minute.  We haven't even begun to discuss what this story is about.  We are arguing about the structure of the damn Library.  I later read this on Twitter from one of the students in the class:

when you spend over an hour talking about hexagons in a class and it turns into a heated discussion...
— @studentin2+2=5

Yeah, we did just that.  Hexagons \(\Longrightarrow\) intense discussion.  Could a mathematician ask for more?

We'll get around to the meaning of this story in class this week.  For now, let's think about how many books are in the Library.  Borges tells us that each book has \(410\) pages, each of which has \(40\) lines of \(80\) characters.  He also tells us that the alphabet consists of \(22\) characters along with a space, period, and comma.  That makes \(25\) orthographic characters.  We are told the Library is complete; that is, every possible book is in it, which means the number of books is finite.  In fact, each book consists of \(410\times 40\times 80 = 1,312,000\) characters, and since each of these may be any of the \(25\) possibilities, there are \[N=25^{1,312,000} \approx 10^{1,834,097}\] distinct books.  This is an enormous number (although, next to infinity it is effectively zero).  To give you some perspective on how large \(N\) is, if the known universe were filled with nothing but protons (and nothing else, no blank space) it would only contain about \(10^{126}\) of them.  So the Library can't exist in our universe; there just isn't room.
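If you'd like to check the arithmetic, here are a few lines of Python (mine, and nothing fancy).  Printing \(N\) itself is hopeless, so we count its decimal digits instead:

```python
import math

chars_per_book = 410 * 40 * 80   # pages x lines x characters per line
alphabet = 25                    # 22 letters plus space, period, comma

# N = 25**1_312_000 is a perfectly good Python integer, but far too big
# to print; the number of decimal digits of N is floor(log10(N)) + 1.
digits = math.floor(chars_per_book * math.log10(alphabet)) + 1
```

You'll find `chars_per_book` comes to \(1,312,000\) and `digits` to \(1,834,098\), i.e. \(N\approx 10^{1,834,097}\).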

There are all sorts of odd books in the Library.  There is a completely blank book.  There is a book that is blank except that it has a period in the middle of page 193.  There is a book with nothing but the letter x.  There are \(1,312,000\) books that have a single letter x.  The tweet quoted above appears exactly as it is in \(25^{1,311,898}\) of the books in the Library.  This blog post appears in a huge number of the books (if we write out the numbers and ignore the improper punctuation), in every language spoken on Earth (if transcribed into the alphabet), and in any language spoken on any other planet (do you really think we're alone in the universe?). 

Question: how would you find a particular book in the Library?  Is there any hope?  Maybe it's enough to know it's there, just like mathematicians are often satisfied with existence proofs.  In any case, it's not hard to see that a given librarian may not be able to reach a particular book in his lifetime, even if he knows where it is.  Is this cause for despair?

I'll save the philosophy for next time.  For now, one final remark.  William Goldbloom Bloch has written a wonderful book, The Unimaginable Mathematics of Borges' Library of Babel, that talks about a lot of this mathematics in far greater detail.  I suggest picking it up if you are so inclined.  Or you can walk the Library for yourself, seeking out its meaning.

Franz and Georg

As far as I know, Kafka and Cantor never met, and there is no reason to believe they did.  Still, I can't help wondering if Franz knew about Georg's work, even though he claimed to have great difficulties with all things scientific and mathematical.  Here's why:  Kafka's Great Wall of China, which in typical Kafka fashion is about all sorts of things and kind of goes nowhere, has elements that immediately make me think of Cantor's work, particularly the so-called Cantor set.

The Cantor set \(C\) is one of those mathematical curiosities that we like to trot out to blow our students' minds.  It is constructed as follows.  Start with the closed unit interval \([0,1]\).  First remove the open middle third \( (1/3,2/3)\).  Then remove the open middle thirds from the remaining two intervals: \((1/9,2/9)\) and \((7/9,8/9)\).  Then remove the open middle thirds from the remaining four intervals.  Iterate this process, at the \(n\)th stage removing \(2^{n-1}\) intervals of length \(1/3^n\).  The set \(C\) is what remains at the "end." 
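Here's a little Python sketch of the construction, using exact rational arithmetic (my own illustration, nothing canonical):

```python
from fractions import Fraction

def cantor_stage(n):
    """The closed intervals remaining after n rounds of removing middle thirds."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        refined = []
        for a, b in intervals:
            third = (b - a) / 3
            refined.append((a, a + third))   # keep the left third
            refined.append((b - third, b))   # keep the right third
        intervals = refined
    return intervals

stage2 = cantor_stage(2)
total_length = sum(b - a for a, b in stage2)
```

After two rounds you get \([0,1/9], [2/9,1/3], [2/3,7/9], [8/9,1]\), with total length \((2/3)^2 = 4/9\); after \(n\) rounds the total length is \((2/3)^n\), which marches off to zero.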

The first claim about \(C\) is that it is, remarkably, uncountable.  The way to prove this is to use Cantor's diagonal argument (I wrote about this in the previous entry).  Here goes:  let's first abandon decimal notation and instead represent each number \(x\) in the interval \([0,1]\) using its ternary expansion:  \[ x=\frac{a_1}{3} + \frac{a_2}{3^2} + \frac{a_3}{3^3} + \cdots\] where each \(a_i = 0,1,\,\text{or}\, 2\). Now, observe that the elements of \(C\) are precisely those real numbers in the interval \([0,1]\) whose ternary expansions have all \(a_i = 0\,\text{or}\, 2\). (Aside: note that \(1/3\) is in \(C\).  Its ternary expansion is \(0.1000\dots\), so you might think that I've told you a lie.  But note that we also have \(1/3 = 0.02222\dots\), just like in decimal notation \( 0.9999999\dots = 1\).) If we have a bijection \(f:{\mathbb N}\to C\), then we construct a number \(x\) by taking the \(i\)th digit of \(x\) to be \(2\) if the \(i\)th digit of \(f(i)\) is \(0\) and \(0\) if the \(i\)th digit is \(2\).  Then \(x\) isn't in the image of \(f\), contradiction.

But, in typical Cantorian fashion, \(C\) has another weird property.  Let's add up the lengths of the intervals we remove from \([0,1]\) to get \(C\): \[\frac{1}{3} + \frac{2}{9} + \frac{4}{27} +\cdots +\frac{2^{n-1}}{3^n} +\cdots = \frac{1/3}{1-(2/3)} = 1.\]  You read that correctly:  we've removed "everything" yet what remains is an uncountable dust scattered throughout the unit interval.  

Compare this with Kafka's description of how the Great Wall was built:

The Great Wall of China was finished at its northernmost location. The construction work moved up from the south-east and south-west and joined at this point. The system of building in sections was also followed on a small scale within the two great armies of workers, the eastern and western. It was carried out in the following manner: groups of about twenty workers were formed, each of which had to take on a section of the wall, about five hundred metres. A neighbouring group then built a wall of similar length to meet it. But afterwards, when the sections were fully joined, construction was not continued on any further at the end of this thousand-metre section. Instead the groups of workers were shipped off again to build the wall in completely different regions. Naturally, with this method many large gaps arose, which were filled in only gradually and slowly, many of them not until after it had already been reported that the building of the wall was complete. In fact, there are said to be gaps which have never been built in at all, although that’s merely an assertion which probably belongs among the many legends which have arisen about the structure and which, for individual people at least, are impossible to prove with their own eyes and according to their own standards, because the structure is so immense.

Imagine then, how this would look from space (you can see the Wall from there, or so the fraudsters at NASA would have us believe).  In the early days of construction, you wouldn't be able to see it at all--it would be scattered, barely-visible segments, much like the Cantor set.  In fact, it's possible to build the Cantor set in this way, via the following process. Define two functions \(F_0\) and \(F_1\) on the unit interval \([0,1]\) by \[ F_0(x) = \frac{1}{3}x\] \[F_1(x) = \frac{1}{3}x + \frac{2}{3}.\]  Now, start with a number \(x_0\) in the open interval \((0,1)\) and iteratively apply one of the functions \(F_0\) or \(F_1\) randomly.  The map \(F_0\) takes a point two-thirds of the way toward \(0\) and \(F_1\) takes a point two-thirds of the way toward \(1\).  So, if a point is in, say \((1/3,2/3)\), then both \(F_0\) and \(F_1\) take it into one of the complementary intervals.  If a point is in \((1/9,2/9)\) then it maps to either \((1/27,2/27)\) or \((19/27,20/27)\), and so on.  No matter what we do, by iterating these maps indefinitely we end up at a point in \(C\). 
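If you want to watch this happen, here's a toy version in Python--mine, not anything official--using exact fractions so the ternary digits don't get mangled by floating point:

```python
import random
from fractions import Fraction

def F0(x):
    return x / 3                    # two-thirds of the way toward 0

def F1(x):
    return x / 3 + Fraction(2, 3)   # two-thirds of the way toward 1

def chaos_game(iterations, seed=0):
    """Start at 1/2 and apply F0 or F1 at random, exactly."""
    rng = random.Random(seed)
    x = Fraction(1, 2)
    for _ in range(iterations):
        x = rng.choice((F0, F1))(x)
    return x

def ternary_digits(x, k):
    """The first k base-3 digits of x in [0, 1)."""
    digits = []
    for _ in range(k):
        x *= 3
        d = int(x)
        digits.append(d)
        x -= d
    return digits

digits = ternary_digits(chaos_game(20), 20)
```

After twenty random applications, the first twenty ternary digits of the result are all \(0\)s and \(2\)s, no matter how the coin flips came out--the point has been herded into (the twentieth approximation of) \(C\).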

Now, this isn't really the right metaphor since the Wall is getting filled in, while the Cantor set is built by chipping away, but it sure feels like the same idea.  The workers getting shipped from one location to another, seemingly at random, to build this thing that no one individual can verify the existence of; points moving around the interval until they settle at points in \(C\).  Beautiful and unimaginable all at once.

There Is An Infinite Amount of Hope, Just Not For Us

The fact is that every writer creates his own precursors.
— Jorge Luis Borges, in Kafka and His Precursors

Borges points out that Zeno's parable of Achilles and the tortoise, which neatly encapsulates his (Zeno's) paradox, is Kafkaesque.  That is, without Kafka, there is no Zeno.  Even the title of this post, a direct quote from Kafka, is Kafkaesque.  Does he mean that there is an infinite amount of "hope" in the world, but we can't have it?  Or does he mean that there's plenty of hope in the world, but that we are hopelessly doomed as a species?  Or both? Or neither?

Kafka's work has also been referred to as the "poetics of non-arrival."  Many of his characters fail to reach a destination (or wake up as giant cockroaches and get starved by their families--same diff).  In class this week we read Before the Law, whose very title launches a series of questions.  "Before" in what sense?  Temporally? Spatially? Both? Neither? And which "law" does he mean?  Religious? Secular? Moral? Scientific? All? None?  You get the point. 

In case you haven't read the story, and you should since it's only a page long, here's a summary.  A man comes from the country to see the law.  He encounters a gatekeeper who tells him he can't enter at this time but it might be possible later.  He also implies that even if the man does pass through this door that there is a succession of doors and gatekeepers, each more terrible than the last, so that in some sense he may as well not bother.  Well, the man just sits there for years.  He tries bribes.  He asks the gatekeeper lots of questions.  He grows old and even resorts to imploring the fleas in the gatekeeper's fur collar to answer his pleas.  In the end, he dies having never passed the first door (and there's a version of Zeno's paradox that goes this way--being unable to take the first step).  Non-arrival, indeed.

This class is about mathematics as metaphor in literature, and many of Kafka's works make use of the infinite in one form or another (more on that next week).  But what about literature as metaphor for mathematics?  If ever there were a Kafkaesque branch of math it would have to be Cantor's work on the infinite.  Before Cantor, everyone more or less assumed that infinity is infinity; that is, there is only one level of infinity, or more accurately that all infinite sets have the same cardinality.  Cantor demonstrated rather dramatically that this is false.  In fact, a consequence of his work is that there is an infinity of infinities, each larger than the last.

If you've never thought about this before it can be really counterintuitive and difficult to accept, but I imagine Kafka, who claimed to have great difficulties with all things scientific, would have appreciated the mathematical abyss Cantor opened up for us.  I do not use the term abyss lightly--Cantor was attacked and mocked by his contemporaries, often viciously, and this fueled his depression and ultimately led to multiple hospitalizations for treatment; he died poor and malnourished in a sanatorium in 1918.  Poincaré referred to Cantor's work as a "grave disease" infecting mathematics; Wittgenstein dismissed it as "utter nonsense."  But Cantor's ideas survived and are considered fundamental to mathematics today.

So just how weird are we talking here?  First a question: What is an infinite set? A set that isn't finite, right?  OK.  Definition 1. A function \(f:A\to B\) is a bijection if the following two conditions hold: (a) \( f\) is injective; that is, if \(a_1\ne a_2\) are distinct elements of \(A\) then \(f(a_1)\ne f(a_2)\); and (b) \(f\) is surjective; that is, if \(b\in B\) there is some \(a\in A\) with \(f(a)=b\).  Definition 2.  A set \(S\) is finite if there is a bijection \(f:S\to\{1,2,\dots ,n\}\) for some \(n\ge 0\).  In this case, we say that \(n\) is the cardinality of \(S\).  This notion of size is well-defined, but that requires (a simple) proof.

Now, let's denote by \({\mathbb N}\) the set of natural numbers, \(\{0,1,2,3,\dots\}\), where the ... means go on forever.  These are the numbers we use to count and we know there are infinitely many.  In some sense, this is the simplest infinite set there is.  Definition 3.  A set \(S\) is countably infinite if there is a bijection \(f:S\to {\mathbb N}\).  You might think that every infinite set is countable, because, you know, infinity is infinity, but you'd be wrong (more on that below).  For now, here are some examples of countably infinite sets.  The whole set of integers \({\mathbb Z} = \{\dots ,-2,-1,0,1,2,\dots \}\) is countable.  Now, wait, you say, there are clearly more integers than natural numbers, twice as many in fact.  But all I have to do is produce a bijection.  Here's one:  \(f:{\mathbb Z}\to {\mathbb N}\) defined by \(f(n) = 2n\) for \(n\ge 0\) and \(f(n) = -2n-1\) for \(n<0\).  You can check that this works.  The set \(E\) of even natural numbers is countable:  take \(f(n) = n/2\) for \(n\ge 0\).  Huh?  There are only half as many even numbers as there are all numbers.  So, we already see that infinity can be weird.
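A few lines of Python make the integer bijection concrete (a finite spot-check, of course, not a proof):

```python
def f(n):
    """The bijection from the integers to the naturals described above:
    0, 1, 2, ... map to 0, 2, 4, ... and -1, -2, -3, ... map to 1, 3, 5, ..."""
    return 2 * n if n >= 0 else -2 * n - 1

images = sorted(f(n) for n in range(-5, 6))
```

The window \(\{-5,\dots,5\}\) lands exactly on \(\{0,\dots,10\}\): every natural number in range gets hit once and only once.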

It gets weirder.  Let \({\mathbb Q}\) be the set of rational numbers; that is, fractions of the form \(a/b\) where \(a,b\in {\mathbb Z}\), \(b\ne 0\).  Of course, there are duplicates when we write them in this form, but we could insist that \(a\) and \(b\) are relatively prime.  This set is countable, too. There are clever diagrams that prove this (they're easy to find with a quick search), but I will simply list the rationals:  \[0,1,-1,\frac{1}{2},-\frac{1}{2},2,-2,\frac{1}{3},-\frac{1}{3},\frac{2}{3},-\frac{2}{3},3,-3,\dots\]  It should be reasonably clear how to continue this pattern in such a way that every rational number ends up on the list, and so this is a bijection between \({\mathbb Q}\) and \({\mathbb N}\).  Weirder still:  the set of algebraic numbers is countable.  These are the numbers which are solutions to polynomial equations with integer coefficients.  You might think there are a lot of these (well, yeah, there are infinitely many), but they're countable.
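Here's a little generator (my own sketch) that produces the rationals in exactly that order--for each denominator, the proper fractions in lowest terms with their negatives, then the integers:

```python
from fractions import Fraction
from itertools import islice
from math import gcd

def rationals():
    """Yield the rationals in the order listed above."""
    yield Fraction(0)
    q = 1
    while True:
        # proper fractions with denominator q, in lowest terms, plus negatives...
        for p in range(1, q):
            if gcd(p, q) == 1:
                yield Fraction(p, q)
                yield Fraction(-p, q)
        # ...then the integers q and -q
        yield Fraction(q)
        yield Fraction(-q)
        q += 1

first13 = list(islice(rationals(), 13))
```

The first thirteen terms match the list above, and every rational eventually shows up exactly once.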

OK.  So, what about an uncountable set?  I claim the set of real numbers \({\mathbb R}\) is uncountable.  To prove this, I will show (a) \({\mathbb R}\) has the same cardinality as the open interval \( (0,1)\), and (b) \( (0,1)\) is uncountable.  The first one is easy; here is a bijection between \( (0,1)\) and \({\mathbb R}\): \[f(x) = \tan\biggl(\pi\biggl(x-\frac{1}{2}\biggr)\biggr).\]  To prove that the interval \((0,1)\) is uncountable, we use Cantor's Diagonalization Argument.

Suppose we had a bijection \(f:{\mathbb N}\to (0,1)\) (we can run our bijections in either direction).  That would mean we could put the numbers in \((0,1)\) in a list (using decimal expansions of the numbers): \[0.a_1a_2a_3a_4\dots \] \[0.b_1b_2b_3b_4\dots \] \[0.c_1c_2c_3c_4\dots \] \[0.d_1d_2d_3d_4\dots \] \[\vdots\]  Consider the following number \( x\):  the \(i\)th digit of \(x\) is \(1 + \text{the}\, i\text{th digit of}\, f(i)\) (here if the \(i\)th digit of \(f(i)\) is \(9\), this means \(0\)).  Now, ask yourself:  is \(x\) on this list?  It can't be the first number since it differs in the first digit; it can't be the second number since it differs in the second digit; it can't be the third or the fourth or the \(i\)th for any \(i\) since it differs from \(f(i)\) in at least the \(i\)th spot.  So \(x\) is not on the list; that is, our function \(f\) is not surjective, a contradiction.  So no such bijection exists and \((0,1)\) is uncountable.
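The argument is easy to act out on a finite list.  Here's a toy Python version (mine, obviously--Cantor didn't write Python):

```python
def diagonal(rows):
    """Build a digit string differing from the i-th row in the i-th digit,
    using the add-1-mod-10 rule from the text."""
    return "".join(str((int(row[i]) + 1) % 10) for i, row in enumerate(rows))

rows = ["1415926", "7182818", "4142135", "6180339"]
x = diagonal(rows)
```

Here `x` comes out to "2251", which disagrees with row \(i\) in digit \(i\), so it's nowhere on the list--and the same construction works no matter how long the list is.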

Now, you might say, well, we can fix that.  Just bump everything on the list down one spot and add \(x\) at the beginning.  But then we could just do it again to construct a new number that isn't on the list.  And so on, and so on, and so on.  So there's an infinite amount of hope (to solve this), just not for us.

Cantor constructed all sorts of weird stuff, and I'll say more about that next week in relation to Kafka's Building the Great Wall of China.  For now, though, let me end by showing how there is an infinity of infinities.  This idea has been around for a long time: recall the Hindu story of the earth being held up by an elephant who is standing on a turtle.  But what's the turtle standing on?  Well, it's turtles all the way down.  Or Bertrand Russell's arguments against the existence of God:  a standard logical argument is that everything that exists has a cause; the earth exists so it has a cause; that cause is God.  But Russell pointed out that God would then have to have a cause, a meta-God of sorts, which would also have a cause (a meta-meta-God) and so on, producing an infinite string of \(\text{meta}^n\)-Gods, each more powerful than the last (Kafka squeals with delight).  The trick for producing ever larger sets is the power set construction.  It goes like this:  let \(A\) be any set.  Denote by \(P(A)\) the set of all subsets of \(A\).  It is clear that the cardinality of \(P(A)\) is at least that of \(A\) since we may find an injection of \(A\) into \(P(A)\) (the function \(f(a) = \{a\}\) will do).  But any such map cannot be a surjection.  The trick is to assume you have a bijection \(f:A\to P(A)\) and then build a subset of \(A\) which can't be in the image of \(f\), just like Cantor's Diagonalization Argument.  Since I've assigned this as a homework problem, I won't divulge the answer here, but I will say there is some relation to Russell's Paradox.
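For finite sets, at least, you can watch the power set outgrow the original set directly--this little sketch (my own, built on the standard itertools recipe) doesn't spoil the homework, I promise:

```python
from itertools import chain, combinations

def power_set(A):
    """All subsets of A, as tuples: one combination for each size r."""
    elements = list(A)
    return list(chain.from_iterable(
        combinations(elements, r) for r in range(len(elements) + 1)))

subsets = power_set([1, 2, 3])
```

A three-element set has \(2^3 = 8\) subsets, a ten-element set has \(2^{10} = 1024\), and so on: \(P(A)\) is always strictly bigger than \(A\).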

Anyway, assuming this, we now see how we can get bigger and bigger infinite sets.  Start with the natural numbers \({\mathbb N}\) and then iterate the power set construction.  The set \(P({\mathbb N})\) must be uncountable and the set \(P(P({\mathbb N}))\) is larger still.  This leads to the whole area of transfinite arithmetic, which I don't know much about and won't try to explain, but I think you'd agree must be pretty wild. 

If Borges is right that each writer creates his precursors, then I think we have to count Cantor among them.




Zeno, Limits, and Arguing About Numbers

One of my favorite things about mathematics is that it's its own insular world in many ways (note the correct it's-its usage there; as an aside I think passing an it's/its, your/you're, and their/they're/there test should be a high school graduation requirement, but I digress).  As I mentioned to my colleague and fabulous co-instructor Eric, we make choices in mathematics all the time.  They are not arbitrary, but we do make them, and we try to do so in a way that's as intuitive and clear as possible.  The first example is Euclid's axioms for plane geometry, which we've already seen and which we know cause some trouble once you try to use the parallel postulate.  It just gets more exotic from there, but at all times it is important to remember that mathematics is based on axioms and definitions.  Once we define a concept, we then try to prove things about it.  Then we might worry about whether it has a practical application or not (I said might; G.H. Hardy  famously abhorred applications of mathematics).

Zeno's Paradox, the one where you can never get where you're going because you first have to go halfway and then half the remaining distance and then half that remaining distance and so on forever, is evident in all sorts of literary works.  Woolf's To the Lighthouse has it embedded in there a bit--will they ever get to the lighthouse? Will Lily finish her painting? (Yes, and yes, as it turns out.)  But it's more blatant in Kafka's Before the Law, which we read last week in class.  The man comes from the country to see the "law," whatever that is.  There is a gatekeeper who will not let him pass at the moment, but he informs the man that beyond the gate there is another, with its own gatekeeper, and that beyond that gate is another whose gatekeeper is so fearsome that even he (the gatekeeper) cannot bear to look at him.  So, we are led to conclude that there are an infinite number of gates and gatekeepers, each more powerful than his predecessor. What would such a setup look like?  An infinite string of gates like this?

an infinite string of doors?


Or maybe it's more like an infinite collection of concentric circles:




Question:  can we ever reach the law?  Which law are we even talking about?  Does it even exist?  Of course, the man never even gets past the first gate (this is Kafka, after all) and dies waiting, so we never discover the structure of the building which houses the law.

So where's the math here?  Well, it's all in the question of how to resolve Zeno's Paradox.  This leads to the idea of limit, developed by Bolzano, Cauchy, and Weierstrass over the course of the 1800s.  Finding the limit of a sequence \(a_1,a_2,a_3,\dots\) amounts to playing the following adversarial game:  I claim the sequence converges to some number \(L\).  You then tell me how close to \(L\) you need the terms of the sequence to get.  Then I find a positive integer \(N\) so that if I go beyond the \(N\)th term of the sequence I'm within your tolerance.  In math: \[ \lim_{n\to\infty} a_n = L\] if for every \(\varepsilon >0\) there exists an \(N\) so that if \(n\ge N\), we have \[ |a_n-L| < \varepsilon.\]  If you imagine plotting the values of the sequence (after all, a sequence is just a real-valued function with domain the set of natural numbers), then this definition says that if I go far enough out, all the plotted points live inside the horizontal strip \(L-\varepsilon < y < L+\varepsilon\). 
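You can even play the adversarial game in code.  Here's a toy Python referee (my own sketch; it searches a finite range, so it's a sanity check, not a proof):

```python
def challenge(a, L, epsilon, horizon=10**5):
    """Find an N so that |a(n) - L| < epsilon for every n from N up to
    the horizon, by scanning down for the last term that violates it."""
    for n in range(horizon, -1, -1):
        if abs(a(n) - L) >= epsilon:
            return n + 1
    return 0

N = challenge(lambda n: n / (n + 1), 1, 0.01)
```

For \(a_n = n/(n+1)\) and \(\varepsilon = 0.01\) the referee answers \(N = 100\): from the hundredth term on, \(|a_n - 1| = 1/(n+1) < 1/100\).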

But we still haven't gotten to Zeno (will we get there?).  What we are trying to do there is add up an infinite string of numbers \[\frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n}+\cdots\] and the problem is that we don't know how to do that.  Can we?  This is where the mathematician gets to make a choice.  Here's how we deal with infinite sums:  we can definitely add up a finite collection of numbers,  so given an infinite sum \(a_1+a_2+\cdots +a_n+\cdots\) we define the \(k\)th partial sum to be \[s_k = a_1+a_2+\cdots + a_k\] and then say \[ \sum_{n=1}^\infty a_n = S\quad \text{if} \quad \lim_{k\to\infty} s_k = S.\]  So, in the case of Zeno's sum, we have \[s_k = \frac{1}{2} + \frac{1}{4}+\cdots + \frac{1}{2^k} = 1-\frac{1}{2^k}\] (the last equality should be pretty obvious to you--think about how far you are from the end if you've gone \(k\) steps).  This sequence clearly has limit \( 1\) et voila we've resolved the paradox. 
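If you'd like to check that partial-sum formula, exact rational arithmetic makes it painless (a quick sketch of my own):

```python
from fractions import Fraction

def zeno_partial_sum(k):
    """s_k = 1/2 + 1/4 + ... + 1/2^k, computed exactly."""
    return sum(Fraction(1, 2 ** n) for n in range(1, k + 1))

shortfalls = [1 - zeno_partial_sum(k) for k in range(1, 11)]
```

The shortfalls \(1 - s_k\) come out to exactly \(1/2, 1/4, \dots, 1/1024\): you're always short by precisely the step you haven't taken yet.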

Or have we?  Our students weren't so sure.  What we've really done is define the paradox away.  That is, by defining what we mean by an infinite sum, we are able to demonstrate that it makes sense to add these powers of \( 2\) and that the answer is \(1\).  But we haven't really resolved it philosophically, have we?  Alas.

But that's not what mathematicians do.  The definition above is extremely useful and allows us to make sense of all sorts of interesting things like the natural exponential function, trig functions, Fourier series, etc.  We'll trade philosophical quandaries for useful mathematics any day.

But here's one more fun thing to talk about, one which invariably spawns arguments.  A geometric series is an infinite series of the form \[ a+ ar +ar^2 + \cdots + ar^{n-1} +\cdots = \sum_{n=1}^\infty ar^{n-1}.\]  The number \(r\) is called the ratio of the series.  We can actually find a formula for the sum of such a series.  The trick is to consider the \(k\)th partial sum \(s_k = a+ar+\cdots +ar^{k-1}\), then multiply it by \(r\) to get \(rs_k = ar+ar^2+\cdots +ar^{k-1} + ar^k\).  Subtracting the latter from the former and then dividing by \(1-r\) we get \[s_k = \frac{a(1-r^k)}{1-r}.\]  Now, if \(|r|>1\), this sequence has no limit since the term \(r^k\) goes off to infinity.  If \(r=1\) this series clearly diverges since I'm just adding \(a\) to itself infinitely many times (assume \(a\ne 0\)), and if \(r=-1\) the partial sums just bounce back and forth between \(a\) and \(0\), so again there's no limit.  But, if \(|r|<1\), the term \(r^k\to 0\) and so we get the formula \[\sum_{n=1}^\infty ar^{n-1} = \frac{a}{1-r}.\] We'll come back to this later in the course when we talk more about infinity and Cantor's work, but for now, let's have an argument.
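Here's a quick sanity check of the closed form against brute-force addition, in exact arithmetic (my own sketch):

```python
from fractions import Fraction

def direct_sum(a, r, k):
    """Add a + ar + ... + ar^(k-1) term by term."""
    return sum(a * r ** n for n in range(k))

def closed_form(a, r, k):
    """The formula s_k = a(1 - r^k)/(1 - r), valid for r != 1."""
    return a * (1 - r ** k) / (1 - r)

a, r = Fraction(3), Fraction(-1, 2)
agree = all(direct_sum(a, r, k) == closed_form(a, r, k) for k in range(1, 30))
```

With exact rational arithmetic the two agree on the nose for every \(k\), even with a negative ratio.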

What number is this: \[0.99999999\dots\]  Note that any repeating decimal represents a geometric series.  In this case, we have \[0.99999999\dots = \frac{9}{10} + \frac{9}{10^2} +\cdots +\frac{9}{10^n} +\cdots\] and this is a geometric series with first term \(9/10\) and \(r=1/10\).  The sum is then \[\frac{9/10}{1-1/10} = \frac{9/10}{9/10} = 1.\]  Thus, we see that \[0.99999999\dots = 1.\]

Wait.  How can that be?  This is where the fight begins, and if you think about it, this is just a rephrasing of Zeno's paradox, where instead of going half the distance at each step, we go \(9/10\) the distance (same difference, just different sized steps).  Well, I just proved to you that the infinite sum is \(1\).  But wait, you say, that's just in the limit; it never actually equals \(1\).  But, I say, that's the definition of an infinite sum and the calculation is correct.  But, you say, that number has to be less than \(1\).  And round and round we go.  OK, I say, here's another proof.  Let \(x=0.9999999\dots\). Then \(10x = 9.99999999\dots\) and then we see that \[9x= 10x - x = 9.999999999\dots - 0.999999999\dots = 9\] from which it follows that \(x=1\).  You can't really argue with this logic.  I didn't use limits or the definition of an infinite sum.  I just did some algebra.  I don't know, you say, something still seems fishy...

Well, ok, how about this one, which I learned from my high school math teacher, Mrs. Ruth Helton. Note the following pattern \[\frac{1}{9} = 0.111111111\dots \]   \[\frac{2}{9} = 0.222222222\dots \] \[\vdots\] \[\frac{8}{9} = 0.8888888888\dots\]  So we must have \[\frac{9}{9} = 0.9999999999\dots,\] right?  I'm being facetious, but you have to admit that it's a good heuristic.

These two numbers really are the same, but it comes down to what we mean by "number."  We all understand what a natural number is because we use them to count.  It's then not too hard to get to rational numbers because we understand how to divide wholes up into parts.  We understand negative numbers because we have all owed someone money at some point.  But then we reach the question of what an arbitrary real number is, say \(\sqrt{2}\).  It is not a rational number (a fact which allegedly got its discoverer killed by the Pythagoreans), yet we know it exists since we can construct an isosceles right triangle with legs of length \(1\), whose hypotenuse then has length \(\sqrt{2}\).  More generally, how do we define the real numbers?  That's a rather complicated question, one which we won't discuss here, but which more or less comes down to approximating any number by a sequence of rationals (truncate the decimal expansion of the number at each place; these truncations are all rational).
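That truncation idea is easy to play with in code.  A sketch in Python: the \(k\)-digit decimal truncation of \(\sqrt{2}\) is the rational number \(\lfloor\sqrt{2\cdot 10^{2k}}\rfloor/10^k\), and squaring the truncations shows them creeping up on \(2\):

```python
from fractions import Fraction
from math import isqrt

# k-digit decimal truncation of sqrt(2), as an exact rational number
def truncation(k):
    return Fraction(isqrt(2 * 10**(2 * k)), 10**k)

for k in range(1, 7):
    t = truncation(k)
    print(t, float(t * t))  # the squares approach 2 from below
```

Each truncation is rational, the sequence increases, and the squares get as close to \(2\) as you like: the rationals are sneaking up on an irrational number.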

So, that's that for this week.  Up next, more Kafka and more infinity.



"Women Can't Write; Women Can't Paint"

How many times do you think Virginia Woolf heard that? Sexism was rampant enough in the early 20th century (luckily, we're past all that now, right?) that it was difficult for a woman to have a career as a novelist.  Add in the modernist style she used and it's a wonder that Woolf's work saw the light of day.

First, a confession.  Before picking up To the Lighthouse, I had never read any of Woolf's novels and, frankly, I was never a fan of modernist literature (Joyce, Faulkner, etc.).  I've read Dubliners, and in a fit of youthful bravado tried to read Ulysses once (I think I finished 20 pages).  About ten years ago I gave The Sound and the Fury a shot (read the first chapter, I think).  So my track record here is spotty at best and my initial impression as I waded through the first few pages of Lighthouse was one of, let's say, skepticism.  The nonlinear narrative, the near stream-of-consciousness language, the lack of action--where's the story? 

Which leads us to the question of what the point of literature is.  And by "literature" I don't mean mere fiction.  The point of, say, a Tom Clancy novel is entertainment.  It's fine to read as a way to pass time on airplanes, but we don't really learn anything about the human condition from it.  Capital-L literature, however, reveals deep truths about humanity and its place in the world.  As such, it demands more from its readers.  As I slogged through the opening scene--Mrs. Ramsay knitting socks for the lighthouse keeper's son, Lily Briscoe working on her painting, Mr. Ramsay lost in thought and grumpy as usual--I found myself drifting.  Losing my place.  Working hard to see what was even happening (answer:  not much).  Will they go to the lighthouse tomorrow?  No, says Mr. Ramsay, the weather will be no good and the sea will be rough.  James, sitting at his mother's knee disappointed, wanting to put a knife in his father's back.  Lily getting her painting critiqued by Mr. Bankes, who uses his pen knife to point at things on the canvas condescendingly.  Andrew and Minta: where are they?  Why haven't they come back?  Then, hey, here they are.  But they're late for dinner, which we get through Mrs. Ramsay's view, with idle conversation and a lot of talk about the bowl of fruit on the table.  And man, Mr. Ramsay is quite the needy sensitive academic, isn't he?

But wait.  Maybe I'm a bit like Mr. Ramsay.  Not in the needing people to tell me how important my work is, and not in the obsessed with leaving a legacy way, but in the hyper-aware of mortality, taking myself too seriously way.  And then I see that, yes, this is Capital-L literature and I am learning something about the human condition, and I've spent days just like this one, at the sea even, with my wife's family and not much happening but yet it's everything that life is; the children playing in the surf; the adults sitting on the porch reading, watching the waves, playing the guitar; and at night after dinner watching the moon rise on the horizon, drinking a cold beer; running with the dog in the sand.  No lighthouse, but maybe tomorrow we will go to the inlet to look for shells and shark teeth.

So, at some point I decided that I do like this book.  In class, feelings were mixed.  One student hated it and said so.  Others were tepid at best.  Before class I overheard a student saying that she had heard that this book is better when you're older, and I can see that.  I'm not sure how much I would have liked or understood To the Lighthouse when I was 21.  Or 25.  Or 35, even.  Which leads to another question:  do we have to read it all when we're so young?  I didn't read Moby Dick until I was past 40, and maybe that's right.

Anyway, this is supposed to be a class about mathematics and literature, so let's get to that.  Obviously, there's a lot of nonlinearity and chaos in this book's narrative structure.  There's the uncertainty of measurement--Mrs. Ramsay is constantly checking the length of the sock she's knitting, for example.  Lily's painting will embody some of this eventually; by the end, it has gone from a fairly standard impressionist landscape to a cubist work in which Mrs. Ramsay is a blurry triangle.  There's also the trip to the lighthouse as a metaphor for the infinite, a sort of Zeno's Paradox made concrete.  But what we spent most of the math time on was the Principle of Mathematical Induction (PMI). 

Question: Can you knock down an infinite row of dominoes?  In essence, this is what the PMI is about.  There are all sorts of philosophical problems with the question, but induction is a useful proof technique when one wants to make a claim about a statement being true for all integers.  After telling the class the (probably apocryphal) story about Gauss and adding up the first hundred positive integers (answer: \( 5050 \)), I gave an induction proof for the formula for adding up the first \( n \) squares:  \[ 1^2 + 2^2 +\cdots + n^2 = \frac{n(n+1)(2n+1)}{6}.\] Induction works like this:  first prove that your proposed statement holds in some base case, usually \( n=1\) but it could be any integer; then, assuming the result is true for \( n\) prove it holds for \( n+1\).  What this amounts to, using the domino analogy, is that you can knock down the first domino, and assuming you can knock down the first \(n\) dominoes you can show that you knock down the (\( n+1\))st domino.  You may then conclude that the result is true for all positive integers; that is, you knock down all the dominoes.
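The induction does the real proving, of course, but it's reassuring (and trivial) to spot-check the formula by machine.  A quick sketch:

```python
# add the first n squares the slow way
def sum_of_squares(n):
    return sum(i * i for i in range(1, n + 1))

# the closed form the induction proof establishes
def closed_form(n):
    return n * (n + 1) * (2 * n + 1) // 6

# Gauss's sum as a warm-up: 1 + 2 + ... + 100 = 5050
assert sum(range(1, 101)) == 5050

# the two agree for the first hundred cases (evidence, not proof!)
assert all(sum_of_squares(n) == closed_form(n) for n in range(1, 101))
print(closed_form(100))  # → 338350
```

A hundred confirmations is not a proof, naturally; that's the whole point of the PMI.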

Why bring up induction?  Well, Mr. Ramsay is a philosopher and there is a stretch in the narrative where he is thinking about his accomplishments. 

For if thought is like the keyboard of a piano,
divided into so many notes, or like the alphabet is ranged in twenty-six
letters all in order, then his splendid mind had no sort of difficulty
in running over those letters one by one, firmly and accurately, until
it had reached, say, the letter Q. He reached Q. Very few people in
the whole of England ever reach Q. Here, stopping for one moment
by the stone urn which held the geraniums, he saw, but now far, far
away, like children picking up shells, divinely innocent and occupied with
little trifles at their feet and somehow entirely defenceless against a
doom which he perceived, his wife and son, together, in the window. They
needed his protection; he gave it them. But after Q? What comes next?
After Q there are a number of letters the last of which is scarcely
visible to mortal eyes, but glimmers red in the distance. Z is only
reached once by one man in a generation. Still, if he could reach R it
would be something. Here at least was Q. He dug his heels in at Q. Q he
was sure of. Q he could demonstrate. If Q then is Q—R—. Here he
knocked his pipe out, with two or three resonant taps on the handle of the
urn, and proceeded. “Then R ...” He braced himself. He clenched

Qualities that would have saved a ship’s company exposed on a broiling
sea with six biscuits and a flask of water—endurance and justice,
foresight, devotion, skill, came to his help. R is then—what is R?

A shutter, like the leathern eyelid of a lizard, flickered over the
intensity of his gaze and obscured the letter R. In that flash of
darkness he heard people saying—he was a failure—that R was beyond him.
He would never reach R. On to R, once more. R—

So, he's trying to knock down dominoes, and he can't get to the \(18\)th (Hebrew numerology fact pointed out by a student in the class:  R is the eighteenth letter of the alphabet, and \(18\) means "life"; why did Woolf choose "R"? Ramsay? Reality?).  This also opened up a discussion of symbolic logic and how these systems are built.  I even drew a truth table on the board.  Good stuff.

But, we're not done.  More discussion of To the Lighthouse in the next installment.

Et in Arcadia Ego

Et in Arcadia Ego, by Nicolas Poussin


Tom Stoppard's Arcadia is a play that alternates between 1809 and the present (well, the 1993 present), begins with a mention of Fermat's Last Theorem (which had not yet been proved--Wiles finally got it a year later), ends as a metaphor for the Second Law of Thermodynamics, and has a structure that can itself be modeled (loosely) as a discrete dynamical system.  It skewers academia.  It is a postmodernist work that jabs at postmodernism.  There's sex, Romantic poetry, tortoises, waltzing.  So, yeah, lots to talk about.

Eric and I really geeked out on this one. The more you read it, the more you find, and the more interesting it becomes.  The story is actually not that complicated, but the structure of the play can make it seem that way.  Arcadia opens in the English countryside in 1809 at the home of the Earl and Lady Croom (we never meet the Earl).  The garden is being completely redesigned in the new Romantic style by a Mr. Noakes, who is using the only Improved Newcomen Steam engine in England to drain the pond. All the action in the play takes place in the drawing room of the home; the table in the center contains an assortment of objects that gets more cluttered as the play progresses.  The Croom daughter Thomasina is being tutored by one Septimus Hodge, a friend (acquaintance) of Lord Byron who is quite the Lothario, having seduced one of the house guests, Mrs. Chater.  Mr. Chater is a poet (we are led to believe) whose first major poem was skewered in the Piccadilly Review by an anonymous reviewer (but guess who it is) and whose recent work, Couch of Eros, is being read by Septimus in the opening scene.  Thomasina is quite gifted at mathematics and Septimus has given her an assignment for the morning:  prove Fermat's Last Theorem.  Of course she cannot, but she begins doodling in her notebook by iterating a certain function (we don't know which).  This is an explicit reference to discrete dynamical systems, which were not at all understood (or even much thought about) then, and even if they had been there was not enough (any?) computing power available to run thousands of iterates.  Note that when Stoppard was writing the play, "chaos" and all the pretty pictures had seized the popular imagination thanks to the caffeine and nicotine-fueled work of Benoit B. Mandelbrot (math joke:  what does the B. in Benoit B. Mandelbrot stand for?  answer: Benoit B. Mandelbrot.)

Scene 2 takes place in the modern era.  We meet Hannah, who is writing a book about the transformation of the garden at the Croom estate.  I forgot to mention that part of Mr. Noakes's plan included a hermitage.  Lady Croom wants to know who the hermit will be; after all, Mr. Noakes should supply one.  Hannah has a theory about it, which proves to be correct in the final page of the play.  We also meet Bernard Nightingale, an English scholar always on the lookout for fame and academic bragging rights.  In conversation with Hannah, he deduces, via some of the materials in the library, that Lord Byron (a) had been at the estate; (b) had seduced Mrs. Chater; and (c) had killed Mr. Chater in a duel, prompting him to flee England for the continent.  We also meet Valentine, who is trying to understand the grouse population on the estate.  The records of how many grouse were shot are extensive, stretching back more than 200 years, but he can't find the pattern ("There's too much noise in the system.  The noise!").  Of course there isn't much of a pattern, as we know from studying the logistic equation--populations can exhibit chaotic behavior, even when the inputs are known completely.  Upon stumbling on Thomasina's notebook on the shelf, though, he is astounded to find that she was experimenting with just such an equation; at first he dismisses it--"She couldn't have discovered it."  Academic snobbery at its finest.
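Valentine's problem is easy to dramatize.  Here's a sketch using the logistic map \(x \mapsto rx(1-x)\) (the parameter value \(3.9\), in the chaotic regime, is my choice): two populations that start out almost identical soon bear no resemblance to each other, which is exactly why the grouse records look like noise.

```python
# iterate the logistic map x -> r*x*(1-x) from starting population x
def trajectory(x, r=3.9, steps=30):
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.5000)
b = trajectory(0.5001)  # a nearly identical starting value

# sensitive dependence on initial conditions: the runs separate wildly
print(max(abs(p - q) for p, q in zip(a, b)))
```

The inputs here are known completely, and the map itself is a one-line formula; the "noise" is baked into the dynamics.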

Scenes 3 and 4 are in the past and present, respectively.  Act Two, whose first scene is Scene 5, begins in the modern period, then moves to the past in Scene 6.  Scene 7 is where all hell breaks loose; more on that below.  So, here's how the play is modeled like a discrete dynamical system:  the end of each scene provides the foundation for the beginning of the next.  That is, we learn something at the end of the scene and this gives the impetus for how the next scene begins.  Back and forth in time, this iteration proceeds as we move along.  Bernard makes a lot of assumptions, which may or may not be reasonable, and writes a paper claiming that Byron engaged in a duel, killing Chater.  When we go back in time, we find out the truth: that Chater was really a botanist who died after being bitten by a monkey on an expedition in Martinique; his wife then marries Lady Croom's brother, the captain of the expedition, who had brought the Chaters along only because he was in love with Mrs. Chater.  Hannah discovers the truth and tells Bernard that she will expose him as a fraud, humiliating him.  Back in the past, the final scene shows that Thomasina is in love with Septimus (and he tries to pretend he does not feel the same way).  We know that she dies in a fire that very night, and the play ends with her and Septimus dancing a waltz on stage while Hannah and Gus waltz clumsily alongside.  (Gus, whom I haven't mentioned until now, never speaks; he's the one who finds all the relevant documents, disproving Bernard's theory and confirming Hannah's theory about the hermit.)

The table in the center becomes cluttered with objects--increasing entropy.  Indeed, with two interacting systems contributing to the disorder, the Second Law tells us the total entropy ends up at least as large as the sum of the entropies the two systems had separately.  One consequence of the Second Law is the so-called "heat death" of the universe--eventually everything settles into a completely disordered mass at one uniform temperature.  The action of the play behaves this way a bit, but there are also obvious references to heat death; Thomasina literally dies from heat.

Because I couldn't help myself, here's a plot of the play as it bounces back and forth in time.  There's not really a time scale to measure, but generally, the scenes have varying length, tending to get shorter as the play goes on (as if the function being iterated were converging on some fixed point).  Scene 7 is chaotic in nature, fluctuating wildly between the past and present, with some dialogue lasting only one or two lines in each time period before bouncing back to the other.  It's difficult to visualize this, but the graph below is one attempt.

A rough graph of the action.


There is really too much mathematics and satire to summarize, so I'll stop here.  Up next, Virginia Woolf's To the Lighthouse.

2+2=5: Reframing Literature through Mathematics

Yes, I'm on sabbatical, and yes, I'm teaching a class anyway.  UF's Center for the Humanities and the Public Sphere has a team-teaching initiative.  My friend and colleague Eric Kligerman and I submitted a proposal a year ago for a course with the above title; the selection committee liked it, and here we are. The title of course references Orwell's 1984 and Winston Smith's final submission to the state, but it also refers to this great Radiohead song.  My plan is to blog about this weekly; maybe we'll turn it into an article.  Maybe not.

Our first class was Thursday, January 8.  We meet once a week for three hours.  That's intense and I'm not used to it (math is usually done in smaller chunks).  The class is not just about instances of mathematics in literature (like the coin flipping in Rosencrantz and Guildenstern are Dead), although we will point them out as they arise.  The real focus is on various authors' use of mathematics as metaphor and structure in their works.  Up first:  Book VII of Plato's Republic, which contains the famous Allegory of the Cave.  This is also the book in which Socrates is discussing which subjects are suitable for the education of his philosopher kings.  The first subject, after gymnastics, is arithmetic.  Socrates points out that Agamemnon was a horrible general, mostly because he didn't know his figures, but there's a bigger reason he's interested in it.  Namely, he argues that rulers need to understand the higher logical functions that come along with learning about numbers (he argues for geometry after arithmetic).  Indeed, there's a reason we still teach plane geometry in high school--it's not just its utility in describing things, but it's the first introduction to a rigorous logic system.  The skills learned in geometry apply to other fields and make the king fit to rule (once he reaches 50, of course).

To the Greeks, "geometry" meant Euclidean geometry and so we spent some time discussing this.  We introduced Euclid's five postulates, the first four of which are entirely obvious.  The fifth, often called the Parallel Postulate, was the subject of some controversy, even to Euclid.  Indeed, he avoided using it in proofs in the Elements until Proposition XXIX, which you can probably recite in its modern form: when parallel lines are cut by a transversal, alternate interior angles are congruent.  For 2,000 years, mathematicians tried to prove that the Parallel Postulate is a consequence of the others, to no avail.  It wasn't until the 1800s that someone asked the question of what happens if you negate it. (More accurately, it's easier to work with Playfair's Axiom, which is equivalent.) It turns out that it is possible to construct interesting, naturally occurring geometries in which the Parallel Postulate does not hold.  The first of these should have been obvious, even to Euclid, since the Greeks knew the Earth is a sphere.  On the surface of a sphere, given any "line" \(\ell\) and a point \(P\) not on the line, every line through \(P\) intersects \(\ell\).  Of course, "line" here means a great circle (think of longitudes) since they are the shortest paths between points on the surface of a sphere. (Ever wonder why flights to Europe pass over Newfoundland and then swing by Iceland? They're following a great circle, more or less.)  But let's be honest, it's a bit unfair to use our 21st Century hindsight to criticize the ancients for missing this one.

The other interesting non-Euclidean geometry is the hyperbolic plane.  In hyperbolic space, there are infinitely many lines through \(P\) that miss \(\ell\).  A model for this is the unit disc in the plane (not including the boundary circle) where "lines" are circular arcs orthogonal to the boundary circle, along with diameters.  Here's a picture of a point and infinitely many lines missing another line:

Got this from wikipedia:


You've seen this before.  M.C. Escher famously used the hyperbolic plane to make pieces like this:

Got this from this site:


And, if you've ever eaten green leaf lettuce, then you've digested hyperbolic space thoroughly.  In fact, hyperbolic structures show up when an object needs to curl up to conserve space.  Coral reefs behave this way for example.

So, with some non-Euclidean ideas in hand we're ready to proceed.  We ended class with this passage from Dostoyevsky's Brothers Karamazov:

My task is to explain to you as quickly as possible my essence, that is, what sort of man I am, what I believe in, and what I hope for, is that right? And therefore I declare that I accept God pure and simple. But this, however, needs to be noted: if God exists and if he indeed created the earth, then, as we know perfectly well, he created it in accordance with Euclidean geometry, and he created human reason with a conception of only three dimensions of space. At the same time there were and are even now geometers and philosophers, even some of the most outstanding among them, who doubt that the whole universe, or, even more broadly, the whole of being, was created purely in accordance with Euclidean geometry; they even dare to dream that two parallel lines, which according to Euclid cannot possibly meet on earth, may perhaps meet somewhere in infinity. I, my dear, have come to the conclusion that if I cannot understand even that, then it is not for me to understand about God. I humbly confess that I do not have any ability to resolve such questions, I have a Euclidean mind, an earthly mind, and therefore it is not for us to resolve things that are not of this world. And I advise you never to think about it, Alyosha my friend, and most especially about whether God exists or not. All such questions are completely unsuitable to a mind created with a concept of only three dimensions. And so, I accept God, not only willingly, but moreover I also accept his wisdom and his purpose, which are completely unknown to us; I believe in order, in the meaning of life, I believe in eternal harmony, in which we are all supposed to merge, I believe in the Word for whom the universe is yearning, and who himself was ‘with God,’ who himself is God, and so on and so forth, to infinity. Many words have been invented on the subject. It seems I’m already on a good path, eh? 
And now imagine that in the final outcome I do not accept this world of God’s, created by God, that I do not accept and cannot agree to accept. With one reservation: I have a childlike conviction that the sufferings will be healed and smoothed over, that the whole offensive comedy of human contradictions will disappear like a pitiful mirage, a vile concoction of man’s Euclidean mind, feeble and puny as an atom, and that ultimately, at the world’s finale, in the moment of eternal harmony, there will occur and be revealed something so precious that it will suffice for all hearts, to allay all indignation, to redeem all human villainy, all bloodshed; it will suffice not only to make forgiveness possible, but also to justify everything that has happened with men—let this, let all of this come true and be revealed, but I do not accept it and do not want to accept it! Let the parallel lines even meet before my own eyes: I shall look and say, yes, they meet, and still I will not accept it.

I'll leave it to you to decide whether or not this argument is valid.

Up next: Tom Stoppard's Arcadia, which includes references to discrete dynamical systems, Fermat's Last Theorem, and the second law of thermodynamics.  Tune in next time.


more multiplication

So I wrote about that Chinese multiplication video and how it's not really that great a method if you have large digits.  I gave the example of \(78\times 89\) to illustrate why.  Here's a nice way to do it that is perhaps a little more visually pleasing than the standard algorithm we all learned in school.  I'll explain it in a second, but here's an animation of it.



Here's what you do.  Draw a box and divide it into rows and columns, one for each digit in your factors.  In this case, it's a \(2\times 2\) grid. Then draw diagonals passing southwest to northeast in each box.  For each pair of digits, multiply them and write the two digits of the answer in the corresponding box, one on each side of the diagonal line: tens digit in the upper triangle, units digit in the lower (a single-digit product gets a \(0\) for its tens digit).  For example, the \(8\) in \(78\) times the \(9\) in \(89\) gives a \(72\) in the lower right corner.  Then add along the diagonals, starting from the lower right.  If a sum is more than \(9\), carry the appropriate number to the next diagonal and add it along with the numbers you find there.

In this example, we get the product \(78\times 89 = 6,942\).  Neat, huh?
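If you're curious, the whole procedure fits in a few lines of code.  A sketch in Python (the function name is mine): each digit product contributes its tens digit to one diagonal and its units digit to the next, and collapsing the diagonals with Horner's rule takes care of the carries automatically.

```python
def lattice_multiply(x, y):
    xd = [int(d) for d in str(x)]
    yd = [int(d) for d in str(y)]
    # one slot per diagonal, most significant first
    diagonals = [0] * (len(xd) + len(yd))
    for i, a in enumerate(xd):
        for j, b in enumerate(yd):
            p = a * b
            diagonals[i + j] += p // 10      # tens digit, upper triangle
            diagonals[i + j + 1] += p % 10   # units digit, lower triangle
    # Horner's rule: the carries resolve themselves in the weighted sum
    result = 0
    for d in diagonals:
        result = result * 10 + d
    return result

print(lattice_multiply(78, 89))  # → 6942
```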

Really, this is the same algorithm we use, but it has the advantage of not requiring you to keep up with shifting things over in the successive rows when you stack up the various intermediate products.  I try to use this method when I multiply, but to be honest I am so used to the old-timey way I learned in elementary school that I usually default to that.  But I encourage you to give this a try the next time you find yourself calculatorless.

Chinese (?) multiplication

OK.  So my wife posted this video to my facebook page with the question "true or false?"

It's cute and it has the advantage of appearing to make the "complicated" act of multiplication more visual by reducing it to drawing some lines and counting their intersection points.  But did you notice that the examples all had small digits?  As in, \( 123\times 321 \).  This one trick makes the whole thing appear simpler.

But let's try this one:  \(78 \times 89\).  Here's the picture of the calculation I did using this method:



First note that I made a mistake, getting the wrong answer on the first try.  That was because I confused which number of lines was crossing another in each of the four blocks.  Only after I checked my answer with the "western" method did I realize I had done something wrong.  To see just how inefficient this method is, consider the block on the right side of the diamond.  It has \(8\times 9 = 72\) intersection points.  Let's assume I can count that many things in a small area correctly.  Then I have to do it three more times.  Then I have to add up the numbers in the top and bottom columns and get that correct.  Then I have to carry the \( 7\) from the right block and add it to the \(127\) in the middle and then carry the resulting \(13\) to the left to get the final answer \(6,942\).  Whew.

And, this is really the same as the algorithm you learned in elementary school.  It's just in disguise.  I suppose this is a useful instructional tool to help motivate the general algorithm, but as a practical computational tool I'd say it falls short.  We now return you to your everyday lives in which you're probably using a calculator to do this anyway, if you are multiplying numbers at all.

there's a 14% chance of rain...

My first blog post of this calendar year.  I've made some changes, such as stepping down from my administrative position at the university.  That means I'm on sabbatical this year, or as the provost calls it, "time off for good behavior."  I could get used to this, but just as soon as I do, it'll be over.

Anyway, I've noticed a subtle change to the Weather Channel app on my phone.  Here's a screenshot:

Notice anything unusual?  What does it mean to say there's a 14% chance of rain?  Or a 30% chance? Or a 90% chance? 

I've been noticing a lot of statistical misinterpretation lately.  Malaysian Airlines has two accidents in the span of a week (one of them not their fault, to be sure) and people stop booking flights.  If only everyone knew about Poisson processes...

But what's up with the Weather Channel?  I'm sure they think that putting more exact percentages, like 47% at 5:00 pm, will inspire confidence.  "Wow, they must be getting really good at forecasting!"  I ask again:  what does it mean to say there's a 30% chance of rain?  It really means that over time, looking at instances where similar atmospheric conditions were in place, it rained about 30% of the time.  That's all.  Reporting such a percentage to one more significant digit does not make me think it's a more reliable forecast, because weather is much too complicated for an accurate forecast more than a few minutes into the future (maybe a couple of hours).  After that, the possible trajectories from the given initial conditions may diverge too wildly to make an accurate prediction. 

So, nice try, Weather Channel, but I'm not convinced.  I've never taken a 20% chance of rain as a serious threat, so I'm not about to consider a 22% chance any other way.

Christmas calendar


You've probably seen one of these.  This one was on display at my mother-in-law's house this year.  I've never really looked closely at one to figure out how they work, and I decided not to look at this one either.  Rather, I determined the distribution of the numbers on the two cubes while driving around my hometown during the holidays.  My parents live at one end; my in-laws live at the other.  So, lots of opportunity.

The first observation to make is that because of the 11th and 22nd, both cubes must have a 1 and a 2.  If you don't think much beyond that, you might leap to an incorrect conclusion: since there are 8 faces remaining and you have 8 numbers left to distribute (0 and 3-9), you can just put half of them on one cube and half on the other.  But that's a problem.  If 0 is on only one cube then you can get only 6 of the nine single-digit dates (those formed with the 6 numbers on the cube without a 0).  It follows that you must put a 0 on both cubes along with the 1 and the 2.

But wait, that leaves 7 digits to place on the remaining 6 faces.  If we weren't using Arabic numerals we'd be in trouble, but luckily the numbers we use have the (in this case) useful typographical property that the 9 is an upside-down 6 (or vice versa, if you prefer).  And, since we don't need the 6 and the 9 at the same time for this application, we can safely put only the digits 3 through 8 on the remaining 6 faces.  Voila! Calendar cubes.
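Don't take my word for it; here's a sketch in Python (the particular split of 3-8 between the cubes is one arbitrary choice) checking that this assignment, with the 6 flipping over to serve as a 9, really does cover every day of the month:

```python
# one valid face assignment derived above: 0, 1, 2 on both cubes,
# the digits 3-8 split between them, and the 6 doubling as a 9
cube_a = {0, 1, 2, 3, 4, 5}
cube_b = {0, 1, 2, 6, 7, 8}

# can this cube display digit d?  (a 6 can be turned upside down for a 9)
def shows(cube, d):
    return d in cube or (d == 6 and 9 in cube) or (d == 9 and 6 in cube)

def can_show(day):
    tens, units = divmod(day, 10)
    return (shows(cube_a, tens) and shows(cube_b, units)) or \
           (shows(cube_b, tens) and shows(cube_a, units))

assert all(can_show(day) for day in range(1, 32))
print("all 31 days displayable")
```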

I realize this is utterly trivial, but it's about as much brain power as I can muster while on vacation.  A more interesting question is: how many such cubes can there be?  That leads to all kinds of interesting combinatorics.  Note, though, that determining which three digits you place on the first cube determines those on the second cube.  Choose three digits from the set of six possibilities; there are \(\binom{6}{3} = 20 \) ways to do this.  We then have a set of six numbers: \( \{0,1,2,a,b,c\} \).  How many different cubes can we make labeled with these digits?  This is a classical combinatorics question, usually phrased in terms of paint colors.  There are \( 6! \) ways to place the digits on the cube, but we need to consider rotational symmetry: there are 24 rotations of a cube and so there are 720/24 = 30 such cubes.  So, for each selection of three digits we get 30 distinct first cubes and 30 distinct second cubes, making 900 pairs of cubes for each of the 20, giving us a grand total of 18,000 pairs of calendar cubes.  But wait, I counted each pair twice, so there are only 9,000.  Still quite a few, but I'm sure they only come in one variety.

Maybe next time I'll just look at the cubes.  

solitaire's the only game in town

Everywhere I turn I see people playing Candy Crush Saga on their phones.  My wife did for a while (she's gotten tired of it, I think); my in-laws play a lot.  I've never once tried it.  I like video games as much as anyone, and indeed I spent my early teens shoving quarters into the Galaga machine at the convenience store (high score over 2,000,000--now you think I'm cool).  These days I'm methodically working my way through Super Mario 3D World on the Wii U.  But games on the phone?  Not so much.

A quick inventory reveals the following games on my iPhone (still a 3GS--I know, how do I survive?):

  1. Set
  2. BClassicLite
  3. Eliss
  4. Doodle Fit
  5. Jelly Car
  6. Paper Toss
  7. Mr. AahH!!
  8. BiiPlane
  9. Paper Glider
  10. Galaga Remix
  11. Flight Control
  12. Sol Free Solitaire

Most of these were installed as restaurant distractions for my son when he was younger.  I've never played BClassicLite, Eliss, Paper Toss, Mr. AahH!!, BiiPlane, Paper Glider, or even Galaga Remix (screen is too small).  The others I've played only a few times.

Except for Sol Free Solitaire.

I'm about to reveal an embarrassing screenshot--the stats the game keeps:

I'll gloss over the fact that this is the second iPhone I've had...

The only really embarrassing thing here is the Total Time Played field.  I realize that 55 hours seems like a lot, but I don't really sit around playing this game all the time. I mostly play when I'm waiting for something--marching band practice to end (lots of time there), doctor's office waiting room--so this could be viewed as a commentary on the drudgery of everyday life.  And I've had this phone for nearly two years, so that works out to only about 5 minutes a day on average.

I think I'm a pretty good solitaire player.  I play quickly, so it's possible I could improve my win percentage if I were a bit more careful.  I doubt it could be much better though.  I win roughly 1 game in 6 on average, and if you've ever played much yourself you know that's not bad.  But I did get to thinking--what kind of win percentage would a really good solitaire player have? 

To answer that, it would be helpful to know the probability of winning any particular game.  Some games are unwinnable from the start--you can't move any of the initial face-up cards, and passing through the stock (as the pile remaining after the deal is called) yields no moves.  Supposedly these are rare, but they seem to happen with an alarming frequency to me.  The number of possible hands is over 7,000 trillion.  That's way too many to analyze exhaustively, so we can only estimate the probability of winning.
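For a sense of just how hopeless exhaustive analysis is, note that the deck alone can be shuffled into \( 52! \) orderings.  A one-liner (mine, just for scale) shows the magnitude:

```python
import math

# number of ways to order a 52-card deck
orderings = math.factorial(52)
print(f"52! has {len(str(orderings))} digits")  # 68 digits, about 8 x 10^67
```

Many of those orderings produce equivalent-looking deals, which is why quoted counts of distinct games are much smaller, but any way you slice it the space is far too large to enumerate.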

So I looked around the interwebz a bit and found some interesting stuff.  Using Monte Carlo methods, some have estimated the following probabilities:

  • fraction of winnable games: 82%
  • probability of winning with "good" play: 43%

So, it seems that I'm not a very "good" player, even though I think I am.  More discouraging, however, is that in theory I should be able to win about 8 times in 10.  So, why the discrepancy?  Knowing that a game is theoretically winnable requires complete knowledge of the positions of all 52 cards.  So, when I lose a game of solitaire, the chances are good that I could have won had I not made a particular move.  The problem, of course, is that I don't know which move was wrong.  Alas.

On the plus side, though, the app on my phone makes this all a much more efficient waste of time than it was when I was a kid and had to physically deal an entire deck of cards.  Losing in that scenario was a real drag because it meant gathering, shuffling, and redealing the deck.  I kind of miss that, though.

(N.B. if you don't get the reference in the title, listen to this.)

spherical trigonometry

So I just finished reading Heavenly Mathematics, by Glen Van Brummelen because, you know, that's the sort of thing I do for fun.  I didn't go full nerd on it, though: I didn't do the exercises.  I'll save that for another time.

Man, did I learn a lot from this book.  Spherical trigonometry was a fairly standard part of the mathematics curriculum until the middle of the 20th century, but it is all but forgotten these days.  Its primary use was navigation at sea, and modern technology has rendered it obsolete.  Well, not obsolete, exactly, but the technicalities have been incorporated into the modern devices that do our navigating for us.  No doubt logarithms will be the next thing to be dropped (and indeed, hardly any student today learns what logarithms were really for).

My favorite things: in the first chapter, we learn how to estimate the Earth's circumference, construct a sine table with our bare hands, and determine the distance from the Earth to the Moon.  Here's the basic idea of the circumference calculation from the medieval Persian mathematician Biruni, who made his measurement in what is now Pakistan.  First, you need the height of a mountain you can climb.  To find it, you make a big square out of wood or something and measure the distance \( E'G\) and the distance from \( F_1' \) to the point where \( BF_1' \) crosses the left side of the square.  Then you use similar triangles to get the height \( AB\).  In Biruni's example, he found the peak to be 305.1 meters tall.

I've been to the mountaintop...

The next thing to do is to climb the mountain.  If you've stood on top of a big enough hill, you've noticed that the horizon curves down away from you.  You can then measure the angle of declination; that is, the angle \( \theta \) in the diagram below, made between the horizontal ray from the peak and the line from the peak tangent to the Earth's surface at the horizon.  Biruni measured this as \( \theta = 34' = 0.56667^\circ \).  Then use the fact that \( \Delta LJB \) is a right triangle to get

the earth, not to scale.

\[\cos\theta = \frac{r}{r+305.1}\]

where \( r\) is the radius of the Earth.  Provided we have a sine table (which we can build with our bare hands, as the author describes, and which Biruni had available), we know \( \cos\theta \).  Solving for \( r\), we get \( r=\) 6,238 km.  The actual value, accepted by scientists today, is 6,371 km.  Not bad for having only primitive tools.
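The arithmetic is easy to reproduce.  Here's a quick sketch (my own, using Biruni's numbers from above) that solves \( \cos\theta = r/(r+h) \) for \( r \):

```python
import math

h = 305.1                      # height of the peak, in metres
theta = math.radians(34 / 60)  # dip of the horizon: 34 arcminutes

# cos(theta) = r / (r + h)  =>  r = h cos(theta) / (1 - cos(theta))
r = h * math.cos(theta) / (1 - math.cos(theta))
print(r / 1000)  # ≈ 6238 km
```

A height of 305.1 m and a dip of barely half a degree recover the Earth's radius to within about 2 percent, which is remarkable given the crudeness of the measurements.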

Once you know the radius of the Earth, it's actually not too difficult to figure out the distance to the moon, provided you can make some measurements in two locations during a lunar eclipse, which Ptolemy was able to do nearly 2,000 years ago.  He missed on the distance to the sun by a factor of 19 or so (too small), but I suppose we can let that slide.

The other thing I really enjoyed was the proof of Euler's Formula \( V - E + F = 2 \) for convex polyhedra.  There are lots of proofs of this, the simplest using graph theory and more high-powered ones via topology.  It turns out that Legendre came up with a wonderful proof using spherical trigonometry.  The idea is this:  take a convex polyhedron, like, say, a cube, and scale things so that the unit sphere fits around it comfortably.  Then project the polyhedron out onto the sphere (imagine a light source in the center of the polyhedron).  You then have a spherical polyhedron.  Now, we know the surface area of the sphere is \(4\pi \).  We need only then add up the areas of the spherical polygons covering the surface.  This is where the spherical trig comes in:  a spherical triangle \(\Delta ABC \) has area \[\frac{\pi}{180}(A + B + C - 180^\circ)\]  where \(A, B, C\) are the triangle's angles measured in degrees.  By dividing a polygon with \( n\) sides into \( n \) triangles we find that the area of a polygon is \[ \frac{\pi}{180} ((\text{sum of polygon's angles}) - (n-2)\cdot 180)\]  Putting all this together we get
\[ \sum \frac{\pi}{180} ((\text{sum of polygon's angles}) - (n-2)\cdot 180) = 4\pi \] and cleaning up a bit we see \[ (\text{sum of all angles}) - \sum n\cdot 180 + 2F\cdot 180 = 720.\]  But the angles go around the vertices, so their sum is \( V\cdot 360\) and since each edge gets counted twice, \( 2E = \sum n\).  So the last equation reduces to \[ V\cdot 360 - 2E\cdot 180 +2F\cdot 180 = 720,\] and dividing by 360 gives us the result.  Cool.
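It's a nice exercise to check Legendre's bookkeeping on the cube itself.  The sketch below (my own) projects the cube's vertices onto the unit sphere, computes one angle of a resulting spherical square via tangent vectors, and confirms that the six faces tile the sphere's full \( 4\pi \) of area:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def spherical_angle(v, u, w):
    """Angle at v (in degrees) between great-circle arcs v->u and v->w."""
    def unit_tangent(p):
        # component of p orthogonal to v, normalized
        d = sum(a * b for a, b in zip(p, v))
        t = tuple(a - d * b for a, b in zip(p, v))
        n = math.sqrt(sum(x * x for x in t))
        return tuple(x / n for x in t)
    tu, tw = unit_tangent(u), unit_tangent(w)
    return math.degrees(math.acos(sum(a * b for a, b in zip(tu, tw))))

# One face of the projected cube: the spherical square over z = +1,
# with corner v and its two neighbors u, w on that face.
v = normalize((1, 1, 1))
u = normalize((-1, 1, 1))
w = normalize((1, -1, 1))

A = spherical_angle(v, u, w)          # each angle of the spherical square
area = math.pi / 180 * (4 * A - 360)  # the spherical polygon area formula
print(A, 6 * area / math.pi)          # 120 degrees, and 6*area = 4*pi
```

Each angle comes out to 120°, so three squares fit snugly around each projected vertex (3 × 120 = 360), and six faces of area \( 2\pi/3 \) add up to \( 4\pi \), exactly as the proof requires.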

I also learned why Napier was really interested in logarithms and why you used to see tables of logarithms of sines.  It was all about efficient navigation and the resulting cost savings.  Money does indeed make the world go around.

I recommend this book heartily.  You can pick it up and read a chapter in 30 minutes and learn something interesting.