Episode 85 - Matthew Kahle

Kevin Knudson: Welcome to My Favorite Theorem, the math podcast with no quiz at the end. I'm Kevin Knudson, professor of mathematics at the University of Florida, and I am joined today as always by my fabulous co-host.

Evelyn Lamb: Well, thank you. I'm Evelyn Lamb, a freelance math and science writer in Salt Lake City. And anyone who's on this Zoom, which is only us and our guest, can see that I am bragging with my Zoom background right now. We just got back from a trip to southern Utah, and I took possibly the best picture I've ever taken in my life. And 95% of the credit goes to the clouds because they just — above these red rock hoodoos outside of Bryce Canyon, I turned around and looked at it while we were hiking, and I was like, Oh, my gosh, I have to capture this.

KK: It is quite the picture.

EL: My little iPhone managed.

KK: Yeah. Well, they're pretty good now. Yeah. Anyway, so yeah, I'm getting ready to — I have three trips in the next three weeks. So lots and lots of travel, and I'm gonna make sure I mask up and hopefully I don't come home with COVID, but we'll see.

EL: Yes.

KK: Anyway. So today, we are pleased to welcome Matthew Kahle. Matt, why don’t you introduce yourself?

Matthew Kahle: Hi, everyone. Thanks for having me, Kevin and Evelyn. I'm a mathematician here at The Ohio State University in Columbus, Ohio. I've been here for 11 or 12 years now, and before that, I spent a good part of my life in the western United States. So those clouds look familiar to me, Evelyn. I miss the Colorado sky sometimes.

EL: Yeah, just amazing here.

KK: You did your degree in Seattle, right?

MK: I did. I did my PhD at the University of Washington.

KK: Yep. Yep.

EL: Great. And what is your general research field?

MK: I work a little bit between fields. My main interests are topology, combinatorics, and also probability and statistical physics. And I think I usually feel most comfortable, or maybe I should say most excited mathematically, when there's sort of more than one thing going on, or when it's in the intersection of more than one field.

KK: Yeah, lots of randomness in your work. He’s got this very cool stuff with random topology. And I remember some paper you had a few years ago that really sort of blew my mind, where you're just computing homology of these random simplicial complexes, and, like, some four- or five-complex had torsion of order, you know, I don't know, 10 to the 12th, or some crazy torsion coefficient. Yeah.

MK: Yeah. So we were really surprised by this too, and we still don't really have any way to prove it, or really understand it very deeply. Kevin was mentioning some experimental work I did with some collaborators a few years ago. But yeah, that is the gist of a lot of what I think about, is random topology, which I sometimes try to sum up as the study of random shapes. And one of the original motivations for this was as sort of a null hypothesis for topological data analysis, that if you want to use topological and geometric methods in statistics and data science, you need a probabilistic foundation. But one of the things we've discovered over the last 15 years or so is that these random shapes are interesting for their own sake as well. And sometimes they have very interesting, even bizarre, properties, where we don't even know how to construct shapes that have these properties at all, but they are there. And we know they exist, because of the probabilistic method. Yeah.

EL: So let me be the very naive person who asks, like, how do you, I guess, come up with — like, what do you randomize about shapes? Or you know, if I think about, I don't know, randomly drawing from some sort of, I don't know, bucket of properties, is it that or is it… Just what is random? What quantity or quality is being randomized?

MK: Right. So a lot of the random shapes or spaces that I've studied have been on the combinatorial or discrete side. So for example, there are lots of different types of random simplicial complex that people have studied by now. And typically, you have just some probability distribution, some way of making a random simplicial complex on n vertices. And n can be anything, but then the yoga of the subject is that typically n goes to infinity. And then we're interested in sort of the asymptotic properties as your random shape grows. So one of the early motivations, or early inspirations, for the subject of random simplicial complexes was random graph theory. So you can create random networks in various ways, and people have been studying that for a bit longer, probably at least 60 years or so now, with new models and new interesting ideas coming along all the time. For example, there was originally the Erdős–Rényi model of random graphs, where the edges all have equal probability, and they're all independent. This is a beautiful model mathematically, and it's been studied extensively. We really know lots and lots about that model of random graph now, although surprisingly, people continue to discover new things about it as well. But in today's world, some people have studied other models of random graphs that they say may better model real-world networks, for example, social networks, or what we see in epidemiology, and so on. The Erdős–Rényi model is something that's tractable, and that we can prove deep math theorems about, but it might not be the best model for real-world networks. But, you know, I think of the random simplicial complexes that I study sometimes as just higher-dimensional versions of random graphs.
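
[Editor's note: for listeners who want to experiment, here is a minimal sketch, not from the episode, of the Erdős–Rényi model and one naive way to build a random two-dimensional complex on top of it, using the Python networkx library. The parameters n, p, and q are arbitrary choices for illustration, and filling in triangles this way is just one simple variant, not necessarily any of the specific models Kahle studies.]

import itertools
import random

import networkx as nx

# Erdős–Rényi random graph G(n, p): each possible edge appears independently with probability p.
n, p, q = 50, 0.10, 0.5
G = nx.gnp_random_graph(n, p, seed=1)

# One naive higher-dimensional step: fill in each triangle of the graph
# independently with probability q, giving a random two-dimensional complex.
triangles = [
    (a, b, c)
    for a, b, c in itertools.combinations(G.nodes, 3)
    if G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c)
]
filled = [t for t in triangles if random.random() < q]

V, E, F = G.number_of_nodes(), G.number_of_edges(), len(filled)
print("vertices:", V, "edges:", E, "filled triangles:", F)
print("Euler characteristic V - E + F =", V - E + F)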

EL: Okay.

MK: So as well as vertices and edges, we can have higher-dimensional cells in there, and that starts to sort of enrich the space. It's not just one-dimensional now, it could be two-dimensional, or it could be any dimension.

EL: So you might not know. You've got some large number n, and you might not know what dimension this random — you're, like, attaching edges with some sort of probability between any two things. And so you might not know what dimension your simplicial complex is going to be until after you randomly assign all of these edges and faces and, you know, n- whatever the word is for that, n-things. [Editor’s note: it’s n-simplex.]

MK: Yeah, absolutely. That's right. It could be that the dimension of the random simplicial complex is itself a random variable. And, you know, we don't even know ahead of time what the dimension of it is.

EL: Cool!

KK: So there's lots to do here. This is why Matt has lots of students and lots and lots of good projects to work on. But anyway, we invited you on not just to talk about this really interesting mathematics, but to find out what your favorite theorem is. So what is it?

MK: Okay, so I've been thinking about this. Well, I have to admit, I think I asked myself this just knowing of your podcast, in case I ever got invited on. And then I've been thinking about it since you invited me. I would say my favorite math theorem, probably the one I've thought about the most, the one maybe that affects me the most, is Euler’s polyhedral formula, which is V−E+F=2. Right? So let's just start out saying, well, you know, what do we mean by this? My understanding of the history of it is that it was something that, as far as we know, the Greeks didn't observe, even though they were interested in convex polyhedra. And sometimes people consider the classification of the perfect Platonic solids to be one of the peak contributions of Euclid’s Elements. But we don't know that they recognized this pattern that Euler noticed thousands of years later. If you take any convex polyhedron, a cube or an icosahedron, or a pyramid, a bipyramid, any kind of three-dimensional polyhedral shape that you can imagine that's convex, V, the vertices, is sort of the number of corners of the shape, E is the number of edges, and then F is the number of faces. It is always the case that V−E+F=2. So Euler noticed this. And it's not clear if he gave a rigorous proof or not. I don't even know if he felt like anything needed to be proved; maybe it was obvious to him. And nowadays, we have many, many beautiful proofs of this fact. But one of the things that strikes me about it, sort of in hindsight, is that this is just the tip of a very big iceberg. There's a much more general fact that we are just kind of getting our first glimpses of, and nowadays, we would think of this as not just a phenomenon about convex polyhedra in 3-dimensional space, but a general phenomenon in algebraic topology, or, you can say more generally, in homological algebra. It's just sort of a feature of nature somehow.
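
[Editor's note: here is a quick Python check, added for illustration, of the formula Matt just stated, run on the five Platonic solids; the vertex, edge, and face counts are the standard ones.]

# The five Platonic solids, with their standard vertex, edge, and face counts.
solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}
for name, (V, E, F) in solids.items():
    print(f"{name}: {V} - {E} + {F} = {V - E + F}")  # prints 2 every time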

KK: Right, right.

EL: I think something that I really enjoy about this fact is you can present it at first as a theorem or as a fact. But then this fact kind of leads you to this new definition that you can observe about all sorts of different shapes, you know, this number that is the vertices minus the edges plus the faces, hopefully I got it in the right order, yes. Then you can say, like, what this number is if you've got a torus, like a polyhedral torus, or, you know, a higher-genus object or a higher-dimensional thing, you can sort of use this, and so it's like a fact becomes a definition or a new thing to observe.

MK: That’s right. Are you saying, for example, you know, that the Euler characteristic is an invariant of a space?

EL: Right.

MK: And that might, if you're introduced to a new topological space, that might be one of the first things you might like to know about it. And so yeah, it becomes its own invariant. It’s a way of telling some different spaces apart, for example.

EL: Yeah. So do you have a favorite proof of this favorite theorem?

MK: I do. I present it in the graduate combinatorics and graph theory course when I teach it. So already, we're looking at a little bit more general formulation than what Euler looked at. We don't just have a convex polyhedron in 3-dimensional space; what we have is a connected planar graph. So we have some kind of network with nodes and connections between them, and it's one that you can draw in the plane without any of the edges or connections crossing. And in this case, the faces now are just going to be the connected components, or the regions, in the complement of the graph, which comes with an embedding into the plane. And then V is the number of vertices of the graph, and E is the number of edges. So V−E+F=2 in this case, for just any connected planar graph. This might seem totally unrelated, but it's actually a more general version than what we just saw with convex polyhedra, because you could take any convex polyhedron and unwrap it, or stereographically project it into the plane, and get a planar graph. But planar graphs could have lots of other features. So when I present this in class, I tend to give three or four different proofs of it. There's a beautiful proof that I've heard attributed to John Conway, where he says something about, like, letting in the ocean or something. So your graph is connected, but there may be some cycles in it. And anytime you have cycles, the Jordan curve theorem tells us there's an inside and an outside. So John Conway wants to let the ocean in. The ocean is the sea, the outside of the graph; let it in until it touches. So what he's saying is, if there's any cycle, delete one edge from it. What this does is it reduces the number of edges by one, because you deleted an edge, but it also reduces the number of faces by one, because the two regions that were inside and outside of that cycle are now the same region, so V−E+F has stayed the same.

KK: Right.

MK: And then eventually, you've just got a tree. There are no more cycles left, but your graph is still connected, so it must be a tree. And we know that every finite tree with at least two vertices has a leaf, a vertex of degree one. And again, you can prune that away, and then you've reduced the number of vertices by one and the number of edges by one. And V−E+F is again unchanged. So at the very, very end, we're just left with a single vertex in the plane. There's one vertex and there's one region, which is everything except that vertex. So at the very end, V−E+F=2. But through all those steps, we know that it never changed. So V−E+F must have been 2 at the very beginning. So I love that proof.
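
[Editor's note: a small sanity check, not from the episode, of the planar-graph version of the formula in Python, using the networkx library. Rather than re-implementing Conway's reduction, it counts the faces of a connected planar graph, including the unbounded outer face, by walking each face of a planar embedding once, and then checks that V−E+F comes out to 2. The three example graphs are arbitrary choices.]

import networkx as nx

def count_faces(embedding):
    # Count the faces of a connected planar embedding (assumes the graph has at least
    # one edge) by walking each face boundary once; every half-edge lies on exactly one face.
    seen = set()
    faces = 0
    for half_edge in embedding.edges():
        if half_edge not in seen:
            embedding.traverse_face(*half_edge, mark_half_edges=seen)
            faces += 1
    return faces

for G in [nx.cycle_graph(5), nx.wheel_graph(7), nx.grid_2d_graph(4, 4)]:
    is_planar, embedding = nx.check_planarity(G)
    assert is_planar
    V, E, F = G.number_of_nodes(), G.number_of_edges(), count_faces(embedding)
    print(V, E, F, "->", V - E + F)  # the last number is always 2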

EL: Yeah.

MK: There's another proof that I think I like even better, which is that you consider the dual graph and a spanning tree. You pick a spanning tree in the original graph and a spanning tree in the dual graph at the same time.

EL: So the dual graph being where you swap vertices and faces.

KK: Yes. For every face there’s a vertex, and you join two of them when the corresponding faces share an edge.

MK: That’s right, exactly. So one thing that's tricky about that is that now the dual of even just a nice planar graph might be a multigraph. So just imagine a triangle in the plane. The dual graph has two vertices, one inside the triangle and one outside, but there are three edges connecting those two vertices, because there are three edges in the original graph. And the edges in the dual graph have to correspond to edges in the original graph, and that's important. They cross them transversely. So then you choose a spanning tree on each one, and you realize that — you count the number of edges in each and you somehow — now I'm getting a little stuck remembering the proof, but the punch line is, in the original graph, I guess the number of edges is V−1. And then in the dual graph, the spanning tree, the number of edges is F−1. And these have to be in correspondence. So you just immediately write down V−1=, sorry, no, I don't remember exactly how the end of that proof goes. But there was something about it I liked. It seemed like in the other proofs, you're kind of doing induction on either the number of vertices or the number of edges or the number of faces, and you have to make some arbitrary choices. And this proof by duality doesn't use any induction and doesn't require any choices. It just kind of comes for free. And you sort of immediately see where the 2 comes from, because there's a V−1 on one side and an F−1 on the other side, so the 2 just sort of pops out immediately from the proof. There’s — I think it’s Eppstein? — some mathematician collects proofs of Euler’s polyhedral formula on his website, and he has at least 10 or 20 different proofs. And when you read them all, some of them start to remind you of each other, and who knows what counts as the same proof or different proofs.
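
[Editor's note: one standard way the duality argument finishes, filling in the step above. Fix a planar embedding of the connected graph and choose a spanning tree T in it. The edges not in T correspond to edges of the dual graph, and those dual edges form a spanning tree of the dual: they connect all of the faces and contain no cycle. The tree T has V−1 edges, the dual tree has F−1 edges, and every edge of the original graph is counted exactly once between the two, so E = (V−1) + (F−1), which rearranges to V−E+F=2.]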

KK: Sure.

MK: But there are some neat contributions in there. One of them he attributes to Bill Thurston in the middle of some very influential notes that Thurston had in differential geometry. And he's talking, he's giving his own proof, I think, that the Euler characteristic of a differential manifold really is an invariant of the manifold, for a smooth manifold, let’s say. You could triangulate it, and then the Euler formula, the Euler characteristic, you could just say is the alternating sum of the faces of every dimension. But why doesn't that depend on which triangulation you pick? And Thurston gave a really beautiful kind of almost physical argument with, like, moving charges around. I like to show the class this one also. At that point, we leave — I don't know how to make that proof work for planar graphs, but it works beautifully for polyhedra, for convex polyhedra, like what Euler first noticed. And apparently, it works also for higher dimensional manifolds, too, although I've never gone through that proof carefully.

KK: Yeah. Right. Well, the proof that you said might be attributed to Conway is sort of the one that I always knew, and I never heard it attributed to him, but that's good. It's sort of nice. You can explain that one to just about anybody, right? You just sort of imagine plucking away an edge and a face at the same time, basically, yeah.

EL: Yeah. A proof of something that is so visual, but that you can really understand over a podcast, is a special proof. Because I do think that it doesn't take a whole lot of, you know, imagination to be able to follow this audially.

KK: Audially, is that a new word?

EL: There is a real word that is embedded in that word. Aurally, that’s the real word I was trying to say.

KK: Yeah, so is this sort of a love at first sight theorem? I think I first learned this theorem in the context of graph theory.

MK: I think for me, too.

KK: And then I became a topologist kind of later. And then of course, now I think of it as, oh, it's the alternating sum of the Betti numbers, but that those two quantities are equal is an interesting theorem in its own right.

MK: Right. Yeah. So I was trying to think about this. When did I learn about this theorem? And I think I first learned it in graph theory also. But then I know now that it's much more general, and I don't even remember anyone telling me that specifically in a class, or reading it in a particular book or paper. I think, to me, maybe part of what I like about the Euler formula is that I feel like my understanding of it has just deepened over time, and that there’s been kind of a series of small revelations. At some point, I started thinking of it as the alternating sum of the Betti numbers, and things like that. And since I like the combinatorial side of topology, where you have simplicial complexes or cell complexes, it's also the alternating sum of the number of faces of each dimension. But then even in the last couple of years, my understanding has continued to develop, because now I think, you know, well, you could just have a chain complex, and all you know is the dimensions of the vector spaces, but it makes sense to ask what the homology of the chain complex is, so there are the Betti numbers again, and again, the alternating sum of the Betti numbers is now the alternating sum of the dimensions of the vector spaces of your chain complex. But I think I probably first saw, you know, the graph theory version of it, maybe in an undergrad or a first graduate combinatorics course.
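
[Editor's note: a sketch of the identity Matt is describing, in case it helps. For a finite chain complex of vector spaces C_0, C_1, C_2, …, with boundary maps d_i from C_i to C_(i−1), rank–nullity gives dim C_i = rank d_i + dim ker d_i, while the i-th Betti number is b_i = dim ker d_i − rank d_(i+1). Taking alternating sums, the rank terms cancel in pairs (and d_0 is the zero map), so the alternating sum of the dim C_i equals the alternating sum of the b_i. When the chain complex comes from a simplicial or cell complex, dim C_i is just the number of i-dimensional faces, which is why the alternating sum of the face counts equals the alternating sum of the Betti numbers.]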

KK: All right. So the other thing on this podcast is we like to ask our guest to pair their theorem with something. So what have you chosen to pair Euler’s formula with?

MK: Well, you know, I've been stumped by this. But, you know, thanks for the warning that I was going to get asked this question. So I had a little time to think about it, and I'm not totally stumped on the spot. But the thing that keeps coming to mind the most when I ask myself that question is some of Bach's music. Johann Sebastian Bach is really known for his four-part harmonies and for counterpoint, and it feels a little bit like this: You're listening to a beautiful piece of — it could be anything, you know: a fugue on an organ, or four-part harmonies that were written for choral music or something like that. And when you listen to it, you can listen to a recording of it two or three times and each time pick out a different voice to follow along. And there are just these independent melodies, harmonies, that he's somehow weaving together. You can also just relax and just let the whole thing wash over you. And honestly, that's most often how I listen to music. But it's completely fascinating to just hone in on one particular thread. And so I think a lot of people feel like Bach's music has maybe a mathematical feeling to it, or that it's mathematically perfect or precise. So you could say that Bach pairs with mathematics already, but the reason I want to try to connect it with the Euler formula that I like as my favorite theorem is that there are these sort of different layers. And just the same way you can kind of listen for one voice, and then tune your ear and listen to a different voice and emphasize that, I feel like this is one of these areas, one of these kinds of mathematical phenomena, that’s just sitting there in, you know, Platonic space, or wherever it lives. And you can look at it. So if you look at it from the topological point of view, it's the alternating sum of the Betti numbers, the number of holes in each dimension. But if you look at it through a combinatorial lens, then it's the alternating sum of the number of faces of each dimension. Or you can just step back, and it's just its own thing. It's just an invariant of the space, the Euler characteristic, and these just happen to be different ways to compute it. But it has that feeling to me, that you can look at it different ways, but you're really always looking at the same thing. We're just putting on different glasses or looking at it through different lenses, and so it reminds me of that sort of, I don't know, counterpoint in music or something.

EL: Yeah. Oh, I love this pairing! I also play viola and I sing, and when you've learned a piece enough that you don't have to be just concentrating on, like, am I singing the right note at the right time, you can actually start hearing, like, oh, I didn't originally hear the parallel that the bass and the soprano lines have right here, or the way we come in and then the altos come in, and something like that. I've been singing a lot of, you know, things that have these fugal sections in them, which is — I haven't actually sung much Bach recently, but similar things — and I just love that pairing, and how seeing the same thing, or singing the same music over and over again, you hear something different every time. You know, just a little easter egg that you didn't pick up the first 20 times you practiced this piece, and then now you hear it and you say, oh, next time I really want to make sure that I, you know, do that crescendo with the tenors just perfectly, or something. I love that.

KK: Yeah, so I see the edge of a keyboard there in your Zoom, Matt. Do you play?

MK: A little bit. I mean, nothing to write home about, but it's something I enjoy. I took it back up during the pandemic as a hobby. And I've been practicing a little bit. Over here, I have a little portable keyboard, and then I have an electric piano out in the living room. But I've been practicing music with one of my friends. We get together, like, once a week and just play some cover songs. And I like what you're saying, Evelyn, about hearing different things. Even, you know, we'll be playing some song by REM or somebody that I've known for, I don't know, it seems like my whole life, it’s very familiar. But once we start to play it, once we start to sing it, then I hear all kinds of different things in it that I never noticed just listening to the same recording I've listened to before. All of a sudden, I'm like, wait, Mike Mills is actually doing some really interesting harmonizing in this track, and not only is he harmonizing, like, singing different notes than what Michael Stipe is singing, he’s actually singing different words. He's saying something in that song I never even noticed he was saying. So anyway, music and mathematics, I think that's probably another big thing that they have in common, is that, you know, a little bit can go a long way, and even just at entry level, you can already start to appreciate the beauty of it, but it's sort of almost inexhaustible how deep it goes, and you can always, there's always more to learn. There's always more to notice.

EL: Yeah, with Bach specifically, you know, as a viola student, I think I started playing the Bach cello suites, an octave up on the viola, when I was probably 10 or 11 years old? And I will still play those same suites that I started learning when I was in fifth grade. It always has something to teach me. It's something that I can always get something more out of.

KK: Yeah. I think we can all agree that Mike Mills is REM’s secret weapon. I took up the guitar about 10 years ago, so I'm terrible, and I play alone. But it's still something that I enjoy doing. It's certainly a good way to exercise your — what was it Leibniz said? That music is the pleasure the brain derives from counting without knowing that it's counting?

EL: Oh, yeah. That’s a good little quote, to file away for us math-musician-type people.

KK: That’s right. All right, well, so we always like to give our guests a chance to plug anything. Where can we find you on the interwebs?

MK: Yeah, I don't have anything particular to plug, but you can find, you know, all my mathematical work on my professional webpage, matthewkahle.org. There's links to all my papers and everything there. And, you know, if my friend and I get our REM cover band off the ground, we’ll keep you posted.

EL: All of our Columbus area listeners can find you.

KK: I can play rhythm guitar on some of the tracks if you need somebody.

MK: All right. We'll have to all get together if you come out and visit in Columbus.

KK: Well this has been great fun.

EL: Thanks so much for joining us.

MK: Thanks for having me today.

[outro]

On this episode, we were delighted to talk with Matthew Kahle of the Ohio State University about Euler's polyhedral formula, also known as V−E+F=2. Here are some links you might find useful as you listen to the episode.
Kahle's website
His paper about torsion in homology groups of random simplicial complexes
The Erdős–Rényi model of random graphs
Euclid's Elements, book 13, is devoted to the classification of Platonic solids. Also found here, starting on page 438.
The Jordan curve theorem has made a previous appearance on the podcast in our episode with Susan D'Agostino.
David Eppstein's website with 21 different proofs of Euler's formula. Thurston's proof is here.