Episode 23 - Ingrid Daubechies

Evelyn Lamb: Hello and welcome to My Favorite Theorem. This is a podcast about math where we invite a mathematician in each episode to tell us about their favorite theorem. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. I’m excited about this one.

EL: Yes, I’m very excited. I’m too excited to do any banter. We’re coming up on our one-year anniversary, and we are very honored today to have a special guest. She is a professor at Duke. She has gotten a MacArthur Fellowship, won many prizes. I was just reading her Wikipedia page, and there are too many to list. So we are very happy to have Ingrid Daubechies on the show. Hi, Ingrid. Can you tell us a little bit about yourself?

Ingrid Daubechies: Hi, Evelyn and Kevin. Sure. I have just come back from spending several months in Belgium, in Brussels. I had arranged to have a sabbatical there to be close to my very elderly parents and to help set up arrangements for them. But I was also involved in a number of fun things, like the annual contest for high school students that encourages them to major in mathematics once they get to college. And this is the year I turn 64, and because 64 is a much more special number than 60 for mathematicians, my students had organized some festivities, which we held in a conference center in my native village.

KK: That’s fantastic.

ID: A lot of fun. We opted to have a family-and-friends activity instead of a conference where we tried to assemble the biggest possible collection of marquee names. I enjoyed it hugely. We had a big party in Belgium where I invited via Facebook everybody who ever crossed my timeline. There were people I went to high school with, there was a professor who taught me linear algebra.

KK: Oh, wow.

ID: So it was really a lot of fun.

KK: That’s fantastic.

EL: Yeah, and you have also been president of the International Mathematical Union. I meant to say that at the beginning and forgot. So that is also very exciting. I think while you were president, you probably don’t remember this, but I think we met at a conference, and I was trying to talk to you about something and was very anxious because my grandfather had just gone to the hospital, and I really couldn’t think about anything else. I remember how kind you were to me during that, and just, I think you were talking about your parents as well. And I was just thinking, wow, I’m talking to the president of the International Mathematical Union, and all I can think about is my grandpa, and she is being so nice to me.

ID: Well, of course. This is so important. We are people. We are connected to other people around us, and that is a big part of our life, even if we are mathematicians.

EL: But we have you on the show today to talk about theorems, so what is your favorite theorem?

ID: Well, I of course can’t say that I have one particular favorite theorem. There are so many beautiful theorems. Right now my favorite is one I learned very recently, and I am ashamed to confess how recently, because it’s a theorem that many people learn in kindergarten. It’s called Tutte’s embedding theorem, and it’s about graphs, meshes; in my case it’s a triangular mesh. It says that you can embed the mesh, meaning define a map to a polygon in the plane without having any of the edges cross, so really an embedding of the whole graph. So you have this complicated mesh, a disk-type mesh, meaning it has no holes, it has a boundary, lots of triangles, but you can think of it as a complicated thing, and you can embed it, under certain conditions, in a convex polygon in the plane. I really, really, really love that. I visualize it by thinking of the mesh as a complicated shape made of Saran wrap and applying a hair dryer to it: the hair dryer will try to flatten it nicely, and I think the fact that you can always do it is great. And we’re using it for something interesting. Actually, we are extending it: the theorem is originally formulated for a convex polygon in the plane, you can always map to a convex polygon in the plane, and we are extending it to the case where you have a non-convex polygon, because that’s what we need, and then we have certain conditions.

KK: Sure. Well, there have to be some conditions, right, because certainly not every graph, every mesh you would draw is planar.

ID: Yeah.

KK: What are those conditions?

ID: It has to be planar and 3-connected, and you define a set of weights on the edges that are all positive. What happens is that once you have it in the polygon, you can write each one of the vertices as a convex combination of its neighbors.

KK: Yeah.

ID: And those define your weights. You have to have a set of weights on the edges on your original graph that will make that possible.

KK: Okay.

ID: So you define weights on the original graph that help you in the embedding. What happens is that the positive weights are then used for that convexity. So you have these positive weights, and you use them to make this embedding. So it’s a theorem that doesn’t only tell you that the graph is planar; it gives you a mechanism for building that map to the plane. That’s really the power of the theorem. You start with something that you already know is planar, and you build that map.

KK: Okay.

ID: It’s really powerful. It’s used a lot by people in computer graphics. They then can reason on that Tutte embedding in the plane to build other things and apply them back to the original mesh they had in 3-space for the complicated object they had. And that’s also what we’re trying to use it for. But we like the idea of going to non-convex polygons because that, for certain of the applications that we have, will give us much less deformation.
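The mechanism described here, pinning a boundary cycle on a convex polygon and writing each interior vertex as a weighted average of its neighbors, amounts to solving a small linear system. Here is a minimal sketch in Python; the function name and the toy mesh are illustrative, not from the episode, and it uses uniform weights, the simplest valid choice of positive weights:

```python
import numpy as np

def tutte_embedding(n_vertices, edges, boundary):
    """Pin the boundary cycle on a convex polygon (a regular polygon on
    the unit circle) and place every interior vertex at the uniformly
    weighted average of its neighbors, i.e. solve the linear system that
    Tutte's theorem guarantees gives a valid planar embedding."""
    nbrs = {v: set() for v in range(n_vertices)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)

    pos = np.zeros((n_vertices, 2))
    for k, v in enumerate(boundary):
        theta = 2 * np.pi * k / len(boundary)
        pos[v] = (np.cos(theta), np.sin(theta))

    interior = [v for v in range(n_vertices) if v not in set(boundary)]
    idx = {v: i for i, v in enumerate(interior)}

    # Each interior vertex v satisfies
    #   deg(v) * pos[v] - sum(interior neighbors) = sum(boundary neighbors)
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    for v in interior:
        A[idx[v], idx[v]] = len(nbrs[v])
        for u in nbrs[v]:
            if u in idx:
                A[idx[v], idx[u]] -= 1.0
            else:
                b[idx[v]] += pos[u]
    pos[interior] = np.linalg.solve(A, b)
    return pos

# Toy disk-type mesh: a square boundary (vertices 0-3) with one interior
# vertex (4) joined to all four corners.
pos = tutte_embedding(
    5,
    [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)],
    boundary=[0, 1, 2, 3],
)
```

With the four boundary corners pinned symmetrically, the single interior vertex lands at their centroid: the hair-dryer flattening in miniature.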

EL: So, is this related to, I know that you’ve done some work with art reconstruction, and actually in the back of the video here, I think I see some pictures of art that you have helped reconstruct. So is it related to that work?

ID: Actually, it isn’t, although if at some point we go to 3-d objects rather than the paintings we are doing now, it might become useful. But right now this collaboration is with biologists. We have been working with them for several years, and we’re getting good results: we are quantifying similarity of morphological surfaces. The people we work with study bones and teeth. They’re paleontologists, well, they’re interested in evolutionary anthropology, but they work a lot with teeth and bones. And there’s a lot of domain knowledge they have because they’ve seen so many, and they remember things. But of course in order to do science with it, they need to quantify how similar or dissimilar things are. And they have many methods to do that. We are working with them to try to automate some of these methods in ways that they find useful and ways that they seek. We’ve gotten very good results over the many years that we’ve worked with them, and we’re very excited about recent progress we’ve made.

For their studies these surfaces already get scanned and triangulated, so they have these 3-d triangulations in space. When you work with organs and muscles and all these things in biology, usually you have 3-d shapes, and in many instances you have them voxelized, meaning you have the full 3-d volume. But because they work with fossils, which often they cannot borrow from the place where the fossil is kept, they work with casts of those in very high-quality resin. As a result, when they bring the cast back, they have the surface very accurately, but they don’t have the interior 3-d structure. So we work with the surfaces, and that’s why we work with these 3-d meshes of surfaces. We then have to quantify how similar or dissimilar things are, and not just the whole thing, but pieces of it. We have to find ways to segment these in biologically meaningful ways. That’s where the embedding theorem comes in useful.

But it’s been very interesting to try to build mathematically a structure that will embody a lot of how biologists work. Traditionally, because they know so much about the collection of things they study, what they do is find landmarks. They have this whole collection, and they see that all these things have this particular feature in common, even though it looks a bit different in each, and so on. This landmark point that we mark digitally on the scanned surfaces is the same point in all of them. And the other point is the same. So they mark landmarks, maybe 20 landmarks. And then you can use that to define a mapping. But they asked us, “could we possibly do this landmark-free at some point?” And many biologists scoffed at the idea. How could you do this? At the beginning, of course, we couldn’t. We could find distances that were not so different from theirs, but the landmarks were not in the right places. But we then started realizing: look, why do they have this immense knowledge? Because they have seen so many more than just the 20 they’re now studying.

So we realized this was something where we should look at many collections, and there, with a student of mine who made a breakthrough, we found the following. You have many surfaces, and you have a first way of mapping one to the other and then defining a similarity, depending on how faithful the mapping is. All these mappings are kind of wrong, not quite right. But because you have a large collection, there are so many little mistakes made that if you have a way of looking at it all, you can view those mistakes as the errors in a data set, and you can try to cancel them out. You can try to separate the wheat from the chaff to get at the essence of what is in there. A little bit like students learn when they have a mentor who tells them, no, that point is not really what you think, and so on. So that’s what we do now. We have large collections. We have initial mappings that are not perfect. And we use the fact that we have the large collection to define from it, using machine learning tools, a much better mapping. The biologists have been really impressed by how much better the mappings are once we do that. The wonderful thing is the framework: of course we use machine learning tools, and we use all these computer graphics techniques for dealing with surfaces efficiently, but we frame it as a fiber bundle, and we learn. If you look at a large collection, every single surface differs from the others by little bits. We want to learn the structure of this set of teeth. Every tooth is a 2-d surface, and similar teeth can map to each other, so they’re all fibers, and we have a connection. And we learn that connection. We have a very noisy version of the connection.
But because we know it’s a connection, and because it’s a connection that should be flat because things can be brought back to their common ancestor, and so going from A to B and B to C, it should not matter in what order you go because all these mappings can go to the common ancestor, and so it should kind of commute, we can really get things out. We have been able to use that in order to build correspondences that biologists are now using for their statistical analysis.

KK: So differential geometry for biology.

ID: Yes. Discrete differential geometry, which if there is an oxymoron, that’s one.

KK: Wow.

ID: So we have a team that has a biologist, it has people who are differential geometers, we have a computational geometer, and he was telling me, “you know, for this particular piece of it, it would be really useful if we had a generalization of Tutte’s theorem to non-convex polygons,” and I said, “well, what’s Tutte’s theorem?” And so I learned it last week, and that’s why it’s today my favorite theorem.

EL: Oh wow, that’s really neat.

KK: So we’ll follow up with you next year and see what your favorite theorem is then.

EL: Yeah, it sounds like a really neat collaborative environment there where everybody has their own special knowledge that they’re bringing to the table.

ID: Yes, and actually I have found that to be very, very stimulating throughout my whole career. I like working with other people. I like when they give you challenges. I like feeling my brain at work together with their different expertise. And, well, once you’ve seen a couple of these collaborations at work, you get a feel for how you jump-start them, how you manage to get people talking about the problems they have and kind of brainstorm until a few problems get isolated that we can really sink our teeth into and work on. And that itself is a dynamic you have to learn. I’m sure there are social scientists who know much more about this. In my limited setting, I now have some experience in starting these things up, and so my students and postdocs participate, and some of them have become good at propagating the approach. I’m very motivated by the fact that you can do applications of mathematics that are really nontrivial, and you can distill nontrivial problems out of what people think are mundane applications. But it takes some investment to get there. Because usually the people who have the applications, the biologists in my case, didn’t say, “we had this very particular fiber bundle problem.”

EL: Right.

ID: In fact, it’s my student who then realized we really had a fiber bundle, and that helped define a machine learning problem differently than it had been before. That then led to interesting results. So you need all the background, you need the sense of adventure of trying to build tools in that background that might be useful. And I’m convinced that for some of these tools that we build, when more pure mathematicians learn about them, they might distill things in their world from what we need. And this can lead to more pure mathematics ultimately.

KK: Sure, a big feedback loop.

ID: Yes, absolutely. That’s what I believe in very, very strongly. But part of my life is being open to when I hear about things, is there a meaningful mathematical way to frame this? Not just for the fun of it, but will it help?

EL: Yeah, well, as I mentioned, I was amazed by the way you’ve used math for this art reconstruction. I think I saw a talk you gave or an article you wrote about it, and it was just fascinating. Things that I never would have thought would be applicable to that sphere.

ID: Yeah, and again it’s the case that there’s a whole lot of knowledge we have that could be applicable, and in that particular case, I have found that it’s a wonderful way to get undergraduates involved, because they learn these tools of image processing and small machine learning tools while working on these wonderful images. I mean, how much cooler is it to work on the Ghent altarpiece, or even less famous artwork, than to work on the standard test images of image analysis? So that has been a lot of fun. And actually, while I was in Belgium, the first event of the week of celebration we had was an IP4AI workshop, which is Image Processing for Art Investigation. Over the last 10-15 years, it has really been taking off as a community. We’re trying to have this series of workshops where people who are interested in the image processing, and the mathematics and the engineering of that, talk to people who have concrete problems in art conservation or who understand art history. We try to have these workshops in museums. We had this one at a museum in Ghent, and it again was very, very stimulating and exhilarating.

KK: So another thing we like to do on this podcast is ask our guest to pair their favorite theorem with something. So I’m curious. What do you think pairs well with Tutte’s theorem?

ID: Well, I was already thinking of Saran wrap and the hair dryer, but…

KK: No, that’s perfect. Yeah.

ID: I think also—not for Tutte’s theorem, there I really think of Saran wrap and a hair dryer—but I also am using in some of the work in biology as well what people call diffusion, manifold learning through diffusion techniques. The idea is if you have a complicated world where you have many instances and some of them are very similar, and others are similar to them, and so on, but after you’ve moved 100 steps away, things look not similar at all anymore, and you’d like to learn the geometry of that whole collection.

KK: Right.

ID: Very often it’s given to you by zillions of parameters. I mean, with images, if you think of each pixel of the image as a variable, then you live in thousands, millions of dimensions. And you know that the whole collection of images is not something that fills that whole space. It’s a very thin, wispy set in there. You’d like to learn its geometry, because if you learn its geometry, you can do much more with it. So one tool that was devised, I mean 10 years ago or so, it’s not deep learning, it’s not as recent as that, is manifold learning, in which you say, well, in every neighborhood, if I look at all the things that are similar to me, then I have a little flat disc; it’s close enough to flat that I can really approximate it as flat. And then I have another one, and so on. I have two mental images for that. One mental image is this whole kind of crochet thing, where you make each piece with crochet; you cover the whole thing with doilies in a certain sense, and you can knit it together, or crochet it together, and get the more complex geometry. Another image I often have is sequins. Every little sequin is a little disc.
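The local-discs idea described here is, in one standard incarnation, diffusion maps: build a kernel that only "sees" nearby points, normalize it into a random walk, and read coordinates off its leading eigenvectors. Here is a small sketch; the data set, kernel bandwidth, and random high-dimensional embedding are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2 * np.pi, 200))
circle = np.c_[np.cos(t), np.sin(t)]
# Hide the 2-d circle in 100 dimensions ("zillions of parameters").
data = circle @ rng.normal(size=(2, 100))

# Gaussian kernel: only genuinely nearby points register as similar,
# which is the "little flat disc in every neighborhood" idea.
d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / np.median(d2))

# Row-normalize into a Markov (diffusion) operator; its top nontrivial
# eigenvectors serve as learned coordinates for the hidden geometry.
P = K / K.sum(axis=1, keepdims=True)
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
coords = evecs.real[:, order[1:3]]  # skip the constant top eigenvector
```

The idea is that the two recovered coordinates should trace out the circle hidden in the 100-dimensional representation, learned purely from local similarities.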

EL: Yeah.

ID: But it can make it much more complex. So many of my mental images and pairings, if you want, are hands-on, crafty things.

KK: Do you knit and crochet yourself?

ID: Yes, I do. I like making things. I use metaphors like that a lot when I teach calculus because it’s kind of obvious. I find I use almost no sports metaphors. Sports metaphors are big in teaching mathematics, but I use much more handicraft metaphors.

KK: So what else should we talk about?

ID: One thing, actually, I was saying, I had such a lot of fun a couple of weeks ago when there was a celebration. The town in which I was born happens to have a fantastic new administrative building in which they have brought together all the different services that used to be in different buildings in the town. The building was put together by fantastic architects, and it feels very mathematical. And it has beautiful shapes.

It’s in a mining town—I’m from a coal mining town—and so they have two hyperboloid shapes that they used to bring light down to the lower floors. That reminds people of the cooling towers of the coal mine. They have all these features in it that feel very mathematical. I told the mayor, I said, “Look, I’ll have this group of mathematicians, some of whom are very interested in outreach and education. We could, since there will be a party on Saturday and the conference only starts on Monday, we could on the Sunday have a brainstorming thing in which we try to design a clue-finding search through the building. We design mathematical little things in the building that fit with the whole design with the building. So you should have the interior designers as part of the workshop. I have no idea what will come out, but if something comes out, then we could find a little bit of money to realize it, and that could be something that adds another feature to the building.”

He loved the idea! I thought he was going to be…but he loved the idea. He talked to the person who runs the cafeteria about cooking a special meal for us. So we had a tagine, because he was from Morocco. We wanted just sandwiches, but this man made this fantastic meal. We toured the building in the morning, and in the afternoon we had a brainstorming session with local high school teachers and mathematicians and so on. We put them in three small groups, and they came up with three completely different ideas, which all sound really interesting. And then one of them said, “Why don’t we make it an activity that either a family could do, one after the other, or a classroom could do? You’d typically have only an hour or an hour and a half, and the class would be too big, but you’d split the class into three groups, and each group does one of the activities. They all find a clue, and by putting the clues together, they find some kind of a treasure.”

KK: Oh, wow.

ID: So the ideas were great, and they link completely different things. One is more dynamical systems; one is actually embodying some group theory and graph theory (although we won’t call it that). And what I liked: one of the goals was to find ideas that would require mathematical thinking but that were not linked to the curriculum, so you’d start thinking, how would I even frame this? And so on, trying to give a stepwise progression in the problems so that they wouldn’t immediately face the full, complete, difficult thing but would have to find ways of building tools that would get them there. They did excellent work. Now each team has a group leader who is working out details over email. We have committed to working out, within a year, all the details of the texts and putting the materials together so it can actually be realized. Then there was the designers’ part: can we make something like that not too expensive? They said, oh yeah, with foam and fabric. And I know they will do it.

A year from now I will see whether it all worked out.

EL: So will you come to Salt Lake next and do that in my town?

ID: Do you have a great building in which it would work?

EL: I’m trying to think.

ID: We’re linking it to a building.

EL: I’ll have to think about that.

KK: Well, we have a brand new science museum here in Gainesville. It’s called the Cade Museum. So Dr. Cade is the man who invented Gatorade, you know, the sports drink.

ID: Yes.

KK: And his family got together and built this wonderful new science museum. I haven’t been yet. It just opened a few months ago.

ID: Oh wow.

KK: I’m going to walk in there thinking about this idea.

ID: Yeah, and if you happen to be in Belgium, I can send you the location of this building, and you can have a look there.

KK: Okay. Sounds excellent. Well, this has been great, Ingrid. We really appreciate your taking your time to talk to us today.

ID: Well thank you.

KK: We’re really very honored.

ID: Well it’s great to have this podcast, the whole series.

KK: Yeah, we’re having a good time.

EL: We also want to thank our listeners for listening to us for a year. I’m just going to assume that everyone has listened religiously to every single episode. But yeah, it’s been a lot of fun to put this together for the past year, and we hope there will be many more.

ID: Yes, good luck with that.

KK: Thanks.

ID: Bye.

Episode 22 - Ken Ribet

Evelyn Lamb: Welcome to My Favorite Theorem, a podcast about math. I’m Evelyn Lamb, one of your cohosts, and I’m a freelance math and science writer in Salt Lake City, Utah.

Kevin Knudson: Hi, I’m Kevin Knudson, a professor of mathematics at the University of Florida. How are you doing, Evelyn? Happy New Year!

EL: Thanks. Our listeners listening sometime in the summer will really appreciate the sentiment. Things are good here. I promised myself I wouldn’t talk about the weather, so instead in the obligatory weird banter section, I will say that I just finished a sewing project, only slightly late, as a holiday gift for my spouse. So that was fun. I made some napkins. Most sewing projects are non-Euclidean geometry because bodies are not Euclidean.

KK: Sure.

EL: But this one was actually Euclidean geometry, which is a little easier.

KK: Well I’m freezing. No one ever believes this about Florida, but I’ve never been so cold in my life as I have been in Florida, with my 70-year-old, poorly insulated home, when highs are only in the 40s. It’s miserable.

EL: Yeah.

KK: But the beauty of Florida, of course, is that it ends. Next week it will be 75. I’m excited about this show. This is going to be a good one.

EL: Yes, so we should at this point introduce our guest. Today we are very happy to have Ken Ribet on the show. Ken, would you like to tell us a little bit about yourself?

Ken Ribet: Okay, I can tell you about myself professionally first. I’m a professor of mathematics at the University of California Berkeley, and I’ve been on the Berkeley campus since 1978, so we’re coming up on 40 years, although I’ve spent a lot of time in France and elsewhere in Europe and around the country. I am currently president of the American Mathematical Society, which is how a lot of people know me. I’m the husband of a mathematician. My wife is Lisa Goldberg. She does statistics and economics and mathematics, and she’s currently interested in particular in the statistics of sport. We have two daughters who are in their early twenties, and they were home for the holidays.

KK: Good. My son started college this year, and this was his first time home. My wife and I were super excited for him to come home. You don’t realize how much you’re going to miss them when they’re gone.

KR: Exactly.

EL: Hi, Mom! I didn’t go home this year for the holidays. I went home for Thanksgiving, but not for Christmas or New Year.

KK: Well, she missed you.

EL: Sorry, Mom.

KK: So, Ken, you gave us a list of something like five theorems that you were maybe going to call your favorite, which, it’s true, it’s like picking a favorite child. But what did you settle on? What’s your favorite theorem?

KR: Well, maybe I should say first that talking about one’s favorite theorem really is like talking about one’s favorite child, and some years ago I was interviewed for an undergraduate project by a Berkeley student, who asked me to choose my favorite prime number. I said, well, you really can’t do that because we love all our prime numbers, just like we love all our children, but then I ended up reciting a couple of them offhand, and they made their way into the publication that she prepared. One of them is the six-digit prime number 144169, which I encountered early in my research.

KK: That’s a good one.

KR: Another is 1234567891, which was discovered in the 1980s by a senior mathematician who was being shown a factorization program. And he just typed some 10-digit number into the program to see how it would factor it, and it turned out to be prime!

KK: Wow.

KR: This was kind of completely amazing. So it was a good anecdote, and it reminded me of prime numbers. I think that what I should cite as my favorite theorem today, for the purposes of this encounter, is a theorem about prime numbers. The prime numbers are the numbers bigger than 1 that can’t be factored. So for example 6 is not a prime number because it can be factored as 2x3, but 2 and 3 are prime numbers because they can’t be factored any further. And one of the oldest theorems in mathematics is the theorem that there are infinitely many prime numbers. The set of primes keeps going on to infinity. I told one of my daughters yesterday that I would discuss this as a theorem, and she was very surprised that it’s not, so to speak, obvious. She said, why wouldn’t there be infinitely many prime numbers? But you can imagine an alternative reality in which the largest prime number had, say, 50,000 digits, and beyond that, there was nothing. So it is a statement that we want to prove. One of the interesting things about this theorem is that there are myriad proofs that you can cite. The best one is due to Euclid, from around 2300 years ago.

Many people know that proof, and I could talk about it for a bit if you’d like, but there are several others, probably many others. People say that it’s very good to have lots of proofs of this one theorem, because the set of prime numbers is a set we know a lot about, but still not that much. Primes are in some sense mysterious, and by having alternative proofs of the fact that there are infinitely many primes, we can perhaps say we are gaining more and more insight into the set of prime numbers.

EL: Yeah, and if I understand correctly, you’ve spent a lot of your working life trying to understand the set of prime numbers better.

KR: Well, so that’s interesting. I call myself a number theorist, and number theory began with very, very simple problems, really enunciated by the ancient Greeks. Diophantus is a name that comes up frequently. And you could say that number theorists are engaged in trying to solve problems from antiquity, many of which remain as open problems.

KK: Right.

KR: Like most people in professional life, number theorists have become specialists, and all sorts of quote-unquote technical tools have been developed to try to probe number theory. If you ask a number theorist on the ground, as CNN likes to say, what she’s working on, it’ll be some problem that sounds very technical, is probably hard to explain to a general listener, and has only a remote connection to the original problems that motivated the study. For me personally, one of the wonderful events of my professional life was the proof of Fermat’s Last Theorem in the mid-1990s, because the proof uses highly technical tools that were developed with the idea that they might someday shed light on classical problems, and lo and behold, a problem that was then around 350 years old was solved using techniques that had been developed principally in the last part of the 20th century.

KK: And if I remember right — I’m not a number theorist — were you the person who proved that the Taniyama-Weil conjecture implied Fermat’s Last Theorem?

KR: That’s right. The proof consists of several components, and I proved that something implies Fermat’s Last Theorem.

KK: Right.

KR: And then Andrew Wiles partially, with the help of Richard Taylor, proved that something. That something is the statement that elliptic curves (whatever they are) have a certain property called modularity, whatever that is.

EL: It’s not fair for you to try to sneak an extra theorem into this podcast. I know Kevin baited you into it, so you’ll get off here, but we need to circle back around. You mentioned Euclid’s proof of the infinitude of primes, and that’s probably the one most people are most familiar with of these proofs. Do you want to outline it a little bit? Actually, not too long ago, I was talking to the next-door neighbors’ 11-year-old kid, who was interested in prime numbers. The mom knows we’re mathematicians, so we were talking about it, and he was asking what the biggest prime number was, and we talked about how one might figure out whether there is a biggest prime number.

KR: Yeah, well, in fact when people talk about the proof, often they talk about it in a very circular way. They start with the statement “suppose there were only finitely many primes,” and then this and this and this and this, but in fact, Euclid’s proof is perfectly direct and constructive. What Euclid’s proof does is, you could start with no primes at all, but let’s say we start with the prime 2. We add 1 to it, and we see what we get, and we get the number 3, which happens to be prime. So we have another prime. And then what we do is take 2 and multiply it by 3. 2 and 3 are the primes that we’ve listed, and we add 1 to that product. The product is 6, and we get 7. We look at 7 and say, what is the smallest prime number dividing 7? Well, 7 is already prime, so we take it, and there’s a very simple argument that when you do this repeatedly, you get primes that you’ve never seen before. So you start with 2, then you get 3, then you get 7. If you multiply 2x3x7, you get 6x7, which is 42. You add 1, and you get 43, which again happens to be prime. If you multiply 2x3x7x43 and add 1, you get a big number that I don’t recall offhand. You look for the prime factorization of it, and you find the smallest prime, and you get 13. You add 13 to the list. You have 2, 3, 7, 43, 13, and you keep on going. The sequence you get has its own Wikipedia page. It’s the Euclid-Mullin sequence, and it’s kind of remarkable that after you repeat this process around 50 times, you get to a number that is so large that you can’t figure out how to factor it. You can do a primality test and discover that it is not prime, but it’s a number analogous to the numbers that occur in cryptography, where you know the number is not prime, but you are unable to factor it using current technology and hardware. So the sequence is an infinite sequence by construction. 
But it ends, as far as Wikipedia is concerned, around the 51st term, I think it is, and then the page says that subsequent terms are not known explicitly.
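[Editor's note: the recursion described above is easy to play with. Here is a minimal Python sketch (function names are my own) that reproduces the first few terms of the Euclid-Mullin sequence; trial division is fine for these small terms, though of course hopeless for the ~50th.]

```python
def smallest_prime_factor(n):
    """Smallest prime dividing n, by trial division (fine for small n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def euclid_mullin(terms):
    """First `terms` entries of the Euclid-Mullin sequence: repeatedly take
    the smallest prime factor of (product of the primes so far) + 1."""
    seq = []
    product = 1
    for _ in range(terms):
        p = smallest_prime_factor(product + 1)
        seq.append(p)
        product *= p
    return seq

print(euclid_mullin(5))  # [2, 3, 7, 43, 13], matching the terms above
```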

EL: Interesting! It’s kind of surprising that it explodes that quickly and it doesn’t somehow give you all of the small prime numbers quickly.

KR: It doesn’t explode in the sense that it gets bigger and bigger. You have 43, and it drops back to 13, and if you look at the elements of the sequence on the page, which I haven’t done lately, you’ll see that the numbers go up and down. There’s a conjecture, which was maybe made without too much evidence, that as you go through the sequence, you’ll eventually get all the prime numbers.

EL: Okay. I was about to ask that, if we knew if you would eventually get all of them, or end up with some subsequence of them.

KR: Well, the expectation, which as I say is not based on really hard evidence, is that you should be able to get everything.

KK: Sure. But is it clear that this sequence is actually infinite? How do we know we don’t get a bunch of repeats after a while?

KR: Well, because the principle of the proof is that if you have a prime that’s appeared on the list, it will not divide the product plus 1. It divides the product, but it doesn’t divide 1, so it can’t divide the new number. So when you take the product plus 1 and you factor it, whatever you get will be a quote-unquote new prime.

KK: So this is a more direct version of what I immediately thought of, the typical contradiction proof, where if you only had a finite number of primes, you take your product, add 1, and ask what divides it? Well, none of those primes divides it. Therefore, contradiction.

KR: Yes, it’s a direct proof. Completely algorithmic, recursive, and you generate an infinite set of primes.

KK: Okay. Now I buy it.

EL: I’m glad we did it the direct way. When I’ve taught things like this in the past, setting it up as a proof by contradiction is a good way to get the proof, but you can polish it up and make it a little prettier by taking out the contradiction step, since it’s not really required.

KR: Right.

KK: And for your 11-year-old friend, contradiction isn’t what you want to do, right? You want a direct proof.

KR: Exactly. You want that friend to start computing.

KK: Are there other direct proofs? There must be.

KR: Well, another direct proof is to consider the numbers known as Fermat numbers. I’ll tell you what the Fermat numbers are. You take the powers of 2, so the powers of 2 are 1, 2, 4, 8, 16, 32, and so on. And you consider those as exponents. So you take 2 to those powers of 2: 2^1, 2^2, 2^4, and so on. To these numbers, you add the number 1. So you start with the power 2^0, which is 1; 2^1 is 2, and you add 1 and get 3. The next power of 2 is 2; 2^2 is 4, you add 1, and you get 5. The next power of 2 is 4; 2^4 is 16, you add 1, and you get 17. The next power of 2 is 8; 2^8 is 256, and you add 1 and get 257. So you have this sequence, which is 3, 5, 17, 257, 65537, and the first elements of the sequence are prime numbers. 257 is a prime number, and so is 65537. And it’s rather a famous gaffe of Fermat that he apparently claimed that all the numbers in the sequence were prime numbers, that you could just generate primes that way. But in fact, if you take the next one, 2^32 + 1, it will not be prime, and I think all subsequent numbers that have been computed have been verified to be non-prime. So you get these Fermat numbers, a whole sequence of them, an infinite sequence of them, and it turns out that a very simple argument shows you that any two different numbers in the sequence have no common factor at all. And so, for example, if you take 257 and, say, the 19th Fermat number, that pair of numbers will have no common factor. So since 257 happens to be prime, you could say 257 doesn’t divide the 19th Fermat number. But the 19th Fermat number is a big number. It’s divisible by some prime. And you can take the sequence of numbers and for each element of the sequence, take the smallest prime divisor, and then you get a sequence of primes, and that’s an infinite sequence of primes. The primes are all different because none of the numbers have a common factor.
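[Editor's note: the coprimality fact driving this second proof can be checked directly for small Fermat numbers. A quick sketch, with names of my own choosing:]

```python
from math import gcd

def fermat(n):
    """The nth Fermat number, 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

nums = [fermat(n) for n in range(8)]
print(nums[:5])  # [3, 5, 17, 257, 65537]

# Any two distinct Fermat numbers are coprime, so their smallest
# prime factors form an infinite list of pairwise distinct primes.
for i in range(len(nums)):
    for j in range(i + 1, len(nums)):
        assert gcd(nums[i], nums[j]) == 1
```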

KK: That’s nice. I like that proof.

EL: Nice! It’s kind of like killing a mosquito with a sledgehammer. It’s a big sequence of these somewhat complicated numbers, but there’s something very fun about that. Probably not fun to try to kill mosquitoes with a sledgehammer. Don’t try that at home.

KK: You might need it in Florida. We have pretty big ones.

KR: I can tell you yet a third proof of the theorem if you think we have time.

KK: Sure!

KR: This proof I learned about, and it’s an exercise in a textbook that’s one of my all-time favorite books to read. It’s called A Classical Introduction to [Modern] Number Theory by Kenneth Ireland and Michael Rosen. When I was an undergraduate at Brown, Ireland and Rosen were two of my professors, and Ken Ireland passed away, unfortunately, about 25 years ago, but Mike Rosen is still at Brown University and is still teaching. They have as an exercise in their book a proof due to a mathematician at Kansas State, I think it was, named Eckford Cohen, and he published a paper in the American Mathematical Monthly in 1969. And the proof is very simple. I’ll tell you the gist of it. It’s a proof by contradiction. What you do is, for each number n, you take the geometric mean of the first n numbers. What that means is you take the numbers 1, 2, 3, you multiply them together, and in the case of 3, you take the cube root of that product. We could even do that for 2: you take 1 and 2, multiply them together, and take the square root, which is about 1.41. And these numbers that you get are smaller than the averages of the numbers. For example, the square root of 2 is less than 1.5, and the cube root of 6, of 1x2x3, is less than 2, which is the average of 1, 2, and 3. But nevertheless these numbers get pretty big, and you can show using high school mathematics that these numbers approach infinity, they get bigger and bigger. You can show, using an argument by contradiction, that if there were only finitely many primes, these numbers would not get bigger and bigger; they would stop and all be less than some number depending on the primes that you could list out.
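[Editor's note: the growth that Cohen's proof relies on is easy to see numerically. A sketch (the function name is my own), using logarithms so the factorials stay manageable; by Stirling's formula the geometric mean of 1 through n grows roughly like n/e:]

```python
import math

def geometric_mean_first_n(n):
    """Geometric mean of 1, 2, ..., n, i.e. the nth root of n!,
    computed via logarithms to avoid enormous integers."""
    return math.exp(sum(math.log(k) for k in range(1, n + 1)) / n)

print(round(geometric_mean_first_n(2), 2))  # 1.41, the square root of 2
print(round(geometric_mean_first_n(3), 2))  # 1.82, the cube root of 6
for n in (10, 100, 1000):
    # the values keep climbing without bound
    print(n, round(geometric_mean_first_n(n), 1))
```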

EL: Huh, that’s really cool.

KK: I like that.

KR: That’s kind of an amazing proof, and you see that it has absolutely nothing to do with the two proofs I told you about before.

KK: Sure.

EL: Yeah.

KK: Well that’s what’s so nice about number theory. It’s such a rich field. You can ask these seemingly simple questions and prove them 10 different ways, or not prove them at all.

KR: That’s right. When number theory began, I think it was a real collection of miscellany. People would study equations one by one, and they’d observe facts and record them for later use, and there didn’t seem to be a lot of order to the garden. And the mathematicians who tried to introduce the conceptual techniques in the last part of the 20th century, Carl Ludwig Siegel, André Weil, Jean-Pierre Serre, and so on, these people tried to make everything be viewed from a systematic perspective. But nonetheless if you look down at the fine grain, you’ll see there are lots of special cases and lots of interesting phenomena. And there are lots of facts that you couldn’t predict just by flying at 30,000 feet and trying to make everything be orderly.

EL: So, I think now it’s pairing time. So on the show, we like to ask our mathematicians to pair their theorem with something—food, beverage, music, art, whatever your fancy is. What have you chosen to pair with the infinitude of primes?

KR: Well, this is interesting. Just as I’ve told you three proofs of this theorem, I’d like to discuss a number of possible pairings. Would that be okay?

KK: Sure. Not infinitely many, though.

KR: Not infinitely many.

EL: Yeah, one for each prime.

KR: One thing is that prime numbers are often associated with music in some way, and in fact there is a book by Marcus du Sautoy, which is called The Music of the Primes. So perhaps I could say that the subject could be paired with his book. Another thing I thought of was the question of algorithmic recursive music. You see, we had a recursive description of a sequence coming from Euclid’s method, and yesterday I did a Google search on recursive music, and I got lots of hits. Another thing that occurred to me is the word prime, because I like wine a lot and because I’ve spent a lot of time in France, it reminds me of the phrase vin primeur. So you probably know that in November there is a day when the Beaujolais nouveau is released all around the world, and people drink the wine of the year, a very fresh young wine with lots of flavor, low alcohol, and no tannin, and in France, the general category of new wines is called vin primeur. It sounds like prime wines. In fact, if you walk around in Paris in November or December and you try to buy vin primeur, you’ll see that there are many others, many in addition to the Beaujolais nouveau. We could pair this theorem with maybe a Côtes du Rhône primeur or something like that.

But finally, I wanted to settle on one thing, and a few days ago, maybe a week ago, someone told me that in 2017, actually just about a year ago, a woman named Maggie Roche passed away. She was one of three sisters who performed music in the 70s and 80s, and I’m sure beyond. The music group was called the Roches, R-O-C-H-E, and they were a fantastic hit. They are viewed as the predecessors of, for example, the Indigo Girls, and a number of groups who now perform. They would stand up, three women with guitars. They had wonderful harmonies, very simple songs, and they would weave their voices in and out. And I knew about their music when it first came out and found myself by accident in a record store in Berkeley the first year I was teaching, which was 1978-79, long ago, and the three Roches were there signing record albums. These were vinyl albums at the time, and they had big record jackets with room for signatures, and I went up to Maggie and started talking to her. I think I spoke to her for 10 or 15 minutes. It was just kind of an electrifying experience. I just felt somehow like I had bonded with someone whom I never expected to see again, and never did see again. I bought one or two of the albums and got their signatures. I no longer have the albums. I think I left them in France. But she made a big impression on me. So if I wanted to pair one piece of music with this discussion, it would be a piece by the Roches. There are lots of them on YouTube. One, called the Hammond Song, is especially beautiful, and I will officially declare that I am pairing the infinitude of primes with the Hammond Song by the Roches.

EL: Okay, I’ll have to listen to that. I’m not familiar with them, so it sounds like a good thing to listen to once we hang up here.

KK: We’ll link it in the show notes, too, so everyone can see it.

EL: That sounds like a lot of fun. It’s always a cool experience to feel like you’re connecting with someone like that. I went to a King’s Singers concert one time a few years ago and got a CD signed, and it’s remarkable how warm and friendly people can be sometimes, even though they’re very busy and very fancy and everything.

KR: I’ve been around a long time, and people don’t appreciate the fact that until the last decade or two, people who performed publicly were quite accessible. You could just go up to people before concerts or after concerts and chat with them, and they really enjoyed chatting with the public. Now there’s so much emphasis on security that it’s very hard to actually be face to face with someone whose work you admire.

KK: Well this has been fun. I learned some new proofs today.

KR: Fun for me too.

EL: Thanks a lot for being on the show.

KR: It’s my great pleasure, and I love talking to you, and I love talking about the mathematics. Happy New Year to everyone.


Episode 21 - Jana Rodriguez Hertz

Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I’m all right. I’m excited because we’re trying a different recording setup today, and a few of our recent episodes, I’ve had a few connection problems, so I’m hoping that everything goes well, and I’ve probably jinxed myself by saying that.

KK: No, no, it’s going to be fine. Positive thinking.

EL: Yeah, I’m hoping that the blips that our listeners may have heard in recent episodes won’t be happening. How about you? Are you doing well?

KK: I’m fine. Spring break is next week, and we’ve had the air conditioning on this week. This is the absurdity of my life. It’s February, and the air conditioning is on. But it’s okay. It’s nice. My son is coming home for spring break, so we’re excited.

EL: Great. We’re very happy today to have Jana Rodriguez-Hertz on the show. So, Jana, would you like to tell us a little bit about yourself?

Jana Rodriguez-Hertz: Hi, thank you so much. I’m originally from Argentina, I have lived in Uruguay for 20 years, and now I live in China, in Shenzhen.

EL: Yeah, that’s quite a big change. When we were first talking, first emailing, I mean, you were in Uruguay then, you’re back in China now. What took you out there?

JRH: Well, we got a nice job offer, and we thought we’d like to try. We said, why not, and we went here. It’s nice. It’s a totally different culture, but I’m liking it so far.

KK: What part of China are you in, which university?

JRH: At the Southern University of Science and Technology in Shenzhen. Shenzhen is in mainland China, right across from Hong Kong.

KK: Okay. That’s very far south.

EL: I guess February weather isn’t too bad over there.

JRH: It’s still winter, but it’s not too bad.

EL: Of course, that will be very relevant to our listeners when they hear this in a few months. We’re glad to have you here. Can you tell us about your favorite theorem?

JRH: Well, you know, I live in China now, and every noon I see a dynamical process that looks like the theorem I want to talk to you about, which is the dynamical properties of Smale’s horseshoe. Here it goes. You know, at the canteen of my university, there is a cook that makes noodles.

EL: Oh, nice.

JRH: He takes the dough and stretches it and folds it without mixing, and stretches it and folds it again, until the strips are so thin that they’re ready to be noodles, and then he cuts the dough. Well, this procedure can be described as a chaotic dynamical system, which is Smale’s horseshoe.

KK: Okay.

JRH: So I want to talk to you about this. But we will do it in a mathematical model so it is more precise. So suppose that the cook has a piece of dough in a square mold, say of side 1. Then the cook stretches the dough so it becomes three times longer in the vertical sense but 1/3 of its original width in the horizontal sense. Then he folds it and puts the dough again in the square mold, making a horseshoe form. So the lower third of the square is converted into a rectangle of height 1 and width 1/3 and will be placed on the left side of the mold. The middle third of the square is going to be bent and will go outside the mold and will be cut. The upper third will be converted to another rectangle of height 1 and width 1/3 and will be put upside down in the right side of the mold. Do you get it?

KK: Yeah.

JRH: Now in the mold there will be two connected components of dough, one in the left third of the square and one in the right third of the square, and the middle third will be empty. In this way, we have obtained a map from a subset of the square into another subset of the square. And each time this map is applied, that is, each time we stretch and fold the dough, and cut the bent part, it’s called a forward iteration. So in the first forward iteration of the square, we obtain two rectangles of width 1/3 and height 1. Now in the second forward iteration of the square, we obtain four rectangles of width 1/9 and height 1. Two rectangles are contained in the left third, two rectangles in the right third. These are four noodles in total.

Counting from left to right, we will see one noodle of width 1/9, one gap of width 1/9, a second noodle of width 1/9, a gap of 1/3, and two more noodles of width 1/9 separated by a gap of width 1/9. Is that okay?

KK: Yes.

JRH: So if we iterate n times, we will obtain 2^n noodles of width (1/3)^n. And if we let the number of iterations go to infinity, that is, if we stretch and fold infinitely many times, cutting each time the bent part, we will obtain a Cantor set of vertical noodles.
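[Editor's note: the bookkeeping here is easy to verify with a short sketch (the function name is my own); exact fractions avoid any rounding in the thirds.]

```python
from fractions import Fraction

def cantor_intervals(n):
    """Vertical strips surviving n forward iterations: at each step,
    each interval keeps only its left third and its right third."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt.append((a, a + third))    # left third survives
            nxt.append((b - third, b))    # right third survives
        intervals = nxt
    return intervals

strips = cantor_intervals(2)
print(len(strips))  # 4 noodles after two iterations
print(strips[0])    # (Fraction(0, 1), Fraction(1, 9)): width 1/9
```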

KK: Yes.

EL: Right. So as you were saying the ninths with these gaps, and this 1/3, I was thinking, huh, this sounds awfully familiar.

KK: Yeah, yeah.

EL: We’ll include a picture of the Cantor set in the show notes for people to look at.

JRH: When we iterate forward, in the limit we will obtain a Cantor set of noodles. We can also iterate backwards. And what is that? We want to know, for each point in the square, that is, for each flour particle of the dough in the mold, where it was before the cook stretched vertically and folded the dough the first time, where it came from. Now we recall that the forward iteration was to stretch in the vertical sense and fold, so if we run it backwards, in the backward sense the cook has squeezed in the vertical sense, stretched in the horizontal sense, and folded, okay?

EL: Yes.

JRH: Each time we iterate backwards, we stretch in the horizontal sense and fold it and put it in that sense. In this way, the left vertical rectangle is converted into the lower third rectangle. And the right vertical rectangle is converted into the upper third rectangle, and the bent part is cut. If we iterate backwards, in the second backward iteration we will get four horizontal rectangles of height 1/9, with gaps, and if we let the iterations go to infinity, we will obtain a Cantor set of horizontal noodles.

When we iterate forward and consider only what’s left in the mold, we start with two horizontal rectangles and finish with two vertical rectangles. When we iterate backwards we start with two vertical rectangles and finish with two horizontal rectangles. Now we want to consider the particles that stay forever in the mold, that is, the points such that all of the forward iterates and all the backward iterates stay in the square. This will be the product of two middle-thirds Cantor sets. It will look more like grated cheese than noodles.

KK: Right.

JRH: This set will be called the invariant set.

KK: Although they’re not pointwise fixed, they just stay inside the set.

JRH: That’s right. They stay inside the square. In fact, not only will they be not fixed, they will have a chaotic behavior. That is what I want to tell you about.

KK: Okay.

JRH: This is one of the simplest models of an invertible map that is chaotic. So what is chaotic dynamics anyway? There is no universally accepted definition of that. But one that is more or less accepted is one that has three properties. These properties are that periodic points are dense, it is topologically mixing, and it has sensitivity to initial conditions. And let me explain a little bit about this.

A periodic point is a particle of flour that has a trajectory that comes back exactly to the position where it started. This is a periodic point. What does it mean that they are dense? As close as you wish to any point, you can find one of these.

Topologically mixing means, you can imagine, that the dough gets completely mixed: if you take any two small squares and iterate one of them, it will get completely mixed with the other one forever. From some iteration on, you will always find dough from the first square in the second square. That is topologically mixing.

I would like to focus on the sensitivity to initial conditions because this is the essence of chaos.

EL: Yeah, that’s kind of what you think of for the idea of chaos. So yeah, can you talk a little about that?

JRH: Yeah. This means that any two particles of flour, no matter how close they are, will get uniformly separated by the dynamics. In fact, they will be 1/3 apart for some forward or backward iterate. Let me explain this because it is not difficult. Remember that we had the lower third rectangle? Call this lower third rectangle 0, and the upper third rectangle 1. Then we will see that for some forward or backward iterate, any two different particles will be in different horizontal rectangles. One will be in the 1 rectangle, and the other one will be in the 0 rectangle. How is that? If two particles are at different heights, then either they are already in different rectangles, so we are done, or else they are in the same rectangle. But if they are in the same rectangle, the cook stretches the vertical distance by 3. Every time they are in the same horizontal rectangle, their vertical distance is stretched by 3, so they cannot stay forever in the same rectangle unless they are at the same height.

KK: Sure.

JRH: If they are at different heights, they will get eventually separated. On the other hand, if they are in the same vertical rectangle but at different x-coordinates, if we iterate backwards, the cook will stretch the dough in the horizontal sense, so the horizontal distance will be tripled. Each time they are in the same vertical rectangle their horizontal distance is tripled, so they cannot be forever in the same vertical rectangle unless their horizontal distance is 0. But if they are in different positions, then either their horizontal distance is positive or the vertical distance is positive. So in some iterate, they will be 1/3 apart. Not only that, if they are in two different vertical rectangles, then in the next backwards iterate, they are in different horizontal rectangles. So we can state that any two different particles, no matter how close they are, will be in different horizontal rectangles for some iterate. So that’s something I like very much, because each particle is defined by its trajectory.

EL: Right, so you can tell exactly what you are by where you’ve been.

JRH: Yeah, two particles are defined by what they have done and what they will do. That allows something that is very interesting in this type of chaotic dynamics, which is symbolic dynamics. Now you know that any two points will be in distinct horizontal rectangles for some iterate, so you can code any particle by its position in the horizontal rectangles. If a particle starts in the 0 rectangle, you will assign to it a doubly infinite sequence whose zeroth position is 0. If the first iterate is in the rectangle 1, then in the first position you will put a 1. In this way you can code any particle by a bi-infinite sequence of zeroes and ones. So in dynamics this is called conjugation. You can conjugate the horseshoe map with the shift map on the space of bi-infinite sequences. This means that you can code the dynamics. Anything that happens in the set of bi-infinite sequences happens in the horseshoe and vice versa. This is very interesting because you will find particles that describe any trajectory that you wish, because you can write any sequence of zeroes and ones that you wish. You will have all Shakespeare coded in the horseshoe map, all of Donald Trump’s tweets will be there too.

KK: Let’s hope not. Sad!

JRH: Everything will be there.

EL: History of the world, for better and worse.

KK: What about Borges’s Library of Babel? It’s in there too, right?

JRH: If you can code it with zeroes and ones, it’s there.

EL: Yeah, that’s really cool. So where did you first run into this theorem?

JRH: When I was a graduate student, I ran into chaos, and I first ran into a baby model of this, which is the tent map. The tent map is a map on the interval, and that was very cool. Unlike this model, it’s coded by one-sided sequences. And later on, I went to IMPA [Instituto de Matemática Pura e Aplicada] in Rio de Janeiro, and I learned that Smale, the author of this example, had produced it while he was at IMPA in Rio.
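[Editor's note: sensitivity to initial conditions is easy to watch in this baby model. A sketch with exact arithmetic (the names and the 10^-9 starting gap are my own choices): the gap between two nearby orbits of the tent map doubles at every step while both points sit on the same slope, until it reaches macroscopic size.]

```python
from fractions import Fraction

def tent(x):
    """The tent map on [0, 1]: stretch by 2 and fold at 1/2."""
    return 2 * x if x < Fraction(1, 2) else 2 * (1 - x)

x = Fraction(1, 5)
y = x + Fraction(1, 10**9)   # a second particle, only 10^-9 away
gaps = []
for _ in range(35):
    x, y = tent(x), tent(y)
    gaps.append(abs(x - y))

# The gap doubles at each of the early iterates...
print(gaps[9] == Fraction(2**10, 10**9))  # True
# ...and eventually the two orbits are macroscopically far apart.
print(max(gaps) > Fraction(1, 10))        # True
```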

KK: Right.

JRH: It was cool. I learned a little more about dynamics, about hyperbolic dynamics, and in fact, now I’m working in partially hyperbolic dynamics, which is very much related to this, so that is why I like it so much.

KK: Yeah, one of my colleagues spends a lot of time in Brazil, and he’s still studying the tent map. It’s remarkable, I mean, it’s such a simple model, and it’s remarkable what we still don’t know about it. And this is even more complicated, it’s a 2-d version.

EL: So part of this show is asking our guests to pair their theorem with something. I have an idea of what you might have chosen to pair with your theorem, but can you tell us what you’ve chosen?

JRH: Yeah, I like this sensitivity to initial conditions because you are defined by your trajectory. That’s pretty cool. For instance, if you consider humans as particles in a system, actually nowadays in Shenzhen, it is only me who was born in Argentina, lived in Uruguay, and lives in Shenzhen.

EL: Oh wow.

JRH: This is a city of 20 million people. But I am defined by my trajectory. And I’m sure each one of you is defined by your trajectory. If you look at a couple of things in your life, you will discover that you are the only person in the world who has done that. That is something I like. You’re defined either by what you’ve done or by what you will do.

EL: Your path in life. It’s interesting that you go there because when I was talking to Ami Radunskaya, who also chose a theorem in dynamics, she also talked about how her theorem related to this idea of your path in life, so that’s a fun idea.

JRH: I like it.

KK: Of course, I was thinking about taffy-pulling the whole time you were describing the horseshoe map. You’ve seen these machines that pull taffy, I think they’re patented, and everything’s getting mixed up.

EL: Yeah.

JRH: All of this mixing is what makes us unique.

EL: So you can enjoy this theorem while pondering your life’s path and maybe over a bowl of noodles with some taffy for dessert.

KK: This has been fun. I’d never really thought too much about the horseshoe map. I knew it as this classical example, and I always heard it was so complicated that Smale decided to give up on dynamics, and I’m sure that’s false. I know that’s false. He’s a brilliant man.

JRH: Actually, he’s coming to a conference we’re organizing this year.

EL: Oh, neat.

KK: He’s still doing amazingly interesting stuff. I work in topological data analysis, and he’s been working in that area lately. He’s just a brilliant guy. The Fields Medal was not wasted on him, for sure.

EL: Well thanks a lot for taking the time to talk to us. I really enjoyed talking with you.

JRH: Thank you for inviting me.


Episode 20 - Francis Su

Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m your host Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I’m all right. I am hanging in there in the winter as a displaced Texan.

KK: It’s not even winter yet.

EL: Yeah, well, somehow I manage to make it to the end of the season without dying every year outside of Texas, but yeah, the first few cold days really throw me for a loop.

KK: Well my son’s in college now, and they had snow last week.

EL: Well the south got a bunch of snow. Is he in South Carolina, is that right?

KK: North Carolina, and he’s never driven in snow before, and we told him not to, but of course he did. No incidents, so it was okay.

EL: So we’re very glad to have our guest today, who I believe is another displaced Texan, Francis Su. Francis, would you like to tell us a little bit about yourself?

Francis Su: Hi, Evelyn and Kevin. Sure. I’m a professor of mathematics at Harvey Mudd College, and that’s a small science and engineering school in southern California, and Evelyn is right. I am a displaced Texan from a small town in south Texas called Kingsville.

EL: Okay. I grew up in Dallas. Is Kingsville kind of between Houston and Beaumont?

FS: It’s between Houston and the valley. Closer to Corpus Christi.

EL: Ah, the other side. Many of us displaced Texans end up all over the country and elsewhere in the world.

FS: That’s right. I’m in California now, which means I don’t have to deal with the winter weather that you guys are wrestling with.

KK: I’m in Florida. I’m okay.

EL: Yeah. And you’re currently in the Bay Area at MSRI, so you’re not on fire right now.

FS: That’s right. I’m at the Math Sciences Research Institute. There’s a semester program going on in geometric and topological combinatorics.

KK: Cool.

EL: Yeah, that must be nice. It’s not too long after your presidency of the Mathematical Association of America, so it must be nice to not have those responsibilities and be able to just focus on research at MSRI this semester.

FS: That’s right. It was a way of hopping back into doing research after a couple of years doing some fun work for the MAA.

EL: So, what is your favorite theorem? We would love to hear it.

FS: You know, I went around and around with this because as mathematicians we have lots of favorite theorems. The one I kept coming back to was the Brouwer fixed point theorem.

KK: I love this theorem.

FS: Yes, so the Brouwer fixed point theorem is an amazing theorem. It’s about a hundred years old. It shows up in all sorts of unexpected places. But what it loosely says is if you have a continuous function from a ball to itself—and I’ll say what a ball means in a minute—it must have a fixed point, a point that doesn’t move. And a ball can be anything that basically has no holes.

EL: So anything you can make out of clay without punching a hole in it, or snaking it around and attaching two ends of it together. I’m gesturing with my hands. That’s very helpful for our podcast listeners.

KK: Right.

FS: Exactly.

KK: We don’t even need convexity, right? You can have some kind of dimpled blob and it still works.

FS: That’s right. It could be a blob with a funny shape. As long as it can be deformed to something that’s a ball, the ball has no holes, then the theorem applies. And one way of thinking about a continuous function from a ball to itself is: let’s deform this blob, and as long as we deform the blob so that it stays within itself, then some point of the blob doesn’t move. A very popular way of describing this theorem is if you take a cup of coffee, let’s say I have a cup of coffee and I take a picture of it. Then slosh the coffee around in a continuous fashion and then take another picture. There is going to be a point in the coffee that is in the same spot in both pictures. It might have moved around in between, but there’s going to be a point that’s in the same spot in both pictures. And then if I move that point out of its original position, I can’t help but move some other point into its original position.

EL: Yeah, almost like a reverse diagonalization. In diagonalization you show that there’s a problem because anything you thought you could get on your list, you show that something else, even if you stick it on the list, something else is not on the list still. Here, you’re saying even if you think, if I just had one fixed point, I could move it and then I wouldn’t have any, you’re saying you can’t do that without adding some other fixed point.

FS: That’s right. The coffee cup sloshing example is a nice one because you can see that if I take the cup of coffee and I just empty it and pour the liquid somewhere else, clearly there’s not going to be a fixed point. So you sort of see the necessity of having the ball, the coffee, mapped to itself.

KK: And if you had a donut-shaped cup of coffee, this would not be true, right? You could swirl it around longitudinally and nothing would be fixed.

FS: That’s right. If you had a donut-shaped coffee mug, we could do exactly that. The continuity is kind of interesting. Another way I like to think about this theorem is if you take a map of Texas and you crumple it up somewhere in Texas, there’s a point in the map that’s exactly above the point it represents in Texas. So that’s sort of a two-dimensional version of this theorem. And you see the necessity of continuity because if I tore the map in two pieces and threw east Texas into west Texas and west Texas into east Texas, it wouldn’t be true that there would be a point exactly above the point it represents. So continuity is really important in this theorem as well.

KK: Right. You know, for fun, I put the one-dimensional version of this as a bonus question on a calculus test this semester.

FS: I like that version. Are you referring to graphing this one-dimensional function?

KK: Right, so if you have a map from a unit interval to itself, it has a fixed point. This case is nice because it’s just a consequence of the intermediate value theorem.
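Kevin’s one-dimensional version can be sketched in a few lines of Python. This is an editorial illustration, not anything from the episode: the bisection routine and the choice of cosine as the test function are arbitrary assumptions.

```python
import math

# One-dimensional Brouwer: any continuous f: [0,1] -> [0,1] has a fixed point.
# Via the intermediate value theorem: g(x) = f(x) - x satisfies g(0) >= 0 and
# g(1) <= 0, so g crosses zero somewhere. Bisection locates the crossing.
def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    """Find x in [lo, hi] with f(x) = x, assuming f maps [lo, hi] into itself."""
    g = lambda x: f(x) - x
    for _ in range(200):            # each step halves the bracketing interval
        mid = (lo + hi) / 2.0
        if g(mid) > 0:              # f(mid) > mid: the crossing lies to the right
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Example: cos maps [0, 1] into itself, so it must fix some point.
x = fixed_point(math.cos)
print(round(x, 6), round(math.cos(x), 6))   # the two values agree
```

Running this prints the same number twice: the unique point where the graph of cosine crosses the diagonal.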

FS: Yes, that’s a great one. I love that.

KK: But in higher dimensions you need a little more firepower.

FS: Right. So yeah, this is a fun theorem because it has all sorts of maybe surprising versions. I told you one of the popular versions with coffee. It can be used, for instance, to prove the fundamental theorem of algebra, that every polynomial has a root in the complex numbers.

EL: Oh, interesting! I don’t think I knew that.

KK: I’m trying to think of that proof.

FS: Yeah, so the idea here is that if you think about a polynomial as a function and you’re thinking of this as a function on the complex plane, basically it takes a two-dimensional region like Texas and maps it in some fashion back onto the plane. And you can show that there’s a region in this map that gets sent to itself, roughly speaking. That’s one way to think about what’s going on. And then the existence of a zero corresponds to a fixed point of a continuous function, which I haven’t named but that’s sort of the idea.

EL: Interesting. That’s nice. It’s so cool how, at least if I’m remembering correctly, all the proofs I know of the fundamental theorem of algebra are topological. It’s nice, I think, for topology to get to throw an assist to algebra. Algebra has helped topology so much.

FS: I love that too. I guess I’m attracted to topology because it says a lot of things that are interesting about the existence of certain things that have to happen. One of the things that’s going on at this program at MSRI, as the name implies, geometric and topological combinatorics, people are trying to think about how to use topology to solve problems in combinatorics, which seems strange because combinatorics feels like it just has to do with counting discrete objects.

EL: Right. Combinatorics feels very discrete, and topology feels very continuous, and how do you get that to translate across that boundary? That’s really interesting.

FS: I’ll give you another example of a surprising application. In the 1970s, people actually studied this game called Hex for a while. I guess Hex was developed in the ’40s or ’50s. Hex is a game that’s played on a diamond-shaped board with hexagonal tiles. Two players, X and O, take turns, and each is trying to construct a chain from one side of the board to the opposite side. You can ask the question: can that game ever end in a draw, a configuration where nobody wins? For large boards, it’s not so obvious that the game can’t end in a draw. But in a spectacular application of the Brouwer fixed-point theorem, you can show that it can’t.

EL: Oh, that’s so cool.

KK: That is cool. And allegedly this game was invented by John Nash in the men’s room at Princeton, right?

FS: Yes, there’s some story like that, though I think it actually dates back to somebody before.

KK: Probably. But it’s a good story, right, because Nash is so famous.

EL: So was it love at first sight with the Brouwer fixed-point theorem for you, or how did you come across it and grow to love it?

FS: I guess I encountered it first as an undergraduate in college when a professor of mine, a topology professor of mine, showed me this theorem, and he showed me a combinatorial way to prove this theorem, using something known as Sperner’s lemma. There’s another connection between topology and combinatorics, and I really appreciated the way you could use combinatorics to prove something in topology.

EL: Cool.

KK: Very cool.

KK: You know, part of our show is we ask our guest to pair their theorem with something. So what have you chosen to pair the Brouwer fixed-point theorem with?

FS: I’d like to pair it with parlor games. Think of a game like chess, or think of a game like rock-paper-scissors. It turns out that the Brouwer fixed-point theorem is also related to how you play a game optimally, a game like chess or rock-paper-scissors optimally.

KK: So how do you get the optimal strategy for chess from the Brouwer fixed-point theorem?

FS: Very good question. So the Brouwer fixed-point theorem can’t tell you what the optimal strategy is.

KK: Just that it exists, right, yeah.

FS: It tells you that there is a pair of strategies that players can play to play the game optimally. What I’m referring to is something known as the Nash equilibrium theorem. Nash makes another appearance in this segment. What Nash showed is that if you have a game, well, there’s this concept called the Nash equilibrium. The question Nash asked is: if you’re looking at some game, can you predict how players are going to play this game? That’s one question. Can you prescribe how players should play this game? That’s another question. And a third question is: can you describe why players play a game a certain way? So there’s the prediction, description, and prescription about games that mathematicians and economists have gotten interested in. And what Nash proposed is that in fact something called a Nash equilibrium is the best way to describe, prescribe, and predict how people are going to play a game. And the idea of a Nash equilibrium is very simple: it’s just players playing strategies that are mutually best responses to each other. And it turns out that if you allow what are called mixed strategies, every finite game has an equilibrium, which is kind of surprising. It suggests that you could maybe tell people what the best course of action is in a game. There is some pair of strategies by both players, or by all players if it’s a multiplayer game, that actually are mutual best replies. People are not going to have an incentive to change their strategies by looking at the other strategies.
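Mutual best responses can be checked concretely in the rock-paper-scissors case Francis mentions. This sketch is an editorial addition; the payoff encoding and function names are invented for illustration:

```python
# Rock-paper-scissors: verify that uniform play (1/3, 1/3, 1/3) is a Nash
# equilibrium -- no pure deviation improves the row player's expected payoff.
# Payoff matrix for the row player: +1 win, 0 tie, -1 loss.
PAYOFF = [
    [0, -1,  1],    # rock     vs rock, paper, scissors
    [1,  0, -1],    # paper
    [-1, 1,  0],    # scissors
]

def expected_payoff(row_strategy, col_strategy):
    """Expected payoff to the row player under mixed strategies."""
    return sum(row_strategy[i] * col_strategy[j] * PAYOFF[i][j]
               for i in range(3) for j in range(3))

uniform = [1/3, 1/3, 1/3]
value = expected_payoff(uniform, uniform)    # 0: the game is symmetric

# Against a uniform opponent, every pure strategy earns the same payoff,
# so no deviation helps: the uniform strategies are mutual best responses.
for i in range(3):
    pure = [1.0 if k == i else 0.0 for k in range(3)]
    assert abs(expected_payoff(pure, uniform) - value) < 1e-9
print("uniform play is a Nash equilibrium")
```

The general existence theorem needs the Brouwer fixed-point theorem; this only verifies the equilibrium in one small game.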

KK: The Brouwer fixed-point theorem is so strange because it’s one of those existence results. It just says, yeah, there is a fixed point. We tend to prove it by contradiction, usually. There aren’t really any good constructive proofs. I guess you could just pick a point and start iterating, and if the iterates converge, then by continuity the limit is a fixed point.

FS: There is actually, maybe this is a little surprising as well, this theorem I mention learning as an undergrad, it’s called Sperner’s lemma, it actually has a constructive proof, in the sense that there’s an efficient way of finding the combinatorial object that corresponds to a fixed point. What’s surprising is that you can actually in many places use this constructive combinatorial proof to find, or get close to, a proposed fixed point.
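A one-dimensional toy version of that constructive, Sperner-style search might look like the following. This is an editorial sketch, not from the episode: the labeling rule, grid size, and test function are all arbitrary choices.

```python
import math

# One-dimensional Sperner-style search for an approximate fixed point.
# Subdivide [0,1] and label each grid vertex 0 if f(x) >= x and 1 otherwise.
# The endpoint 0 is labeled 0, and (unless f(1) = 1) the endpoint 1 is
# labeled 1, so some small cell carries both labels; that "fully labeled"
# cell brackets a fixed point, and refining the grid sharpens the estimate.
def sperner_cell(f, n=1000):
    """Return a subinterval [a, b] of width 1/n containing a fixed point of f."""
    label = lambda x: 0 if f(x) >= x else 1   # 0: on/above the diagonal, 1: below
    for i in range(n):
        a, b = i / n, (i + 1) / n
        if label(a) != label(b):              # cell carrying both labels
            return a, b
    return 1.0, 1.0                            # no label change: f(1) = 1 itself is fixed

a, b = sperner_cell(math.cos)
print(f"a fixed point of cos lies in [{a}, {b}]")
```

The combinatorial proof in higher dimensions follows a path through fully labeled simplices in the same spirit.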

KK: Very cool.

FS: That’s kind of led to a whole bunch of research in the last 40 years or so in various areas, to try to come up with constructive versions of things that prior to that people had thought of as non-constructive.

EL: Oh, that’s so cool. I must admit I did not have proper appreciation for the Brouwer fixed-point theorem before, so I’m very glad we had you on. I guess I kind of saw it as this novelty theorem. You see it often as you crumple up the map, or do these little tricks. But why did I really care that I could crumple up the map? I didn’t see all of these connections to these other points. I am sorry to the Brouwer fixed-point theorem for not properly appreciating it before now.

FS: Yes. I think it definitely belongs on a top ten list of theorems in mathematics. I wonder how many mathematicians would agree.

KK: I read this book once, and the author is escaping me and I’m kind of embarrassed because it’s on the shelf in my other office, called Five Golden Rules. Have you ever seen this book? It was maybe 10 or 15 years ago.

EL: No.

KK: One of the theorems, there are like five big theorems in mathematics, and one of them was the Brouwer fixed-point theorem. And yeah, it’s actually of fundamental importance to know that you have fixed points for maps. They are really important things. But the application he pointed to was football ranking schemes, right? Because that’s clearly important. College football ranking schemes in which in essence you’re looking for an eigenvector of something, and an eigenvector with eigenvalue 1 is a fixed point, and of course the details are escaping me now. This book is really well done. Five Golden Rules.

EL: We’ll find that and put it in the show notes for sure.

FS: I haven’t heard of that. I should look that one up.

KK: It’s good stuff.

FS: I’ll just mention with this Nash theorem, the basic idea of using the Brouwer fixed-point theorem to prove it is pretty simple to describe. It’s that if you look at the set of all collections of strategies, if they’re mixed strategies allowing randomization, then in fact that space is a ball.

KK: That makes sense.

FS: And then the cool thing is if players have an incentive to deviate, to change their strategies, that suggests a direction in which each point could move. If they want to deviate, it suggests a motion of the ball to itself. And the fact that the ball has a fixed point means there’s a place where nobody is incentivized to change their strategy.

EL: Yeah.

KK: Well I’ve learned a lot. And I even knew about the Brouwer fixed-point theorem, but it’s nice to learn about all these extra applications. I should go learn more combinatorics, that’s my takeaway.

EL: Yeah, thanks so much for being on the show, Francis. If people want to find you, there are a few places online that they can find you, right? You’re on Twitter, and we’ll put a link to your Twitter in the show notes. You also have a blog, and I’m sorry I just forgot what it’s called.

FS: The Mathematical Yawp.

EL: That’s right. We’ll put that in the show notes. I know there are a lot of posts of yours that I’ve really appreciated, especially the ones about helping students thrive, doing math as a way for humans to grow as people and helping all students access that realm of learning and growth. I know those have been influential in the math community and fun to read and hear.

Episode 19 - Emily Riehl

Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics and everyone’s favorite theorem. I’m your host Kevin Knudson, professor of mathematics at the University of Florida. This is your other host.

Evelyn Lamb: Hi, I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City. So how are things going, Kevin?

KK: Okay. We’re hiring a lot, and so I haven’t eaten a meal at home this week, and maybe not last week either. You think that might be fun until you’re in the middle of it. It’s been great meeting all these new people, and I’m really excited about getting some new colleagues in the department. It’s a fun time to be at the University of Florida. We’re hiring something like 500 new faculty in the next two years.

EL: Wow!

KK: It’s pretty ambitious. Not in the math department.

EL: Right.

KK: I wish. We could solve the mathematician glut just like that.

EL: Yeah, that would be great.

KK: How are things in Salt Lake?

EL: Pretty good. It’s a warm winter here, which will be very relevant to our listeners when they listen in the summer. But it’s hiring season at the University of Utah, where my spouse works. He’s been doing all of that handshaking.

KK: The handshaking, taking people to meet the dean and showing them around, it’s fun. It’s good stuff. Anyway, enough about that. I’m excited about today’s guest. Today we are pleased to welcome Emily Riehl from Johns Hopkins. Hi, Emily.

Emily Riehl: Hi.

KK: Tell everyone about yourself.

ER: Let’s see. I’ve known I wanted to be a mathematician since I knew that that was a thing that somebody could be, so that’s what I’m up to. I’m at Johns Hopkins now. Before that I was a postdoc at Harvard, where I was also an undergraduate. My Ph.D. is from Chicago. I was a student of Peter May, an algebraic topologist, but I work mostly in category theory, and particularly in category theory as it relates to homotopy theory.

KK: So how many students does Peter have? Like 5000 or something?

ER: I was his 50th, and that was seven years ago.

EL: Emily and I have kind of a weird connection. We’ve never actually met, but we both lived in Chicago, and I kind of replaced Emily in a chamber music group. I played with Walter and the gang, I guess, shortly after you graduated. I moved there in 2011. They’re like, oh, you must know Emily Riehl because you’re both mathematicians who play viola. I was like, no, but that sounds like a person I should know, because violists are all the best people.

KK: So, Emily, you’ve told us, and I’ve had time to think about it but still haven’t thought of my favorite application of this theorem. But what’s your favorite theorem?

ER: I should confess: my favorite theorem is not the theorem I want to talk about today. Maybe I’ll talk about what I don’t want to talk about briefly if you’ll indulge me.

KK: Sure.

ER: So I’m a category theorist, and every category theorist’s favorite theorem is the Yoneda lemma. It says that a mathematical object of some kind is uniquely determined by the relationships that it has to all other objects of the same type. In fact, it’s uniquely characterized in two different ways. You can either look at maps from the object you’re trying to understand or maps to the object you’re trying to understand, and either way suffices to determine it. This is an amazing theorem. There’s a joke in category theory that all proofs are the Yoneda lemma. I mean, all proofs [reduce] to the Yoneda lemma. The reason I don’t want to talk about it today is two-fold. Number one, the discussion might sound a little more philosophical than mathematical because one thing that the Yoneda lemma does is it orients the philosophy of category theory. Secondly, there’s this wonderful experience you have as a student when you see the Yoneda lemma for the first time because the statement you’ll probably see is not the one I just described but sort of a weirder one involving natural transformations from representable functors, and you see them, and you’re like, okay, I guess that’s plausible, but why on earth would anyone care about that? And then it sort of dawns on you over however many years, in my case, why it’s such a profound and useful observation. And I don’t want to ruin that experience for anybody.

KK: You’re not worried about getting excommunicated, right?

ER: That’s why I had to confess. I was joking with some category theorists, I was just in Sydney visiting the Center of Australian Category Theory, which is the name of the group, and it’s also the center of Australian category theory. And I want to be invited back, so yes, of course, my favorite theorem is the Yoneda lemma. But what I want to talk about today instead is a theorem I really like because it’s a relatively simple idea, and it comes up all over mathematics. Once it’s a pattern you know to look for, it’s quite likely that you’ll stumble upon it fairly frequently. The proof, it’s a general proof in category theory, specializes in each context to a really nice argument in that particular context. Anyway, the theorem is called right adjoints preserve limits.

EL: All right.

KK: So I’m a topologist, so to me, we put a modifier in front of our limit, so there’s direct and inverse. And limit in this context means inverse limit, right?

ER: Right. That’s the historical terminology for what category theorists call limits.

KK: So I always think of inverse limits as essentially products, more or less, and direct limits are unions, or direct sum kinds of things. Is that right?

ER: Right.

KK: I hope that’s right. I’m embarrassed if I’m wrong.

ER: You’re alluding to something great in category theory, which is that when you prove a theorem, you get another theorem for free, the dual theorem. A category is a collection of objects and a collection of transformations between them that you depict graphically as arrows. Kind of like in projective geometry, you can dualize the axioms, you can turn around the direction of the arrows, and you still have a category. What that means is that if you have a theorem in category theory that says for all categories blah blah blah, then you can apply that in particular to the opposite category where things are turned around. In this case, there are secretly two categories involved, so we have three dual versions of the original theorem, the most useful being that left adjoints preserve colimits, which are the direct limits that you’re talking about. So whether they’re inverse limits or direct limits, there’s a version of this theorem that’s relevant to that.

KK: Do we want to unpack what adjoint functors are?

ER: Yes.

EL: Yeah, let’s do that. For those of us who don’t really know category theory.

ER: Like anything, it’s a language that some people have learned to speak and some people are not acquainted with yet, and that’s totally fine. Firstly, a category is a type of mathematical object, basically it’s a theory of mathematical objects. We have a category of groups, and then the transformations between groups are the group homomorphisms. We have a category of sets and the functions between them. We have a category of spaces and the continuous functions. These are the categories. A morphism between categories is something called a functor. It’s a way of converting objects of one type to objects of another type, so a group has an underlying set, for instance. A set can be regarded as a discrete space, and these are the translations.

So sometimes if you have a functor from one category to another and another functor going back in the reverse direction, those functors can satisfy a special dual relationship, and this is a pair of adjoint functors. One of them gets called a left adjoint, and one of them the right adjoint. What the duality says is that if you look at maps out of the image of the left adjoint, then those correspond bijectively and naturally (which is a technical term I’m not going to get into) to maps in the other category into the image of the right adjoint. So maps in one category out of the image of the left adjoint correspond naturally to maps in the other category into the image of the right adjoint. So let me just mention one prototypical example.

KK: Yeah.

ER: So there’s a free and forgetful construction. So I mentioned that a group has an underlying set. The reverse process takes a set and freely makes a group out of that set, so the elements of that group will be words in the letters and formal inverses modulo some relation, blah blah blah, but the special property of these free groups is if I look at the group homomorphism that’s defined on a free group, so this is a map in the category of groups out of an object in the image of the left adjoint, to define that I just have to tell you where the generators go, and I’m allowed to make those choices freely, and I just need to find a function of sets from the generating set into the underlying set of the group I’m mapping into.

KK: Right.

ER: That’s this adjoint relationship. Group homomorphisms from a free group to whatever group correspond to functions from the generators of that free group to that underlying set of the group.
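This adjoint bijection can be sketched in code, using free monoids rather than free groups to keep things short (the free monoid on a set is just the set of finite words). This is an editorial illustration; the names `extend` and `phi` are invented for it.

```python
# Free -| forgetful, monoid version: a monoid homomorphism out of a free
# monoid is determined by a plain function on the generators, and any such
# function extends uniquely -- that correspondence is the adjunction.
def extend(generator_map, combine, identity):
    """Extend a function on generators to a homomorphism on words."""
    def hom(word):
        result = identity
        for letter in word:
            result = combine(result, generator_map(letter))
        return result
    return hom

# Target monoid: (int, +, 0). Freely choose where the generators go:
# send 'a' to 1 and 'b' to 10.
phi = extend({'a': 1, 'b': 10}.__getitem__, lambda x, y: x + y, 0)

assert phi("aab") == 12                              # 1 + 1 + 10
assert phi("") == 0                                  # empty word -> identity
assert phi("ab" + "ba") == phi("ab") + phi("ba")     # homomorphism property
print(phi("abba"))
```

The point is that `phi` was specified only on the generators `'a'` and `'b'`; its value on every word is forced.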

EL: I always feel like I’m about to drown when I try to think about category theory. It’s hard for me to read category theory, but when people talk to me about it, I always think, oh, okay, I see why people like this so much.

KK: Reading category theory is sort of like the whole picture being worth a thousand words thing. The diagrams are so lovely, and there’s so much information embedded in a diagram. Category theory used to get a bad rap, abstract nonsense or whatever, but it’s shown to be incredibly powerful, certainly as an organizing principle but also just in being able to help us push boundaries in various fields. Really if you think about it just right, if you think about things as functors, lots of things come out, almost for free. It feels like for free, but the category theorist would say, no, there’s a ton of work there. So what’s a good example of this particular theorem?

ER: Before I go there, exactly to this point, there’s a great quote by Eilenberg and Steenrod. So Eilenberg was one of the founders of category theory. He and Saunders Mac Lane wrote a paper, “General Theory of Natural Equivalences,” in the ‘40s that defined these categories and functors and also the notion of naturality that I was alluding to. They thought that was going to be both the first and the last paper on the subject. Anyway, ten years later, Eilenberg and Steenrod wrote this book, Foundations of Algebraic Topology, that incorporated these diagrammatic techniques into a pre-existing mathematical area, algebraic topology. It had been around since at least the beginning of the twentieth century, I’d say. So they write, “the diagrams incorporate a large amount of information. Their use provides extensive savings in space and in mental effort. In the case of many theorems, the setting up of the correct diagram is a major part of the proof. We therefore urge that the reader stop at the end of each theorem and attempt to construct for himself (it’s a quote here) the relevant diagram before examining the one which is given in the text. Once this is done, the subsequent demonstration can be followed more readily. In fact, the reader can usually supply it himself.”

KK: Right. Like proving Mayer-Vietoris, for example. You just set up the right diagram, and in principle it drops out, right?

ER: Right, and in general in category theory, the definitions, the concepts are the hard thing. The proofs of the theorems are generally easier. And in fact, I’d like to prove my favorite theorem for you. I’m going to do it in a particular example, and actually I’m going to do it in the dual. So I’m going to prove that left adjoints preserve colimits.

EL: Okay.

ER: The statement I’m going to prove, the specific statement I’m going to prove by using the proof that left adjoints preserve colimits, is that for natural numbers a, b, and c, I’m going to prove that a(b+c)=ab+ac.

KK: Distributive law, yes!

ER: Distributive property of multiplication over addition. So how are we going to prove this? The first thing I’m going to do is categorify my natural numbers. And what is a natural number? It’s a cardinality of a finite set. In place of the natural numbers a, b, and c, I’m going to think about sets, which I’ll also call A, B, and C. The natural numbers stand for the cardinality of these sets.

EL: Cardinality being the size, basically.

ER: Absolutely. A, B, and C are now sets. If we’re trying to prove this statement about natural numbers, they’re finite sets. The theorem is actually true for arbitrary sets, so it doesn’t matter. And I’ve replaced a, b, and c by sets. Now I have this operation “times” and this operation “plus,” so I need to categorify those as well. I’m going to replace them by operations on sets. So what’s something you can do to two sets so that the cardinalities add, so that the sizes add?

KK: Disjoint union.

EL: Yeah, you could union them.

ER: So disjoint union is going to be my interpretation of the symbol plus. And we also need an interpretation of times, so what can I do for sets to multiply the cardinalities?

EL: Take the product, or pairs of elements in each set.

ER: That’s right. Absolutely. So we have the cartesian product of sets and the disjoint union of sets. The statement is now: for any sets A, B, and C, if I take the disjoint union B+C and then form the cartesian product with A, that set is isomorphic to, has in particular the same number of elements as, the set that you’d get by first forming the products AxB and AxC and then taking the disjoint union.

KK: Okay.

ER: The disjoint union here is one of these colimits, one of these direct limits. When you stick two things next to each other — coproduct would be the categorical term — this is one of these colimits. The act of multiplying a set by a fixed set A is in fact a left adjoint, and I’ll make that a little clear as I make the argument.

EL: Okay.

ER: Okay. So let’s just try and begin. So the way I’m going to prove that A times (B+C) is (AxB) +(AxC) is actually using a Yoneda lemma-style proof because the Yoneda lemma comes up everywhere. We know that these sets are isomorphic by arguing that functions from them to another set X correspond. So if the sets have exactly the same functions to every other set, then they must be isomorphic. That’s the Yoneda lemma. Let’s now consider a function from the set A times the disjoint union of B+C to another set X. The first thing I can do with such a function is something called currying, or maybe uncurrying. (I never remember which way these go.) I have a function here of two variables. The domain is the set A times the disjoint union (B+C). So I can instead regard this as a function from the set (B+C), the disjoint union, into the set of functions from A to X.

KK: Yes.

ER: Rather than a function from A times (B+C) to X, I have one from (B+C) to functions from A to X. There I’ve just transposed across the adjunction. That was the adjunction bit. So now I have a function from the disjoint union B+C to the set of functions from A to X. Now, mapping out of a disjoint union just means a case analysis. To define a function like this, I have to define firstly a function from B to functions from A to X, and also a function from C to functions from A to X. So now a single function is given by these two functions. And if I look at the first piece, a function from B to functions from A to X, by this uncurrying thing, that’s equally just a function from A times B to X. Similarly on the C piece, my function from C to functions from A to X is just a function from A times C to X. So now I have a function from AxB to X and also a function from AxC to X, and those amalgamate to form a single function from the disjoint union (AxB)+(AxC) to X. So in summary, functions from A times the disjoint union (B+C) to X correspond in this way to functions from (AxB)+(AxC) to X, and therefore the sets A x (B+C) and (AxB)+(AxC) are isomorphic.
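That chain of bijections can be checked concretely on small finite sets. This Python sketch is an editorial addition; the example sets and the tagging convention for disjoint unions are arbitrary choices.

```python
# Check A x (B + C) ~ (A x B) + (A x C) on finite sets by exhibiting an
# explicit bijection. Disjoint unions are modeled by tagging elements with
# 'L' or 'R' so the union is disjoint even if the sets overlap.
A, B, C = {1, 2, 3}, {'x', 'y'}, {'p'}

def disjoint_union(S, T):
    return {('L', s) for s in S} | {('R', t) for t in T}

def product(S, T):
    return {(s, t) for s in S for t in T}

lhs = product(A, disjoint_union(B, C))               # A x (B + C)
rhs = disjoint_union(product(A, B), product(A, C))   # (A x B) + (A x C)

def forward(pair):
    a, (tag, bc) = pair          # distribute: push the union tag to the outside
    return (tag, (a, bc))

assert {forward(p) for p in lhs} == rhs   # a bijection, so |A|(|B|+|C|) = |A||B| + |A||C|
print(len(lhs), "=", len(rhs))            # 3*(2+1) = 3*2 + 3*1
```

The map `forward` is exactly the composite of the curry/uncurry and case-analysis steps Emily walks through, unwound into one formula.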

EL: And now I feel like I know a category theory proof.

ER: So what’s great about that proof is that it’s completely independent of the context. It’s all about the formal relationships between the mathematical objects. So if you want to interpret A, B, and C as vector spaces, plus as the direct sum, which you might as an example of a colimit, and times as the tensor product, I’ve just proven that the tensor product distributes over the direct sum, say for modules over a commutative ring. That’s a much more complicated setting, but the exact same argument goes through. And of course there are lots of other examples of limits and colimits. One thing that kind of mystified me as an undergraduate is that if you have a function between sets, the inverse image preserves both unions and intersections, whereas the direct image preserves only unions and not intersections. And there’s a reason for that. The inverse image is a functor between these poset categories of subsets, and it admits both left and right adjoints, so it preserves all limits and all colimits, both intersections and unions, whereas the direct image is only a left adjoint, so it only preserves the colimits.

KK: Right. So here’s the philosophical question. You didn’t want to get philosophical, but here it is anyway. Category theory in a lot of ways reminds me of the new math, where we had this idea that we were going to teach set theory to kindergarteners. Would it be the right way to teach mathematics? You mention all of these things that sort of drop out of this rather straightforward fact. So should we start there? Or should we develop this whole library? The example of tensor products distributing over direct sums, I mean, everybody’s seen a proof of that in Atiyah and Macdonald or whatever, and okay, fine, it works. But wouldn’t it be nice to just get out your sledgehammer and say, look, left adjoints preserve colimits. Boom!

ER: So I give little hints of category theory when I teach undergraduate point-set topology. So in Munkres, chapter 2 is constructing the product topology, constructing the quotient topology, constructing subspace topologies, and rather than treat these all as completely separate topics, I group all the limits together and group all the colimits together, and I present the features of the constructions. This is the coarsest topology so that such and such maps are continuous, this is the finest topology so that the dual maps are continuous. I don’t define limit or colimit. Too much of a digression. In teaching abstract algebra to undergraduates in an undergraduate course, I do say a little bit about categories. I guess I think it’s useful to precisely understand function composition before getting into technical arguments about group homomorphisms, and the first isomorphism theorem is essentially the same for groups and for rings and for modules, and if we’re going to see the same theorem over and over again, we should acknowledge that that’s what happens.

KK: Right.

ER: I think category theory is not hard. You can teach it on day one to undergraduates. But appreciating what it’s for takes some mathematical sophistication. I think it’s worth waiting.

EL: Yeah. You need to travel on the path a little while before bringing that in, seeing it from that point of view.

ER: The other thing to acknowledge is it’s not equally relevant to all mathematical disciplines. In algebraic geometry, you can’t even define the basic objects of study anymore without using categorical language, but that’s not true for PDEs.

KK: So another fun thing we like to do on this podcast is ask our guest to pair their theorem with something. So what have you chosen to pair this theorem with?

ER: Right. In honor of the way Evelyn and I almost met, I’ve chosen a piece that I’ve loved since I was in middle school. It’s Benjamin Britten’s Simple Symphony, his movement 3, which is the Sentimental Sarabande. The reason I love this piece, so Benjamin Britten is a British composer. I found out when I was looking this up this morning that he composed this when he was 20.

EL: Wow.

ER: The themes that he used, it’s pretty easy to understand. It isn’t dark, stormy classical music. The themes are relatively simple, and they’re things I think he wrote as a young teenager, which is insane to me. What I love about this piece is that it starts, it’s for string orchestra, so it’s a simple mix of different textures. It starts in this stormy, dramatic, unified fashion where the violins are carrying the main theme, and the cellos are echoing it in a much deeper register. And when I played this in an orchestra, I was in the viola section, I think I was 13 or so, and the violas sort of never get good parts. I think the violists in the orchestra are sort of like category theory in mathematics. If you take away the viola section, it’s not like a main theme will disappear, but all of a sudden the orchestra sounds horrible, and you’re not sure why. What’s missing? And then very occasionally, the clouds part, and the violas do get to play a more prominent role. And that’s exactly what happens in this movement. A few minutes in, it gets quiet, and then all of a sudden there’s this beautiful viola soli, which means the entire viola section gets to play this theme while the rest of the orchestra bows out. It’s this really lovely moment. The violas will all play way too loud because we’re so excited. [music clip] Then of course, 16 bars later, the violins take the theme away. The violins get everything.

EL: Yeah, I mean it’s always short-lived when we have that moment of glory.

ER: I still remember, I haven’t played this in an orchestra for 20 years now, but I still remember it like it was yesterday.

EL: Yeah, well I listened to this after you shared it with us over email, and I turned it on and then did something else, and the moment that happened, I said, oh, this is the part she was talking about!

KK: We’ll be sure to highlight that part.

EL: I must say, the comparison of category theory to violists is the single best way to get me to want to know more about category theory. I don’t know how effective it is for other people, but you hooked me for sure.

KK: We also like to give our guests a chance to plug whatever they’re doing. When did your book come out? Pretty recently, a year or two ago?

EL: You’ve got two of them, right?

ER: I do. My new book is called Category Theory in Context, and the intended audience is mathematicians in other disciplines. So you know you like mathematics. Why might category theory be relevant? Actually, in the context of my favorite theorem, the proof that right adjoints preserve limits is actually the watermark on the book.

KK: Oh, nice.

ER: I had nothing to do with that. Whoever the graphic designer is, like you said, the diagrams are very pretty. They pulled them out, and that’s the watermark. It’s something I’ve taught at the advanced undergraduate or beginning graduate level. It was a lot of fun to write. Something interesting about the writing process is I wanted a category theory book that was really rich with compelling examples of the ideas, so I emailed the category theory mailing list, I posted on a category theory blog, and I just got all these wonderful suggestions from colleagues. For instance, row reduction, the fact that the elementary row operations can be implemented by multiplication by an elementary matrix, and then you take the identity matrix and perform the row operations on that matrix, that’s the Yoneda lemma.
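The row-reduction fact Riehl mentions is easy to check numerically. A minimal sketch (the matrix and the particular row operation here are just illustrative choices, not from her book): performing a row operation on the identity matrix produces an elementary matrix E, and left-multiplying by E performs that same operation on any matrix.

```python
import numpy as np

M = np.array([[2., 4., 1.],
              [1., 3., 0.],
              [0., 5., 2.]])

# An elementary row operation: add 3 times row 1 to row 0.
def row_op(X):
    Y = X.copy()
    Y[0] += 3 * Y[1]
    return Y

# Apply the operation to the identity to get the elementary matrix E.
E = row_op(np.eye(3))

# Left-multiplying by E performs the same row operation on M.
assert np.allclose(E @ M, row_op(M))
```

This is the observation that (via the Yoneda lemma, in Riehl's telling) knowing how the operation acts on the identity determines how it acts everywhere.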

KK: Wow, okay.

ER: A colleague friend told me about that example, so it’s really a kind of community effort in some sense.

KK: Very cool. And our regular listeners also found out on a previous episode that you’re also an elite athlete. Why don’t you tell us about that a little bit?

ER: So I think I already mentioned the Center of Australian Category Theory. So there’s this really famous category theory group based in Sydney, Australia, and when I was a Ph.D. student, I went for a few months to visit Dominic Verity, who’s now my main research collaborator. It was really an eventful trip. I had been a rugby player in college, so then when I was in Sydney, I thought it might be fun to try this thing called Australian rules football, which I’d heard about as another contact sport, and I just completely fell in love. It’s a beautiful game, in my opinion. So then I came back to the US and looked up Australian rules football because I wanted to keep playing, and it does exist here. It’s pretty obscure. I guess a consequence of that is I was able to play on the US women’s national team. I’ve been doing that for the past seven years, and what’s great about that is occasionally we play tournaments in Australia, so whenever that happens, I get to visit my research colleagues in Sydney, and then go down to Melbourne, which is really the center of footie, and combine these two passions.

EL: We were talking about this with John Urschel, who of course plays American rules football, or recently retired. This is one time when I wish we had a video feed, because when we, two mathematicians who have only sort of seen this on a TV in a bar, tried to explain what Australian rules football is, he had this look of bewilderment.

KK: Yeah, I was explaining that the pitch is a big oval and there are these big posts on the ends, and he was like, wait a minute.

EL: His face was priceless there.

KK: It was good. I used to love watching it. I used to watch it in the early days of ESPN. I thought it was just a fun game to watch. Well, Emily, this has been fun. Thanks for joining us.

ER: Thanks for having me. I’ve loved listening to the past episodes, and I can’t wait to see what’s in the pipeline.

KK: Neither can we. I think we’re still figuring it out. But we’re having a good time, too. Thanks again, Emily.

EL: All right, bye.

ER: Bye.

[end stuff]

Episode 18 - John Urschel

Kevin Knudson: Welcome to My Favorite Theorem. I’m your host Kevin Knudson, professor of mathematics at the University of Florida. I’m joined by your cohost.

Evelyn Lamb: Hi, I’m Evelyn Lamb. I’m a math and science writer in Salt Lake City, Utah, where it is very cold now, and so I’m very jealous of Kevin living in Florida.

KK: It’s a dreary day here today. It’s raining and it’s “cold.” Our listeners can’t see me doing the air quotes. It’s only about 60 degrees and rainy. It’s actually kind of lousy. but it’s our department holiday party today, and I have my festive candy cane tie on, and I’m good to go. And I’m super excited.

John Urschel: So I haven’t been introduced yet, but can I jump in on this weather conversation? I’m in Cambridge right now, and I must say, I think it’s probably nicer in Cambridge, Massachusetts than it is in Utah right now. It’s a nice breezy day, high 40s, low 50s, put on a little sweater and you’re good to go.

EL: Yeah, I’m jealous of both of you.

KK: Evelyn, I don’t know about you, but I’m super excited about this one. I mean, I’m always excited to do these, but it’s the rare day you get to talk to a professional athlete about math. This is really very cool. So our guest on this episode is John Urschel. John, do you want to tell everyone about yourself?

JU: Yes, I’d be happy to. I think I might actually be the only person, the only professional athlete you can ask about high-level math.

KK: That might be true. Emily Riehl, Emily Riehl counts, right?

EL: Yeah.

KK: She’s a category theorist at Johns Hopkins. She’s on the US women’s Australian rules football team.

EL: Yeah.

JU: Australian rules football? You mean rugby?

KK: Australian rules football is like rugby, but it’s a little different. See, you guys aren’t old enough. I’m old enough to remember ESPN in the early days when they didn’t have the high-end contracts, they’d show things like Australian rules football. It’s fascinating. It’s kind of like rugby, but not really at the same time. It’s very weird.

JU: What are the main differences?

EL: You punch the ball sometimes.

KK: They don’t have a scrum, but they have this thing where they bounce the ball really hard. (We should get Emily on here.) They bounce the ball up in the air, and they jump up to get it. You can run with it, and you can sort of punch the ball underhanded, and you can kick it through these three posts on either end [Editor's note: there are 4 poles on either end.]. It’s sort of this big oval-shaped field, and there are three poles at either end, and you try to kick it. If you get it through the middle pair, that’s a goal. If you get it on either of the sides, that’s called a “behind.” The referees wear a coat and tie and a little hat. I used to love watching it.

JU: Wait, you say the field is an oval shape?

KK: It’s like an oval pitch, yeah.

JU: Interesting.

KK: Yeah. You should look this up. It’s very cool. It is a bit like rugby in that there are no pads, and they’re wearing shorts and all of that.

JU: And it’s a very continuous game like rugby?

KK: Yes, very fast. It’s great.

JU: Gotcha.

KK: Anyway, that’s enough of us. You didn’t tell us about yourself.

JU: Oh yeah. My name is John Urschel. I’m a retired NFL offensive lineman. I played for the Baltimore Ravens. I’m also a mathematician. I am getting my Ph.D. in applied math at MIT.

KK: Good for you.

EL: Yeah.

KK: Do you miss the NFL? I don’t want to belabor the football thing, but do you miss playing in the NFL?

JU: No, not really. I really loved playing in the NFL, and it was a really amazing experience to be an elite, elite at whatever sport you love, but at the same time I’m very happy to be focusing on math full-time, focusing on my Ph.D. I’m in my third year right now, and being able to sort of devote more time to this passion of mine, which is ideally going to be my lifelong career.

EL: Right. Yeah, so not to be creepy, but I have followed your career and the writing you’ve done and stuff like that, and it’s been really cool to see what you’ve written about combining being an athlete with being a mathematician and how you’ve changed your focus as you’ve left playing in the NFL and moved to doing this full-time. It’s very neat.

KK: So, John, what’s your favorite theorem?

JU: Yes, so I guess this is the name of the podcast?

KK: Yeah.

JU: So I should probably give you a theorem. So my favorite theorem is a theorem by Batson, Spielman, and Srivastava.

EL: No, I don’t know that one. Please educate us.

JU: Good! So this is perfect because I’m about to introduce you to my mathematical idol.

KK: Okay, great.

JU: Pretty much the person I think is the most amazing applied mathematician of this generation, Dan Spielman at Yale. Dan Spielman got his Ph.D. at MIT. He was advised by Mike Sipser, and he was a professor at MIT and eventually moved to Yale. He’s done amazing work in a number of fields, but this paper, it’s a very elegant paper in applied math that doesn’t really have direct algorithmic applications but has some elegance. The formulation is as follows. Suppose you have some graph, vertices and edges. What I want to tell you is that there exists some other weighted graph with at most a constant times the number of vertices many edges, so a number of edges that is linear in the number of vertices, whose Laplacian approximates the Laplacian of this original very dense graph, no matter how dense it is.

So I’m doing not the very best job of explaining this, but let me put it like this. You have a graph. It’s very dense. You have this elliptic operator on this graph, and there’s somehow some way to find a graph that’s not dense at all, but extremely, extremely sparse, but somehow with the exact, well not exact, but nearly the exact same properties. These operators are very, very close.

KK: Can you remind our listeners what the Laplacian is?

JU: Yeah, so the graph Laplacian, the way I like to introduce it, especially for people not in graph theory type things, is that you can define a gradient on a graph. You take every edge, directed in some way, and you can think of the gradient as being a discrete derivative along the edge. And now, just as in the continuous case, you combine this gradient with its transpose, and that’s how you get your graph Laplacian.
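Urschel's gradient construction can be sketched in a few lines. A small illustrative example (the graph here is arbitrary): build the "gradient" as a signed incidence matrix B with one row per directed edge, and the Laplacian is then B transposed times B, which coincides with the familiar degree-minus-adjacency definition.

```python
import numpy as np

# A small graph: 4 vertices, 5 edges with arbitrary orientations.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]

# The "gradient" (signed incidence) matrix: one row per directed edge,
# acting on a vertex function f as the discrete derivative f(v) - f(u).
B = np.zeros((len(edges), n))
for k, (u, v) in enumerate(edges):
    B[k, u] = -1.0
    B[k, v] = 1.0

# The graph Laplacian, as in the continuous case: gradient composed
# with its transpose.
L = B.T @ B

# Sanity check against the usual degree-minus-adjacency definition.
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
D = np.diag(A.sum(axis=1))
assert np.allclose(L, D - A)
```

Note that the orientation chosen for each edge drops out of L, which is why the Laplacian is well defined for an undirected graph.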

KK: This theorem, so the problem is that dense graphs are kind of hard to work with because, well, they’re dense?

EL: So can I jump in? Dense meaning a lot of edges, I assume?

JU: Lots of edges, as many edges as you want.

KK: So a high degree on every vertex.

JU: Lots of edges, edges going everywhere.

EL: And then with the weighting, that might also mean something like, not that many total edges, but they have a high weight? Does that also make it dense, or is that a different property?

JU: No, in that case, we wouldn’t really consider it very dense.

KK: But the new graph you construct is weighted?

JU: And the old graph can be weighted as well.

KK: All right. What do the weights tell you?

JU: What do you mean?

KK: On the new graph. You generate this new graph that’s more sparse, but it’s weighted. Why do you want the weights? What do the weights get you?

JU: The benefit of the weights is that they give you additional leeway in how you scale things, because the weights enter the Laplacian: for a weighted graph, the Laplacian at a node is, roughly, the difference between the node’s value and the weighted average of its neighbors’ values, and the weights tell you how much each edge counts for. In that way, it allows you greater leeway. If you weren’t able to weight this very sparse graph, this wouldn’t work very well at all.

KK: Right, because like you said, you think of sort of having a gradient on your graph, so this new graph should somehow have the same kind of dynamics as your original.

JU: Exactly. And the really interesting thing is that you can capture these dynamics. Not only can you capture them, but you can capture them with a linear number of edges, linear in the order of the graph.

KK: Right.

JU: So Dan Spielman is famous for many things. One of the things he’s famous for is he was one of the first people to give provable guarantees for algorithms that can solve a Laplacian system of equations in near-linear time, so O(n) plus some logs. From his work there have been many, many different sorts of improvements, and this one is extremely interesting to me because you only use a linear number of edges, which means that once you have this sparse graph, applying it should be extremely efficient. And that’s exactly what you want: because it’s a linear number of edges, you apply it via some iterative algorithm, you can use this guy as a sort of preconditioner, and things get very nice. The issue is, I believe, and it has been a little while since I’ve read the paper, that the amount of time it takes to find this graph is cubic.

EL: Okay.

JU: So it’s not a sort of paper where it’s extremely useful algorithmically, I would say, but it is a paper that is very beautiful from a mathematical perspective.
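The deterministic Batson-Spielman-Srivastava construction is intricate, but the meaning of "spectral approximation" can be illustrated with the simpler randomized approach of sampling edges by effective resistance (Spielman-Srivastava). A rough numerical sketch, not the BSS algorithm itself, using a complete graph as the dense input:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplacian(n, edges, weights):
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

# Dense input: the complete graph on n vertices.
n = 40
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
weights = [1.0] * len(edges)
L = laplacian(n, edges, weights)

# Sample edges with probability proportional to weight times effective
# resistance (computed from the pseudoinverse), then reweight each kept
# edge so the sparse Laplacian is an unbiased estimate of the dense one.
P = np.linalg.pinv(L)
probs = np.array([w * (P[u, u] + P[v, v] - 2 * P[u, v])
                  for (u, v), w in zip(edges, weights)])
probs /= probs.sum()
m = 8 * n  # a linear number of samples
counts = rng.multinomial(m, probs)
kept = [(e, weights[i] * c / (m * probs[i]))
        for i, (e, c) in enumerate(zip(edges, counts)) if c > 0]
H = laplacian(n, [e for e, _ in kept], [w for _, w in kept])

# The sparse graph has far fewer edges but nearly the same quadratic
# forms x^T L x, which is the sense of approximation in the theorem.
x = rng.standard_normal(n)
x -= x.mean()  # project off the all-ones kernel of the Laplacian
ratio = (x @ H @ x) / (x @ L @ x)
```

The randomized version needs roughly n log n samples to get tight guarantees; BSS is the stronger result that a linear number of edges suffices, deterministically.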

KK: Has the algorithm been improved? Has somebody found a better than cubic way to generate this thing?

JU: Don’t quote me on that, I do not know, but I think that no one has found a good way yet. And by good I mean good enough to make it algorithmically useful. For instance, if the amount of time it takes to find this thing is quadratic, or even maybe n to the 1.5 or something like that, this is already not useful for anything greater than near-linear. It’s a very interesting thing, and it’s something that really spoke to me, and I really just fell in love with it, and I think what I like about it most is that it’s in a very applied area, applied mathematics and theoretical computer science type things, but it is very theoretical and very elegant. Though I am an applied mathematician, I do like very clean things. I do like very nice-looking things. And perhaps I can be a bad applied mathematician because I don’t always care about applications, which kind of makes you a bad applied mathematician, but in all my papers I’m not sure I’ve ever really, really cared about the applications, in the sense that if I see a very interesting problem that someone brings to me, and it happens to have an application, like some of the things I’ve gotten to do in machine learning, great, that’s the cherry on top, but that isn’t the motivating thing. If it’s an amazing application but some ugly, ugly thing, I’m not touching it.

EL: Well, before we actually started recording, we talked a little bit about how there are different flavors of applied math. There are ones that are more on the theoretical side, and probably people who do a lot of things with theoretical computer science would tend towards that more, and then there are the people who are actually looking at a biological system and solving differential equations or something like this, where they’re really getting their hands dirty. It sounds like you’re more interested in the theoretical side of applied math.

JU: Yeah.

KK: Applied math needs good theory, though.

JU: That’s just true.

KK: You’ve got to develop good theory so that you know your algorithms work, and you want them to be efficient and all that, but if you can’t prove that they actually work, then you’re a physicist.

JU: There’s nothing I hate more than heuristics. But heuristics do have a place in this world. They’re an important thing, but there’s nothing I dislike more in this world than doing things with heuristics without being able to give any guarantees.

EL: So where did you first encounter this theorem? Was it in the research you’ve been doing, the study you’ve been doing for your Ph.D.?

JU: Yes, I did encounter this, I think it was when I was preparing for my qualifying exams. I was reading a number of different things on so-called spectral graph theory, which is this whole field of, you have a graph and some sort of elliptic operator on it, and this paper obviously falls under this category. I saw a lecture on it, and I was just fascinated. You know it’s a very nice result when you hear about it and you’re almost in disbelief.

KK: Right.

JU: I heard about it and I thought I didn’t quite hear the formulation correctly, but in fact I did.

KK: And I seem to remember reading in Sports Illustrated — that’s an odd sentence to say — that you were working on some version of the traveling salesman problem.

JU: That is true. But I would say,

KK: That’s hard.

JU: Just because I’m working on the asymmetric traveling salesman problem does not mean you should be holding your breath for me to produce something on the traveling salesman problem. This is an interesting thing because I am getting my Ph.D., and you do want, you want to try to find a research project where yes, it’s tough and it’s challenging you, but at the end of your four or five years you have something to show for it.

KK: Right. Is this version of the problem NP-hard?

JU: Yes, it is. But for this version, there isn’t any sort of inapproximability result as there is for some of the other versions of TSP. But my advisor Michel Goemans, who, for the record, I’m convinced is the single best advisor in the world, like he is amazing, amazing. He has a strong background in combinatorial optimization, which is the idea that you have some set of discrete objects and you need to pick your best option, when the number of choices you have is often not polynomial in the size of your input. But you need to pick the best option in some reasonable amount of time, perhaps polynomial.

EL: Yeah, so are these results that will say something like, we know we can get within 3 percent of the optimal…

JU: Exactly. These sorts of things are called approximation algorithms. If it runs in polynomial time and you can guarantee it’s within, say, a constant factor of the optimal solution, then you have a constant-factor approximation algorithm. We’ve been reading up on some of the more recent breakthroughs on ATSP. There was a breakthrough this August: someone proved the first constant-factor approximation algorithm for the asymmetric traveling salesman problem. And Michel Goemans, who is also the head of the math department at MIT, had the previous best result, an O(log n/log log n) approximation algorithm from maybe 2008 or 2009, but don’t quote me on this. Late 2000s. So this is something we’ve been reading about and thinking about.
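A classic example of the kind of guarantee Urschel describes, for the symmetric, metric version of TSP rather than the ATSP work discussed here, is the minimum-spanning-tree 2-approximation: a preorder walk of an MST, with repeated vertices shortcut, costs at most twice the optimal tour by the triangle inequality. An illustrative sketch on a small planar point set:

```python
import itertools
import math

# A metric TSP instance: points in the plane, Euclidean distances.
pts = [(0, 0), (2, 1), (5, 0), (4, 4), (1, 3), (3, 2), (6, 3)]
n = len(pts)
d = lambda i, j: math.dist(pts[i], pts[j])

# Prim's algorithm for a minimum spanning tree.
in_tree, tree = {0}, {i: [] for i in range(n)}
while len(in_tree) < n:
    i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
               key=lambda e: d(*e))
    tree[i].append(j); tree[j].append(i)
    in_tree.add(j)

# Preorder walk of the MST; skipping already-visited vertices gives a
# tour costing at most 2 * OPT by the triangle inequality.
tour, seen = [], set()
def walk(u):
    tour.append(u); seen.add(u)
    for v in tree[u]:
        if v not in seen:
            walk(v)
walk(0)

def length(t):
    return sum(d(t[k], t[(k + 1) % n]) for k in range(n))

# Brute-force optimum is feasible at this size, to check the guarantee.
opt = min(length((0,) + p) for p in itertools.permutations(range(1, n)))
assert length(tour) <= 2 * opt
```

The guarantee comes from the fact that the optimal tour minus one edge is a spanning tree, so the MST costs at most OPT, and doubling plus shortcutting at most doubles that.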

EL: Trying to chip away a little bit at that.

JU: Exactly. It’s interesting because this constant approximation algorithm that came out, it used this approach that, I think Michele won’t mind me saying this, it used an approach that Michele didn’t think was the right way to go about it, and so it’s very interesting. There are different ways to construct an approximation algorithm. At its core, you have something you’re trying to solve, and this thing is hard, but now you have to ask yourself, what makes it hard? Then you need to sort of take one of the things that makes it hard and you need to loosen that. And his approach in his previous paper was quite different than their approach, so it’s interesting.

KK: So the other thing we like to do on this show is to ask our guest to pair their theorem with something. So what have you chosen to pair your theorem with?

JU: I still haven’t fully thought about this, but you’ve put me on the spot, and so I’m going to say this: I would pair this with, I think this is a thing, Miller 64. That’s a thing, right?

KK: This is a beer?

JU: Yeah, the beer.

KK: It’s a super low-calorie beer?

JU: It’s a beer, and they advertise it on TV.

KK: I see, it’s very sparse.

JU: People weightlifting, people running, and then drinking a 64-calorie beer. It’s the beer for athletes.

EL: Okay.

JU: I think it’s a very, very good beer because it at least claims to taste like a beer, be very much like a beer, and yet be very sparse.

EL: Okay, so it’s, yeah, I guess I don’t know a good name for this kind of graph, but it’s this graph of beers.

JU: Yes, it’s like, these things are called spectral sparsifiers.

EL: Okay, it’s the spectral sparsifier of beers.

KK: That’s it.

EL: So they’ve used the “Champagne of beers” slogan before, but I really think they should switch to the “spectral sparsifier of beers.” That’s a free idea, by the way, Miller, you can just take that.

JU: Hold on.

KK: John’s all about the endorsements, right?

JU: Let’s not start giving things away for free now.

KK: John has representation.

EL: That’s true.

JU: We will give this to you guys, but you need to sponsor the podcast. This needs to be done.

EL: Okay. I’m sure if they try to expand their market share of mathematicians, this will be the first podcast they come to.

KK: That’s right. So hey, do you want to talk some smack? Were you actually the smartest athlete in the NFL?

JU: I am not the person to ask about that.

KK: I knew you would defer.

JU: Trust me, I’ve gone through many, many hours of media training. You need something a little more high-level to catch me than that.

KK: I’m sure. You know, I wasn’t really trying to catch you. You know, Aaron Rodgers looked good on Jeopardy. I don’t know if you saw him on Celebrity Jeopardy a couple years ago.

JU: No.

KK: He won his game. My mother—sorry—was a huge Packers fan. She grew up near Green Bay, and she loved Aaron Rodgers, and I think she recorded that episode of Jeopardy and watched it all the time.

JU: I was invited to go on Family Feud once, the celebrity Family Feud.

KK: Yeah?

JU: But I don’t know why, but I wasn’t really about that life. I wasn’t really into it.

KK: You didn’t want Steve Harvey making fun of you?

JU: Also, I’m not sure I’m great at guessing what people think.

EL: Yeah.

JU: That’s not one of my talents.

EL: Finger isn’t on the pulse of America?

JU: No, my finger is not on the pulse. What do people, what’s people’s favorite, I can’t even think of a question.

EL: Yeah.

KK: Well, John, this has been great. Thanks for joining us.

JU: Thanks for having me. I can say this with certainty, this is my second favorite podcast I have ever done.

KK: Okay. We’ll take that. We won’t even put you on the spot and ask you what the favorite was. We won’t even ask.

JU: When I started the sentence, I was going to say favorite, and then I remembered that one other. I’ve done many podcasts, and this is one of my favorites. It’s a fascinating idea, and I think my favorite thing about the podcast is that the audience is really the people I really like.

KK: Thanks, John.

EL: Thanks for being here.

[end stuff]

Episode 17 - Nalini Joshi

Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m your cohost Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other cohost.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. I’m looking forward to this because of the time zone issue here. This is taking place on two different days.

EL: Yes, yes, we are delighted to be joined by Nalini Joshi, who is joining us from tomorrow in Australia, which we’re getting a kick out of because we’re easily amused.

KK: That’s right.

EL: Hi, Nalini. Would you like to tell us a little bit about yourself?

Nalini Joshi: Sure. My name is Nalini Joshi. I’m a professor of applied mathematics at the University of Sydney. What else can I say except I’m broadcasting from the future? I was born in Burma and moved to Australia as a child when my parents emigrated, and most of my education has been in Australia, except for going to the U.S. to do a Ph.D., which I did at Princeton.

EL: Okay, so you’ve spent some time in both hemispheres, at multiple times in your life.

NJ: Yeah.

EL: So when I was a little kid, I had this idea that the world could never end because, you know, from the U.S., there’s always someone who’s a full day ahead. So I knew that Thursday would have to happen: if it was Wednesday where I was, someone was already living in Thursday, so the world could never end.

NJ: That’s such a deep insight. That’s wonderful.

KK: That’s pretty good.

EL: Well…

KK: I was watching football when I was a kid.

NJ: I used to hang out at the back of the school library reading through all the old Scientific American magazines. If only they had columns like yours, Evelyn. Fantastic. I really, really wanted to work out what was happening in the universe, and so I thought about time travel and space travel a lot as a teenager.

EL: Oh. So did you start your career wanting to maybe go more into physics, or did you always know you wanted to be a mathematician?

NJ: No, I really wanted to become an astrophysicist, because I thought that was the way, surely, to understand space travel. I wanted to be an astronaut, actually. I went to an all-girls school for the first half of my school years, and I still remember going to see the careers counselor and telling her I wanted to be an astronaut. She looked at me and she said, you have to be more realistic, dear. There was no way that somebody like me could ever aspire to it. And nowadays it’s normal almost. People from all different countries around the world become astronauts. But at the time I had to think about something else, and I thought, okay, I’m going to become a scientist, explore things through my own mind, and that was one way I could explore the universe. So I wanted to do physics when I came to university. I studied at the University of Sydney as an undergraduate. When I got to first-year physics, I realized my other big problem, which is that I have no physical intuition. So I thought, I really needed to understand things from a really explicit, literal, logical, analytical point of view, and that’s how I came to know I must be more of a mathematician.

EL: Okay.

KK: I have the same problem. I was always going to be a math major, but I thought I might pick up a second major in physics, and then I walked into this junior-level relativity class, and I just couldn’t do it. I couldn’t wrap my head around it at all. I dropped it and took logic instead. I was much happier.

NJ: Yeah. Oh good.

EL: So we invited you on to find out what your favorite theorem is.

NJ: Yes. Well that was a very difficult thing to do. It was like choosing my favorite child, which I would never do. But I finally decided I would choose Mittag-Leffler’s theorem because that was something that really I was blown away by when I started reading more about complex analysis as a student. I mean, we all learnt the basics of complex analysis, which is beautiful in itself. But then when you went a little bit further, so I started reading, for example, the book by Lars Ahlfors, which I still have, called Complex Analysis.

KK: Still in use.

EL: That’s a great one.

NJ: Which was first I think published in 1953. I had the 1979 version. I saw that there were so many powerful things you could do with complex analysis. And the Mittag-Leffler theorem was one of the first ones that gave me that perspective. The main thing I loved about it is that you were taking what was a local, small piece of information, around, for example, poles of a function. So we’re talking about meromorphic functions here, that’s the subject of the theorem.

EL: Can we maybe set the stage a little bit? So what is a meromorphic function?

NJ: A meromorphic function is a function that’s analytic except at isolated points, which are poles. The worst singularities it has are poles.

EL: So these are places where the function explodes, but otherwise it’s very smooth and friendly.

KK: And it explodes in a controlled way, it’s like 1/z^n for some finite n kind of thing.

NJ: Exactly. Right. An integer, positive n. When I try to explain this kind of thing to people who are not mathematicians, I say it’s like walking around in a landscape with volcanoes. Well-timed, well-controlled, well-spaced volcanoes. You’re walking in the landscape of just the Earth, say, walking around these places. There are well-defined pathways for you to move along by analytic continuation. You know ahead of time how strong the volcano’s eruption is going to be. You can observe it from a little distance away if you like because there is no danger because you can skirt all of these volcanoes.

KK: That’s a really good metaphor. I’m going to start using that. I teach complex variables in the summer. I’m going to start using that. That’s good.

NJ: So a meromorphic function, as I say, is a function that gives you a pathway and the elevation, the smoothness of your path in this landscape. And its poles are where the volcanoes are.

EL: So Mittag-Leffler’s theorem, then, is about controlling exactly where those poles are?

NJ: Not quite. It’s the other way around. If you give me information about locations of poles and how strong they are, the most singular part of that pole, then I can reconstruct a function that has poles exactly at those points and with exactly those strengths. That’s what the theorem tells you. And what you need is just a sequence of points and that information about the strength of the poles, and you need potentially an infinite number of these poles. There’s one other condition, that the sequence of these poles has a limit at infinity.

KK: Okay, so they don’t cluster, in other words.

NJ: Exactly. They don’t coalesce anywhere. They don’t have a limit point in the finite plane. Their limit point is at infinity.

EL: But there could be an infinite number of these poles if they’re isolated, on integer lattice points in the complex plane or something like that.

NJ: Right, for example.

KK: That’s pretty remarkable.

NJ: If you take your standard trigonometric functions, like the sine function or the cosine function, you know it has periodically spaced zeroes. You take the reciprocal of that function, then you’ve got periodically placed poles, and it’s a meromorphic function, and you can work out which trig function it is by knowing those poles. It’s powerful in the sense that you can reconstruct the function everywhere not just at the precise points which are poles. You can work out that function anywhere in between the poles by using this theorem.
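The trig example Joshi describes can be made concrete. The Mittag-Leffler expansion of π²/sin²(πz), the reciprocal of sin squared, suitably scaled, is the sum of its singular parts 1/(z-n)² over all integer poles n; the function really is just the sum of the data at its poles. A quick numerical check with a truncated sum:

```python
import math

z = 0.3  # any non-integer point
f = (math.pi / math.sin(math.pi * z)) ** 2

# Sum of the singular parts 1/(z - n)^2 over the poles n = -N..N.
# Each pole is a double pole of strength 1, and the poles have their
# only limit point at infinity, as the theorem requires.
N = 2000
s = sum(1.0 / (z - k) ** 2 for k in range(-N, N + 1))

# The discarded tail is about 2/N, so the truncated sum is very close.
assert abs(f - s) < 1e-2
```

This case is special in that the bare sum of singular parts already converges; in general the theorem allows subtracting polynomial correction terms from each summand to force convergence.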

KK: That’s really remarkable. That’s the surprising part, right?

NJ: Exactly.

KK: If you knew you had a finite number of poles, you could sort of imagine that you could kind of locally construct the function and glue it together, that wouldn’t be a problem. But the fact that you can do this for infinitely many is really pretty remarkable.

NJ: Right. It’s like going from local information that you might have in one little patch of time or one little patch of space and working out what happens everywhere in the universe by knowing those little local patches. It’s the local to global information I find so intriguing, so powerful. And then it struck me that this information is given in the form of a sum of those singular parts. So the function is reconstructed as a series, as an infinite sum of the singular parts of the information you’re given around each pole. That’s a very simple way of defining the function, just taking the sum of all these singular things.

KK: Right.

EL: Yeah, I love complex analysis. It’s just full of all of these things where you can take such a small amount of local information and suddenly know what has to be happening everywhere. It’s so wonderful.

NJ: Right, right. Those two elements, the local to global and the fact that you have information coming from a discrete set of points to give you continuous smooth information everywhere in between, those two elements, I realized much later, feature in a lot of the research that I do. So I was already primed to look for that kind of information in my later work.

EL: Yeah, so I was going to ask, I was wondering how this came up for you, maybe not the Mittag-Leffler theorem specifically, but using complex analysis in your work as an applied mathematician.

NJ: Right. So what I do is build toolboxes of methods. So I’m an applied mathematician in the sense that I want to make usable tools. So I study asymptotics of functions, I study how you define functions globally, functions that turn out to be useful in various mathematical physics contexts. I’m more of a theoretical applied mathematician, if you like, or I often say to people I’m actually a mathematician without an adjective.

KK: Right. Yeah.

NJ: You know that there is kind of a hierarchy of numbers in the number system. We start with the counting numbers, and we can add and subtract them. Subtraction leads you to negative integers. Multiplication and division lead you to rational numbers, and then solving polynomial equations leads you to algebraic numbers. Each time you’re building a higher type of number. Beyond all of those are numbers like π and e, which are transcendental numbers, in the sense that they can’t be constructed in terms of a finite number of operations from these earlier known operations and earlier known objects.

So alongside that hierarchy of numbers there’s a very, very closely related hierarchy of functions. Integers correspond to polynomials. Square roots and so on correspond to algebraic functions. And then there are transcendental functions, the exponential function of x being one of them. A lot of the territory of transcendental functions is occupied by functions which are defined by differential equations.

I started off by studying differential equations and the corresponding functions that they define. So even when you’re looking at linear differential equations, you get very complicated transcendental functions, the exponential being one of them. I study functions that are even more highly transcendental, in the sense that they solve nonlinear equations, and they are like π in the sense that these functions turn out to be universal models in many different contexts, particularly in random matrix theory, where you might be, for example, trying to work out the statistics of how fundamental particles interact when you fire them around the huge loop of the CERN collider. You do that by looking at distributions of entries in infinitely large matrices where the entries are random variables.

Now, under certain symmetries, you might have particles with properties that allow these random matrices to be orthogonal matrices, or Hermitian matrices, or some other kind of matrices. So when you study these ensembles of matrices with these symmetry properties, and you study properties like what their largest eigenvalue is, then you get a probability distribution function which happens to be, by some miracle, one of those functions I’ve studied. There’s a kind of miraculous bridge there that nobody really knows the reason for.

Then there’s another miraculous thing, which is that these models, using random matrices, happen to be valid not just for particle physics but also if you’re studying bus arrival times in Cuernavaca, or aircraft boarding times, or patience sorting of cards. All kinds of things are universally described by these models and therefore these functions. So first of all, these functions have this property: they’re locally defined by initial value problems given for the differential equation.

KK: Right.

NJ: But then they have these amazing properties which allow them to be globally defined in the complex plane. So even though we didn’t have the technology to describe these functions explicitly, not like I could say, take 1 over the sine function, that gives you a meromorphic function, whose formulae I could write down, whose picture I could draw, these functions are so transcendental that you can’t do that very easily, but I study their global properties that make them more predictable wherever you go in the complex plane. So the Mittag-Leffler theorem sort of sets up the baseline. I could just write them as the sum of their poles. And that’s just so powerful to me. There are so many facets of this. I could go on and on.

There is another direction I wanted to insert into our conversation, which is that the next natural level, when you go beyond things like trigonometric functions and their reciprocals, is to take functions that are doubly periodic. Trigonometric functions have one period; if you take double periodicity in the complex plane, then you get elliptic functions, right? These also have sums of their poles as an expression for them. Now take any one of these functions. They turn out to be functions that parametrize very nice curves, cubic curves, for example, in two dimensions. And so the whole picture shifts from an analytic one to an algebraic geometric one. There are two sides to the same function. You have meromorphic functions on one side, and differential equations, and on the other side you have algebraic functions and curves, and algebraic properties and geometric properties of these curves, and they give you information about the functions on the other side of that perspective. So that’s what I’ve been doing for the last ten years or so, trying to understand the converse side so I can get more information about those functions.
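[ed. note: the doubly periodic case NJ describes is classically realized by the Weierstrass ℘-function attached to a period lattice Λ; it too is assembled from its poles, with one double pole at each lattice point, and the subtracted 1/ω² terms are there to make the sum converge:]

```latex
\wp(z) = \frac{1}{z^2}
  + \sum_{\omega \in \Lambda \setminus \{0\}}
    \left( \frac{1}{(z-\omega)^2} - \frac{1}{\omega^2} \right)
```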

EL: Yeah, so using the algebraic world,

NJ: Exactly, the algebro-geometric world. This was a huge challenge at the beginning, because as I said, I was educated as an applied mathematician, and that means primarily the analytic point of view. But to try and marry that to the algebraic point of view is something that turned out to be a hurdle at the beginning, but once you get past that, it’s so freeing and so beautiful and so strikingly informative that I’m now saying to people, all applied mathematicians should be learning algebraic geometry.

KK: And I would say the converse is true. I think the algebraic geometers should probably learn some applied math, right?

NJ: True, that too. There are so many different perspectives here. It all started for me with the Mittag-Leffler theorem.

EL: So something we like to do on this show is to ask our guest to pair their theorem with something: food, beverage, music, anything like that. So what have you chosen to pair your theorem with?

NJ: That was another difficult question, and I decided that I would concentrate on the discrete to continuous aspect of this, or volcanoes to landscapes if you like. As I said, I was born in Burma, and in Burma there are these amazing dishes called le thoke. I’ll send you a Wikipedia link so you can see the spelling and description. Not all of it is accurate, by the way, from what I remember, but anyway. Le thoke is a hand-mixed salad. “Le” is hand and “thoke” is mixture. In particular, the one that’s based on rice is one of my favorites. You take a series of different ingredients, so one is rice, another might be noodles, there have to be specific types, another is tamarind. Tamarind is a sour plant-based thing, which you make into a sauce. Another is fried onions, fried garlic. Then there’s roasted chickpea flour, or garbanzo flour.

KK: This sounds amazing.

NJ: Then another one is potatoes, boiled potatoes. Another one is coriander leaves. Each person might have their favorite suite of these many, many little dishes, which are all just independent ingredients. And you take each of them into a bigger bowl. You mix it with your hands. Add as much spice as you want: chili powder, salt, lemon juice, and what you’re doing is amalgamating and combining those discrete ingredients to create something that transcends the discrete. So you’re no longer tasting the distinct tamarind, or the distinct fried onion, or potatoes. You have something that’s a fusion, if you like, but the taste is totally different. You’ve created your meromorphic function, which is that taste in your mouth, by combining those discrete things, which each of them you wouldn’t eat separately.

KK: Sure. It’s not fair. It’s almost dinner time here, and I’m hungry.

NJ: I’m sorry!

EL: Are there any Burmese restaurants in Gainesville?

NJ: I don’t know. I think there’s one in San Francisco.

EL: Yes! I actually was just at a Burmese restaurant in San Francisco last month. I had this tea leaf salad that sounds like this.

NJ: Yeah, that’s a variation. Pickled tea leaves as an ingredient.

EL: Yeah, it was great.

NJ: I was also thinking about music. So there are these compositions by Philip Glass and Steve Reich which are basically percussive, independent sounds. Then when they interweave into those patterns you create these harmonies and music that transcends each of those particular percussive instruments, the strikes on the marimba and the xylophones and so on.

EL: Like Six Marimbas by Steve Reich?

NJ: Yeah.

EL: Another of our guests (her episode hasn’t aired yet, though it will have by the time our listeners hear this) chose Steve Reich to pair with her theorem.

KK: That’s right.

EL: One of the most popular musicians among mathematicians pairing their theorems with music.

NJ: Somebody should write a book about this.

KK: I’m sure. So my son is a college student. He’s studying music composition. He’s a percussionist. I need to get on him about this Steve Reich business. He must know.

EL: Yeah, he’s got to.

KK: This has been great fun, Nalini. I learned a lot about not just math, but I really knew nothing about Burmese food.

NJ: Right. I recommend it highly.

KK: Next time I’m there.

NJ: You said something about mentioning books?

EL: Yeah, yeah, if you have a website or book or anything you’d like to mention on here.

NJ: This is my book. I think it would be a bit too far away from the topic of this conversation, but it has this idea of going from continuous to discrete.

EL: It’s called Discrete Systems and Integrability.

NJ: Yes.

EL: We’ll put a link to some information about that book, and we’ll also link to your website on the show notes so people can find you. You tweet some. I think we kind of met in the first place on Twitter.

NJ: That’s right. Exactly.

EL: We’ll put a link to that as well so people can follow you there.

NJ: Excellent. Thank you so much.

EL: Thank you so much for being here. I hope Friday is great. You can give us a preview while we’re still here.

KK: We’ll find out tomorrow, I guess.

NJ: Thank you for inviting me, and I’m sorry about the long delay. It’s been a very intense few years for me.

EL: Understandable. Well, we’re glad you could fit it in. Have a good day.

NJ: Thank you. Bye.


Episode 16 - Jayadev Athreya

Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m Evelyn Lamb, one of your hosts. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I’m good. I actually forgot to say what I do. In case anyone doesn’t know, I’m a freelance math and science writer, and I live in Salt Lake City, Utah, where it has been very cold recently, and I’m from Texas originally, so I am not okay with this.

KK: Everyone knows who you are, Evelyn. In fact, Princeton University Press just sent me a complimentary copy of The Best Writing on Mathematics 2017, and you’re in it, so congratulations, it’s really very cool. [clapping]

EL: Well thanks. And that clapping you heard from the peanut gallery is our guest today, Jayadev Athreya. Do you want to tell us a little bit about yourself?

Jayadev Athreya: Yeah, so I’m based in Seattle, Washington, where it is, at least for the last 15 minutes it has not been raining. I’m an associate professor of mathematics at the University of Washington, and I’m the director of the Washington Experimental Mathematics Lab. My work is in geometry, dynamical systems, connections to number theory, and I’m passionate about getting as many people involved in mathematics as a creative enterprise as is possible.

KK: Very cool.

EL: And we actually met a while ago because my spouse also works in your field. I have the nice privilege of getting to know you and not having to learn too much about dynamical systems.

JA: Evelyn and I have actually known each other since, I think Evelyn was in grad school at Rice. I think we met at some conferences, and Evelyn’s partner and I have worked on several papers together, and I’ve been a guest in their wonderful home and eaten tons of great granola among other things. On one incredibly memorable occasion, a buttermilk pie, which I won’t forget for a long time.

KK: Nice. I’ve visited your department several times. I love Seattle. You have a great department there.

JA: It’s a wonderful group of people, and one of the great things about it is of course all departments recognize research, and many departments also recognize teaching, but this department has a great tradition of public engagement with people like Jim Morrow, who was part of the annual [ed. note: JA meant inaugural; see https://sites.google.com/site/awmmath/awm-fellows] class of AWM fellows and runs this REU and this amazing event called Math Day where he gets two thousand high school kids from the Seattle area on campus. It’s just a very cool thing for a research math department to seriously recognize and appreciate these efforts. I’m very lucky to be here.

KK: Also because I’m a topologist, I have to take a moment to give, well, I don’t know what the word is, but you guys lost a colleague recently.

JA: We did.

KK: Steve Mitchell. He was a great topologist, but even more, he was just a really great guy. Sort of unfailingly kind and always really friendly and helpful to me when I was just starting out in the game. My condolences to you and your colleagues because Steve really was great, and he’s going to be missed.

JA: Thank you, Kevin. There was a really moving memorial service for Steve. For any of the readers who are interested in learning more about Steve, for the last few years of his life he wrote a really wonderful blog reflecting on mathematics and life and how the two go together, and I really recommend it. It’s very thoughtful. It’s very funny, even as he was facing a series of challenges, and I think it really reflects Steve really well.

KK: His biography that he wrote was really interesting too.

JA: Amazing. He came with a background that was very different from that of a lot of mathematicians.

EL: I’ll have to check it out.

KK: Enough of that. Let’s talk about theorems.

EL: Would you like to share your favorite theorem?

JA: Sure. So now that I’m in the northwest, and in fact I’m even wearing a flannel shirt today, I’m going to state the theorem from the perspective of a lumberjack.

EL: Okay.

JA: So when trees are planted by a paper company, they’re planted in a fairly regular grid. So imagine you have the plane, two number lines meeting at a 90 degree angle, and you have a grid, and you plant a tree at each grid point. So from a mathematician’s perspective, we’re just talking about the integer lattice, points with integer coordinates. So let’s say where I’m standing there’s a center point where maybe there’s no tree, and we call that the origin. That’s maybe the only place where we don’t plant a tree. And I stand there and I look out. Now there are a lot of trees around me. Let’s say I look around, and I can see maybe distance R in any direction, and I say, hm, I wonder how many trees there are? And of course you can do kind of a rough estimate.

Now I’m going to switch analogies and I’ll be working in flooring. I’m going to be tiling a floor. So if you think about the space between the trees as a tile and say that has area 1, you look out a distance R and say, well, the area of the region that you can see is about πR², it’s the area of the circle, and each of these tiles has size 1, so maybe you might guess that there are roughly πR² trees. That’s what’s called the Gauss circle problem or the lattice point counting problem. And the fact that that is actually increasingly accurate as your range of vision gets bigger and bigger, as R gets bigger and bigger, is a beautiful theorem with an elementary proof, which we could talk about later, but what I want to talk about is when you’re looking out, turning around in this spot, you can’t see every tree.

EL: Right.

JA: For instance, there’s a tree just to the right of you. You can see that tree, but there’s a tree just to the right of that tree that you can’t see, because it’s blocked by the first tree. There’s a tree at 45 degrees that would have the coordinate (1,1), and that blocks all the other trees with coordinates (2,2) or (3,3). It blocks all the other trees in that line. We call the trees that we can see, the visible trees, primitive lattice points. It’s a really nice exercise to see that if you label a tree by how many steps to the right and how many steps forward it is, call that the integer coordinates (m,n), or maybe since we’re on the radio and can’t write, we’ll call it (m,k), so the sounds don’t get too confusing.

EL: Okay.

JA: A point (m,k) is visible if the greatest common divisor of the numbers m and k is 1. That’s an elementary exercise because, well maybe we’ll just talk a little bit about it, if you had m and k and they didn’t have greatest common divisor 1, you could divide them by their greatest common divisor and you’d get a tree that blocks (m,k) from where you’re sitting.
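[ed. note: JA’s exercise is easy to check in a few lines of Python; `visible` is an illustrative helper name, not something from the episode:]

```python
from math import gcd

def visible(m, k):
    # A tree at (m, k) is visible from the origin exactly when gcd(m, k) == 1;
    # otherwise (m, k) divided by its gcd gives a closer tree that blocks it.
    return gcd(m, k) == 1

print(visible(1, 1))   # True: nothing between the origin and (1, 1)
print(visible(2, 2))   # False: blocked by the tree at (1, 1)
print(visible(3, 5))   # True: gcd(3, 5) = 1
```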

EL: Right.

JA: We call these lattice points, they’re called visible points, or sometimes they’re called primitive points, and a much trickier question is how many primitive points are there in the ball of radius R, or in any kind of increasingly large sequence of sets. And this was actually computed, I believe for the first time, by Euler

KK: Probably. Sure, why not?

JA: Yeah, Euler, I think Cauchy also noticed this. These are names that are going to show up in anything you get at the beginning of analysis or number theory.

KK: Right.

JA: And miraculously enough, we agreed that in the ball of radius R, the total number of trees was roughly the area of the ball, πR². Now if you look at the proportion of these that are primitive, it’s actually 6/π².

KK: Oh.

JA: So the total number of primitive lattice points is actually 6/π² times πR². And now, listeners of this podcast might remember some of their sequences and series from calc 1, or 2, or 3, and you might remember seeing, probably not proving, but seeing, that if you add up the following series: 1 plus 1/4 plus 1/9 plus 1/16 plus 1/25, and so on, and you can actually do this, you can write a little Python script to do this. You’ll get closer and closer to π²/6. Now it’s amazing, now there is of course this principle that there aren’t enough small numbers in mathematics, which is why you have all these coincidences, but this isn’t a coincidence. That π²/6 and our 6/π² are in a very real mathematical sense the same object. So that’s my favorite mathematical theorem. So when you count all lattice points, you get π showing up in the numerator. When you count primitive ones, you get π showing up in the denominator.
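[ed. note: the little Python script JA mentions might look like the following sketch. The variable names are ours; the radius R = 100 echoes the value JA discusses a bit later in the episode:]

```python
from math import gcd, pi

# Partial sum of 1 + 1/4 + 1/9 + ..., which tends to pi^2 / 6.
N = 100_000
basel = sum(1 / n**2 for n in range(1, N + 1))

# Count lattice points (m, k) != (0, 0) in the disc of radius R,
# and the primitive (visible) ones among them.
R = 100
total = primitive = 0
for m in range(-R, R + 1):
    for k in range(-R, R + 1):
        if (m, k) == (0, 0) or m * m + k * k > R * R:
            continue
        total += 1
        if gcd(m, k) == 1:   # math.gcd ignores signs
            primitive += 1

print(basel)               # close to pi^2 / 6, about 1.6449
print(primitive / total)   # close to 6 / pi^2, about 0.6079
```

Even at R = 100 the proportion of primitive points already lands near 6/π², matching JA’s remark that infinity "doesn’t even have to be so big."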

KK: So the primitive ones, that must be related to the fact that if you pick two random integers, the probability that they’re relatively prime is this number, 6/π².

JA: These are essentially equivalent statements, exactly. What we’re saying is, look in the ball of radius R. Take two integers sort of randomly, so that m²+n² is less than R². The proportion of primitive ones is exactly the probability that they’re relatively prime. That’s a beautiful reformulation of this theorem.

KK: Exactly. And asymptotically, as you go off to infinity, that’s 6/π².

JA: Yeah, and what’s fun is, if a listener does like to do a little Python programming, in this case, infinity doesn’t even have to be so big. You can see 6/π² happening relatively quickly. Even at R=100, you’re not far off.

EL: Well the squares get smaller so fast. You’re just adding up something quite small in not too long.

JA: That’s right. That’s my favorite mathematical theorem for many reasons. For one, this number, 6/π², it shows up in so many places. What I do is at the intersection of many fields of mathematics. I’m interested in how objects change. I’m interested in counting things, and I’m interested in the geometry of things. And all of these things come into play when you’re thinking about this theorem and thinking about various incarnations of this theorem.

EL: Yeah, I was a little surprised when you told us this was going to be your theorem because I was thinking it was going to be some kind of ergodic theorem for flows or something because the stuff I know about your field is more what my spouse does, which is more related to dynamical systems. I actually think of myself as a dynamicist-in-law.

JA: That’s right. The family of dynamicists actually views you as a favorite in-law, Evelyn. You publicize us very nicely. You write about things like billiards with a slit, which is something that we’d been trying to tell the world about, but hadn’t managed to until you did.

EL: And that was a birthday gift for my spouse. He had been wanting me to write about that, and I just thought it was so technical, I didn’t feel like it. Finally I did; it’s a really cool space, but it’s just a lot to actually go in and write about. But yeah, I was surprised to see something I think of as more number theory related show up here. That number 6/π², or π²/6, whichever way you see it, it’s one of those things where the first time you see it, you wonder why you would ever square π. It comes up as an area thing, so something else is usually being squared when you see it. Strange thing.

JA: So now what I’m going to say is maybe a little bit more about why I picked it. For me, that number π²/6 is actually the volume of a moduli space of abelian differentials.

KK: Ah!

EL: Of course!

JA: Of course it is. It’s what’s called a Siegel-Veech constant, or a Siegel constant. Can I say just a couple words about why I love π²/6 so much?

EL: Of course.

JA: Let’s say that instead of planting your trees in a square grid, you have a timber company where they wanted to shoot an ad where they shot over the forest and they wanted it to look cool, and instead of doing a square grid, they decided to do a grid with parallelograms. Still the trees are planted in a regular grid, but now you have a parallelogram. So in mathematical terms, instead of taking the lattice generated by (1,0) and (0,1), you just take two vectors in the plane. As long as they’re linearly independent, you can generate a lattice. You can still talk about primitive vectors, which are the ones you can see from (0,0). There are some that are going to be blocked and some that aren’t going to be blocked. In fact, it’s a nice formulation. If you think of your vectors as (a,c) and (b,d), then what you’re essentially doing is taking the matrix (ab,cd)[ed. note: this is a square array of numbers where the numbers a and b are in the top row and c and d are in the bottom row] and applying it to the integer grid. You’re transforming your squares into parallelograms.

KK: Right.

JA: And a vector in your new lattice is primitive if it’s the image of a primitive vector from the integer lattice.

EL: Yeah, so there’s this linear relationship. You can easily take what you know about the regular integer lattice and send it over to whatever cool commercial tree lattice you have.

JA: That’s right. Whatever parallelogram tiling of the plane you want. What’s interesting is even with this change, the proportion of primitive guys is still 6/π². The limiting proportion. That’s maybe not so surprising given what I just said. But here’s something that is a little bit more surprising. Since we care about proportions of primitive guys, we really don’t care if we were to inflate our parallelograms or deflate them. If they were area 17 or area 1, this proportion wouldn’t change. So let’s just look at area 1 guys, just to nail one class down. This is the notion of an equivalence class essentially.

You can look at all possible area 1 lattices. This is something mathematicians love to do. You have an object, and you realize that it comes as part of a family of objects. So we started with this square grid. We realized it sits inside this family of parallelogram grids. And then we want to package all of these grids into its own object. And this procedure is usually called building a moduli space, or sometimes a parameter space of objects. Here the moduli space is really simple. You just have your matrices, and if you want it to be area 1, the determinant of the matrix has to be 1. In mathematical terms, this is called SL(2,R), the special linear group with real coefficients.

There’s a joke somewhere that Serge Lang was dedicating a book to his friend R, and so he inscribed it “SL2R,” but that’s a truly terrible joke that I’m sorry, you should definitely delete from your podcast.

KK: No, that’s staying in.

JA: Great.

EL: You’re on the record with this.

JA: Great. That’s sort of all possible deformations, but then you realize that if you hit the integer lattice with integer matrices, you just get it back. So the space of all lattices you can basically think of as 2-by-2 matrices with real entries and determinant 1, up to 2-by-2 matrices with integer entries. What this allows you to do is give a notion of a random lattice. There’s a probability measure you can put on this space that tells you what it means to choose one of these lattices at random. Basically what this means is you pick your first vector at random, and then you pick your second vector as uniformly as possible from the ones that make determinant 1 with it. That’s actually a technically accurate statement.
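[ed. note: JA’s remark that integer matrices carry the integer lattice back to itself is easy to check numerically. In the sketch below, the matrix entries (rows (2, 1) and (1, 1)) are our own example of a determinant-1 integer matrix; such matrices preserve gcd, so they send primitive vectors to primitive vectors:]

```python
from math import gcd

# An integer matrix with determinant +1, rows (a, b) and (c, d).
a, b, c, d = 2, 1, 1, 1
assert a * d - b * c == 1

def transform(m, k):
    # Image of the lattice point (m, k) under the matrix.
    return (a * m + b * k, c * m + d * k)

# gcd is preserved by integer matrices of determinant +-1 (the inverse is
# also an integer matrix), so primitivity is preserved in both directions.
for m in range(-5, 6):
    for k in range(-5, 6):
        if (m, k) == (0, 0):
            continue
        x, y = transform(m, k)
        assert gcd(m, k) == gcd(x, y)
print("gcd preserved on the sample grid")
```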

Now what that means is you can talk about the average behavior of a lattice. You can say, look, I have all of these lattices, I can average. And now what’s amazing is you can fix your R. R could be 1. R could be 100. R could be a million. And now you can look at the number of primitive points divided by the number of total points in the lattice. You average that, or let me put it a slightly different way: you average the number of primitive points and divide by the average number of total points.

KK: Okay.

JA: That’s 6/π².

EL: So is that…

JA: That’s not an asymptotic. That’s, if you average, if you integrate over the space of lattices, you integrate and you look at the average number of primitive points, you divide by the average number of total points, it’s 6/π². That’s no matter the shape of the region you’re looking in. It doesn’t have to be a ball, it can be anything. That’s an honest-to-God, dead-on statement that’s not asymptotic.

EL: So is that basically saying that the integer lattice behaves like the average lattice?

JA: It’s saying at the very large scale, every lattice behaves like the average lattice. Basically there’s this function on the space of lattices that’s becoming closer and closer to constant. If you take the sequence of functions which is proportion of primitive vectors, that’s becoming closer and closer to constant. At each scale when you average it, it averages out nicely. There might be some fluctuations at any given scale, and what it’s saying is if you look at larger and larger scales, these fluctuations are getting smaller and smaller. In fact, you can kind of make this precise. If you’re in probability, what we’ve been talking about is basically computing a mean or an expectation. You can try to compute a variance of the number of primitive points in a ball. And that’s actually something my student Sam Fairchild and I are working on right now. There are methods that people have thought about; in fact, a mathematician named Rogers in the 1950s wrote about 15 different papers called Mean Values on the Space of Lattices, all of which contain a phenomenal number of really interesting ideas. But he got the dimension 2 case slightly wrong. We’re in the process of fixing that right now and understanding how to compute the variance. It turns out that what we do goes back to work of Wolfgang Schmidt, and we’re kind of assembling that in a little bit more modern language and pushing it a little further.

I do want to mention one more name, which is, I mentioned it very briefly already. I said this is what is called a Siegel-Veech constant. Siegel was the one who computed many of these averages. He was a German mathematician who was famous for his work on a field called the geometry of numbers. It’s about the geometry of grids. Inspired by Siegel, a mathematician named William Veech, who was one of Evelyn’s teachers at Rice, started to think about how to generalize this problem to what are called higher-genus surfaces, how to average certain things over slightly more complicated spaces of geometric objects. I particularly wanted to mention Bill Veech because he passed away somewhat unexpectedly.

EL: A year ago or so?

JA: Yeah, a little bit less than a year ago. He was somebody who was a big inspiration to a lot of people in this field, who really had just an enormous number of brilliant ideas, and I still think we’re still kind of exploring those ideas.

EL: Yeah, and a very humble person too, at least in the interactions I had with him, and very approachable considering what enormous work he did.

JA: That’s right. He was deeply modest and an incredibly approachable person. I remember the first time I went to Rice. I was a graduate student, and he had read things I had written. This was a huge deal for me, because I didn’t think anybody was reading things I’d written. And I guess we started off with remembering Steve, and now we’re remembering Bill.

There’s one more person who I think is very important to remember in this context, somebody who took Siegel’s ideas about averaging things over spaces and really pushed them to an extent that’s just incredible, and the number 6/π² shows up in the introduction to one of the papers that came out of her thesis. This was Maryam Mirzakhani, whom we also lost at a very, very young age. She was a person who, like Veech, made incredibly deep contributions that I think we’re going to continue to mine for ideas, and she’s going to continue having a really incredible legacy. She was also very encouraging to colleagues, contemporaries, and young people. If you’re interested in 6/π² and how it connects to not just lattices in the plane but other surfaces, her thesis resulted in three papers, one in Inventiones, one in the Annals, and one in the Journal of the American Math Society, which might be the three top journals in the field.

EL: Right.

JA: For the record, for instance, I think of myself as a pretty good research mathematician, and I have a total, over 12 years, of zero papers in any of those three journals.

KK: Right there with you.

JA: In the introduction to this paper, she studies simple closed curves on the punctured torus, which are very closely linked to integer lattice points. She shows how 6/π², or rather π²/6, shows up as what’s called a Weil–Petersson volume of the moduli space. Again, a way of packaging lots of spaces together.

EL: We’ll link to that, I’m sure we can find links for that for the show notes so people can read a little more about that if they want.

JA: Yeah. I think there are even very nice survey papers that have come out recently that describe some of the links there. These are sort of the big things I wanted to hit on with this theorem. What I love about it is it’s a thread that shows up in number theory, as you pointed out. It’s a thread that shows up in geometry. It’s a thread that shows up in dynamical systems. You can use dynamics to actually do this counting problem.

EL: Okay.

JA: Yeah, so there’s a way of doing dynamics on this object where we package everything together to get the 6/π². It’s not the most efficient, not the most direct proof, but it’s a proof that generalizes in really interesting ways. For me, a theorem in mathematics is really beautiful if you can see it from many different perspectives, and this one to me starts so many stories. It starts a story where, if you think of a lattice, you can think about going to higher-dimensional lattices. Or you can think of it as a surface, where you take the parallelogram or the square and glue opposite sides and get a torus, or you can start adding more holes, that’s higher genus. It’s rare that all of these different generalizations yield really fruitful and beautiful mathematics, but in this case I think it does.

KK: So hey, another part of this podcast is that we ask our guest to pair their theorem with something. So what have you chosen to pair your theorem with?

JA: So there’s a grape called, I’m just going to look it up so I make sure I get everything right about it, it’s called primitivo. So it’s an Italian grape. It’s closely related to zinfandel, which I kind of like also because I want primitive, and of course I want the integers in there, so I’ve got a Z. Primitivos are also an excellent value wine, so that makes me very happy. Both primitivo and zinfandel are apparently descended from a Croatian grape, and what I like about it is that it connects in a lot of different ways to a lot of different things. Now I don’t know how trustworthy this site is, it’s a site called winegeeks.com, but apparently primitivo can trace its ancestry to the ancient Phoenicians in the province of Apulia, the heel of Italy’s boot. I’m a big fan of the Phoenicians because they were these cosmopolitan seafarers who founded one of my favorite cities in the world, Marseille. Actually, Marseille might be the first place I learned about this theorem, so there you go.

EL: Another connection.

JA: Yeah. And it’s apparently the wine that was served at the Last Supper.

KK: Okay.

EL: I’m sure that’s very reliable.

JA: I’m sure.

EL: Good information about vintages of those.

JA: I would pair it with a primitivo wine because of the connections: these visible points are also called primitive points by mathematicians. Another possible option, if you can’t get your hands on that, is to pair it with a spontaneously fermented, or primitive, beer.

EL: Oh yeah.

JA: I’m a big fan of spontaneously fermented beers. I like lambics, I like other things.

EL: Two choices. If you’re more of a wine person or more of a beer person, you’ve got your pairing picked out. I’m glad you’re so considerate to make sure we’ve got options there.

JA: Or I might drink too much, that’s the other possibility.

KK: No, not possible.

EL: Well it’s 9:30 where you are, so I’m hoping you’re not about to go out and have one of these to start your day. Maybe at the end of the day.

JA: I think I’ll go with my usual cappuccino to start my day.

KK: Well this has been great fun. I learned a lot today.

EL: Yeah. Thanks for being on. You had mentioned that you wanted to make sure our listeners know about the website for the Washington math lab, which is where you do some outreach and some student training.

JA: That’s right. The website is wxml.math.washington.edu. It’s the Washington Experimental Math Lab. WXML is also a Christian radio station in Ohio. We are not affiliated with the Christian radio station in Ohio. If anybody listens to that, please don’t sue us. So, as I said at the top of the podcast, we’re very interested in trying to create as large as possible a community of people who are creating their own mathematics. To that end, we have student research projects where undergraduate students work together with faculty and graduate students in collaborative teams to do exploratory and experimental mathematics. Teams have done projects ranging from creating sounds associated to number theory sequences, to updating and maintaining OEIS and Wikipedia pages about mathematical concepts, to research modeling stock prices and rare events in protein folding, and right now one of my teams is working on counting pairs and triples and quadruples of primitive integer vectors and trying to understand how those behave. So that’s one side of it.

The other side is that we do a lot of, like Evelyn said, public engagement. We run teachers’ circles for middle schools and elementary schools throughout the Seattle area and the Northwest, and we do a lot of fabrication of teaching tools with 3D printing. Right now I’m teaching Calculus 3, so we’re printing 3D Riemann sums as we do integration in two variables. The reason I’m spending so much time plugging this is that if you’re at a university and this sounds intriguing to you, we have a lab starter kit on our webpage which gives you information on how you might want to start a lab. All labs look different, but at this point, we just had our Geometry Labs United conference this summer, there are labs at Maryland, at the University of Illinois Urbana-Champaign, at the University of Illinois at Chicago, at George Mason University, at the University of Texas Rio Grande Valley, and Kansas State.
There’s one starting at Oklahoma State, at the University of Kentucky. So the lab movement is on the march, and if you’re interested in joining that, please go to our website, check out our lab starter kit, and please feel free to contact us about what are some good ways to get started on this track.

EL: All right. Thanks for being on the show.

JA: Thanks so much for the opportunity. I really appreciate it, and I’m a big fan of the podcast. I loved the episode with Eriko Hironaka. I thought that was just amazing.

KK: Thanks. We liked that one too.

JA: Take care, guys.

EL: Bye.


Episode 15 - Federico Ardila

Evelyn Lamb: Welcome to My Favorite Theorem. I'm your host Evelyn Lamb, a freelance math and science writer in Salt Lake City, Utah, and this is your cohost.

Kevin Knudson: I'm Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I am still on an eclipse high. On Monday, a friend and I got up, well got up in time to get going by 5 in the morning, to get up to Idaho and got to experience a total eclipse, which really lived up to the hype.

KK: You got totality?

EL: Yes, we got in the band of totality for a little over two minutes.

KK: We had 90 percent totality. It was still pretty impressive. Our astronomy department here set up their telescopes. We have a great astronomy department here. They had the filters on. There were probably 500 kids in line to see the eclipse. It was really pretty spectacular.

EL: It was pretty cool. I'm already making plans to go visit my parents on April 8, 2024 because they're in Dallas, which is in the path for that one.

KK: Very nice.

EL: So I've been trying to get some work done this week, but then I just keep going and looking at my friends' pictures of the eclipse, and NASA's pictures and everything. I'm sure I will get over that at some point.

KK: It was the first day of classes here for the eclipse. It was a bit disruptive, but in a good way.

EL: My spouse also had his first day of class, so he couldn't come with us.

KK: Too bad.

EL: But anyway, we are not here to talk about my feels about the eclipse. We are here to welcome Federico Ardila to the podcast. So Federico, would you like to say a bit about yourself?

Federico Ardila: Yeah, first of all, thanks so much for having me. As Evelyn just said, my name is Federico Ardila. I never quite know how to introduce myself. I'm a mathematician, I'm a DJ, I'm an immigrant from Colombia to the US, and I guess most relevant to the podcast, I'm a math professor at San Francisco State University. I also have an adjunct position in Colombia at the Universidad de los Andes. I'm also spending the semester at MSRI [Mathematical Sciences Research Institute] in Berkeley as a research professor, so that's what I'm up to these days.

KK: I love MSRI. I love it over there. I spent a semester there, and every day at teatime, you walk into the lounge and get the full panoramic view of the bay. You can watch the fog roll in through the gate. It's really spectacular.

FA: Yeah, you know, one tricky thing is you kind of want to stay for the sunset because it's so beautiful, but then you end up staying really late at work because of it. It's a balance, I guess.

KK: So, the point of this thing is that someone has a favorite theorem, so I actually don't know what your favorite theorem is, so I'm going to be surprised. What's your favorite theorem, Federico?

FA: Yeah, so first of all I apologize for not following your directions, but it was deliberate. You both asked me to tell you my favorite theorem ahead of time, but I'm not very good at following directions. But I also thought that since I want to talk about something that I think not a lot of people think about, maybe I shouldn't give you a heads-up so we can talk about it, and you can interrupt me with any questions that you have.

EL: Get our real-time reactions here.

FA: Exactly. The other thing is that instead of talking about a favorite theorem, I want to talk about a favorite object. There's a theorem related to it, but more than the theorem, what I really like is the object.

EL: Okay.

FA: I want to talk a little about matroid theory. How much do you two think about matroids?

KK: I don't think about them much.

EL: Not at all.

KK: I used to know what a matroid is, so remind us.

FA: Excellent. Yeah, so matroid theory was basically an abstraction of the notion of independence, something that was developed by Hassler Whitney, George Birkhoff, and Saunders MacLane in the '30s. Back then, you could write a thesis in graph theory at Harvard. This was part of Hassler Whitney's Ph.D. thesis, where he was trying to solve the four-color problem, which basically says that if you want to color the countries in a map, and you only have four colors, you will always be able to do that in such a way that no two neighboring countries have the same color. This was one of the big open problems at the time. They were trying to figure out a more mathematical grounding or structure that they could put on graphs, and out of that the theory of matroids was born. This was in a paper of Whitney in 1935, and he had the realization that the properties that graphs have with regard to cycles, what the cycles are, what the spanning trees are, and so on, are exactly the same properties that vectors have. So there was a very strong link between graph theory and linear algebra, and he basically pursued an axiomatization of the key combinatorial essence of independence.

EL: Okay, and so by independence, is that like we would think of linear independence in a matrix? Matroid and matrix are kind of suggestively similarly named. So is that the right thing we should be thinking about for independence?

FA: Exactly, so you might think that you have a finite set of vectors in a vector space, and now you want to figure out the linear dependencies between them. That information is what's called the matroid. Basically you're saying these two vectors are aligned, or these three vectors lie on the same plane. So that information is called the matroid, and Whitney basically laid out some axioms for the kind of combinatorial properties that linear independence has, and what he realized is that these are exactly the same axioms that graphs satisfy when you think about independence. Now you need a new notion of independence: in a graph, you're going to say you have a dependency among edges whenever they form a cycle. Somehow it is redundant to be able to walk from point A to point B in two different ways, so whenever there is that redundancy, we call it a dependency in the graph.

Basically Whitney realized that these were the same kind of properties, and he defined a matroid to be an abstract mathematical object that was supposed to capture that notion of independence.

EL: Okay. So this is very new to me, so I'm just kind of doing free association here. So I'm familiar with the adjacency matrix of a graph. Does this contain information about the matroid, or is this a little side path that is not really the same thing?

FA: This is a really good point. To every graph you can associate a matrix; the one that models the matroid is the signed incidence matrix. Basically what you do is, if you have an edge from vertex i to vertex j in the graph, in the matrix you put a column that has a bunch of 0's with a 1 in position i and a -1 in position j. You might think of this as the vector ei-ej, where the e's are the standard basis in your vector space. And you're absolutely right, Evelyn, that when you look at the combinatorial dependencies in the graph, they're exactly the linear dependencies in that set of vectors, so in that sense, that matrix perfectly models the graph as far as matroid theory is concerned.
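
As a concrete illustration of this construction (a sketch added for the show notes, not from the episode): build the matrix with a column ei-ej for each edge, and check that the edges of a cycle give linearly dependent columns while the edges of a spanning tree give independent ones.

```python
import numpy as np

# Signed incidence matrix of a graph: for each edge (i, j),
# a column equal to e_i - e_j in the standard basis.
def incidence_matrix(n_vertices, edges):
    M = np.zeros((n_vertices, len(edges)))
    for k, (i, j) in enumerate(edges):
        M[i, k] = 1
        M[j, k] = -1
    return M

# A triangle: its three edges form a cycle, so the columns are dependent.
triangle = incidence_matrix(3, [(0, 1), (1, 2), (0, 2)])
print(np.linalg.matrix_rank(triangle))         # 2: three columns, rank 2, so dependent
# Any two of the edges form a spanning tree: those columns are independent.
print(np.linalg.matrix_rank(triangle[:, :2]))  # 2: two columns, rank 2, so independent
```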

EL: Okay.

FA: So, yeah, that's a really good comparison. One reason that I love matroids is that it turns out that they actually apply in a lot of other different settings. There are many different notions of independence in mathematics, and it was realized over the years that they also satisfy these properties. Another notion of independence that you might be familiar with is the notion of algebraic independence. You learn this in a course in field extensions, and you learn about extension degrees and transcendence bases and things like this. That's the notion of algebraic independence, and it turns out that that notion of independence also satisfies these axioms that Whitney laid out, and so they also form a matroid. So whenever you have a field extension, you also have a matroid.

KK: So what's the data you present? Say X is a matroid. If you're trying to write this down, what gets handed to you?

FA: That's another really good question, and I think it's a bit of a frustrating one, because it depends on who you ask. The reason for this is that so many people encounter matroids in their everyday objects that they think of them in very different ways. Some people, if they hand you a matroid, are going to give you a bunch of sets. Maybe this is the most common thing. If you give me a list of vectors, then I could give you the linearly independent sets out of these sets of vectors. That would be a list, say: 1 and 2 are independent, 1 and 4 are independent, 1, 6, and 7 are dependent, and so on. That would be a set system. If you asked somebody else, they might think of that as a simplicial complex, and they might hand you a simplicial complex and say that's a matroid. One thing that Birkhoff realized, and this was very fashionable in the '30s at Harvard, is to think about lattices in the sense of posets. If you asked Birkhoff, he would actually hand you a lattice and say that's a matroid. I think this is something that's a bit frustrating for people who are trying to learn matroids. I think there are at least 10 different definitions of what a matroid is, and they're all equivalent to each other. Actually, Rota made up the name cryptomorphism for this: you have the same theory, and you have two different axiom systems for it, and you need to prove they're equivalent. This is something that, when I first learned about matroids, I hated. I found it really frustrating. But I think as you work in this topic, you realize that it's very useful to have the insight someone in linear algebra would have, the insight somebody in graph theory would have, the insight that somebody in algebraic geometry would have. And so to do that, you end up kind of going back and forth between these different ways of presenting a matroid.

EL: Like the clothing that the matroid is wearing at the time. Which outfit do you prefer?

FA: Absolutely.

KK: Being a good algebraic topologist, I want to say that this sort of reminds me of category theory. Can you describe these things as a functor from something to something else? It sort of sounds like you've got these sort of structures that are preserved, they're all the same, or they're cryptomorphic, right? So there must be something, you've got a category of something and another different category, and the matroid is sort of this functor that shows a realization between them, or am I just making stuff up?

FA: I should admit that I'm not a topologist, so I don't think a lot about categories, but I definitely do agree that over the last few years, one program has been to set down stronger algebraic foundations, and there's definitely a program of categorifying matroids. I'm not sure what you're saying is exactly correct.

KK: I'm sure it isn't.

FA: But that kind of philosophy is at play here.

KK: So you mentioned that there was a theorem lurking behind your love of matroids.

FA: So let me first mention one quick application, and then I'll tell you what the object is that I really like.

There's another application of this to matching problems. One example that I think academic mathematicians are very familiar with is the problem of matching job candidates and positions. It's a very difficult problem. Here you have a notion of dependencies; for example, if the same person is offered two different jobs, they can only take one of those jobs, so in that sense, those two jobs kind of depend on each other. It turns out that this setting also gives a matroid. One reason that is important is that it's a much more applied situation because, you know, there are many situations in real life where you really need to do matchings, and you need to do them quickly and inexpensively and so on. Now when the combinatorial optimization community got hold of these ideas, and they wanted to find a cheap matching quickly, then, one thing that people do in optimization a lot is, if you want to optimize something, you make a polytope out of it. And so this is the object that I really like and want to tell you about. This is called the matroid polytope.

EL: Okay.

FA: Out of all these twelve different sets of clothing that matroids like to wear, my favorite outfit is the matroid polytope. Maybe I'll tell you first in the abstract why I like this so much.

EL: First, can we say exactly what a polytope is? So, are we thinking a collection of vertices, edges, faces, and higher-dimensional things because this polytope might live in a high-dimensional space? Is that what we mean?

FA: Exactly. If your polytope is in two dimensions, it's a polygon. If it's in three dimensions, it's the usual solids that we're used to, like cubes, pyramids, and prisms, and they should have flat edges, so they should have vertices, edges, and faces like you said. And then the polytope is just the higher-dimensional generalization for that. This is something that in combinatorial optimization is very natural. They really need these higher-dimensional polytopes because if you have to match ten different jobs, you have ten different axes you have to consider, so you get a polytope in ten dimensions.

KK: Sort of the simultaneous, feasible regions for multiple linear inequalities, right?

FA: Exactly. But yeah, I think Edmonds was the first person who said, okay, I want to study matroids, I'm going to make a polytope out of them. Then one thing they realized is there is a notion in algorithms of greedy algorithms: a greedy algorithm is one where, when you're trying to accomplish a task quickly, at each point in time you just do the thing that seems best at that moment. If we go back to the situation of matching jobs, the first thing you might do is ask one school, okay, what do you want? And they would choose a person, and then you'd ask the next school, what do you want, and they would choose the next best person, and so on. We know that this strategy doesn't usually work. This is the no-long-term-planning solution. You just do whatever immediately seems best to do, and what the community realized was that matroids are exactly where greedy strategies work. That's another way of thinking of matroids: they're where the greedy algorithm works. And the way they proved this was with this polytope.
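
For the graphic matroid, the greedy strategy described here is exactly Kruskal's minimum spanning tree algorithm. A minimal sketch (added for illustration, not from the episode): at each step, take the cheapest edge that keeps the chosen set independent, meaning acyclic.

```python
# Kruskal's algorithm: the greedy algorithm on the graphic matroid.
# Repeatedly add the cheapest edge that keeps the chosen set acyclic.
def kruskal(n_vertices, weighted_edges):
    parent = list(range(n_vertices))        # union-find, to detect cycles

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(weighted_edges):  # greedily try edges cheapest-first
        ru, rv = find(u), find(v)
        if ru != rv:                        # independent: adding it creates no cycle
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# Edges given as (weight, u, v) on 4 vertices.
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
print(kruskal(4, edges))  # → [(1, 1, 2), (2, 2, 3), (3, 0, 2)]
```

The greedy choice is provably optimal here precisely because acyclic edge sets form a matroid; the same code pattern fails for general set systems.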

So for optimization people, there's this polytope. It turns out that this polytope also arises in several other settings. There's a beautiful paper of Gelfand, Goresky, MacPherson, and Serganova, and they're doing algebraic geometry. They're studying toric varieties. You don't need to know too much about what this is, but the main point is that if you have a toric variety, there is a polytope associated to it. There's something called the moment map that picks up a toric variety and takes it to a polytope. In this very different setting of toric varieties, they encounter the same polytope, coming from algebraic geometry. Also there's a third way of seeing this polytope, coming from commutative algebra. If you have an ideal in a polynomial ring, and again it's not too important that you know exactly what this means, but there's a recipe, given an ideal, to get a polytope out of it. Again, there's a very natural way that, given a very natural ideal, you get the same polytope, coming from commutative algebra.

This is one reason that I like this polytope a lot. It really is kind of a very interdisciplinary object. It's natural: it drops out of optimization, out of algebraic geometry, out of commutative algebra. It really captures the essence of these matroids that have applications in many different fields. So that's the favorite object that I wanted to tell you about.

KK: I like this instead of a theorem in some sense. I learned something today. I mean, I learn something every day. But this idea that, mathematicians know this and a lot of people outside of mathematics don't, that the same structures show up all over the place. Like you say, combinatorics is interesting this way. You count things two different ways and you get a theorem. This is a meta-version of that. You've got these different instances of this fundamental object. Whitney essentially found this fundamental idea. And we can point at it and say, oh, it's there, it's there, it's there, it's there. That's very rich, and it gives you lots to do. You never run out of problems, in some sense. And it also forces you to learn all this new stuff. Maybe you came at this from combinatorics to begin with, but you've had to learn some algebraic geometry, you've had to learn all these other things. It's really wonderful.

FA: I think you're really getting at one thing I really like about studying this subject. I'm always arguing with my students; they'll say, oh, I do analysis, I don't do algebra. Or I do algebra, I don't do topology. And this is one field where you really can't get away with that. You need to appreciate that mathematics is very interconnected, and that if you really want to get the full power of the objects and really understand them, you have to learn many different ways of thinking about the same thing, which I think is really very beautiful and very powerful.

EL: So then was the theorem that you were talking about, is this the theorem that the greedy algorithm works on polytopes, or is this something else?

FA: No, so the theorem is a little different. I'll tell you what the theorem is. Out of all the polytopes, there is one which is very fundamental, which is the cube. Now, as you know, mathematicians are weird, and for us a square is a cube. A segment is a cube. Cubes exist in every dimension: in zero dimensions it's a point, in one dimension it's a segment, in two dimensions it's a square, in three dimensions it's the 3-cube, and in any dimension there is a cube. And so the theorem that Gelfand, Goresky, MacPherson, and Serganova proved, which Edmonds, coming from optimization, probably knew at least to some extent, is that matroids are exactly certain sub-polytopes of the cube. In other words, you choose some vertices of the cube and you don't choose others, and then you look at what polytope that determines; that polytope is going to be a matroid if and only if the edges of that polytope are all of the form ei-ej. This goes back to what you were saying at the beginning, Evelyn: these are exactly those vectors that have a bunch of zeroes, and then they have one 1 and one -1. So matroid polytopes have the property that every edge is one of those vectors, and what I find really striking is that the converse is true: if you just take any sub-polytope of the cube and the edges have those directions, then you have a matroid on your hands. First of all, I think that's a really beautiful characterization.

KK: It's so clean. It's just very neat.

FA: But then the other thing is that this collection of vectors ei-ej is a very fundamental collection of vectors: it's the root system of the Lie algebra of type A. This might sound like nonsense, but the point is that this is one of about seven families of root systems that control a lot of very important things in mathematics: Lie groups, Lie algebras, regular polytopes, things like this. And so this theorem also points to how the theory of matroids is just a theory of type A, so to speak, that has analogues in many other Coxeter groups. It basically connects to the tradition of Lie groups and Lie theory, and it begins to show how this is a much deeper theory mathematically than I think anybody anticipated.

EL: Oh cool.

KK: Wow.

EL: So I understand that you have a musical pairing for us today. We all have it queued up. We're recording this with headphones, and we're all going to listen to this simultaneously. Then you'll tell us a little bit about what it is.

KK: Are we ready? I'll count us down. 3-2-1-play.

EL: There we go.

FA: We'll let this play for a little while, and I'm going to ask you what you hear when you hear this. One reason I chose this was I saw that you like percussion.

KK: I do. My son is a percussionist.

FA: One thing I want to ask you is when you hear this, what do you hear?

KK: I hear a lot.

EL: It has a really neat complex rhythm going.

FA: Do you speak Spanish?

KK: A little. Otra vez.

EL: I do not, sadly.

KK: It's called Quítalo del rincón, which, I'm sorry, I don't know what quítalo means.

FA: The song is called Quítalo del Rincón by Carlos Embales. And he was a Cuban musician. One thing is that Cubans are famously hard to understand.

KK: Sure.

FA: So I think even for Spanish speakers, this can be a bit tricky to understand. So do you have any idea what's going on, what he's singing?

EL: No idea.

FA: So this is actually a math lesson.

KK: I was going to say, he's counting. I heard some numbers in there.

FA: Yeah, yeah, yeah. It's actually a math lesson. I just think, man, why can't we get our math lessons to feel like this? This has been something that has really shifted my understanding of the pedagogy of mathematics. Just imagine a math class that looks like this.

KK: Is he just trying to teach us how to count, or is there more going on back there?

FA: It's kind of an arithmetic lesson, but one thing that I really like is it's all about treating mathematics as a community lesson, and it's saying, okay, you know, if there's somebody that doesn't want to learn, we're going to put them in the middle, and they're going to learn with us.

KK: Oh. So they're not going to let anyone off the hook.

FA: Exactly. We all need to succeed together. It's not about the top students only.

KK: Very cool. We'll put a link to this on the blog post. I'm going to fade it out a little bit.

FA: Same here. Maybe I can tell you a little bit more about why I chose this song.

EL: Yeah.

FA: I should say that this was a very difficult task for me because if choosing one theorem is hard for me, choosing one song is even harder.

KK: Sure.

FA: As I mentioned, I also DJ, and whenever I go to a math conference, I always set aside one day to go to the local record stores and see what I will find. Oddly enough, I found this record in a record store in, I want to say Ann Arbor, Michigan, a very unexpected place for this kind of music. It was a very nice find that managed to explain to me how my being as a mathematician, my being as a DJ might actually influence each other. As a DJ, my job is always to provide an atmosphere where people are enjoying themselves, and it took me hearing this record to connect for me that it's also my job as a mathematician, as a math teacher, also to create atmospheres where people can learn math joyfully and everybody can have a good experience and learn something. In that sense it's a very powerful song for me. The other thing that I really like about it and why I wanted to pair it with the matroids is I think this is music that you cannot possibly understand if you don't appreciate the complexity of the history of what goes behind this music. There's definitely a very strong African influence. They're singing in Spanish, there are indigenous instruments. And I've always been fascinated by how people always try to put borders up. They always tell people not to cross borders, and they divide. But music is something that has never respected those borders. I'm fascinated by how this song has roots in Africa and then went to Cuba. Then this type of music actually went back to Congo and became a form of music called the Congolese rumba, and then that music evolved and went back to Colombia, and that music evolved and became a Colombian form of music called champeta. In my mind, it's similar to something I said earlier, that in mathematics you have to appreciate that you cannot put things into separate silos. You can't just be a combinatorialist or just be an algebraist or just a geometer. 
If you really want to understand the full power of mathematics, you have to travel with the mathematics. This resonates with my taste in music. I think if you really want to understand music, you have to appreciate how it travels around the world and celebrate that.

KK: This isn't just a math podcast today. It's also ethnomusicology.

FA: Something like that.

KK: Something about that, you know, rhythms are universal, right? We all feel these things. You can't help yourself. You start hearing this rhythm and you go, yeah, I get this. This is fantastic.

FA: What our listeners cannot see but I can is how everybody was dancing.

KK: Yeah, it's undeniable. Of course, Cuban music is so interesting because it's such a diverse place. So many diverse influences. People think of Cuba as being this closed off place, well that's just because from the United States you can't go there, right?

FA: Right.

KK: Everybody else goes there, and they think it's great. Of course, living in Florida there's a weird relationship with Cuba here, which is a real shame. What an interesting culture. Oh well. Maybe someday, maybe someday. It's just right there, you know? Why can't we go?

EL: Well, thanks a lot. Would you like to share any websites or social media or anything that our listeners can find you on, or any projects you're excited about?

FA: Sure, so I do have a Twitter account. I occasionally tweet about math or music or soccer. I try not to tweet too much about politics, but sometimes I can't help myself. People can find that at @FedericoArdila. That's my Twitter feed. I also have an Instagram feed with the same name. Then if people are interested in the music nerd side of what I do, my DJ collective is called La Pelanga, and we have a website, lapelanga.com. We have Twitter, Instagram, all these things. One thing we do is collect a lot of old records that have traveled from Africa to the Caribbean to Colombia to various different parts. Many of these records are not available digitally, so sometimes we'll just digitize a song and put it up there for people to hear. If people like this kind of music, it might be interesting to visit. And then I have my website. People can Google my name and find information there.

EL: Well thank you so much for joining us.

KK: This has been great fun, Federico.

FA: Thank you so much. This has been really fun.

KK: Take care.


Episode 14 - Laura Taalman

Kevin Knudson: Welcome to My Favorite Theorem. I’m your host, Kevin Knudson, professor of mathematics at the University of Florida. This is my cohost.

Evelyn Lamb: Hi! I’m Evelyn Lamb, a math and science writer in Salt Lake City, Utah. Yeah, things are going well here. I went to the mall the other day, and I was leaving—I had to go to get my computer repaired, and I was in a bad mood and stuff, and I was leaving, and there was just, I walked into the parking lot, there was this beautiful view of this mountain. It’s a mall I don’t normally go to, and these mountains: Wow, it’s amazing that I live here.

KK: Is this the picture you put on Twitter?

EL: Yeah, or Facebook.

KK: Yeah, that is pretty spectacular. Well, I had a haircut today, that’s all I can say. Anyway, let’s get to it. We are very pleased in this episode to welcome Laura Taalman. Laura, do you want to introduce yourself and tell people about yourself?

Laura Taalman: Sure. Hi, thank you for having me on this podcast. I am extremely excited to be on it. Thank you.

EL: We’re glad you’re here.

LT: I’m a math professor at James Madison University, which is in Virginia. I’ve been here since 2000. We don’t have graduate students in our department, we only have undergraduate students. So when I got here, straight out of grad school, I had been studying singular algebraic geometry, and I just could not talk about that with students when we were doing undergraduate research. And I switched to knot theory. I’ve since switched to many things. I seem to switch to a new hat every year or so. My new hat is 3D printing. I’ve been doing a lot with mathematical 3D printing, but I think I’m still wearing that math jacket while I’m wearing the 3D printing hat.

EL: That’s a very exciting costume.

LT: Yes, it’s a very exciting costume, that’s true.

KK: And for a while you were the mathematician in residence at the National Museum of Mathematics, right?

LT: MoMath, that’s true. I did a semester at that, and that was the start of me living in New York City for a couple years to solve a two-body problem. I spent a couple years working in industry in 3D printing there. I just recently, last year, came back to the university. I now have the jacket and hat problem.

KK: Well, that’s better than the two-body problem.

LT: It’s better than not having a jacket or a hat.

KK: That too, right. So actually I was just visiting James Madison a couple of months ago. Laura’s department was very nice. Actually, my wife was visiting, and I was just tagging along, so I crashed their colloquium and just gave one. And everybody was really nice. I really, you know, I went to college at Virginia Tech two hours down the road. I’d never really spent any time in Harrisonburg, but it’s a lovely little town.

LT: It is.

KK: It’s very diverse. I had lunch at an Indonesian place.

EL: Oh wow.

KK: It was fantastic. I can’t get that here, you know.

LT: It’s an amazing place.

KK: It is. I thought it was really great. Anyway, so, you’re going to tell us about your favorite theorem. You told us once beforehand, but I’ve kind of forgotten. I remember, but this is pretty great. So Laura, what’s your favorite theorem?

LT: My favorite theorem comes from my knot theory phase. It’s a theorem in knot theory. I don’t know how much knot theory I should assume before saying what this theorem is, but maybe I should just set it up a little bit.

KK: Yeah, set it up a little bit.

EL: That would be great.

LT: In knot theory, you’re studying, say: you tie a shoelace and you connect the ends, and you do that again with a different piece of string, and you’re wondering if these could possibly be the same knot in disguise, like you could deform one to another. Of course, we don’t study knots in three dimensions like that because no one can draw that. This is, in fact, how I got into 3D printing: trying to print three-dimensional versions of knots so that I could look at their conformations.

KK: Very cool.

LT: But really mathematicians study knots as planar diagrams. You’ve got a diagram of a knot with crossings: over crossings and under crossings, a collection of arcs in the plane with crossings. A very old result in knot theory is that if two of those diagrams represent the same knot secretly (they might look very different), there is a sequence of what are known as Reidemeister moves that gets from one to the other. Reidemeister moves are super simple moves, like putting a twist in a strand or moving one strand over another strand, or moving a strand over or under a crossing, right? Super simple. It’s been proved that that’s sufficient, that’s all you need to change one diagram into any other equivalent diagram.


LT: So my favorite theorem is by Joel Hass and Jeffrey Lagarias, I think is his name. Hass is from UC Davis, and Lagarias is at Michigan. And in 2001, they proved an upper bound for the number of Reidemeister moves that it takes to turn a knot diagram that’s secretly unknotted into basically a circle, the unknot. So they wanted to answer this question.

We know we can, if it’s unknotted, turn it into a circle. The question is how many of these Reidemeister moves are you going to need, and even worse than that, if you start with a diagram that has, like, 10 crossings, you might actually have to increase the number of crossings along the way while simplifying the knot. It’s not necessarily true that the number of crossings will be monotonically decreasing throughout the Reidemeister move process. You might increase the number, you might have to increase the number of crossings by a lot. So this is a nontrivial question of how many Reidemeister moves. So they said, OK, look. We want to find this one constant that will give you an upper bound for any knot that’s trivial to unknot it, the number of Reidemeister moves, and they said that the bound would be of the form 2 times [ed note: Taalman misspoke here and meant to the power instead of times, as is clear from the rest of the conversation] a constant times n, where n is the number of crossings. So if it’s a 10-crossing knot, it would be like 2^10 times this constant, right?


LT: I was playing around with some numbers, so for example, if you had a 6-crossing knot, right, and if the constant happened to be 10, this would be 2^60, which is over a quintillion.

KK: That’s a lot.

LT: If that constant were 10, and your knot started out with just 6 crossings, that’s a big number. But that is not the bound that they found.

KK: It’s not 10.

LT: Their theorem, my favorite theorem, is that they came up with a bound on the maximum number of Reidemeister moves that would be needed to unknot a trivial knot: that bound is 2^(10^11 · n). The constant is 10^11, so 2 raised to the power of 10^11 times n. So I put this into Wolfram Alpha with n=6. So say you have a 6-crossing knot. It’s not so bad. I put in 2^10million [ed note: Taalman misspoke here and meant hundred billion; 10^7 or 10 million comes up as a bound in a different part of the paper], and then also times 6 in the exponent. I just did this this afternoon, and do you know what Wolfram Alpha said?

KK: It couldn’t do it?

LT: I’ve never seen this. It said nothing.

EL: You broke it?

LT: It didn’t spin and think about it, and it didn’t attempt to say something. It literally just pretended that I did not press the button. This is really a big number.

KK: I’m surprised. You know what it should have done? It should have given you the shrug emoji.

LT: Yeah, that would be great if it had that. That would be perfect. So the reason it’s my favorite theorem, I guess there are a lot of reasons, but the primary reason is: this is ridiculous, right? If you have a 6-crossing knot, there’s no way you’re going to need even a quintillion Reidemeister moves in reality. If I actually give you a 6-crossing knot in reality, you’re not going to need a quintillion Reidemeister moves, let alone this number of silence that Wolfram Alpha can’t even calculate. So to me, it’s just really funny. And I could talk a little more about that. But it’s an important result because it’s the first upper bound, which is great, but also, it’s just, it’s ridiculous.
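[Ed. note: a quick sketch, not from the episode, of why Wolfram Alpha went silent. The number 2^(10^11 · 6) is far too large to write out, but its decimal digit count follows from a one-line logarithm calculation:]

```python
import math

# Hass-Lagarias (2001) bound for unknotting an n-crossing unknot
# diagram: 2^(10^11 * n) Reidemeister moves.
n = 6
exponent = 10**11 * n  # 600,000,000,000

# A number 2^e has floor(e * log10(2)) + 1 decimal digits.
digits = math.floor(exponent * math.log10(2)) + 1
print(f"2^(10^11 * {n}) has {digits:,} decimal digits")
```

[The bound itself has roughly 1.8 × 10^11 digits, so merely printing it would take on the order of 180 gigabytes of text.]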

KK: It’s clearly not sharp. They didn’t cook up an example.

LT: It’s clearly not sharp.

KK: They didn’t cook up an example where they had to use that many moves.

LT: Right, no, they did not. It’s kind of like what happened with the twin prime conjecture, and people online were looking at the largest gap you could guarantee, I don’t know if I’m going to say this right, the largest gap.

KK: Right, it was 70 million.

LT: And eventually primes would have to appear with that gap. That gap started out being huge, I don’t remember what it was, but it was really big, and it ended up getting better and better and better and better.

KK: Right.

LT: So this is like the first shot in that game for Reidemeister moves, is 2 to the 10 to the 11th times the number of crossings.

KK: Has anybody made that better yet?

LT: They have. So that was in 2001, this exponential upper bound with very large exponent, and in 2011, two different mathematicians, Coward and Lackenby, I think, proved a different bound that involved an exponential tower. That gives you an idea of just how big that first bound was, if this bound is an exponential tower.

EL: And it’s better?

LT: Actually, let me say that slightly differently because this is not necessarily better. Their result was actually a little bit different. Their result wasn’t taking a knot to the unknot. It was taking any knot to any other knot it was equivalent to.



LT: This could well be worse, actually. And to tell you the truth, I was not entirely certain how to type this number into Mathematica, into Wolfram Alpha. It could be a lot worse. Their bound for the maximum number of Reidemeister moves that you need to take one knot to another knot that it’s ambient isotopy equivalent to in 3-space, if you had that knot. I’ve got to get my piece of paper to look at this. Their number is what they call exp^(c^n)(n), so the n is the sum of the crossing numbers of the two knots. The c^n: c is some constant to be determined. It could be laughably large, right? And what exp means is that it’s 2^n iterated that many times. So exp^k, or exp^(k)(n), would be 2^n iterated k times.

KK: Right. 2 to the 2 to the 2 to the…

LT: …2 to the n. So this number is 2 to the 2 to the 2 to the…tower, and the height of this tower is c^n, where n is the number of crossings, and then there’s an n at the top. And the number c is 10 to the one millionth power.

KK: Wow.

EL: Wow. So this is bad news.

LT: This is very bad. So the tower is 10 to the one million high. I’m sure this is worse than the other one.

KK: It’s got to be worse.

LT: They didn’t try at all to make that low. I did a small example: what if the tower was only length 2 and there was a 6 on the top, so 2^2^6. And you’re doing your brackets from the top down, so 2 to the quantity 2^6.

EL: Right.

LT: That is over a quintillion.
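[Ed. note: the iterated-exponential convention from the conversation is easy to mimic in code. The hypothetical helper below, not from the episode, builds a tower of 2s of a given height and shows that even Taalman's tiny example, height 2 with a 6 on top, already passes a quintillion:]

```python
def tower(height, top):
    """Tower of 2s: tower(k, n) = 2^(2^(...^(2^n))) with k twos."""
    value = top
    for _ in range(height):
        value = 2 ** value
    return value

# Taalman's small example: a tower of height 2 with a 6 on top,
# i.e. 2^(2^6) = 2^64.
small = tower(2, 6)
print(small)  # 18446744073709551616, already over a quintillion (10^18)
```

[The Coward-Lackenby tower has height c^n with c = 10 to the one millionth power, so even writing down the height is hopeless, let alone evaluating the tower.]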

KK: Sure.

EL: Yeah, like this is Graham’s number stuff.

LT: Yeah, Graham’s number, all that stuff with the arrows. All that stuff with the arrows.

EL: Yeah, basically you can’t even tell someone how big Graham’s number is because you don’t have the words to describe the bigness of this number.

LT: Yeah, and even with a tower of 2, I’m getting a quintillion. Their length is 10 to the one million. I already don’t understand what 10 to the one million is.

KK: No. You know this thing where you pack the known universe with protons, do you know how many there’d be?

LT: No. Not many?

KK: 10^126.

LT: Oh my God.

KK: So 10 to the one million. You’ve surely seen Powers of 10, this old Eames movie, right?

LT: Yeah, yeah.

KK: The known universe just isn’t that big, you know? It’s what, 10 to the 30th across or whatever. It’s nothing.

EL: You definitely can’t come up with an example that needs this because the heat death of the universe would occur well before we proved this example needed this many steps.

KK: Yeah.

LT: I think that these mathematicians know how funny their result is. It’s definitely, it’s not just funny. The proofs are very complicated and have to do with piecewise linear 3-manifolds and all this. I don’t understand the proofs. This is very sophisticated, so I’m not besmirching them by saying it’s funny. But I think they understand how crazy this sounds. They’ll say things like, this Coward-Lackenby paper has a line in there like, notice that this solves the problem of figuring out if two knots are Reidemeister equivalent because all you have to do is look at every sequence of Reidemeister moves of that length, look at them all, and then see if any two of them turn out to be the same knot. Boom, you’ve solved your problem.

KK: All you have to do.

LT: All you have to do! Problem solved.

EL: Yes.

LT: Or that, so earlier you asked if the result has been improved upon, and it has, but that wasn’t the reference I wanted to say for that. It has been improved just three years ago by Lackenby, one of the authors of that other result, and their result is polynomial. They found a polynomial bound, not an exponential bound. It’s much better. They found that if n is the number of crossings to go from a trivial knot to the trivial circle, this is back to that problem, it’s (236 times n) to the 11th power.


LT: It’s not so bad.

KK: Right.

LT: Not so bad. It is actually pretty bad. But it’s something that Wolfram could calculate. So I did it for example with n equals 3. So say you have a 3-crossing trivial knot. What’s the most number of Reidemeister moves that you would need according to this bound to unknot it? That would be (236 times 3) to the 11th power. That is about 2 times 10^31, which is 20 nonillion.

KK: Right, OK.

LT: 20 nonillion.
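[Ed. note: a quick sketch, not from the episode. Lackenby's polynomial bound is (236n)^11, and unlike the exponential bounds it is small enough for any computer algebra system, or plain Python, to evaluate exactly; the function name below is our own:]

```python
# Lackenby (2014): at most (236n)^11 Reidemeister moves to unknot
# an n-crossing diagram of the unknot.
def lackenby_bound(n):
    return (236 * n) ** 11

b = lackenby_bound(3)
print(b)                 # the exact integer
print(f"about {b:.2e}")  # roughly 2.2e31, i.e. about 20 nonillion
```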

EL: So this isn’t great.

LT: But it had a name! Expressible in scientific notation. Positive change.

EL: It didn’t cause Wolfram Alpha to run away in fright.

LT: No. I think this is the best one so far, this 2014 result by Lackenby. I think it’s the best one.

EL: Well that’s interesting, because you know, just for the example of 3, if you try, like, 10 Reidemeister moves, that’s gotta be it. It feels like that has to be so much lower. It’ll be interesting to see if it’s possible to shrink this down more to meet some more realistic bound.

LT: Honestly, 3 is a ridiculous example. I used it because it was the smallest, but you’re right. If you think about it, there really aren’t that many three-crossing diagrams that one can draw.

KK: Right.

LT: Of the ones that are trivial, I’m sure you could find a path of Reidemeister moves. This result isn’t made for low-crossing knots, really, I think. Or at least not three. But you’re right, it’s got to be way better than this.

KK: This is where mathematicians and computer scientists are never going to see eye to eye on something. A computer scientist will look at this and say, that’s ridiculous. You have not solved the problem.

LT: I agree. It’s not good enough. They did have one result in this 2014 paper. Remember I said that you may have to increase the number of crossings? Well back in the original 2001 paper, Hass and Lagarias were like, hey, here’s a fun corollary: you only have to increase the number of crossings by 2 to the power of 10 to the 11th times n at most, because you can’t have more crossings than what it would take for the number of Reidemeister moves. So that’s their corollary. In 2014, that bound is super significantly improved. They just say it’s (7n) squared. That’s not bad at all. They’re saying it doesn’t have to get worse than that on your way to making it the unknot.

KK: You might have to go up and down and up and down and up and down, right?

LT: Right. I guess then they’re saying the most it would ever have to go up is to that.

KK: Yeah.

LT: So things are getting better.

KK: All the time getting better. So part of the fun of this podcast, aside from just learning about absurd numbers, is that we ask our guests to pair their theorem with something. So what have you chosen to pair your theorem with?

LT: That one is actually harder to answer than what is your favorite theorem.

KK: Sure.

LT: I could answer that right away. But I’ve thought about it, and I’ve decided that the best thing to pair it with is champagne.


LT: Here’s why. First of all, you should really celebrate that a first upper bound has been found.

EL: Yeah.

LT: Especially in terms of when you have undergraduates who are doing research, this kind of meta question of what does it mean to have a first upper bound, a completely non-practical upper bound. The fact that that’s worthy of celebration is something I want them to know. It doesn’t have to be practical. The theory of having an upper bound is very important.

KK: Right.

LT: So champagne is to celebrate, but it’s also to get you over these numbers. I don’t know, maybe it represents how you feel when you’re thinking about the numbers, or what you need to do when you have been thinking about the numbers, is you need a stiff drink. It can be for both.

EL: And champagne is kind of funny, too. It’s got the funny little bubbles, and you’re always happy when you have it. I think it goes very well with the spirit. It’s not practical either.

KK: No.

LT: Yeah.

EL: As drinks go, it’s one of the less practical ones.

KK: And if you get cheap champagne, it will give you a headache, just like these big numbers.

LT: It’s very serious. If you had a tower of exponential champagne, that would be a serious problem for you.

KK: Yeah.

EL: Yeah.

KK: Oh wow. We always like to give our guests a chance to plug anything they’re working on. You tweet a lot. I enjoy it.

LT: I do tweet a lot. If you want to find me online, I’m usually known as mathgrrl, like riot grrl but for math. If you’re interested in 3D printable mathematical designs, I have a ton of free math designs on Thingiverse under that name, and I also have a shop on Shapeways which sells great 3D-printed mathematical jewelry and stuff.

EL: It’s all really pretty. You also have a blog, is Hacktastic still going?

LT: Hacktastic is still there. A lot of it has been taken over by these tutorials I’ve been writing about 3D printing with a million different types of software. If you go to mathgrrl.com, Hacktastic is one of the tabs on that.

EL: I like that one.

KK: All over the internet.

EL: Yeah. She will definitely bring some joy to your life on Twitter and on 3D printing worlds. Yeah, thank you so much for being on here. I’m definitely going to look up these papers and try to conceptualize these numbers a little bit.

LT: These are very big numbers. Thank you so much. It’s been really fun talking about this, and thank you for asking what my favorite theorem is.

KK: Thanks, Laura.


Episode 13 - Patrick Honner

Evelyn Lamb: Welcome to My Favorite Theorem. I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City. And this is my cohost.

Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: Pretty good. It’s hot here, but it gets cool enough at night that it’s survivable. It’s not too bad.

KK: It’s just hot here. It’s awful.

EL: Yeah, there’s really something about that dry heat. I lived in Houston for a while. It’s different here. So on each episode we invite someone on to tell us about their favorite theorem, and today we’re delighted to have Patrick Honner. Hey! Can you tell us a little bit about yourself?

Patrick Honner: Hi, I’m happy to be here. Great to see you, Evelyn and Kevin. I’m in Brooklyn. It’s hot and muggy here. It’s never survivable in New York. I’ve got that going for me. I’m really excited to be here. I’m a high school math teacher. I teach at Brooklyn Technical High School. I studied math long ago, and I’m excited to talk about my favorite theorem today.

KK: Cool.

EL: Great.

KK: So what do you have for us?

PH: In thinking about the prompt of what my favorite theorem was, I guess I came to thinking about it from the perspective of a teacher, of course, because that’s what I’ve been doing for the last almost 20 years. So I was thinking about the kinds of theorems I like to teach, that are fun, that I think are really engaging, that are essential to the courses that I teach. A couple came to mind. I teach calculus occasionally, and I think the intermediate value theorem is probably my favorite theorem in calculus. I feel like the mean value theorem gets all the love in calculus. Everyone thinks that’s the most important, but I really like the intermediate value theorem. I really love De Moivre’s theorem as a connection between complex numbers and geometry and algebra, and a little bit of group theory in there. But what really stuck out when thinking about what my favorite theorem is was Varignon’s theorem.

KK: I had to look this up.

PH: Well I think a lot of people, they know it when you show it to them, but they don’t know the name of it. That’s also part of why I like it. The name is sort of exotic sounding. It transports them to France somehow.

EL: Nice.

PH: Varignon’s theorem is a theorem of Euclidean geometry. It’s not that deep or powerful or exciting, but there’s just something about the way you can interact with it and play with it in class, and the way you prove it and the different directions it goes that really makes it one of my favorite theorems.

KK: Now we’re intrigued.

EL: Yeah. What is this theorem?

PH: Imagine, so Varignon’s theorem is a theorem about quadrilaterals. If you imagine a quadrilateral in the plane, you’ve got the four sides. If you construct the midpoints of each of the four sides, and then connect them in a consistent orientation, so clockwise or counterclockwise, then you will get another quadrilateral. You start with the four sides, take the midpoints and connect them. Now you’ve got another quadrilateral. So if you start with a square, you can imagine those midpoints appearing, and you connect them, then that new quadrilateral would be a square. So you have a square inside of a square. This is a picture I think a lot of people can see.

If you started with a rectangle and you constructed those midpoints, if the rectangle were a non-square rectangle, so longer than it was wide, you can think about it for a moment and maybe draw it, and you’d see a rhombus. Sort of a long, skinny rhombus, depending on the nature of the rectangle. Varignon’s theorem says that regardless of whatever quadrilateral you start with, the quadrilateral you form from those midpoints will be a parallelogram. And I just think that this is so cool.

KK: It’s always a parallelogram.

EL: Yeah, that’s really surprising. By every quadrilateral, do you mean only convex ones, or is this for all quadrilaterals?

PH: That’s part of the reason why it’s so much fun to play around with this theorem. It’s true for every quadrilateral, and in fact in some ways, it’s true even for things that aren’t quadrilaterals. In some ways it’s this continual intuition-breaking process with kids when you’re playing around with them. The way you can engage a class with this is you can just tell every student to draw their own quadrilateral and then perform this procedure where they construct the midpoints and connect them. Then you can tell them, ‘Look around. What do you see?’ The first thing the kids see is that everybody drew a square and everybody has a square inscribed, right?

So this is a nice opportunity to confront kids about their mathematical prejudices. Like if you ask them to draw a quadrilateral, they draw a square. If you ask them to draw a triangle, they draw an equilateral triangle. But then there will always be a couple of kids who drew something a little bit more interesting. You can get kids thinking about what all of those things have in common and start looking for a conjecture. You can kind of push and prod them to maybe do some different things. So maybe on the next iteration of this activity, we’ll get some rectangles or some arbitrary, some non-special quadrilaterals. Even after a couple rounds of this, you’ll still see that almost all the quadrilaterals drawn are convex. Then you can start pushing the kids to see if they understand that there’s another way to draw a quadrilateral that might pose a problem for Varignon’s theorem. It’s so cool that when you get to that convex one, kids never believe that it’ll still form a parallelogram.

EL: In the non-convex one.

PH: That’s right, the concave one, the non-convex. I always get the two words mixed up. Maybe that’s why the kids are so confused. Yeah, the kids will never believe in the non-convex case that it’ll still form a parallelogram. Wow, I can’t believe that.

KK: It seems like, I looked this up, even if the thing isn’t really a quadrilateral, if you take four points in the plane and draw two triangles that meet where the lines cross, it still works, right?

PH: Yeah. There’s yet another level to go with this. Now you’ve got the kids like, wait, so for concave, this works? It’s kind of mind-blowing. Then you can start messing around with their idea of what a quadrilateral actually is. If you show them, well, what if I drew a complex quadrilateral. I don’t use that terminology right away, but just this idea of connecting the vertices in such a way that two sides appear to cross. It can’t possibly work there, can it? The kids don’t know what to think at this point. They think something weird is going on. Amazingly, even if the quadrilateral crosses itself like that, as long as it’s the non-degenerate case, the four points will still make a parallelogram. It’s really remarkable.

KK: Is there a slick proof of this, or is it one of these crazy things, and you have to construct and construct and construct, and before you know it you’ve lost track of what you’re doing?

PH: No, that’s another reason why this is such a great high school activity. The proof is really accessible. In fact there are several proofs. But before we talk about my favorite proof of my favorite theorem, there’s another case, another level you can go with Varignon’s theorem. Often I’ll leave this with students as something to think about, a homework problem or something like that. Varignon’s theorem actually works even if the four points don’t form a quadrilateral, so if the four points aren’t coplanar, say. This process of connecting the midpoints will still form a parallelogram. It’s amazing just that the four midpoints are coplanar. You wouldn’t necessarily expect that the four midpoints would be in the same plane if the four starting points aren’t in the same plane. Moreover, those four midpoints form a parallelogram. It’s such an amazing thing.

EL: What is your favorite proof, then?

PH: My favorite proof of Varignon’s theorem is something that connects to a couple of key ideas that we routinely explore in high school geometry. The first is one of the first important theorems about triangles that we prove, that’s simple but has some power. It’s that if you connect the midpoints of two sides of a triangle, that line segment is parallel to the third side. And it’s also half the length. But the parallelism is important.

The other idea, and I think this is one of the most important ideas that I try to emphasize with students across courses, is the idea of transitivity, of equality or congruence, or in this case parallelism. The nice proof of Varignon’s theorem is that you imagine all the quadrilaterals and midpoints. And you draw one diagonal. You just think about one diagonal. Now if you cover up half of the quadrilateral, you’ve got a triangle. The line segment connecting those two midpoints is parallel to that diagonal because that’s just that triangle theorem. Now if you cover up the other half of the quadrilateral, you have a second triangle. And that segment is parallel to the diagonal. So both of those line segments are parallel to that diagonal, and therefore by transitivity, they’re parallel to each other, and now you have that the two opposite sides are parallel. And the exact same argument works for the other sides using the other diagonal.
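[Ed. note: Varignon's theorem is easy to sanity-check numerically, including the surprising non-coplanar case Honner mentions. The quick sketch below, not from the episode, takes random points in 3-space and verifies that opposite sides of the midpoint quadrilateral are equal as vectors, which is exactly the parallelogram condition:]

```python
import random

def midpoint(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

def vector(p, q):
    return tuple(b - a for a, b in zip(p, q))

random.seed(0)
for _ in range(1000):
    # Four random integer points in 3-space -- generally NOT coplanar.
    A, B, C, D = [tuple(random.randint(-100, 100) for _ in range(3))
                  for _ in range(4)]
    M1, M2, M3, M4 = midpoint(A, B), midpoint(B, C), midpoint(C, D), midpoint(D, A)
    # Opposite sides of the midpoint quadrilateral are equal vectors:
    # M2 - M1 = (C - A)/2 = M3 - M4, and M3 - M2 = (D - B)/2 = M4 - M1.
    assert vector(M1, M2) == vector(M4, M3)
    assert vector(M2, M3) == vector(M1, M4)
print("Varignon holds for 1000 random (mostly non-planar) quadrilaterals")
```

[This also mirrors the diagonal proof: each opposite pair of sides equals half a diagonal of the original quadrilateral, which is why they come out parallel.]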

KK: I like that. My first instinct would be to do some sort of vector analysis. You realize all the sides as vectors and then try to add them up and show that they’re parallel or something.

PH: Yeah, and in some of the courses I teach, I do some work with vectors, and this is definitely something we do. We explore that proof using vectors, or coordinate geometry. Maybe later in the year we’ll do some work with coordinate geometry. We can prove it that way too.

EL: Yeah, I think I would immediately go to coordinates. Of course, I would have assumed they were coplanar in the first place. If you tell me it’s a quadrilateral, yeah, it’s going to be there in the plane and not in 3-space.

PH: I love coordinate geometry, and I definitely have that instinct to run to coordinates when I want to prove something. One of those things you have to be careful of in the high school class is making sure they understand all the assumptions that underlie the use of coordinates, and understanding the nature of an arbitrary figure. Going back to one of the first things I said, if you ask kids to draw a quadrilateral, they’re going to draw a square, or if you ask them to draw an arbitrary quadrilateral, they’re often going to draw a square or rectangle. If you ask them to draw an arbitrary quadrilateral in the plane, they might make assumptions about where those coordinates are likely to be.

EL: Yeah.

KK: Your students are lucky to have you.

PH: That’s what I tell them!

KK: Really, to give this much thought to something like this and show all these different perspectives and how you might come at it in all these different ways, my high school geometry class, I mean I had a fine teacher, but we never saw anything with this kind of sophistication at all.

PH: It’s fun. I would like to present it as if I sat around and thought deeply about it and had this really thoughtful approach to it, but it just kind of happened. I think, again, that’s why this is one of my favorite theorems. You can just put this in front of students and have them play and just run with this. It’ll just go in so many different directions.

EL: So what have you chosen to pair with this theorem? What do you enjoy experiencing along with the glory of this theorem?

PH: This was a tricky one. I feel like when I think of Varignon’s theorem, really focusing on the name, it really transports me to France. I feel like it’s a hearty stew, like boeuf Varignon or something like that. I think you need some crusty bread and a glass of red wine with Varignon’s theorem. Not my students.

EL: Crusty bread and grape juice for them. Yeah, I just got back from living in France for six months, and actually I didn’t have any boeuf bourguignon, or Varignon, while I was there, but I did enjoy quite a few things with crusty bread and a glass of red wine. I highly recommend it.

KK: This has been great fun.

PH: Yeah, I’ve enjoyed this. You seem to enjoy talking about this more than my students, so this was great for me.

KK: It helps to be talking to a couple of mathematicians, yeah.

EL: So, we like to let guests plug websites or anything. So would you like to tell people about your blog or any things you’re involved in that you’d like to share?

PH: Yeah, sure. I blog, less frequently now than I used to, but still pretty regularly. I blog at mrhonner.com. You can generally find out about what I’m doing at my personal website, patrickhonner.com. I’m pretty active on Twitter, @mrhonner.

KK: Lots of good stuff on Patrick’s blog, especially after the Regents exams. You have a lot to say.

PH: Not everybody thinks it’s good stuff. I’m glad some people do.

KK: I don’t live in New York. It’s fine with me.

EL: Yeah, he has a series kind of taking apart some of the worst questions on the New York Regents exams for math. It can be a little frustrating.

PH: We just wrapped up Regents season here. Let’s just say there are some posts in the works about what we’re facing. You know, I enjoy it. It always sparks interesting mathematical conversations. My goal is just to raise awareness about the toll of these tests and how sometimes it seems like not enough attention is given to making sure these tests are of high quality and are valid.

KK: I don’t think it’s just a problem in New York, either.

PH: It is not just a problem in New York.

KK: Well thanks for joining us, Patrick. This was really great. I learned something today.

EL: Yeah, me too.

PH: It was my pleasure. Thanks for having me. Thanks for giving me an opportunity to think about my favorite theorem and come on and talk about it. And maybe Varignon’s theorem will appear in a couple more geometry classes next year because of it.

KK: Let’s hope.

EL: Yeah, I hope so.

KK: Take care.

PH: Thanks. Bye.

Episode 12 - Candice Price

Kevin Knudson: Welcome to My Favorite Theorem. I am Kevin Knudson, professor of mathematics at the University of Florida, and I am joined by my cohost.

Evelyn Lamb: Hi. I’m Evelyn Lamb. I’m a math and science writer in Salt Lake City, Utah.

KK: How’s it going?

EL: Yeah, it’s going okay. It’s a bit smoky here from the fires in the rest of the west. A few in Utah, but I think we’re getting a lot from Montana and Oregon and Washington, too. You can’t see the mountains, which is a little sad. One of the nice things about living here.

KK: Yeah. Well, Hurricane Irma is bearing down on Florida. I haven’t been to the grocery store yet, but apparently we’re out of water in town. So I might have waited a couple days too late.

EL: Fill up those bathtubs, I guess.

KK: I guess. I don’t know. I’m dubious. You know, I lived in Mississippi when Katrina happened, and the eye came right over where we lived, and we never even lost DirecTV. I’m trying not to be cavalier, but we’ll see. Fingers crossed. It’s going to be bad news in south Florida, for sure. I really hope everybody’s OK.

EL: Yeah, definitely.

KK: Anyway.

EL: Fire, brimstone, and water recently.

KK: Anyway, we’re not here to talk about that. We’re here to talk about math. Today we’re thrilled to have Candice Price with us. Candice, want to say hi?

Candice Price: Hi everyone!

KK: Tell us a little bit about yourself.

CP: Sure. I’m currently an assistant professor of mathematics at the University of San Diego. I got my Ph.D. at the University of Iowa, and I study DNA topology, so knot theory applied to DNA, applied to biology.

EL: So that’s knot with a ‘k.’

CP: Yeah, knot.

KK: San Diego is a big switch from Iowa.

CP: Yeah, it is. In fact, I had a stopover in New York and a stopover in Texas before getting here. All over.

EL: You’ve really experienced a lot of different climates and types of people and places.

CP: Yeah. American culture, really.

KK: All right. You’ve told us. Evelyn and I know what your favorite theorem is, and I actually had to look this up, and I’m intrigued. So, Candice, what’s your favorite theorem?

CP: Sure. My favorite theorem is actually John H. Conway’s basic theorem on rational tangles. It’s a really cool theorem. What Conway states, or shows, is that there’s a one-to-one correspondence between the extended rational numbers, that is, the rational numbers together with infinity, and what are known as rational tangles. What a rational tangle basically is: you take a 3-ball, an open ball, you put strings inside the ball and attach the strings to the boundary of the ball, so they’re loose in there but fixed at the ends. Then you add twists to the strings inside, in different directions, maybe the direction of west and the direction of south. If you keep a count of how many twists you’ve added, first going west, then going south, then west, then south, through all the different combinations you can do, you can actually calculate a rational number, and that rational number is attributed to that tangle, to that picture, that three-dimensional object.

It’s pretty cool because as you can guess, these tangles can get very complicated, but if I gave you a rational number, you could draw that tangle. And you can say that any tangle that has that same rational number, I should be able to just maneuver the strings inside the ball to look like the other tangles. So it’s actually pretty cool to say that something so complicated can just be denoted by fractions.

EL: Yeah. So how did you encounter this theorem? I encountered it from John Conway at this IAS program for women in math one year, and I don’t think that’s where we met. I don’t remember if you were there.

CP: I don’t think so.

EL: Yeah, I remember he did this demonstration. And of course he’s a very engaging, funny speaker. So yeah, how did you encounter it?

CP: It’s pretty cool, so he has this great video, the rational tangle dance. So it’s fun to show that. I started my graduate work as a master’s student at San Francisco State University, and I had learned a little bit about knot theory (with a ‘k’) as an undergrad. And so when I started my master’s I was introduced to Mariel Vazquez, who studies DNA topology. So she actually uses rational tangles in her research. That was the first time I had even heard that you could do math and biology together, which is a fascinating idea. She had introduced to me the idea of a rational tangle and showed me the theorem, and I read up on the proof, and it’s fascinating and amazing that those two are connected in that way, so that was the first time I saw it.

KK: Since I hadn’t heard of this theorem before, I looked it up, and I found this really cool classroom activity to do with elementary school kids. You take four kids, and you hand them two ropes. You allow them to do twists, where the students on one end of the ropes interchange, and there’s also a rotation move.

CP: Yeah.

KK: And then when you’re done you get a rational number, and it leads students through these explorations of, well, what does a twist do to an integer? It adds one. The rotate is a -1/x kind of thing.

CP: Right.

KK: So I was immediately intrigued. This really would be fun. Not just for middle school kids, maybe my calculus students would like it. Maybe I could find a way to make it relevant to my undergrads. I thought, what great fun.

CP: Yeah. I think it’s even a cool way to show students that with just a basic mathematical entity, fractions or rational numbers, you can perform higher mathematics. It’s pretty cool.
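
The arithmetic of the classroom activity Kevin describes can be sketched in a few lines of Python. The convention below (a twist adds 1, a rotation sends x to -1/x, starting from the zero tangle) follows the discussion above; sign conventions vary between write-ups, and the function name and move letters are my own.

```python
from fractions import Fraction

def tangle_fraction(moves, start=Fraction(0)):
    """Compute the fraction of a rational tangle from a word of moves.

    'T' (twist) adds 1 to the current fraction; 'R' (rotate) sends x to -1/x.
    Starting from the zero tangle, every rational tangle can be reached.
    """
    x = start
    for move in moves:
        if move == "T":
            x = x + 1
        elif move == "R":
            if x == 0:
                # Rotating the zero tangle gives the infinity tangle,
                # which Fraction can't represent; stop here in this sketch.
                raise ValueError("rotating the zero tangle gives infinity")
            x = Fraction(-1) / x
        else:
            raise ValueError(f"unknown move: {move}")
    return x

# Three twists, one rotation, two more twists:
# 0 -> 1 -> 2 -> 3 -> -1/3 -> 2/3 -> 5/3
print(tangle_fraction("TTTRTT"))  # -> 5/3
```

By Conway's theorem, any two tangle words that evaluate to the same fraction describe tangles that can be maneuvered into each other inside the ball.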

KK: This sort of begs the question: are there non-rational tangles? There must be.

CP: Yes there are! It categorizes these rational tangles, but there is not yet a categorization for non-rational tangles. There are two types. One is called prime, and one is called locally knotted. So the idea of locally knotted is that one of the strands just has a knot on it. A knot is exactly what you think about where you have a knot in your shoestring. Then prime, which is great, is all of the tangles that are not rational and not locally knotted. So it’s this space where we’ve dumped the rest of the tangles.

KK: That’s sort of unfortunate.

CP: Yeah, especially the choice of words.

KK: You would think that the primes would be a subset, somehow, of the rational tangles.

CP: You would hope.

EL: So how do these rational tangles show up in DNA topology?

CP: That’s a great question. So your DNA, you can think of as long, thin strings. That’s how I think about it. And it can wrap around itself, and in fact your DNA is naturally coiled around itself. That’s where that twisting action comes, so you have these two strings, and each string, we know, is a double helix. But I don’t care about the helical twist. I just care about how the DNA wraps around itself. These two strings can wind around, just based on packing issues, or a protein can come about and add these twists to it, and naturally how it just twists around. Visually, it looks like what is happening with rational tangles. Visually, the example that Kevin was mentioning, that we have the students with the two ropes, and they’re sort of twisting the ropes around, that’s what your DNA is doing. It turns into a great model, visually and topologically, of your DNA.

KK: Very cool.

CP: I like it.

KK: Wait, where does infinity come from, which one is that? It’s the inverse of 0 somehow, so you rotate the 0 strand?

CP: Yes, perfect. Very good.

KK: So you change your point of view, like when I’m proving the mean value theorem in calculus, I just say, well, it’s Rolle’s theorem as Forrest Gump would look at it, how he tilts his head.

CP: Right. I’m teaching calculus. I might have to use that. That’s good. I mean, hopefully they’ll know who Forrest Gump is.

KK: Well, right. You’re sort of dating yourself.

CP: That’s also a fun conversation to have with them.

KK: Sure. So another fun conversation on this podcast is the pairing. We ask our guests to pair their theorem with something. What have you chosen to pair Conway’s theorem with?

CP: So I thought a lot about this. So being in California, right, what I paired this with is a Neapolitan shake from In n Out burger. And the reason for that is, you’ve sort of taken these three different flavors, equally delicious on their own, right, rational numbers, topology, and DNA, and you put them together in this really beautiful, delicious shake. So the Neapolitan shake from In n Out burger is probably my favorite dessert, so for me, it’s a good pairing with Conway’s rational tangle theorem.

KK: I’ve only eaten at In n Out once in my life, sadly, and I didn’t have that shake, but I’m trying to picture this. So they must not mix it up too hard.

CP: They don’t, not too hard. So there’s a possibility of just getting strawberry, just getting vanilla, just getting chocolate, but then you can at some point get all three flavors together, and it’s pretty amazing.

KK: So I can imagine if you mix it too much, it would just be, like, tan. It would just be this weird color.

CP: Maybe not as delicious looking as it is tasting.

KK: That’s an interesting idea.

CP: It’s pretty cool.

KK: So we also like to give our guests a chance to plug anything they’re working on. Talk about your blog, or anything going on.

CP: Sure. I am always doing a lot of things. I am hoping I can take this time to plug a website we have—we being myself, Shelby Wilson, Raegan Higgins, and Erica Graham—called Mathematically Gifted and Black, where every day in February we showcase or spotlight a contemporary black mathematician and their contributions to mathematics, and we’re working on that now. We’ll have an article in the AMS Notices coming up in February. The website is up now so you can see it. We launched in February 2017. It’s a great website. We’re really proud of it.

EL: Yeah. Last year it was a lot of fun to see who was going to be coming on the little calendar each time and read a little bit about their work. You guys did a really nice job with that.

CP: Thanks. We’re very proud, and I think the AMS will put a couple of posters around the website as well.

KK: Great. Well, Candice, thanks for joining us.

CP: Thank you.

KK: This has been good fun. I like learning new theorems. Thanks again.

CP: Yeah, of course. Thank you. I enjoyed it.


Episode 11 - Jeanne Nielsen Clelland

Kevin Knudson: Welcome to My Favorite Theorem. I’m Kevin Knudson, professor of mathematics at the University of Florida. I’m flying solo in this episode. I’m at the Geometry in Gerrymandering workshop at Tufts University, sponsored by the Metric Geometry, what is it called, Metric Geometry and Gerrymandering Group, MGGG. It’s been a fantastic week. I’m without my cohost Evelyn Lamb in this episode because I’m on location, and I’m currently sitting in the lobby of my bed and breakfast with my very old friend, not old as in age, just going way back, friend, Jeanne Nielsen Clelland.

Jeanne Clelland: Hi Kevin. Thanks for having me.

KK: So you’re at the University of Colorado, yes?

JC: University of Colorado at Boulder, yes.

KK: Tell everyone about yourself.

JC: Well, as you said, we’re old friends, going all the way back to grad school.

KK: Indeed. Let’s not say how long.

JC: Let’s not say how long. That’s a good idea. We went to graduate school together. My area is differential geometry and applications of geometry to differential equations. I’m a professor at the University of Colorado at Boulder, and I’m also really enjoying this gerrymandering conference, and I’m really happy to be here.

KK: Let’s see if we can solve that problem. Although, as we learned today, it appears to be NP-hard.

JC: Right.

KK: That shouldn’t be surprising in some sense. Anyway, hey, let’s put math to work for democracy. Whether we can solve the problem or not, maybe we can make it better. So I know your favorite theorem, but why don’t you tell our listeners. What’s your favorite theorem?

JC: My favorite theorem is the Gauss-Bonnet theorem.

KK: That’s awesome because if anybody’s gone to our Facebook page, My Favorite Theorem, or our Twitter feed, @myfavethm, the banner picture, the theorem stated there is the Gauss-Bonnet theorem. That’s accidental. I just thought the statement looked pretty.

JC: Yeah, and when I first looked at your page, I saw that. And I thought, well, I guess my favorite theorem is already taken since it’s your banner page, so I was really excited to hear that I could talk about it.

KK: No, no, no. In fact, I was doing one last week, and the person mentioned they might do Gauss-Bonnet, and I said no, no, no. I have an expert on Gauss-Bonnet who’s going to do it for us. So why don’t you tell us what Gauss-Bonnet is?

JC: OK. So Gauss-Bonnet is about a relationship between, so it’s in differential geometry. It comes from the geometry of surfaces, and you can start with surfaces in 3-dimensional space that are easy to visualize. And there are several notions of curvature for surfaces. One of these notions is called the Gauss curvature, and roughly it measures whether a surface is bowl-shaped or saddle-shaped. So if the Gauss curvature is positive, then you think the surface looks more like a bowl, like a sphere is the prototypical example of positive Gauss curvature. If the Gauss curvature is negative, then your surface is shaped more like a saddle, and if the Gauss curvature is zero, then you think your surface, well the prototypical example is a plane, a surface that’s flat, but in fact this is a notion that is metrically invariant, which means if you take a surface and bend it without stretching it, you won’t change the Gauss curvature.

JC: So for instance I could take a flat piece of paper and wrap it up into a cylinder.

KK: Yes.

JC: And since that doesn’t change how I measure distance, at least small distances on that piece of paper, a cylinder also has Gauss curvature zero.

KK: So this is a global condition?

JC: No, it’s local.

KK: Right.

JC: It’s a function on the surface, so at every point you can talk about the Gauss curvature at a point. So of course the examples I’ve given you, the sphere, the plane, those are surfaces where the Gauss curvature is constant, but on most surfaces this is a function, it varies from point to point.

KK: Right, so a donut, a torus, on the inside it would be negative, right?

JC: Right.

KK: But on the outside,

JC: That’s exactly right, and that’s a great example. We’re going to come back to the example of the torus.

KK: Good.

JC: So at the other extreme for surface, particularly for compact surfaces, you have topology, which is your area. And there’s a fundamental invariant of surfaces called the Euler characteristic. And the way you can compute this is really fun. You draw a graph, and the mathematical notion of a graph is basically you have points, which are called vertices, you have edges joining your vertices, and then you have regions enclosed by these edges, which are called faces.

KK: Yes.

JC: And if you take a surface, you can draw a graph on it any way you like. You count the number of vertices V, the number of edges E, and the number of faces F. You compute the number V-E+F, and no matter how you drew your graph, that number will be the same for any graph on a given surface.

KK: Which is remarkable enough.

JC: That is remarkable enough, right, that’s hugely remarkable. That’s a very famous theorem that makes this number a topological invariant. So for instance the Euler characteristic of a sphere is 2, and the Euler characteristic of a donut is zero. If you were to take, say, a donut with multiple holes (my son really loves these things called two-tone knots, which are donuts with two holes), a two-tone has Euler characteristic of -2, and generally the more holes you add, the more negative the Euler characteristic.

KK: Right, so the formula is 2 minus two times the number of holes, or 2-2g.

JC: Yes, and that’s for a compact surface.

KK: Compact surfaces.
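
The invariance Jeanne describes is easy to check on small examples. Here is a minimal sketch in Python; the example graphs (the cube's skeleton drawn on a sphere, and a single square with opposite edges glued to make a torus) are standard ones, not from the episode.

```python
def euler_characteristic(V, E, F):
    """V - E + F for a graph drawn on a surface, with F counting the
    regions the graph cuts the surface into."""
    return V - E + F

# The cube's skeleton drawn on a sphere: 8 vertices, 12 edges, 6 faces.
print(euler_characteristic(8, 12, 6))  # -> 2, the sphere

# A tetrahedron's skeleton, a completely different graph on the sphere:
print(euler_characteristic(4, 6, 4))   # -> 2 again, same surface

# A torus built from one square with opposite edges identified:
# all 4 corners become 1 vertex, the 4 edges become 2, and there is 1 face.
print(euler_characteristic(1, 2, 1))   # -> 0, the donut
```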

JC: And it gets more complicated for non-compact. So the Gauss-Bonnet theorem in its simplest form, and let me just state it for compact surfaces, so I’m not worried about boundary, it says if you take the Gauss curvature, which is this function, and you integrate that function over the surface, the number that you get is 2π times the Euler characteristic.

KK: This blew my mind the first time I saw it.

JC: This is an incredible relationship, a very surprising relationship between geometry and topology. So for instance, if you take your surface and you wiggle it, you bend it, you can change that Gauss curvature a lot.

KK: Sure.

JC: You can introduce all sorts of wiggles in it from point to point. What this theorem says is that however you do that, all those wiggles have to cancel out because the integral of that function does not change if you wiggle the surface. It’s this absolutely incredible fact.

KK: So for example take a sphere. So we would get 4π.

JC: 4π.

KK: The unit sphere has constant Gauss curvature 1. I guess, can you change that? You can, right?

JC: Sure!

KK: But if you maybe stretch it into an ellipsoid, the curvature is still maybe going to be positive, it’s going to be really steep at the pointy ends but flatter in the middle. So the way I always visualized this was that yeah, you might bend and stretch, which topologists don’t care about, and this integral—and the way we think about integrals is that they’re just big sums, right?—so you increase some of the numbers and decrease some of the numbers, so they’re just canceling out.

JC: Not only that, these numbers are scale invariant. So if you take a big sphere versus a small sphere, the big sphere has more area, but the absolute value of the curvature function is smaller, and those things cancel out. So the integral remains 4π.

KK: Right, so the surface of the Earth, for example, we can’t really see the curvature.

JC: Right.

KK: But it is curved.

JC: It is curved, and the area is so big that the integral of that very small function over that very large area would still be 4π.

KK: Right. So on the donut, right, we’re getting this cancelation. On the inside it’s negative, and it’s going to be 0 in some points, and on the outside it’s positive.

JC: Right. That’s really the amazing thing about the donut. It’s this unique surface where you get zero. So you have this outer part of the donut where the Gauss curvature is positive, the inner part where it’s negative, and no matter what you do to your donut, how irregularly shaped you make it, just the fact that it’s donut shaped means that those regions of positive and negative curvature exactly cancel each other out.
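
That exact cancellation can be verified numerically. A rough sketch in Python, using the standard parametrizations of the round sphere and the torus (assumed here, not spelled out in the episode), with a simple midpoint rule:

```python
import math

def gauss_bonnet_sphere(r, n=400):
    """Midpoint-rule integral of K dA over a round sphere of radius r.

    K = 1/r^2 everywhere and dA = r^2 sin(theta) dtheta dphi, so the
    radius cancels: the answer should be 4*pi for every r.
    """
    dtheta = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta
        K = 1.0 / r**2
        dA = r**2 * math.sin(theta) * dtheta * (2.0 * math.pi)  # phi integrated out
        total += K * dA
    return total

def gauss_bonnet_torus(R, r, n=400):
    """Midpoint-rule integral of K dA over a torus with radii R > r.

    With the standard parametrization, K dA = cos(v) du dv: both R and r
    drop out entirely, and the positive (outer) and negative (inner)
    curvature cancel exactly, giving 0.
    """
    dv = 2.0 * math.pi / n
    total = 0.0
    for j in range(n):
        v = (j + 0.5) * dv
        total += math.cos(v) * dv * (2.0 * math.pi)  # u integrated out
    return total

print(gauss_bonnet_sphere(1.0))      # ~ 12.566 = 4*pi = 2*pi*chi, chi(sphere) = 2
print(gauss_bonnet_sphere(6371.0))   # an Earth-sized sphere: still ~ 4*pi
print(gauss_bonnet_torus(2.0, 1.0))  # ~ 0 = 2*pi*chi, chi(torus) = 0
```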

KK: Wow. Yeah, it’s a remarkable theorem. Great connection between geometry and topology. Do you want to talk about the noncompact case?

JC: This also gets interesting for surfaces with boundary. It actually starts, when I teach this in a differential geometry class, where this starts is a very classical idea called the angle excess theorem. And this goes back to Euclidean geometry. So everybody knows in flat Euclidean geometry, if you draw a triangle, what’s the sum of the angles inside the triangle?

KK: Yeah, 180 degrees.

JC: 180 degrees, π, depending on whether you want to work in degrees or radians. This is a consequence of the parallel postulate, and in the history of developing non-Euclidean geometry, what happened is people developed alternate ideas of geometry with alternate versions of the parallel postulate. So in spherical geometry, imagine you draw a triangle on the sphere. Say you’ve got a globe. Take a triangle with one vertex at the north pole and two vertices on the equator, a quarter of the way around the equator from each other. The straight lines in this geometry are great circles.

KK: Yes.

JC: So draw a triangle between those three points with great circles. That’s a triangle with three right angles.

KK: 270 degrees.

JC: Right, 270 degrees, or working in radians, an angle sum of 3π/2 instead of π. What the angle excess theorem says is that the difference of those two numbers is the integral of the Gauss curvature over that triangle.

KK: Oh wow, OK. OK, I believe that.

JC: As we were saying, for a sphere the total Gauss curvature integral is 4π. This triangle I’ve just described takes up an eighth of the sphere; it’s an octant. So its area is π/2, and since the Gauss curvature of the unit sphere is 1, that’s the integral of its Gauss curvature. So that’s why the difference between the sum of those angles and π is π/2. So that’s where this theorem starts, and ultimately the way you prove the angle excess theorem basically boils down to Green’s theorem, which I was very excited to hear Amie Wilkinson talk about in one of your previous episodes. It’s really just Green’s theorem to prove the angle excess theorem. So from there, the way you prove the global Gauss-Bonnet theorem is you triangulate your surface. You cut it up into geodesic triangles, you apply the angle excess theorem to each of those triangles, you add them all up, and you count very carefully, based on the graph of triangles you have drawn, how many vertices, how many edges, and how many faces. And when you count carefully, the Euler characteristic pops out on the other side.
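
The octant example amounts to a one-line sanity check: on the unit sphere the Gauss curvature is 1, so the integral over the triangle is just its area, an eighth of the sphere's total area 4π. A quick sketch:

```python
import math

# Triangle on the unit sphere: one vertex at the north pole, two on the
# equator a quarter turn apart. All three angles are right angles.
angle_sum = 3 * (math.pi / 2)
excess = angle_sum - math.pi       # angle excess over the flat value pi

# On the unit sphere K = 1, so the integral of K over the triangle is
# its area: one eighth of the total area 4*pi.
octant_area = 4 * math.pi / 8

print(excess, octant_area)  # both pi/2, as the angle excess theorem predicts
```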

KK: Right, OK.

JC: It’s this very neat combination of classical things, the angle excess theorem and combinatorics. It’s fun teaching an undergraduate course when you tell them counting is hard.

KK: It is hard.

JC: And they don’t believe you until you show them the ways it’s hard.

KK: There’s no way. I can’t count.

JC: So it’s a really fun theorem to do with students. It’s the culmination of the differential geometry class that I teach for undergraduates. I spend the whole semester saying, “Just wait until we get to Gauss-Bonnet! You’re going to think this is really cool!” And when we get there, it really does live up to the hype. They’re really excited by it.

KK: Yeah. So this leads to the question. We like to pair our theorems with something. What have you chosen to pair the Gauss-Bonnet theorem with?

JC: Well the obvious thing would be donuts.

KK: Sure.

JC: And in fact I do sometimes bring in donuts to class to celebrate the end of the class, but you know, this is such a culminating theorem, I really wanted to pair it with something celebratory, like a fireworks display or some sort of very celebratory piece of music.

KK: I can get on with that. It’s true, donuts seem awfully pedestrian.

JC: They do. Donuts are great because of the content of the theorem. They’re a little too pedestrian.

KK: So a fireworks display with what, 1812 Overture?

JC: Something like that.

KK: Really, this is the end. Bang!

JC: I think it deserves the 1812 Overture.

KK: That’s a really good one, OK. And maybe we’ll try to get that into the podcast.

JC: That would be great.

KK: A nice public domain thing if I can find it.

[1812 Overture plays]

JC: Sounds great.

KK: So we like to give our guests a chance to plug something. So you published a book recently?

JC: I did. I recently published a book. It’s called From Frenet to Cartan: The Method of Moving Frames. It’s published in the American Math Society’s graduate series, and it’s basically designed to be a second course in differential geometry, so for advanced undergraduates or beginning graduate students who have had a course in curves and surfaces. Hopefully it’s accessible at that level, and it was really fun. It largely grew out of working with students doing independent study, so I really wrote this book in a way that’s intended to be very student-friendly. It’s informal in style and written the way I would talk to a student in my office. I’m very happy with how it came out, so if this is a topic that’s interesting to any of your listeners, check it out.

KK: That’s great. I took curves and surfaces from your advisor, Robert Bryant, who’s the nicest guy you’ve ever met.

JC: Oh, he’s wonderful.

KK: Everybody loves Robert. That was the last differential geometry course I took, so maybe I should read your book.

JC: Let me give him credit, too. Where this originally came from, when I was a new Ph.D., well relatively new, three years post-Ph.D., Robert invited me to give a series of graduate lectures with him at MSRI, and this book grew out of notes I wrote for that workshop many, many years ago. And Robert, when I very naively said to him, “You know, I have all these lecture notes I should turn into a book,” Robert, having written a book, should have laughed at me, but instead he said, “Yeah, you should!” And it became a back burner project for a long time.

KK: More than a decade, probably.

JC: Yeah, but eventually, I’ve had so much fun working with students on this project.

KK: I’ve written two books, and it’s really, it’s so much work.

JC: You don’t do it for the money.

KK: You really don’t do it for the money, that’s for sure. And of course it’s great you had such a model in Robert, as a teacher and an expositor.

JC: I count myself extremely fortunate to have had him as my advisor.

KK: Well, Jeanne, this has been fun. Thanks for joining us.

JC: Thanks for having me.


Episode 10 - Mohamed Omar

Kevin Knudson: Welcome to My Favorite Theorem. I’m your host Kevin Knudson, professor of mathematics at the University of Florida. And I’m joined by my cohost.

EL: I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City, Utah.

KK: Welcome home.

EL: Yeah, thanks. I just got back from Paris a week ago, and I’m almost back on Utah time. So right now I’m waking up very early, but not 3 in the morning, more like 5 or 6.

KK: Wait until you’re my age, and then you’ll just wake up early in the morning because you’re my age.

EL: Yeah. I was talking to my grandma the other day, and I was saying I was waking up early, and she said, Oh, I woke up at 4 this morning.

KK: Yeah, that’s when I woke up. It’s not cool. I don’t think I’m as old as your grandmother.

EL: I doubt it.

KK: But I’m just here to tell you, winter is coming, let me put it that way. We’re pleased today to welcome Mohamed Omar. Mohamed, why don’t you tell everyone a little bit about yourself.

MO: Great to be on the podcast. My name is Dr. Mohamed Omar. I’m a professor at Harvey Mudd College. My area of specialty is algebra and combinatorics, and I like pure and applied flavors of that, so theoretical work and also seeing it come to light in a lot of different sciences and computer science. I especially like working with students, so they’re really involved in the work that I do. And I just generally like to be playful with math, you know, have a fun time, and things like this podcast.

KK: Cool, that’s what we aim for.

EL: And I guess combinatorics probably lends itself to a lot of fun games to play, or it always seems like it.

MO: Yeah. The thing I really like about it is that you can see it come to life in a lot of games, and a lot of hobbies can motivate the work that comes up in it. But at the same time, you can see it as a lens for learning a lot of more advanced math, such as, like, abstract algebra, sort of as a gateway to subjects like that. So I love this diversity in that respect.

KK: I always thought combinatorics was hard. I thought I knew how to count until I tried to learn combinatorics. It’s like, wait a minute, I can’t count anything.

MO: It’s difficult when you have to deal with distinguishability and indistinguishability and mixing them, and you sort of get them confused. Yeah, definitely.

KK: Yeah, what’s it like to work at Harvey Mudd? That always seemed like a really interesting place to be.

MO: Harvey Mudd is great. I think the aspects I like of it a lot are that the students are just intrinsically interested and motivated in math and science, and they’re really excited about it. And so it really feels like you’re at a place where people are having a lot of fun with a lot of the tools they learn. So when you’re teaching there, it’s a really interactive, fun experience with the students. There’s a lot of active learning that goes on because the students are so interested in these things. It’s a lot of fun.

KK: Very cool. So, Mohamed, what’s your favorite theorem?

MO: First of all, my favorite theorem is a lemma. Actually a theorem, but usually referred to as a lemma.

KK: Lemmas are where all the work is, right?

MO: Exactly. It’s funny you mention combinatorics because this is actually in combinatorics. It’s called Burnside's Lemma. Yeah, so I love Burnside's Lemma a lot, so maybe I’ll give a little idea of what it is and give an idea in light of what you mentioned, which is that combinatorics can be quite hard. So I’ll start with a problem that’s hard, a combinatorial one that’s hard. So imagine you have a cube. A cube has six faces, right? And say you ask the naive question how many ways are there to paint the faces of the cube with colors red, green, and blue.

KK: Right.

MO: You think, there are six faces, and the top face is either red, or green, or blue, and for every choice of color I use there, another face is red or green or blue, etc. So the number of colorings should be 3x3x3x3x3x3, 3^6.

EL: Right.

MO: But then, you know, you can put a little bit of a twist on this. You can say, how many ways are there to do this if you consider two colorings to be the same if you take the cube and rotate it, take one coloring, rotate the cube, and get another coloring.

EL: Right. If you had the red face on the left side, it could be on the top, and that would be the same.

MO: One naive approach that people tend to think works when they first are faced with this, is they think, OK, there are 6 faces, so maybe I can permute things 6 ways, so I divide the total number by 6.

KK: Wrong.

MO: Exactly. There are a lot of reasons. One is sort of the empirical reason. You said the answer was 3^6 if we’re not caring about symmetry. If you divide that by 6, there’s a little bit of a problem, right?

EL: Yeah.

MO: You can kind of see. If you have a painting where all the faces are red, no matter how you rotate that, you’re going to end up with the same coloring. But as you mentioned, if you color one face red and the rest green, for instance, then you get six different colorings when you rotate this cube around. So you’ve got to do something a little bit different. And Burnside's lemma essentially gives you a nice quick way to approach this by looking at something that’s completely different but easy to calculate. And so this is sort of why I love it a lot. It’s a really, really cool theorem that you can sort of explain at a maybe discrete math kind of level if you’re teaching at a university.

KK: So the actual statement, let’s see if I can remember this. It’s something like the number of colorings would be something like 1 over the order of the group of rotations times the sum of what is it the number of elements in each orbit, or something like that? I used to know this pretty well, and I’ve forgotten it now.

MO: Yeah, so something like that. So a way to think about it is, you have your object, and it has a bunch of symmetries. So if you took a square and you were coloring, say, the edges, this is an analogous situation to the faces of the cube. A square has 8 symmetries. There are the four rotations, but then you can also flip along axes that go through opposite edges, and then axes that go through opposite vertices.

So what Burnside's lemma says is something like this. If you want to know the number of ways to color, up to this rotational symmetry, you can look at every single one of these symmetries that you have. In the square it’s 8, in the cube it turns out to be 24. And for every single symmetry, you ask yourself how many ways are there to color with the three colors you have where the coloring does not change under that symmetry.

KK: The number of things fixed, essentially, right.

MO: Exactly. The number of things fixed by the symmetries. So like I mentioned, the cube has 24 symmetries. So let’s take an example of one. Let’s say you put a rod through the center of two opposite faces of the cube.

KK: Right.

MO: And you rotate 90 degrees about that. So you’re thinking about the top face and the bottom face and just rotating 90 degrees. Let’s just think about the colorings that would remain unchanged by that symmetry. So you’re free to choose whatever you’d like for the top and bottom face. But all the side faces will have to have the same color, because as soon as you rotate, whatever was on one side face is now on the next face over. So if you count the number of colorings fixed by that rotation about the rod through the opposite faces, you get something like, well, you have three choices for those side faces. As soon as you choose the color for one, you’re forced to use the same color for the rest. And then you have freedom in your top and bottom faces. So that’s just one of the symmetries. Now if you did that for every single symmetry and took the average of them, it turns out to be the number of ways to color the faces of the cube up to rotational symmetry in general.

So it’s kind of weird. There’s sort of two things that are going on. One is why in the world would looking at the symmetries and counting the number of colorings fixed under the symmetry have anything to do with the number of colorings in total up to symmetry in general? It’s not clear what the relationship there is at first. But the real cool part is that if you take every single symmetry and count the number of colorings, that’s a systematic thing you can do without having to think too hard. It’s a nice formula you can get at the answer quite quickly even though it seems like a complicated thing that you’re doing.
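[Ed note: the averaging described above can be checked with a short brute-force sketch. This is an editorial illustration, not from the episode; all names are invented. It generates the 24 rotations of the cube as permutations of the six faces, then applies Burnside's count, where a rotation with c cycles on the faces fixes k^c colorings.]

```python
# Faces indexed 0=up, 1=down, 2=front, 3=back, 4=left, 5=right.
# A rotation is stored as a permutation perm with new_color[i] = old_color[perm[i]].
ROT_X = (3, 2, 0, 1, 4, 5)  # quarter turn about the left-right axis
ROT_Y = (0, 1, 5, 4, 2, 3)  # quarter turn about the up-down axis

def compose(p, q):
    return tuple(p[q[i]] for i in range(6))

def rotation_group():
    # Close {identity} under the two generating quarter turns: all 24 rotations.
    group = {tuple(range(6))}
    frontier = list(group)
    while frontier:
        new = []
        for g in frontier:
            for r in (ROT_X, ROT_Y):
                h = compose(r, g)
                if h not in group:
                    group.add(h)
                    new.append(h)
        frontier = new
    return group

def cycle_count(perm):
    seen, cycles = set(), 0
    for start in range(6):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def colorings_up_to_rotation(k):
    # Burnside: average, over all 24 rotations, of k^(cycles on the faces).
    G = rotation_group()
    return sum(k ** cycle_count(g) for g in G) // len(G)

print(colorings_up_to_rotation(3))  # 57 distinct 3-colorings of the faces
```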

EL: Yeah. So I guess that naive way we were talking about to approach this where you just say, well I have three choices for this one, three choices for that one, you almost kind of look at it from the opposite side. Instead of thinking about how I’m painting things, I think about how I’m turning things. And then looking at it on a case by case basis rather than looking at the individual faces, maybe.

MO: Exactly. When I first saw this, I saw this as an undergrad, and I was like, “What?!” That was my initial reaction. It was a cool way to make some of this abstract math we were learning really come to life. And I could see what was happening in the mathematics physically, and that gave me a lot of intuition for a lot of the later things we were learning related to that theorem.

EL: Was that in a combinatorics class, or discrete math class?

MO: It was actually in a standalone combinatorics class that I learned this. And now another reason I really like this lemma is that I teach it in a discrete math course that I teach at Harvey Mudd, but then I revisit it in an abstract algebra course because really, you can prove this theorem using a theorem in abstract algebra called the orbit-stabilizer theorem. So orbits are all of these different, you take one coloring, spin it around in all possible ways, you get a whole bunch of different ones, and stabilizers you can think of as taking one coloring and asking which symmetries leave it fixed. So that’s in our example what those two things are. In abstract algebra, there’s this orbit-stabilizer theorem that has to do with more general objects: groups, like you mentioned. And then one of the things I really like about this theorem is that it sets the stage for even more advanced math like representation theory. I feel like a lot of the introductory concepts in a representation theory course really come back to things you play with in Burnside’s Lemma. It’s really cool in its versatility like that.
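[Ed note: the orbit-stabilizer relationship can be seen concretely in the square example from earlier in the episode. This is an editorial sketch with invented names: for every 3-coloring of the square's four edges, the orbit size times the stabilizer size equals the group order, 8.]

```python
from itertools import product

# Edges indexed 0=top, 1=right, 2=bottom, 3=left; new[i] = old[perm[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

def dihedral_group():
    rot = (3, 0, 1, 2)  # quarter turn
    ref = (0, 3, 2, 1)  # reflection through the vertical axis
    group = {tuple(range(4))}
    frontier = list(group)
    while frontier:
        new = []
        for g in frontier:
            for gen in (rot, ref):
                h = compose(gen, g)
                if h not in group:
                    group.add(h)
                    new.append(h)
        frontier = new
    return group

def act(perm, coloring):
    return tuple(coloring[perm[i]] for i in range(4))

G = dihedral_group()                       # the 8 symmetries of the square
for coloring in product("RGB", repeat=4):  # all 81 edge colorings
    orbit = {act(g, coloring) for g in G}
    stabilizer = [g for g in G if act(g, coloring) == coloring]
    # orbit-stabilizer theorem: |orbit| * |stabilizer| = |G|
    assert len(orbit) * len(stabilizer) == len(G)
print(len(G))  # 8
```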

KK: That’s the context I know it in. So I haven’t taught group theory in 10 years or so, but it was in that course. Now I’m remembering all of this. It’s coming back. This is good. I’m glad we’re having this conversation. I’m with you. I think this is a really remarkable theorem. But I never took a combinatorics course that was deep enough where we got this far. I only know it from the groups-acting-on-sets point of view, which is how you prove this thing, right? And as you say, it definitely leads into representation theory because you can build representations of your groups. You just take a basis for a vector space and let it act this way, and a lot of those character formulas really drop out of this.

MO: Exactly.

KK: Very cool.

EL: So it sounds like you did not have a hard time choosing your favorite theorem. This was really, you sound very excited about this theorem.

MO: The way I tried to think about what my favorite theorem was: what theorem do I constantly revisit in multiple different courses? If I do that, I must like it, right? And then I thought, hey, Burnside's Lemma is one that I teach in multiple courses because I like all the different perspectives that you can view it from. Then I had this thought: is Burnside's Lemma really a theorem?

KK: Yeah, it is.

MO: I felt justified for the following reason: I think this lemma is actually due to Frobenius, not Burnside. I thought, since the Burnside part is not really due to Burnside, then maybe the lemma part really is a theorem.

EL: I must say, Burnside sounds like it should be a Civil War general or something.

MO: Definitely.

EL: So what have you chosen to pair with your theorem?

MO: So I thought a chessboard marble cake would be perfect.

KK: Absolutely.


MO: So first of all, I had a slice of one just a few hours ago. It was my brother’s birthday recently, and I’m visiting family. There was leftover cake, and I indulged. But then I thought, yeah, one of the prototypical problems when playing around with Burnside's Lemma is: how many ways are there to color the cells of a chessboard up to rotational symmetry? So when I was eating the cake, I thought, hey, this is perfect!
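[Ed note: the chessboard problem works the same way as the cube. In this editorial sketch (names invented, not from the episode), we count the cycles of each of the four board rotations on the cells and average:]

```python
# Burnside count of k-colorings of an n-by-n board, up to the 4 rotations.
def board_colorings(n, k):
    def rot90(cell):
        r, c = cell
        return (c, n - 1 - r)

    total = 0
    for power in range(4):  # the four rotations: 0, 90, 180, 270 degrees
        seen, cycles = set(), 0
        for r in range(n):
            for c in range(n):
                cell = (r, c)
                if cell in seen:
                    continue
                cycles += 1
                while cell not in seen:  # trace this cell's cycle
                    seen.add(cell)
                    for _ in range(power):
                        cell = rot90(cell)
        total += k ** cycles
    return total // 4  # Burnside average over the 4 rotations

print(board_colorings(8, 2))  # 2-colorings of the 8x8 board up to rotation
```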

EL: That’s great.

KK: How big of a chessboard was it?

MO: 8x8.

KK: Wow, that’s pretty remarkable.

MO: It was a big cake. I had a big piece.

KK: So when you sliced into it, was it 8x8 that way, or 8x8 across the top?

MO: Across the top.

KK: I’m sort of imagining, so my sister in law is a pastry chef, and she makes these remarkably interesting looking things, and it’s usually more like a 3x3, the standard if you go vertical.

EL: I’ve never tried to make a chessboard cake. I like to bake a lot, but anything that involves me being fussy about how something looks is just not for me. In baking. Eating I’m happy with.

MO: I’m the same. I really enjoy cooking a lot. I enjoy the cooking and the eating, not the design.

KK: Yeah, I’m right there with you. Well this has been fun. Thanks for joining us, Mohamed.

EL: Yeah.

MO: Thank you. This has been really enjoyable.

KK: Take care.

MO: Thank you.


Episode 9 - Ami Radunskaya

Evelyn Lamb: Welcome to My Favorite Theorem. I’m your host Evelyn Lamb. I’m a freelance math and science writer based in Salt Lake City. And today I am not joined by my cohost Kevin Knudson. Today I am solo for a very special episode of My Favorite Theorem because I am at MathFest, the annual summer meeting of the Mathematical Association of America. This year it’s in Chicago, a city I love. I lived here for a couple years, and it has been very fun to be back here with the big buildings and the lake and everything. There are about 2,000 other mathematicians here if I understand correctly. It’s a very busy few days with lots of talks to attend and friends to see, and I am very grateful that Ami Radunskaya has taken the time to record this podcast with me. So will you tell me a little bit about yourself?

Ami Radunskaya: Hi Evelyn. Thanks. I’m happy to be here at MathFest and talking to you. It’s a very fun conference for me. By way of introduction, I’m the current president of the Association for Women in Mathematics, and I’m a math professor at Pomona College in Claremont, which is a small liberal arts college in Los Angeles County. My Ph.D. was in ergodic theory, something I am going to talk about a little bit. I went to Stanford for my doctorate, and before that I was an undergraduate at Berkeley. So I grew up in Berkeley, and it was very hard to leave.

EL: Yeah. You fall in love with the Bay Area if you go there.

AR: It’s a place dear to my heart, but I was actually born in Chicago.

EL: Oh really?

AR: So I used to visit my grandparents here, and it brings back memories of the Museum of Science and Industry and all those cool exhibits, so I’m loving being back here.

EL: Yeah, we lived in Hyde Park when we were here, so yeah, the Museum of Science and Industry.

AR: I think I was born there, Hyde Park.

EL: Oh? Good for you.

AR: My dad was one of the first Ph.D.s in statistics from the University of Chicago.

EL: Oh, nice.

AR: Although he later became an economist.

EL: Cool. So, what is your favorite theorem?

AR: I’m thinking today my favorite theorem is the Birkhoff ergodic theorem. I like it because it’s a very visual theorem. Can I kind of explain to you what it is?

EL: Yeah.

AR: So I’m not sure if you know what ergodic means. I actually first went into the area because I thought it was such a cool word, ergodic.

EL: Yeah, it is a cool word.

AR: I found out it comes from the Greek words ergon and hodos, work and path. So I’ve always loved the mathematics that describes change and structures evolving. Before I was a mathematician I was a professional cellist for about 10 years. Music and math are sort of as one in my mind, and that’s why I think I’m particularly attracted to the kinds of mathematics and the kinds of theory that describe how things change: what’s expected, what’s unexpected, what do we see coming out of a process, a dynamical process? So before I state the theorem, I need to tell you what ergodic means.

EL: Yeah.

AR: It’s an adjective. We’re talking about a function. We say a function is ergodic if it takes points: imagine you put a value into a function, you get out a new value. You put that value back in to the function, you get a new value. Repeat that over and over and over again, and now the function is ergodic if that set of points sort of visits everywhere in the space. So we say more technically a function is ergodic if the invariant sets, the sets it leaves alone, the sets that get mapped to themselves, are either the whole space or virtually nothing. A function is ergodic, a map is ergodic, if the invariant sets either have, we say, full measure or zero measure. So if you know anything about probability, it’s probability 1 or probability zero. I think that’s an easy way to think about measure.

EL: Yeah, and I think I’ve heard people describe ergodic as the time average is equal to the space average, so things are distributing very evenly when you look at long time scales. Is that right?

AR: Well that’s exactly the ergodic theorem. So that’s a theorem!

EL: Oh no!

AR: No, so that’s cool that you’ve heard of that. What I just said was that something is ergodic if the sets that it leaves unchanged are either everything or nothing, so these points, we call them the orbits, go everywhere around the set, but that doesn’t tell you how often they visit a particular piece of your space, whereas the ergodic theorem, so there are two versions of it. My favorite one is the one, they call it the pointwise ergodic theorem, because I think it’s easier to visualize. And it’s attributed to Birkhoff. So sometimes it’s called the Birkhoff ergodic theorem. And it’s exactly what you just said. So if you have an ergodic function, and then we start with a point and we sort of average it over many, many applications of the function, or iterations of the function, so that’s the time average. We think of applying this function once every time unit. The time average is the same as the integral of that function over the space. That’s the space average. So you can either take the function and see what it looks like over the entire space. And remember, that gives you, like, sizes of sets as well. So you might have your space, your function might be really big in the middle of the space, so when you integrate it over that piece, you’ll get a big hump. And it says that if I start iterating at any point, it’ll spend a lot more time in the space where the function is big. So the time average is equal to the space average. So that is the pointwise Birkhoff ergodic theorem. And I think it’s really cool because if you think about, say, if you’ve ever seen pictures of fractal attractors or something, so often these dynamical systems, these functions we’re looking at, are ergodic on their attractor. All the points get sucked into a certain subset, and then on that subset they stay on it forever and move around, so they’re ergodic on that attractor.
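[Ed note: here is a numerical illustration of the pointwise statement, an editorial sketch not from the episode. It iterates an irrational rotation of the circle, which is ergodic, and compares the time average of an observable along the orbit with its space average.]

```python
import math

# Irrational rotation of the circle [0, 1): x -> x + alpha (mod 1).
alpha = math.sqrt(2)  # irrational, so the rotation is ergodic

def f(x):
    # The observable; its integral over [0, 1) is exactly 1/2.
    return math.sin(2 * math.pi * x) ** 2

x, total, n = 0.1, 0.0, 100_000
for _ in range(n):
    total += f(x)            # accumulate orbit values: the time average
    x = (x + alpha) % 1.0    # apply the rotation once per "time unit"

time_average = total / n
space_average = 0.5          # the integral of f over the circle
print(abs(time_average - space_average))  # small: Birkhoff's theorem in action
```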

EL: Yeah.

AR: So if we just, say, take a computer and start with a number and plug it in our function and keep iterating, or maybe it’s a two-dimensional vector, or maybe it’s even a little shape, and you start iterating, you see a pattern appear because that point is visiting that set in exactly the right amount. Certain parts are darker, certain parts are lighter, and it’s as if, I don’t know in the old days, before digital cameras, we would actually develop photographs. Imagine you put that blank page in the developing fluid, and you sort of see it gradually appear. And it’s just like that. The ergodic theorem gives us that magical appearance of these shapes of these attractors.

EL: Yeah. That’s a fun image. I’m almost imagining a Polaroid picture, where it slowly, you know, you see that coming out.

AR: It’s the same idea. If you want to think about it another way, you’re sort of experiencing this process. You are the point, and you’re going around in your life. If your life is ergodic, and a lot of the time it is, it says that you’ll keep bumping into certain things more often than others. What are those things you’ll bump into more often? Well, the things that have higher measure for you, that have higher meaning.

EL: Yeah. That’s a beautiful way to think about it. You kind of choose what you’re doing, but you’re guided.

AR: I call it, one measure I put on my life is the fun factor.

EL: That’s a good one.

AR: If your fun factor is higher, you’ll go there more often.

EL: Yeah. It also says something like, if you know what you value, you can choose to live your life so that you do visit those places more. That’s a good lesson. Let the ergodic theorem guide you in your life. OK, so what have you chosen to pair with this theorem?

AR: So the theorem has a lot of motion in it. A lot of motion, a lot of visualization. I think as far as music, it’s not so hard to think of an ergodic musical idea. Music is, after all, structures evolving through space.

EL: Exactly.

AR: I think I would pair Steve Reich’s Violin Phase. Do you know that piece?

EL: Yeah, yeah.

AR: So what it is, it’s a phrase on the violin, then you hear another copy of it playing at the same time. It’s a repetitive phrase, but one of them gets slightly out of phase with the other, and more and more and more and more. And what you hear are how those two combine in different ways as they get more and more and more and more out of phase. And if you think of that visually, you might think of rotating a circle bit by bit by bit, and in fact, we know irrational rotations of the circle are ergodic. You visit everywhere, so you hear all these different combinations of those patterns. So Steve Reich Violin Phase. He has a lot of pattern music. Some of it is less ergodic, I mean, you only hear certain things together. But I think that continuous phase thing is pretty cool.

EL: Yeah. And I think I’ve heard it as Piano Phase more often than Violin Phase.

AR: It’s a different piece. He wrote a bazillion of them.

EL: Yeah, but I guess the same idea. I really like your circle analogy. I almost imagine, maybe the notes are gears sticking out of the circle, and they line up sometimes. Because even when it’s not completely back in phase, sometimes the notes are playing at the same time but at a different part of the phrase. They almost lock in together for a little while, and then turn a little bit more and get out again and then lock in again at a different point in the phrase. Yeah, that’s a really neat visual. Have you performed much Steve Reich music?

AR: I’ve performed some, mostly his ensemble pieces, which are really fun because you have to focus. One of my favorites of his is called Clapping Music because you can do it with just two people. It’s the same idea as the Violin Phase, but it’s a discrete shift each time, so a shift by an eighth note. So the pattern is [clapping]. One person claps that over and over and over, and the other person claps that same rhythm but shifts it by one eighth note each time. So since that pattern is 12 beats long, you come back to it after 12 beats. So it’s discretized. You do each one twice, so it’s 24, so it’s long enough.
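[Ed note: the discrete shifting described above is easy to write down. This editorial sketch uses the published 12-beat Clapping Music rhythm; shifted one eighth note at a time, the pattern only lines up with itself again after all 12 shifts.]

```python
# Reich's Clapping Music pattern: 1 = clap, 0 = rest, 12 eighth notes long.
pattern = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0]

def shift(p, k):
    """Rotate the pattern left by k eighth notes."""
    k %= len(p)
    return p[k:] + p[:k]

# The second performer drifts one eighth note per section; after 12 shifts
# the two parts are back in unison: a periodic process, not an ergodic one.
assert shift(pattern, 12) == pattern
assert all(shift(pattern, k) != pattern for k in range(1, 12))

# How many simultaneous claps the two parts produce at each shift:
print([sum(a & b for a, b in zip(pattern, shift(pattern, k))) for k in range(12)])
```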

EL: So that’s a non-ergodic one, a periodic transformation.

AR: Exactly. So that one I do a lot when I give talks about how we can describe mathematics with its musical manifestations, but we can also describe music mathematically.

EL: Just like you, music is one of my loves too. I played viola for a long time. I’ve never performed any Steve Reich, and I’m glad you didn’t ask me to spontaneously perform Clapping Music with you. I think that would be tough to do on the spot.

AR: We can do that offline.

EL: Yeah, we’ll do that once we hang up.

AR: As far as foods, I think there are some great pairings of foods with the ergodic theorem. In fact, I think we apply the ergodic theorem often in cooking. You know, you mix stuff up. So one thing I like to do sometimes is make noodles, with a roller thing.

EL: Oh, from scratch?

AR: Yeah. You just get some flour, get some eggs, or if you’re vegan, you get some water. That’s the ingredients. You mix it up and you put it through this roller thing, so you can imagine things are getting quite mixed up. What’s really cool, I don’t know if you’ve ever eaten something they call in Italy paglia e fieno, straw and hay.

EL: No.

AR: And all it is is pasta colored green, so they put a little spinach in one of them. So you’ve got white and green noodles. So when you cook some spinach, you’ve got your dough. You put some blobs of spinach in. You start mushing it around and cranking it through, and you see the blobs make these cool streaks, and the patterns are amazing, until it’s uniformly, more or less, green.

EL: Yeah.

AR: So I’d say, paglia e fieno, we put on some Steve Reich, and there you go.

EL: That’s great. A double pairing. I like it.

AR: You can think of a lot of other things.

EL: Yeah, but in the pasta, you can really see it, almost like taffy. When you see pulling taffy. You can almost see how it’s getting transformed.

AR: It’s getting all mushed around.

EL: Thank you so much for talking to me about the Birkhoff ergodic theorem. And I hope you have a good rest of MathFest.

AR: You too, Evelyn. Thank you.


Episode 8 - Justin Curry

Kevin Knudson: Welcome to My Favorite Theorem. I'm Kevin Knudson, your host, professor of mathematics at the University of Florida. I am without my cohost Evelyn Lamb in this episode because I'm on location at the Banff International Research Station, about a mile high in the Canadian Rockies, and this place is spectacular. If you ever get a chance to come here, for math or not, you should definitely make your way up here. I'm joined by my longtime friend Justin Curry. Justin.

Justin Curry: Hey Kevin.

KK: Can you tell us a little about yourself?

JC: I'm Justin Curry. I'm a mathematician working in the area of applied topology. I'm finishing up a postdoc at Duke University and on my way to a professorship at U Albany, and that's part of the SUNY system.

KK: Congratulations.

JC: Thank you.

KK: Landing that first tenure-track job is always

JC: No easy feat.

KK: Especially these days. I know the answer to this already because we talked about it a bit ahead of time, but tell us about your favorite theorem.

JC: So the theorem I decided to choose was the classification of regular polyhedra into the five Platonic solids.

KK: Very cool.

JC: I really like this theorem for a lot of reasons. There are some very natural things that show up in one proof of it. You use Euler's theorem, that the Euler characteristic of things that look like the sphere is 2.

There's duality between some of the shapes, and also it appears when you classify finite subgroups of SO(3). You get the symmetry groups of each of the solids.

KK: Oh right. Are those the only finite subgroups of SO(3)?

JC: Well you also have the cyclic and dihedral groups.

KK: Well sure.

JC: They embed in there too, but yes. The funny thing is they collapse too, because dual solids have the same symmetry groups.

KK: Did the ancient Greeks know this, that these were the only five? I'm sure they suspected, but did they know?

JC: That's a good question. I don't know to what extent they had a proof that the only five regular polyhedra were the Platonic solids. But they definitely knew the list, and they knew they were special.

KK: Yes, because Archimedes had his solids. The Archimedean ones, you are allowed different polygons.

JC: That's right.

KK: But there's still this sort of regularity condition. I can never remember the actual definition, but there's like 13 of them, and then there's 5 Platonics. So you mentioned the proof involving the Euler characteristic, which is the one I had in mind. Can we maybe tell our listeners how that might go, at least roughly? We're not going to do a case analysis.

JC: Yeah. I mean, the proof is actually really simple. You know for a fact that vertices minus edges plus faces has to equal 2. Then when you take polyhedra constructed out of faces, those faces each have some number of edges. Think about a triangle, it has 3 edges, a square has 4 edges, a pentagon has 5. You just ask how many edges or faces meet at a given vertex. And you end up creating these two equations. One is something like: if your faces have p sides, then p times the number of faces equals 2 times the number of edges.

KK: Yeah.

JC: Then you want to look at this condition of faces meeting at a given vertex. You end up getting the equation q times the number of vertices equals 2 times the number of edges. Then you plug that into Euler's theorem, V-E+F=2, and you end up getting very rigid counting. Only a few solutions work.

KK: And of course you can't get anything bigger than pentagons because you end up in hyperbolic space.

JC: Oh yeah, that's right.

KK: You can certainly do this, you can make a torus. I've done this with origami, you sort of do this modular thing. You can make tori with decagons and octagons and things like that. But once you get to hexagons, you introduce negative curvature. Well, flat for hexagons.

JC: That's one of the reasons I love this theorem. It quickly introduces and intersects with so many higher branches of mathematics.

KK: Right. So are there other proofs, do you know?

JC: So I don't know of any other proofs.

KK: That's the one I thought of too, so I was wondering if there was some other slick proof.

JC: So I was initially thinking of the finite subgroups of SO(3). Again, this kind of fails to distinguish the dual ones. But you do pick out these special symmetry groups. You can ask what are these symmetries of, and you can start coming up with polyhedra.

KK: Sure, sure. Maybe we should remind our readers about, readers, I read too much on the internet, our listeners, about duality. Can you explain how you get the dual of a polyhedral surface?

JC: Yeah, it's really simple and beautiful. Let's start with something, imagine you have a cube in your mind. Take the center of every face and put a vertex in. If you have the cube, you have six sides. So this dual, this thing we're constructing, has six vertices. You connect two of those new vertices when the corresponding faces share an edge in the original solid, and you end up having faces corresponding to vertices in the original solid. You can quickly imagine you have this sort of jewel growing inside of a cube. That ends up being the octahedron.

KK: You join two vertices when the corresponding dual faces meet along an edge. So the cube has the octahedron as its dual. Then there's the icosahedron and the dodecahedron. The icosahedron has 20 triangular faces, and the dodecahedron has 12 pentagonal faces. When you do the vertex counts on all of that you see that those two things are dual. Then there's the tetrahedron, the fifth one. You say, wait a minute, what's its dual?
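[Ed note: the dual pairings can be checked against the classical vertex, edge, and face counts. An editorial sketch: duality swaps vertices and faces and keeps the edge count.]

```python
# (vertices, edges, faces) for each Platonic solid.
counts = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
dual = {"cube": "octahedron", "octahedron": "cube",
        "dodecahedron": "icosahedron", "icosahedron": "dodecahedron",
        "tetrahedron": "tetrahedron"}  # the tetrahedron is self-dual

for name, partner in dual.items():
    v, e, f = counts[name]
    # the dual swaps vertex and face counts and keeps the edge count
    assert counts[partner] == (f, e, v)
print("all dual pairs check out")
```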

JC: Yeah, and well it's self-dual.

KK: It's self-dual. Self-dual is a nice thing to think about. There are other things that are self-dual that aren't Platonic solids of course. It's this nice philosophical concept.

JC: Exactly.

KK: You sort of have two sides to your personality. We all have this weird duality. Are we self-dual?

JC: I almost like to think of them as partners. The cube determines, without even knowing about it, its soulmate the octahedron. The dodecahedron without knowing it determines its soulmate the icosahedron. And well, the tetrahedron is in love with itself.

KK: This sounds like an algorithm for match.com.

JC: Exactly.

KK: I can just see this now. They ask a question, “Choose a solid.” Maybe they leave out the tetrahedron?

JC: Yeah, who knows?

KK: You don't want to date yourself.

JC: Maybe you do?

KK: Right, yeah. On our show we like to ask our guests to pair their theorem with something.

JC: It's a little lame in that it's sort of obvious, but the Platonic solids get their name from Plato's Timaeus. It's his description of how the world came to be, his cosmogony. In that text he describes an association of every Platonic solid with an element. The cube corresponds to the element earth. You want to think about why that would be the case. Well, the cube can tessellate three-space, and it's very stable. And earth is supposed to be very stable and unshakeable in a sense. I don't know if Plato actually knew about duality, but the dual solid to the cube is the octahedron, which he associated with air. So you have this earth-sky symbolic dualism as well.

Then unfortunately I think this kind of analogy starts to break down a bit. You have the icosahedron, the one made of triangle sides. This is associated to water. And if you look at it, this one sort of looks like a drop of water. You can imagine it rolling around and being fluid. But it's dual to the dodecahedron, this oddball shape. They only thought of four elements: earth, fire, wind, water. What do you do with this fifth one? Well that was for him ether.

KK: So the tetrahedron is fire?

JC: Yeah, the tetrahedron is fire.

KK: Because it's so pointy?

JC: Exactly.

KK: It's sort of rough and raw, like that They Might Be Giants song “Triangle Man.” It's the pointiest one. Triangle wins every time.

JC: The other thing I like is that fire needs air to breathe. And if you put tetrahedra and octahedra together, they tessellate 3-space.

KK: So did they know that?

JC: I don't know. That's why this is fun to speculate about. They obviously had an understanding. It's unclear what was the depth or rigor, but they definitely knew something.

KK: Sure.

JC: We've known this for thousands of years.

KK: And these models, are they medieval, was it Ptolemy or somebody, with the nested?

JC: The way the solar system works.

KK: Nested Platonic solids. These things are endlessly fascinating. I like making all of them out of origami, out of various things. You can do them all with business cards, except the dodecahedron.


KK: It's hard to make pentagons. You can take these business cards and you can make these. Cubes are easy. The other ones are all triangular faces, and you can make these triangular modules where you make two triangles out of business cards with a couple of flaps. And two of them will give you a tetrahedron. Four of them will give you an octahedron. The icosahedron is tricky because you need, what, 10 business cards. I have one on my desk. It's been there for 10 years. It's very stable once it's together, but you have to use tape along the way and then take the tape off. It's great fun. There's this great book by Thomas Hull, I forgot the name of it [Ed note: it's called Project Origami: Activities for Exploring Mathematics], a great origami book by Thomas Hull. I certainly recommend all of that.

Anything else you want to add? Anything else you want to tell us about these things? You have all these things tattooed on your body, so you must be

JC: I definitely feel pretty passionate. It's one of those things: if I have to live with this for 30 years, I know the Platonic solids won't change. There won't suddenly be a new one discovered.

KK: Right. It's not like someone's name, you might regret it later. But my tattoos are, this is man, woman, and son. My wife and I just had our 25th anniversary, so this is still good. I don't expect to have to get rid of that.

Anyway, well thanks, Justin. This has been great fun. Thanks for taking a few minutes out of your busy schedule. This is a really cool conference, by the way.

JC: I love it. We're bringing together some of the brightest minds in applied topology, and outside of applied topology, to see how topology can inform data science, how algebra interacts in this area, and what new foundations and aspects of algebra we need.

KK: Yeah, it's very cool. Thanks again, and good luck in your new job.

JC: Thanks, Kevin.


Episode 7 - Henry Fowler

Evelyn Lamb: Welcome to My Favorite Theorem, the show where we ask mathematicians what their favorite theorem is. I’m your host Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. I had to wear a sweater yesterday.

EL: Oh my goodness! Yeah, I’ve had to wear a sweater for about a month and a half, so.

KK: Yeah, yeah, yeah.

EL: Maybe not quite that long.

KK: Well, it’ll be hot again tomorrow.

EL: Yeah. So today we’re very glad to have our guest Henry Fowler on. Henry, would you like to tell us a little bit about yourself?

Henry Fowler: I’m a Navajo Indian. I live on the Navajo reservation. I live by the Four Corners in a community, Tsaile, Arizona. It’s a small, rural area. We have a tribal college here on the Navajo Nation, and that’s what I work for, Diné College. I’m a math faculty member. I’m also the chair for math, physics, and technology. And my clan in Navajo: my maternal clan is Bitterwater and my paternal clan is Zuni Edge Water.

EL: Yeah, and we met at the SACNAS conference just a couple weeks ago in Salt Lake City, and you gave a really moving keynote address there. You talked a little bit about how you’re involved with the Navajo Math Circles.

HF: Yes. I’m passionate about promoting math education for my people, the Navajo people.

EL: Can you tell us a little bit about the Navajo Math Circles?

HF: The Navajo Math Circles started seven years ago with a mathematician from San Jose State University named Tatiana Shubin. She contacted me by email because she wanted to introduce some projects she was working on, and one of them was math circles, which are collections of mathematicians who come together and integrate their way of mathematical thinking for grades K-12, working with students and teachers. She and I got together and discussed the project, and since it was going to be here on the Navajo Nation, we called it Navajo Math Circles. Through her project and myself living here on the Navajo Nation, we started the Navajo Math Circles.

KK: How many students are involved?

HF: We started first here at Diné College with a math summer camp. We sent out applications for students who had a desire to engage themselves in studying mathematics, and the response was overwhelming: over 50 students applied for the only 30 slots that were open, because our grant could only sustain 30 students. So we screened the students with the help of their regular teachers from junior high or high school, who presented recommendation letters to us, and we selected the first 30 students. Following that we expanded our math circle to the Navajo Nation public school system, and there are also contract schools and grant schools. Now we’re serving, I would say, over 1,000 students.

KK: Wow. That’s great. I assume these students have gone on to do pretty interesting things once they finish high school and the circle.

HF: Yes. We sort of strategized. We wanted to work with the lower grades a little bit, to really promote a different way of thinking about math problems. We started off the first summer math camp at the junior high or middle school level, along with students who were just moving into high school, their freshman or 10th grade year. That first cohort has done very well with their academic work, especially in math, at their high schools and junior high schools. We have four who graduated recently from high school, and all four of them are now attending a university.

KK: That’s great.

EL: And some of our listeners may have seen there’s a documentary about Navajo math circles that has played on PBS some, and we’ll include a link to that for people to learn a little bit about that in the show notes for the episode. We invited you here to My Favorite Theorem, of course, because we like to hear about what theorems mathematicians enjoy. So what have you selected as your favorite theorem?

HF: I have quite a few of them, but something that is simple, something that has been a source of awe for mathematicians, the most famous theorem, would be the Pythagorean theorem, because it also relates to my cultural practices as a Navajo.

KK: Really?

HF: The Pythagorean theorem is also how the Navajo would construct their traditional home, which we call a Navajo hogan. The Navajo would use the Pythagorean theorem in charting how the sun travels in the sky, so they would open their hogan door, which is always constructed facing east. Once the sun comes out, it projects its energy, the light, into the hogan. The Navajo began to study that phenomenon, how that light travels in the space of the hogan. They can predict the solstice and the equinox. They can project how the constellations are moving in the sky. That’s just a little example.

EL: Oh, yeah. Mathematicians, we call it the Pythagorean theorem, but like many things in math, it’s not named after the first person ever to notice this relationship. The Pythagorean theorem is a^2+b^2=c^2, the relationship between the lengths of the legs of a right triangle and its hypotenuse, but it was known in many civilizations well before Pythagoras was born: in China, India, the Middle East, and in North America as well.
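For readers following along, the relationship Evelyn states can be written out, with the classic 3-4-5 right triangle as a check:

```latex
% Right triangle with legs a, b and hypotenuse c:
\[
  a^2 + b^2 = c^2
\]
% e.g. the 3-4-5 triangle: 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
```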

HF: Yes, we Navajo believe in a circle of life. There’s a time that we go through our process of life and come back to the end of our circle, and it’s always about giving back. That’s our main cultural teaching: to give back as much as you can, back to the people, back to nature, back to your community, and to give back through what you want to promote, what you’re passionate about. Our way is always interacting with circles, that phenomenon, and how the Navajo see the relationship to space, the relationship to sunlight, how it travels, how they capture it in their hogan. They also relate the Pythagorean theorem to defining distance, as well as to a circle.

KK: What shape is the hogan? I’m sort of curious now. When the light comes in, what sort of shadows does it cast?

HF: The Navajo hogan is normally a nine-sided polygon, but the Navajo can also capture what a circle means through regular polygons: the more sides they have, the closer they come to a circle. They understand that phenomenon. The nine sides are in relationship to when a child is conceived and then delivered, which is nine months. The Navajo call it nine full moons, because they capture what’s going on within their environment, and they’re really observant of how the sky and constellations are moving. Their monthly calendar goes by full moons. And so when they open that hogan door and the light travels in, it’s like a semicircle. In that space they feel secure and safe, and the hogan is also a representation that they are the child of Mother Earth and the child of Father Sky. The hogan is structured in relationship to a mother’s womb, where a child is conceived and development begins to happen. Navajos say the hogan is a structure in relationship to the four seasons, the four directions, and the four developments that happen until you enter old age: the time of your birth, the time when you become an adult, mid-life, and eventually old age. Using that concept, when the door is open, they harvest the sunlight as it comes in. Now we are moving toward the winter solstice. In western thinking, that happens around December 22. To the Navajo, that would be the 13th full moon, so when the light comes in that day, it will be a repeated event, and they will know where. When the light comes into the hogan through the open door, it projects on the wall of the hogan, and they mark it off. Each full moon, they capture that light to see where it hits on the wall. That’s how they understand the equinox and the solstice, in relationship to how the light is behaving.

KK: Wow, that’s more than my house can do.

HF: Then they also use a wood stove to heat the hogan. There’s an opening at the center of the hogan, they call the chimney. They capture that sunlight, and they do every full moon. Sometimes they do it at the middle of that calendar, they can even divide that calendar into quarters. When they divide it into quarters, they chart that light as it comes in through the chimney. They find out that the sun travels through the sky in a figure eight in one whole year. They understand that phenomenon too.

KK: Ancient mathematics was all about astronomy, right? Every culture has tried to figure this out, and this is a really ingenious solution, with the chimney and the light. That’s really very cool.

HF: This practice is no longer being learned by our next generation, because now our homes are more standardized. We’re moving away from the traditional hogan, so our students and young people no longer interact with how the light travels in the hogan space.

KK: Did you live in a hogan growing up?

HF: Yes. People around probably my age, that was how they were raised, was in a traditional hogan. And that was home for us, that construction. Use the land, use nature to construct your home with whatever is nearby. That’s how you create your home. Now everything is standardized in relation to different building codes.

EL: So what have you chosen to pair with your theorem?

HF: I guess I pair my Pythagorean theorem to my identity, who I am, as a Navajo person. I really value my identity, who I am as an indigenous person. I’m very proud of my culture, my land, where I come from, my language, as well as I compare it to what I know, the ancient knowledge of my ancestors is that I always respect my Navajo elders.

KK: Very cool. Do you think that living in a hogan, growing up in a hogan, did that affect you mathematically? Do you think it sort of made you want to be a mathematician? Were you aware of it?

HF: I believe so. We did a lot of our own construction, nothing so much that would be store-bought. If you want to play with toys, you’d have to create that toy on your own. So that spatial thinking, driving our animals from different locations to different spots, and then bringing our sheep back at a certain time. You’d calculate this distance, you’d estimate distance. You’d do a lot of different relationships interacting with nature, how it releases patterns. You’d get to know the patterns, and the number sense, the relationships. I really, truly believe that my culture gave me that background to engage myself to study mathematics.

KK: Wow.

EL: Yeah, and now you’re making sure that you can pass on that knowledge and that love for mathematics to younger people from your community as well.

HF: That’s my whole passion, is to strengthen our math education for my Navajo people. Our Navajo reservation is as large as West Virginia.

EL: Oh, wow, I didn’t realize that.

HF: And there’s no leader who has stood up to say, “I’m going to promote math education.” Right now, among my people, I’m one of the leaders promoting math education. It’s strengthening our math K-12 so we build our infrastructure, we build our economy, we build better lives for my Navajo people, and so that we build our own scientists, our own doctors and nurses, and promote our own students to take an interest and a passion in careers in STEM fields. We want to build our own Navajo professors, Navajo scholars, Navajo researchers. That all comes down to math education. If we strengthen math education, we can say we are a sovereign nation, a sovereign tribe, where we can begin to build our own nation using our own people.

EL: Wow, that’s really important work, and I hope our listeners will go and learn a little bit more about the Navajo math circles and the work you do, and other teachers and everyone are doing there.

HF: It’s wonderful because we have so many social ills, social problems among my people. There’s so much poverty here. We have nearly 50 percent unemployment. And we want my people to have the same access to opportunity as any other state out there. The way, from my perspective, is to promote math education, to bring social justice, and to give my people access to a fair education. It’s time that the Navajo people operate their own school system with their own indigenous view: create our own curriculum, create our own math curriculum, and standardize our math curriculum in line with our elders’ thinking, our culture, and our language, all so that my Navajo people understand their self-identity, so they truly know who they are, so they become better people and get that strength and motivation. To me, that’s what my work is all about: helping my people as a way to combat the social problems that we’re having. I really believe that math kept me out of trouble when I was growing up. I could have easily joined a gang and not finished my western education, but math kept me out of trouble growing up.

KK: You’re an inspiration. I feel like I’m slacking. I need to do something here.

EL: Yeah. Thank you so, so much for being on the podcast with us. I really enjoyed talking with you.

KK: Yeah, this was great, Henry. Thank you.

HF: You’re welcome.


Episode 6 - Eriko Hironaka

This transcript is provided as a courtesy and may contain errors.

EL: Welcome to My Favorite Theorem. I’m your host Evelyn Lamb. I’m a freelance math and science writer based in Paris for a few more days, but after that I’ll be based in Salt Lake City, Utah. And this is your other host.

KK: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida, where it’s raining. It’s been raining for a week. After a spring of no rain, now it’s raining. But that’s OK.

EL: I probably shouldn’t tell you that it’s absolutely gorgeous, sunny and 75 degrees in Paris right now.

KK: You really shouldn’t.

EL: OK, then I won’t. OK. Each episode we invite a mathematician on to find out about their favorite theorem. Today we’re very happy to welcome Eriko Hironaka onto the show. So would you like to tell us a little bit about yourself, Eriko?

EH: Yes, thank you, first of all, for having me on this show. It’s very flattering and exciting. I worked at Florida State University for almost twenty years. I was a professor there, and I recently moved to the American Mathematical Society. I’ve been working there for two years. One year, I guess, full time, so far. I work in the book program. I’m somebody who is a mathematician but is doing it from various angles.

EL: Yeah, I was really interested in having you on the podcast because I think that’s a cool perspective to have where you’ve been in the research world for a long time, but now you’re also seeing a broader view, maybe, of math, or kind of looking at it from a different angle than before. Do you mind telling us a little bit about what you do as a book person for the AMS?

EH: Yeah, what do I do? Actually I was thrown into this job, in a way. They said, OK, you’re going to work in the book program. Your job is basically to talk to people about books, to see if anybody wants to write a book, and if they do, you keep talking with them. When they finally submit something, maybe the real job part starts: you put the submission through a review process, and then, what’s kind of exciting, you convince the publishing group to actually publish the book. That part requires me to think about how the book fits into mathematics and the mathematical literature, and also how much it will cost to produce and what’s involved in selling it. Who is the audience, and how can it be presented in the best possible way? I think of myself as the connector between the author, who is thinking about the mathematics, and the publishers, who are thinking about how, since the AMS is a nonprofit, to cover costs and make the book a reasonable project.

EL: You see a lot of different aspects of this, then.

EH: Yeah, so I don’t know if I was more naive than most mathematicians, but I think most mathematicians don’t think beyond proving theorems and conveying and communicating their ideas to other people. Maybe they also think about what to write on their vita, and things like that. That kind of thing is very different. Right now I don’t really have a reason to keep up my vita in the same way that I used to. That was a big change for me.

KK: Right.

EH: I still do mathematics, I still do research, give talks, and things like that. I still write papers. But that’s really become something just for me. Not for me, it’s for math, I guess. But it’s not for an institute.

KK: It’s not for the dean.

EH: It’s not for the dean. Exactly.

KK: That’s really liberating, I would think, right?

EH: It’s super liberating, actually. It’s really great.

KK: Very cool. I dream about that. One of these days.

EH: I feel like I’m supporting mathematics kind of from the background. Now I think about professors as being on the battlefield. They’re directly communicating with people, with students, with individuals. And working with the deans. Making their curriculum and program and everything work.

EL: So what have you chosen as your favorite theorem?

EH: OK, Well, I thought about that question. It’s very interesting. I’ve even asked other people, just to get their reaction. It’s a very interesting question, and I’m curious to know what other people have answered on your podcast. When I think of a theorem I think about not just the statement, but more the proof. I might think of proofs I like, theorems whose proofs I like, or I might think about how this theorem helped me because I really needed something. It’s actually kind of utilitarian, but a favorite theorem should be more like what made you feel great. I have to say for that, it’s a theorem of my own.

KK: Cool, great.

EH: So I have a favorite theorem, or the theorem that made me so excited, and it was the first theorem I ever proved. The reason it’s my favorite is a mixture of not just feeling great that I’d proved the theorem but also feeling like it was a big turning point in my life. I felt like I had proved myself, in a way. I think of it as a polynomial periodicity theorem. Do you want me to say what the theorem is?

KK: Yeah, throw it out there, and we’ll unpack it.

EH: So the theorem, in most generality, says that if you have a finite CW complex, a reasonably nice space, in my case I was looking at quasiprojective varieties, you can take a sequence of regular coverings corresponding to a choice of map from the fundamental group of the space to some free abelian group. The way you get the sequence of coverings is you take that map and compose it with the map from that free abelian group to the free abelian group tensored with Z mod n. If everything is finitely generated, that gives you a surjective map from the fundamental group of your space to a finite abelian group, and now the general theory of covering spaces gives you a sequence of finite coverings of your space. Then, if your space has a natural completion, you can talk about natural branched coverings associated to those, for each n. My theorem was about what happens to the first Betti numbers of these coverings, the rank of the first homology. I showed that this sequence actually has a pattern. In fact, for every fixed base space and map from its fundamental group to a free abelian group, there is a polynomial, with possibly periodically changing coefficients, such that the first Betti number is that polynomial evaluated at n.

KK: Wow.

EH: So n is the degree of the covering. The Betti numbers are changing periodically, the polynomials are changing periodically, but it’s a pattern, it’s a nice pattern, and there’s a single polynomial telling you what all of these Betti numbers are doing.
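In symbols, the pattern described here can be sketched as follows; the notation (X_n for the degree-n covering, period N) is illustrative rather than from the conversation:

```latex
% X_n = the degree-n branched covering of the base space.
% Polynomial periodicity: there exist a period N and polynomials
% P_0, \dots, P_{N-1} such that
\[
  b_1(X_n) \;=\; P_{\,n \bmod N\,}(n) \qquad \text{for all } n,
\]
% i.e., a single polynomial with periodically varying coefficients
% computes every first Betti number in the sequence.
```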

EL: So what was the motivation behind this theorem?

EH: This problem of understanding the first Betti number of coverings comes from work of Zariski back in the early 1900s. His goal was to understand moduli of plane curves with various kinds of singularities. Simply put, what he did was he tried to distinguish curves by looking at topology, blending topology with algebraic geometry. This was kind of a new idea. This is not very well known about Zariski, but one of his innovations was bringing in topology to the study of algebraic geometry.

KK: That’s why it’s called the Zariski topology, right? I don’t know. One assumes.

EH: In a way. Not really!

KK: I’m not a historian. My bad.

EH: He brought geometric topology in. The Zariski topology is more of an algebraic definition. What he was interested in, when you’re talking about moduli of plane curves, is whether or not you can get from one plane curve with prescribed singularities to another. Say you have a sextic curve, a degree six curve in C^2, the complex plane, with exactly six simple cusps. Six points in the plane can either lie on a conic or not; general position means they don’t lie on lines and don’t lie on a conic. But if the six points lie on a conic, it turns out you cannot move within the space of sextics with six cusps to a sextic whose six cusps do not lie on a conic.


EH: They’re two distinct families. You’d have to leave one family to get to the other; you can’t deform them in the algebraic category. To prove this, he said, well, basically, even though the idea of studying fundamental groups was still really new (using them as a tool for knot theory, for example, came a little later), you can tell the curves are different because their topology is different. For example, take coverings. Take your curve, say it’s given by the equation F(x,y)=0, where F(x,y) is a polynomial. Take the surface z^n=F(x,y) in three-dimensional space, and look at its first Betti number. The first Betti number, the first homology, can be described algebraically in terms of other things, divisors and so on. You can think of it as a very algebraic invariant, but you can also think of it as a topological invariant: forget algebra, forget complex analysis, forget everything. And he showed that if you take the sextics whose six cusps lie on a conic and you take z^n=F(x,y), you get things with nontrivial first Betti number, and, by the way, changing periodically with n. In fact, every time 6 divides n, the Betti number is nontrivial; it jumps. Otherwise, I can’t remember, I think it’s zero. But in the case that the cusps are in general position, the first Betti numbers are always zero.


EH: So that must mean that the topology is different. And if the topology is different, they can’t be algebraically equivalent. So that was the process of thinking, that topology can tell you something about algebraic geometry. And that kind of topology is what geometric topologists study now, fundamental groups, etc. But this was all a very new idea.

EL: So that’s kind of the environment that your theorem lives in, this intersection between topology and algebraic geometry.

EH: That’s right. So my theorem started with a conjecture. I was working on fundamental groups of complements of plane curves, especially with multiple components, for my thesis. And Peter Sarnak was looking at certain problems coming from number theory having to do with arithmetic subgroups of GL(n), looking at what happens when you take your fields to be finite, things like that. Somehow, hearing about what I was doing with fundamental groups and Alexander polynomials, which have to do with Betti numbers of coverings, he asked, “Can you show that the Betti numbers of coverings are periodic, or polynomial periodic?”, which is that other notion. I thought, OK, I’ll do this. Since I was already working topologically, I could get the topological part by looking at the unbranched coverings, and then I had to complete it. To understand the completion, the difference between the Betti numbers of the unbranched coverings and the Betti numbers of the branched coverings, I needed to understand intersections of curves on the surface, to sort of understand the intersection theory of algebraic curves. And these have very special, nice properties coming from the fact that we’re talking about varieties.

KK: Right.

EH: And I used that to complete the proof. It was a real blend of topology and algebraic geometry. That’s what made it really fun.

KK: That’s a lot of mathematics going in. And I love your confidence. Peter Sarnak said, “Hey, can you do this?” and you said, “Yeah, I can do this.”

EH: Right, well I was feeling pretty desperate. It was really a time of: Should I do math? Should I not do math? Do I belong here? And I thought, “OK, I’ll try this. If it works, maybe that’s a sign.”

EL: So what have you chosen to pair with this theorem?

EH: As it happens, after I proved this theorem, I showed it to Sarnak. I basically wrote a three-page outline of the proof, he looked at it carefully, and he said, “Yeah, this looks right.” Also, you know, you can feel it when you have it; suddenly everything becomes so clear. I was glowing with this and driving from Stanford to Berkeley, which is about an hour’s drive. I usually took a nicer route through the hills to the west, so you can imagine driving through these vales and woods in beautiful sunshine, and I had Stravinsky’s Firebird Suite playing. It just perfectly represented what it feels like to prove a theorem. It starts really quiet, and then it gets really choppy and frenzied.

KK: And scary.

EH: Exactly, scary. The struggling bird, he’s anxious and frightened, really, really unsettling. And then there’s this real gentleness, feeling like it’s going to be OK, it’s going to be OK. But that also is a bit disturbing. There’s something about it that’s disturbing. So it keeps you listening, even though it’s very sweet and the themes are developed, it’s a very beautiful theme. Then there’s this bang and then it becomes really frenzied again, super frenzied, but excited. And then it becomes bolder and bolder. And then that melody comes in, and it starts to really come together. And it starts to feel like you’re running, like there’s a direction, and then finally it gets quiet again. There’s this serenity. And this time the serenity is real. All this stuff has built up to it, and that starts to build and the beautiful theme comes out in the end. It’s just this glorious wonder at the very end. It was like all my excitement was just exemplified in this piece of music.

EL: I love that picture of you driving through California, blasting Firebird.

EH: Yes, exactly.

EL: With this triumphant proof that you’d just done. That’s really a great picture.

KK: So my son just finished high school, and he wants to be a composer. He’s going to go to college and study composition. And I actually sort of credit that piece, Firebird suite, as one of the pieces that really motivated him to become a composer. That and Rhapsody in Blue.

EH: It really tells a story.

KK: Yes, it does. It’s really spectacular. So I think maybe a lot of our listeners don’t know that you have a rather famous father.

EH: Yes.

KK: Your father won the Fields medal for proving resolution of singularities in characteristic zero, right?

EH: Yes.

KK: What was that like?

EH: Yeah, so I had a really strange relationship with mathematics. Because I grew up with a mathematician father, I avoided math like the plague, you know. Partly because my father was a mathematician, and I thought that was kind of strange, that it didn’t fit in with the rest of the world that I knew. I grew up in the suburbs. It wasn’t a particularly intellectual background. For me, the challenge to my life was to figure out how to fit in, which I was failing at miserably. But I thought that was my challenge. Doing well in math was not the way to fit in in school. I would kind of deliberately add in mistakes to make sure that I didn’t get a good grade.

KK: Really? Wow.

EH: I would kick myself if I forgot and I would get a high grade and everybody would say, “How did she do that?” You know what I mean? I thought of math as this embarrassment, in a way, to tell the truth, strangely enough. But on the other hand, through my father and his friends and colleagues, I knew that mathematics also had this very beautiful side, and the people who did it were very happy people, it seemed. I saw that other side as well. And I think that was an advantage because I knew that math was really cool. It’s just that that wasn’t my thing. I didn’t want to do that. Also, my teachers were not very exciting. The math teachers seemed to make math as boring as possible. So I had this kind of split personality when it came to math, or split feeling about what math was.

EL: Yeah.

EH: But then when I started to do math, somehow accidentally, in college, I actually got much more attracted to it. That was after vaguely stumbling through calculus and things like that. I never really learned calculus; I skipped through it and took more advanced classes, and it just really clicked, and I got hooked. I learned calculus in graduate school, as some people do, by teaching it.

KK: Well that’s when you really learn it anyway, that’s right.

EH: Some people have this impression that lots of mathematicians had the advantage of having access to mathematics from a young age, but I think it’s not obvious how that’s an advantage. In some cases it could be that they were nurtured in mathematics. I mean, I talk to my kids about mathematics, and it’s a fun thing we do together. But I don’t think that’s necessarily the case of people with mathematical parents. In my case it certainly wasn’t the case for me. But still it was an advantage because I knew that there was this thing called mathematics, and many people don’t know that.

EL: Yeah. And, like you said, you knew that mathematicians were happy with their work, and just even knowing that there’s still math to prove. That was something, when I started to do math, I didn’t really understand that there was still more math to do, it wasn’t just learning calculus really well. But going and finding and exploring these new things.

KK: I had that same experience. I remember when I was in high school, telling people I was going to go to graduate school and be a math professor, and they said, “Well, what do you do?” I said, “I don’t know, I guess you write another calculus book.” Which we certainly do not need, right?

EH: Or we need different kinds.

KK: So, I say that, but I’m actually writing one, so you know.

EH: Oh, are you?

KK: Just in my spare time, right? I have so much of it these days.

EH: I think there is a need for calculus books, it’s just maybe different kinds.

KK: Well now that I know someone at the publishing house at the AMS…

EH: Absolutely. I’m going to follow up on this.

KK: Oh, wow. Well this has been fun.

EL: Yeah, thank you so much.

EH: Well thank you for asking me. It gave me a chance to think about different things, and it’s been fun talking to people about, “What’s your favorite theorem?”

EL: Good math conversation starter.

EH: Yeah, absolutely.

KK: Thanks for joining us, Eriko.

EH: Thank you.

KK: Thanks for listening to My Favorite Theorem, hosted by Kevin Knudson and Evelyn Lamb. The music you’re hearing is a piece called Fractalia, a percussion quartet performed by four high school students from Gainesville, Florida. They are Blake Crawford, Gus Knudson, Del Mitchell, and Bao-xian Lin. You can find more information about the mathematicians and theorems featured in this podcast, along with other delightful mathematical treats, at Kevin’s website, kpknudson.com, and Evelyn’s blog, Roots of Unity, on the Scientific American blog network. We love to hear from our listeners, so please drop us a line at myfavoritetheorem@gmail.com. Or you can find us on Facebook and Twitter. Kevin’s handle on Twitter is @niveknosdunk, and Evelyn’s is @evelynjlamb. The show itself also has a Twitter feed. The handle is @myfavethm. Join us next time to learn another fascinating piece of mathematics.

Episode 5 - Dusa McDuff

This transcript is provided as a courtesy and may contain errors.

Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m your host Evelyn Lamb. I’m a freelance math and science writer based in Salt Lake City, but I’m currently recording in Chicago at the Mathematical Association of America’s annual summer meeting MathFest. Because I am on location here, I am not joined by our cohost Kevin Knudson, but I’m very honored to be in the same room as today’s guest, Dusa McDuff. I’m very grateful she took the time to talk with me today because she’s pretty busy at this meeting: she’s been giving the Hedrick Lecture Series and organizing some research talk sessions. The introductions at these talks have been very long and full of honors and accomplishments, and I’m not going to try to go through all that, but maybe you can just tell us a little bit about yourself.

Dusa McDuff: OK. Well, I’m British, originally. I was born in London and grew up in Edinburgh, where I spent the first twenty years or so of my life. I was an undergraduate at Edinburgh and went on to graduate study at Cambridge, where I was working in a very specialized area, but I happened to go to Moscow in my third year of graduate study and studied with a brilliant mathematician called Gelfand, who opened my eyes to lots of interesting mathematics. When I came back, he advised that I become a topologist, so I tried to become a topologist. Since then I’ve been gradually moving my area of study, and now I study something called symplectic topology, or symplectic geometry, which is the study of space with a particular structure on it, coming out of physics, called a symplectic structure.

EL: OK. And what is your favorite theorem?

DM: My favorite theorem at the moment has got to do with symplectic geometry, and it’s called the nonsqueezing theorem. This is a theorem that was discovered in the mid-80s by a brilliant mathematician called Gromov, who was trying to understand symplectic structures. A symplectic structure is a strange structure you can put on space that groups coordinates in pairs. You take two coordinates (x1, y1) and another two coordinates (x2, y2), you measure an area with respect to the first pair and an area with respect to the second pair, and you add them. You get this very strange measurement in four-dimensional space, and the question is: what are you actually measuring? The way to understand that is to try to see it visually. He explored it by saying, “Well, let’s take a round ball in four-dimensional space, move it so we preserve this strange structure, and see what we end up with.” Can we end up with arbitrary curly shapes? What happens? One thing you do know is that you have to preserve volume, but apart from that, nothing else was known.

So his nonsqueezing theorem says that if you took a round ball, say the radii were 1 in every direction, it’s not possible to move it so that in two directions the radii are less than 1 and in the other directions it’s arbitrary, as big as you want. The two directions where you’re trying to squeeze are these paired directions. It’s saying you can’t move it in such a way.

I’ve always liked this theorem. For one thing, it’s very important. It characterizes the structure in a way that’s very surprising. And for another thing, it’s so concrete. It’s just about shapes in four dimensions. Now four dimensions is not so easy to understand.

EL: No, not for me, at least!

DM: Thinking in four dimensions is tricky, and I’ve spent many, many years trying to understand how you might think about moving things in four dimensions, because you can’t do that.

EL: And to back up a little bit, when you say a round ball, are you talking about a two-dimensional ball that’s embedded in four-dimensional space, or a four-dimensional ball?

DM: I’m talking about a four-dimensional ball.


DM: It’s got radius 1 in all directions. You’ve got a center point and you move a distance of 1 in every direction; that gives you a four-dimensional shape whose boundary is a three-dimensional sphere, in fact.

EL: Right, OK.

DM: Then you’re trying to move that, preserving this rather strange structure, and trying to see what happens.

EL: Yeah, so this is saying that the round ball is very rigid in some way.

DM: It’s very round and rigid, and you can’t squeeze it in these two related directions.

EL: At least to preserve the symplectic structure. Of course, you can do this and preserve the volume.

DM: Exactly.

EL: This is saying that symplectic structures are…

DM: Different, intrinsically different, in a very direct way.

EL: I remember one of the pictures in your talk kind of shows this symplectic idea, where you’re basically projecting some four-dimensional thing onto two different two-dimensional axes. It does seem like a very strange way to get a volume on something.

DM: It’s a strange measurement. Why you have that, why are you interested in two directions? It’s because they’re related. This structure came from physics, elementary physics. You’re looking at the movement, say, of particles, or the earth around the sun. Each particle has got a position coordinate and a velocity coordinate. It’s a pairing of position and velocity for each degree of freedom that gives this measurement.

EL: And somehow this is a very sensible thing to do, I guess.

DM: It’s a very sensible thing to do, and people have used the idea that the symplectic form is fundamental in order to calculate trajectories, say, of rockets flying off. You want to send a probe to Mars, you want to calculate what happens. You want to have accurate numerical approximations. If you make your numerical approximations preserve the underlying symplectic structure, they just do much better than if you just take other approximation methods.


DM: That was another talk, that was a fascinating talk at this year’s MathFest telling us about this, showing even if you’re trying to approximate something simple like a pendulum, standard methods don’t do it very well. If you use these other methods, they do it much better.
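The contrast Dusa describes can be sketched numerically. The Python illustration below (not from the talk she mentions) integrates a pendulum two ways: ordinary explicit Euler, and symplectic Euler, which preserves the symplectic structure of phase space. Tracking how far each method drifts from the initial energy shows the symplectic method staying bounded while explicit Euler’s error grows:

```python
import math

# Pendulum with Hamiltonian H(theta, p) = p^2/2 - cos(theta)
# (unit mass, length, and gravity).
def energy(theta, p):
    return 0.5 * p * p - math.cos(theta)

def explicit_euler(theta, p, dt):
    # Both updates use the old values.
    return theta + dt * p, p - dt * math.sin(theta)

def symplectic_euler(theta, p, dt):
    p_new = p - dt * math.sin(theta)   # update momentum first...
    return theta + dt * p_new, p_new   # ...then position with the NEW momentum

dt, steps = 0.1, 2000
e0 = energy(1.0, 0.0)

th, p = 1.0, 0.0
euler_drift = 0.0
for _ in range(steps):
    th, p = explicit_euler(th, p, dt)
    euler_drift = max(euler_drift, abs(energy(th, p) - e0))

th, p = 1.0, 0.0
symp_drift = 0.0
for _ in range(steps):
    th, p = symplectic_euler(th, p, dt)
    symp_drift = max(symp_drift, abs(energy(th, p) - e0))

print(euler_drift, symp_drift)  # explicit Euler's energy error grows; symplectic stays small
```

The only change between the two integrators is the order of the updates, yet that single change makes the method preserve the symplectic form, which is exactly why these schemes do so much better on long trajectories.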

EL: Oh wow, that’s really interesting. So when did you first learn about the nonsqueezing theorem?

DM: Well I learned about it essentially when it was discovered in the mid-1980s.


DM: I happened to be thinking about some other problem, but I needed to move these balls around preserving the symplectic structure. I had realized there was this question and couldn’t tell whether such a move was possible, when Gromov showed that one really could not do this, that there’s a strict limit. So I’ve always been interested in the many other questions coming from that.

EL: Another part of this podcast is that we like to ask our guests to pair their theorem with another delight in life, a food, beverage, piece of art or music, so what have you chosen to pair with the nonsqueezing theorem?

DM: Well you asked me this, and I decided I’d pair it with an avocado because I like avocados, and they have a sort of round, pretty spherical big seed in the middle. The seed is sort of inside the avocado, which surrounds it.

EL: OK. I like that. And the seed can’t be squeezed. The avocado’s seed cannot be squeezed. Is there anything else you’d like to say about the nonsqueezing theorem?

DM: Only that it’s an amazing theorem, that it really does underlie the whole of symplectic geometry. It’s led to many, many interesting questions. It seems to be a simple-minded thing, but it means that you can define what it means to preserve a symplectic structure without using derivatives, which means you can try and understand much more general kinds of motions, which are not differentiable but which preserve the symplectic structure. That’s a very little-understood area that people are trying to explore. What’s the difference between having a derivative and not having a derivative? It’s a sort of geometric thing. You actually see surprising differences. That’s amazing to me.

EL: Yeah. That’s a really interesting aspect to this that I hadn’t thought about. In the talk that you gave today was that the ball can’t be squeezed but the ellipsoids can. It’s this really interesting difference, also, between the ellipsoids and the ball.

DM: Right. So you have to think that somehow an ellipsoid, which is like a ball, but one direction is stretched, it’s got certain planes, there are certain discrete things you can do. You can slice it and then fold it along that slice. It’s a discrete operation somehow. That gives these amazing results about bending these ellipsoids.

EL: That’s another fascinating aspect to it. You I’m sure don’t remember this, but we actually met nine years ago when I was at the Institute for Advanced Study’s summer program for women in math. I’m pretty sure you don’t remember because I was too shy to actually introduce myself, but I remember you gave a series of lectures there about symplectic geometry. I studied Teichmüller theory, something pretty far away from that, and so I didn’t know if I was going to be interested in those. I remember that you really got me very interested in doing that many years ago. I was really excited when I saw that you were here and I’d be able to not be quite so shy this year and actually get to talk to you.

DM: That’s the thing, overcoming shyness. I used to be very shy and didn’t talk to people at all. But now I’m too old, I’ve given it all up.

EL: Well thank you very much for being on this podcast, and I hope you have a good rest of MathFest.

DM: Thank you.

Episode 4 - Jordan Ellenberg

This transcript is provided as a courtesy and may contain errors.

Kevin Knudson: Welcome to My Favorite Theorem. I’m Kevin Knudson, a mathematician at the University of Florida. I’m joined by my other cohost.

Evelyn Lamb: Hi. I’m Evelyn Lamb. I’m a freelance writer currently based in Paris.

KK: Currently based in Paris. For how much longer?

EL: Three weeks. We’re down to the final countdown here. And luckily our bank just closed our account without telling us, so that’s been a fun adventure.

KK: Well, who needs money, right?

EL: Exactly.

KK: You’ve got pastries and coffee, right? So in this episode we are pleased to welcome Jordan Ellenberg, professor of mathematics at the University of Wisconsin. Jordan, want to tell everyone about yourself?

Jordan Ellenberg: Hi. Yes, this is Jordan Ellenberg. I’m talking to you from Madison, Wisconsin today, where we are enjoying the somewhat chilly, drizzly weather we call spring.

KK: Nice. I’ve been to Madison. It’s a lovely place. It’ll be spring for real in a little while, right?

JE: It’ll be lovely. It’s going to be warm this afternoon, and I’m going to be down at the Little League field watching my son play, and it’s as nice as can be.

KK: What position does he play?

JE: He’s 11, so they mix it up. They don’t have defined positions.

KK: I have an 11-year-old nephew who’s a lefty, and they want him to pitch all the time. He’s actually pretty good.

JE: It’s the same thing as asking a first-year graduate student what their field is. They should move around a little bit.

KK: That’s absolutely true.

JE: 11 is to baseball as the first year of grad school is to math, I think. Roughly.

KK: That’s about right. Well now they start them so young. We’re getting off track. Never mind. So we’re here to talk about math, not baseball, even though there’s a pretty good overlap there. So Jordan, you’re going to surprise us. We don’t actually know what your favorite theorem is. So why don’t you lay it on us. What’s your favorite theorem?

JE: It is hard to pick your favorite theorem. I think it’s like trying to pick your favorite kind of cheese, though I think in Wisconsin you’re almost required to have one. I’m going to go with Fermat’s Little Theorem.


EL: This is a good theorem. Can you tell us what that is?

JE: I’m not even going to talk about the whole theorem. I’m going to talk about one special case, which I find very beautiful, which is that if you take a prime number, p, and raise 2 to that power, and then you divide by p, then the remainder is 2. In compact terms, you would say 2 to the p is congruent to 2 mod p. Shall we do a couple?

KK: Sure.

JE: For instance, 2^5 is 32. Computing the remainder when you divide by 5 is easy because you can just look at the last digit. 32 is 2 more than 30, which is a multiple of 5. This persists, and you can do it. Should we do one more? Let’s try. 2 to the 7th is 128, and 126 is a multiple of 7, so 128 is 2 mod 7.
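These checks can be run for as many primes as you like; here is a quick Python sketch (illustrative, not from the episode) using three-argument `pow`, which computes 2^p mod p without building the huge power. One subtlety: for p = 2 the remainder 2 is the same as 0 mod 2, so we compare residues mod p rather than against the literal 2.

```python
# Fermat's Little Theorem, base-2 case: 2^p ≡ 2 (mod p) for every prime p.
# pow(2, p, p) computes 2**p mod p efficiently.
def fermat_base2(p):
    return (pow(2, p, p) - 2) % p == 0

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
print([(p, pow(2, p, p)) for p in primes])
assert all(fermat_base2(p) for p in primes)
```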

KK: Your multiplication tables are excellent.

JE: Thank you.

KK: I guess being a number theorist, this is right up your alley. Is this why you chose it? How far back does this theorem go?

JE: Well, it goes back to Fermat, which is a long time ago. It goes back very early in number theory. It also goes back for me very early in my own life, which is why I have a special feeling for it. One thing I like about it is that there are some theorems in number theory where you’re not going to figure out how to prove this theorem by yourself, or even observe it by yourself. The way to get to the theorem, and this is true for many theorems in number theory, which is a very old, very deep subject, is you’re going to study and you’re going to marvel at the ingenuity of whoever could have come up with it. Fermat’s Little Theorem is not like that. I think Fermat’s Little Theorem is something that you can, and many people do, and I did, discover at least that it’s true on your own, for instance by messing with Pascal’s triangle. It’s something you can kind of discover. At least for me, that was a very formative experience, to be like, I learned about Pascal’s triangle, I was probably a teenager or something. I was messing around and sort of observed this pattern and then was able to prove that 2 to the p was congruent to 2 mod p, and I thought this was great. I sort of told a teacher who knew much more than me, and he said, yeah, that’s Fermat’s Little Theorem.

I was like, “little theorem?” No, this was a lot of work! It took me a couple days to work this out. I felt a little bit diminished. But to give some context, it’s called that because of course there’s the famous Fermat’s Last Theorem, poorly named because he didn’t prove it, so it wasn’t really his theorem. Now I think nowadays we call this theorem, which you could argue is substantially more foundational and important, we call it the little theorem by contrast with the last theorem.

EL: Going back to Pascal’s triangle, I’m not really aware of the connection between Fermat’s Little Theorem and Pascal’s triangle. This is an audio medium. It might be a little hard to go through, but can you maybe explain a little bit about how those are connected?

JE: Sure, and I’m going to gesticulate wildly with my hands to make the shape.

EL: Perfect.

JE: You can imagine a triangle man dance sort of thing with my hands as I do this. So there’s all kinds of crazy stuff you can do with Pascal’s triangle, and of course one thing you can do, which is sort of fundamental to what Pascal’s triangle is, is that you can add up the rows. When you add up the rows, you get powers of two.

EL: Right.

JE: So for instance, the third row of Pascal’s triangle is 1-3-3-1, and if you add those up, you get 8, which is a power of 2, it’s 2^3. The fifth row of Pascal’s triangle is 1-5-10-10-5-1. I don’t know, actually. Every number theorist can sort of rattle off the first few rows of Pascal’s triangle. Is that true of topologists too, or is that sort of a number theory thing? I don’t even know.

KK: I’m pretty good.

JE: I don’t want to put you on the spot.

EL: No, I mean, I could if I wrote them down, but they aren’t at the tip of my brain that way.

JE: We use those binomial coefficients a lot, so they’re just like right there. Anyway, 1-5-10-10-5-1. If you add those up, you’ll get 32, which is 2^5. OK, great. Actually looking at it in terms of Pascal’s triangle, why is it the case that you get something congruent to 2 mod 5? And you notice that actually most of those summands, 1-5-10-10-5-1, I’m going to say it a few times like a mantra, most of those summands are multiples of 5, right? If you’re like, what is this number mod 5, the 5 doesn’t matter, the 10 doesn’t matter, the 10 doesn’t matter, the 5 doesn’t matter. All that matters is the 1 at the beginning and the 1 at the end. In some sense Fermat’s Little Theorem is an even littler theorem, it’s the theorem that 1+1=2. That’s the 2. You’ve got the 1 on the far left and the 1 on the far right, and when the far left and the far right come together, you either get the 2016 US Presidential election, or you get 2.
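Jordan’s row-by-row argument is easy to check directly. A small Python sketch (illustrative, using `math.comb` for the binomial coefficients):

```python
import math

# Row p of Pascal's triangle: C(p, 0), C(p, 1), ..., C(p, p).
p = 5
row = [math.comb(p, k) for k in range(p + 1)]
print(row)  # [1, 5, 10, 10, 5, 1]

# The row sums to 2^p.
assert sum(row) == 2 ** p

# For prime p, every interior entry is a multiple of p, so mod p
# only the leading and trailing 1s survive: 2^p ≡ 1 + 1 = 2 (mod p).
assert all(c % p == 0 for c in row[1:-1])
assert sum(row) % p == 2 % p
```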

KK: And the reason they add up to powers of 2, I guess, is because you’re just counting the number of subsets, right? The number of ways of choosing k things out of n things, and that’s basically the order of the power set, right?

JE: Exactly. It’s one of those things that’s overdetermined. Pascal’s triangle is a place where so many strands of mathematics meet. For the combinatorists in the room, we can sort of say it in terms of subsets of a set. This is equivalent, but I like to think of it as this is the vertices of a cube, except by cube maybe I mean hypercube or some high-dimensional thing. Here’s the way I like to think about how this works for the case p=3, right, 1-3-3-1. I like to think of those 8 things as the 8 vertices of a cube. Is everybody imagining their cube right now? We’re going to do this in audio. OK. Now this cube that you’re imagining, you’re going to grab it by two opposite corners, and kind of hold it up and look at it. And you’ll notice that there’s one corner in one finger, there’s one corner on your opposite finger, and then the other six vertices that remain are sort of in 2 groups of 3. If you sort of move from one finger to the other and go from left to right and look at how many vertices you have, there’s your Pascal’s triangle, right? There’s your 1-3-3-1.

One very lovely way to prove Fermat’s Little Theorem is to imagine spinning that cube. You’ve got it held with the opposite corners in both fingers. What you can see is that you can sort of spin that cube 1/3 of a rotation and that’s going to group your vertices into groups of 3, except for the ones that are fixed. This is my topologist way. It’s sort of a fixed point theorem. You sort of rotate the sphere, and it’s going to have two fixed points.
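The cube-spinning proof is really an orbit count, and that count can be sketched in Python too (an illustration of the idea, not Jordan’s code): among the 2^p binary strings of length p, cyclic rotation groups the strings into orbits whose sizes divide p, so for prime p every orbit has size p except the two fixed constant strings. Hence 2^p = 2 + p · (number of full orbits), which is the theorem.

```python
from itertools import product

def orbit_sizes(p):
    # Group all binary strings of length p into orbits under cyclic rotation.
    seen, sizes = set(), []
    for bits in product("01", repeat=p):
        s = "".join(bits)
        if s in seen:
            continue
        orbit = {s[i:] + s[:i] for i in range(p)}  # all cyclic rotations of s
        seen |= orbit
        sizes.append(len(orbit))
    return sizes

sizes = orbit_sizes(5)
# Two fixed points ("00000" and "11111") plus orbits of size exactly 5,
# so 2^5 ≡ 2 (mod 5).
assert sorted(sizes)[:2] == [1, 1]
assert all(n == 5 for n in sorted(sizes)[2:])
assert sum(sizes) == 2 ** 5
```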

EL: Right. That’s a neat connection there. I had never seen Pascal’s triangle coming into Fermat’s little theorem here.

JE: And if you held up a five-dimensional cube with your five-dimensional fingers and held opposite corners of it, you would indeed see as you sort of when along from the corner a group of 5, and then a group of 10, and then a group of 10, and then a group of 5, and then the last one, which you’re holding in your opposite finger.

EL: Right.

JE: And you could spin, you could spin the same way, a fifth of a rotation around. Of course the real truth, as you guys know, as we talk about, you imagine a five-dimensional cube, I think everyone just imagines a 3-dimensional cube.

KK: Right. We think of some projection, right?

JE: Exactly.

KK: Right. So you figured out a proof on your own in the base-2 case?

JE: My memory is that I don’t think I knew the slick cube-spinning proof. I think I was thinking of Pascal’s triangle. This thing I said, I didn’t prove, as we were just discussing; I mean, you can look at any individual row and see that all those interior numbers in the triangle are divisible by 5. But that’s something you can prove if you know that the elements of Pascal’s triangle are the binomial coefficients, given by the formula n!/(k!(n-k)!). It’s not so hard to prove in that case that if n is a prime p, then those binomial coefficients are all divisible by p, except for the first and last. So that was probably how I proved it. That would be my guess.

KK: Just by observation, I guess. Cool.

EL: We like to enjoy the great things in life together. So along with theorems, we like to ask our guests to pair something with this theorem that they think complements the theorem particularly well. It could be a wine or beer, favorite flavor of chocolate…

JE: Since you invited somebody in Wisconsin to do this show, you know that I’m going to tell you what cheese goes with this theorem.

EL: Yes, please.

KK: Yes, absolutely. Which one?

JE: The cheese I’ve chosen to pair with this, and I may pronounce it poorly, is a cheese called gjetost.

EL: Gjetost.

JE: Which is a Norwegian cheese. I don’t know if you’ve had it. It almost doesn’t look like cheese. If you saw it, you wouldn’t quite know what it was because it’s a rather dark toasty brown. You might think it was a piece of taffy or something like that.

EL: Yeah, yeah. It looks like caramel.

JE: Yes, it’s caramel colored. It’s very sweet. I chose it, first, because, like Fermat’s Little Theorem, I just really like it, and I’ve liked it for a long time; second, because it usually comes in the shape of a cube, and so it sort of goes with my imagined proof. You could, if you wanted to, label the vertices of your cheese with the subsets of a 3-element set and use the gjetost to actually illustrate a proof of Fermat’s Little Theorem in the case p=3. And third, of course, the cheese is Norwegian, and so it honors Niels Henrik Abel, who was a great Norwegian mathematician, and Fermat’s Little Theorem is in some sense the very beginning of what we would now call Abelian group theory. Fermat certainly didn’t have those words. It would be hundreds of years before the general apparatus was developed, but it was one of the earliest theorems proved about Abelian groups, and so in that sense I think it goes with a nice, sweet Norwegian cheese.

EL: Wow, you really thought this pairing through. I’m impressed.

JE: For about 45 seconds before we talked.

EL: I’ve actually made this cheese, or at least some approximation of this. I think it’s made with whey, rather than milk.

JE: On purpose? What happened?

EL: Yeah, yeah. I had some whey left over from making paneer, and so I looked up a recipe for this cheese, and I had never tried the real version of it. After I made my version, then, I went to the store and got the real one. My version stood up OK to it. It didn’t taste exactly the same, but it wasn’t too bad.

JE: Wow!

KK: Experiments in cheesemaking.

JE: In twelve years, I’ve never made my own cheese. I just buy it from the local dairy farmers.

EL: Well it was kind of a pain, honestly. It stuck to everything. Yeah.

JE: Someone who lives in Paris should not be reduced to making their own cheese, by the way. I feel like that’s wrong.

EL: Yes.

KK: I’m not surprised you came up with such a good pairing, Jordan. You’ve written a novel, right, years ago, and so you’re actually a pretty creative type. You want to plug your famous popular math book? We like to let people plug stuff.

JE: Yes. My book, which came out here a few years ago, is called How Not to Be Wrong. It’ll be out in Paris in two weeks in French. I just got to look at the French cover, which is beautiful. In French it’s called, and I’m not going to be able to pronounce it well, “L’art de ne pas dire n’importe quoi,” which is “The art of not saying whatever nonsense,” or something like this. It’s actually hard work to translate the phrase “How not to be wrong” into French. I was told that any literal translation of it sounds appallingly bad in French.

This book is kind of a big compendium of all kinds of things I had to say with a math angle. Some of it is about pure math, and insights I think regular people can glean from things that pure mathematicians think about, and some are more on the “statistical news you can use” side. It’s a big melange of stuff.

KK: I’ve read it.

JE: I’m a bit surprised people like it and have purchased it. I guess the publishing house knew that because they wouldn’t have published it, but I didn’t know that. I’m surprised people wanted it.

KK: I own it in hardback. I’ll say it. It’s really well done. How many languages is it into now?

JE: They come out pretty slowly. I think we’ve sold 14 or 15. I think the number that are physically out is maybe []. I think I made the book hard to translate by having a lot of baseball material and references to US cultural figures and stuff like that. I got a lot of really good questions from the Hungarian translator. That one’s not out, or that one is out, but I don’t have a copy of it. It just came out.

KK: Very cool.

JE: The Brazilian edition is very, very rich in translator’s notes about what the baseball words mean. They really went the extra mile to be like, what the hell is this guy talking about?

KK: Is it out in Klingon yet?

JE: No, I think that will have to be a volunteer translator because I think the commercial market for Klingon popular math books is not there. I’m holding out for Esperanto. If you want my sentimental favorite, that’s what I would really like. I tried to learn Esperanto when I was a kid. I took a correspondence course, and I have a lifelong fascination for it. But I don’t think they publish very many books in Esperanto. There was a math journal in Esperanto.

EL: Oh wow.

KK: That’s right, that’s right. I sort of remember that.

JE: That was in Poland. I think Poland is one of the places where Esperanto had the biggest popularity. I think the guy who founded it, Zamenhof, was Polish.

KK: Cool. This has been fun. Thanks, Jordan.

JE: Thank you guys.

EL: Thanks a lot for being here.

KK: Thanks a lot.

KK: Thanks for listening to My Favorite Theorem, hosted by Kevin Knudson and Evelyn Lamb. The music you’re hearing is a piece called Fractalia, a percussion quartet performed by four high school students from Gainesville, Florida. They are Blake Crawford, Gus Knudson, Dell Mitchell, and Baochau Nguyen. You can find more information about the mathematicians and theorems featured in this podcast, along with other delightful mathematical treats, at Kevin’s website, kpknudson.com, and Evelyn’s blog, Roots of Unity, on the Scientific American blog network. We love to hear from our listeners, so please drop us a line at myfavoritetheorem@gmail.com. Or you can find us on Facebook and Twitter. Kevin’s handle on Twitter is @niveknosdunk, and Evelyn’s is @evelynjlamb. The show itself also has a Twitter feed. The handle is @myfavethm. Join us next time to learn another fascinating piece of mathematics.