Episode 52 - Ben Orlin

Kevin Knudson: Welcome to My Favorite Theorem, a math podcast. I'm Kevin Knudson, professor of mathematics at the University of Florida. And here is your other host.

Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer, usually based in Salt Lake City, but currently still in Providence. My semester at ICERM ends in about a week, so I'm trying to eat the last oysters that remain in the state before I leave and head back.

KK: Okay, so you actually like oysters.

EL: Oh, I love them. Yeah, they're fantastic.

KK: That is one of those, it’s a very binary food, right? You either love them—and I do not like them at all.

EL: Oh, I get that, I totally get it.

KK: Sure.

EL: They’re like, in some sense objectively gross, but I actually love them.

KK: Well, I'm glad you've gotten your fill in. Probably—I imagine they're a little more difficult to get in Salt Lake City.

EL: Yeah, you can, but it's not like what you can get over here.

KK: Might be slightly iffy. You don't know how long they've been out of the water, right?

EL: Yeah. So there's one place that we eat oysters sometimes there, yeah, that's the only place.

KK: Yeah, right. Okay. Well, today we are pleased to welcome Ben Orlin. Ben, why don't you introduce yourself?

Ben Orlin: Yeah, well, thanks so much for having me, Kevin and Evelyn. Yes, I'm Ben Orlin. I’m a math teacher, and I write books about math. So my first book was called Math with Bad Drawings, and my second one is called Change Is the Only Constant.

EL: Yeah, and you have a great blog of the same name as your first book, Math with Bad Drawings.

BO: Yeah, thank you. And I think our blogs almost share a birthday—not exactly, but we started them within months of each other, right? Roots of Unity and Math with Bad Drawings.

EL: Oh, yeah.

BO: Began in, like, spring of 2013 which was a fertile time for blogs to begin.

EL: Yeah. Well, a few years ago, you had some poll of your readers about what other things they read and stuff, and my blog was considered the most similar to yours, by some metric.

BO: Yeah, I did a reader survey and asked people what other sources they read—mostly I was looking for reading recommendations—so, what else do they consider similar? Overwhelmingly it was XKCD. Not so much—just because XKCD, it's like if you have a little light that you're holding, a little candle you're holding up, and you're like, what does this remind you of? And a lot of people are going to say the sun, because they look up, and that's where they see visible light.

KK: Sure.

BO: But I think in terms of actually similar writing, I think Roots of Unity is not so different, I think.

EL: Yeah. So I thought that was interesting because I have very few drawings on mine. Although the ones that I do personally create are definitely bad. So I guess there's that similarity.

BO: That’s the key thing, committing to the low quality.

KK: Yeah, but that's just it. I would argue they're actually not bad. So if I tried to draw like you draw, it would be worse. So I guess my book should just be Math with Worse Drawings.

BO: Right.

KK: You actually get a lot of emotion out of your characters, even though they're simple stick figures, right? There's some skill there.

BO: Yeah, yeah. So I tried to draw them with very expressive faces. Yeah, they're definitely still bad drawings, is my feeling. Sometimes people say, "Oh, but they've gotten so much better since you started the blog," which is true, but it's one of these things where they could get a lot better every five-year interval for the next 50 years and still, I think, not look like professional drawings by the end of it.

EL: Right. You're not approaching Rembrandt or anything.

KK: All right, so we asked you on here, because you do have bad drawings, but you also have thoughts about mathematics and you communicate them very well through your drawings. So you must have a favorite theorem. What is it?

BO: Yeah. So this one is drawn from my second book, actually, the second book is about calculus. And I have to confess I already kind of strayed from the assignment because it's not so much a favorite theorem as a favorite construction.

KK: Oh, that’s cool.

EL: You know, we get rule breakers on here. So yeah, it happens.

BO: Yeah, I guess that's the nature of mathematicians, they like to bend the rules and imagine new premises. So pretending that this were titled My Favorite Construction, I would pick Weierstrass's function. So, you know, it was first introduced in 1872. And the idea is it's this function which is continuous everywhere and differentiable nowhere.

EL: Yeah. Do you want to describe maybe what this looks like for anyone who might not have seen it yet?

BO: Yeah, sure. So when you're picturing a graph, you're probably picturing—it varies. I teach secondary school, so students are usually picturing a fairly small set of possibilities, right? Like you're picturing a line, maybe you're thinking of a parabola, maybe something with a few more squiggles, maybe as many squiggles as a sine wave going up and down. But they all have a few things in common. One is that almost anything students are going to picture is continuous everywhere. So basically, it's made of one unbroken line; you can imagine drawing it with your pencil without picking the pencil up. And then the other feature they have—this one's a little subtler—is that there will be almost no points that are jagged, or sort of crooked. You know, if I picture an absolute value graph, it's a straight line coming down to the origin from the left, and then there's a sharp corner at the origin, and then it rises away from that sharp corner. And those kinds of sharp corners—you may have one or two in a graph a student would draw, but that's sort of it. You know, sharp corners are weird. You can't draw all sharp corners. It feels like between any two sharp corners on your graph, there's going to have to be some kind of non-sharp stuff connecting them, some kind of smooth bits going between them.

KK: Right.

BO: And so what's sort of wild about Weierstrass's function is that you look at it, and it just looks very jagged. It's got a lot of sharp corners. And you start zooming in, and you see that even between the sharp corners, there are more sharp corners. And you keep zooming in and there's just sharp corners all the way down. It's what we today would call a fractal, although back then that word wasn't around. And it's the entire thing: every single point along this curve is, in some sense, a sharp corner.

EL: Yeah, it kind of looks like an absolute value everywhere.

BO: Yeah, exactly. It has that cusp at every single point you could look at.

KK: Right? So very pathological in nature. And, you know, I'm sure I've seen the construction of this. Is it easy to say what the construction is? Or is this going to be too technical for an audio format?

BO: It's actually not hard to construct. There are whole families of functions that have the same property, but Weierstrass's is pretty simple. He starts with basically just a cosine curve. So you sort of have cosine of πx—so picture, you know, a cosine wave that has a period of two. And then you do another one that has a much shorter period. You can sort of pick different numbers, but let's say the next one that you add on has a period that's 21 times faster, so it's sort of going up and down much quicker. And it's shorter, though—we've shrunk the amplitude also, so it's only about a third, let's say, as tall. And so you add that onto your first function. So now we've got—we started with just a nice, gentle wave, and now we've got a wave that has lots of little waves kind of coming off of it. And then you keep repeating that process. So the second one in the iteration has 21 cycles for every two units. The next one has 21² cycles, and it's 1/9 the height of the original.

KK: Okay.

BO: And then after that, you're going to do, you know, 21³ cycles in the same span, 21⁴ cycles. And so it goes—I don't know if you can hear, my daughter is crying in the background, because I think she finds it sort of upsetting to imagine a function that has this kind of weird property.

EL: Fair.

BO: Especially because it's such a simple construction, right? It's just, like, little building blocks for her that we're putting together. And one of the things I like about the construction is that at no step do you have any non-differentiable points, actually. It's a wave with a little wave on top of it and lots of little waves on top of that, and then tons and tons of little waves on top of that, but these are all smooth, nice, curving waves. And then it's only in the limit, sort of at the end of that infinite bridge, that suddenly it goes from all these little waves to being differentiable nowhere.

KK: I mean, I could see why that would be true, right?

BO: Yeah, right. Right. It feels like it's getting worse. And you can do—Weierstrass's function is really a whole family of functions. He came up with some conditions that you need; the basic idea is you need to pick an odd number for the number of cycles and then a geometric series for the amplitudes.
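The partial-sum construction Ben describes can be sketched in a few lines. This is a hypothetical illustration, not code from the episode: it uses his numbers, an amplitude ratio a = 1/3 and a frequency multiplier b = 21, which satisfy Weierstrass's condition that b is an odd integer with ab > 1 + 3π/2 (here 7 > ~5.71). Each partial sum is a perfectly smooth function; the nowhere-differentiable behavior appears only in the limit.

```python
import math

def weierstrass_partial(x, n_terms=20, a=1/3, b=21):
    """Partial sum of Weierstrass's function: sum of a^n * cos(b^n * pi * x).

    Every partial sum is smooth (a finite sum of cosines); only the
    infinite limit loses differentiability. The series converges because
    the amplitudes a^n form a geometric series with a < 1.
    """
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(n_terms))

# Weierstrass's condition: b odd, and a*b > 1 + 3*pi/2.
assert 21 % 2 == 1 and (1/3) * 21 > 1 + 3 * math.pi / 2
```

Plotting successive partial sums reproduces the picture Ben mentions from his book: by the third or fourth iteration the curves are visually indistinguishable, even though each step adds 21 times as many wiggles.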

KK: So what's so appealing about this to you? It's just you can't draw it well, like you have to draw it badly?

BO: Yeah, that's one thing, right. Exactly. I try to push people into my corner, force them to have to draw badly. I do like that this is something—right, graphs of functions are so concrete, and yet this one you really can't draw. I've got it in my book, I have a picture of the first few iterations. And already, you can't tell the difference between the third step and the fourth step. So I had to, you know, do a little box and an inset picture and say, actually, in this fourth step, what looks like one little wave is really made up of 21 smaller waves. So I do sort of like that, how quickly we get into something kind of unimaginable and strange. And also, you know, I'm not a historian of mathematics, and so I always wind up feeling like I'm peddling fairy tales about mathematical history more than the complicated truth that is history. But the role that this function played in going from a world where it felt like functions were kind of nice and were something we had a handle on, into opening up this world where, like, oh no, there are all these pathological things going on out there, and there are just these monsters that lurk in the world of possibility.

KK: Yeah.

EL: Right. And was this—do you know, was this maybe one of the first, or the first, step towards realizing that in some measure sense all functions are completely pathological? Do you know kind of where it fell there, or what the purpose was of creating it in the first place?

BO: Yeah, I think that's exactly right. I don't know the ins and outs of that story. I do know that if you look in spaces of functions, they sort of all have this property; among continuous functions, I think it's only a set of measure zero that doesn't have it. So the sort of basic narrative as I understand it, leading from the start of the 19th century to the end of the 19th century, is basically thinking that we can mostly assume things are good, to realizing that sometimes things are bad (like this function), culminating in the realization that actually basically everything is bad. And the good stuff is just these rare diamonds.

EL: Yeah, I guess maybe the slight, I don't know, silver lining, is that often we can approximate with good things instead. I don't know if that's, like, the next step in the evolution or something.

BO: Right. Yeah, I guess that's right. Certainly, that's a nice way to salvage a silver lining, salvage a happy message. Because it's true, right? To take a simpler example, the rationals are only a set of measure zero in the reals, but, you know, they're everywhere, they're dense. So at least, you know, if you have some weird number, you can at least approximate it with a rational.

EL: Yeah, I was just thinking when you were saying this, how it has a really nice analogy to the rationals, and even algebraic numbers and stuff. Like, "Okay, start naming numbers"—you'll probably name whole numbers, which are, you know, this sparse set of measure zero. It's like, "Oh, be more creative." "Okay, well, I'll name some fractions and some square roots and stuff." But you're still just naming sets of measure zero; you're never naming some weird transcendental number that I can't figure out a way to compute.
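The point Evelyn and Ben are circling here—that a measure-zero set like the rationals can still approximate every real number arbitrarily well—is easy to see concretely. As a hypothetical illustration (not anything from the episode), Python's standard `fractions` module can find the best rational approximations to π with bounded denominators:

```python
import math
from fractions import Fraction

# The rationals have measure zero in the reals, yet they're dense:
# we can approximate pi as closely as we like by allowing ever-larger
# denominators. limit_denominator finds the closest such fraction.
approximations = {
    bound: Fraction(math.pi).limit_denominator(bound)
    for bound in (10, 100, 10_000)
}
for bound, frac in approximations.items():
    print(f"{frac} ~ pi (error {abs(float(frac) - math.pi):.2e})")
```

The first approximation it finds is the familiar 22/7, and each larger denominator bound cuts the error further.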

BO: Yeah, it is funny, right? Because in some sense, we've imagined these things called numbers and these things called functions. And then you ask us to pick examples, and we pick the most unlikely, nicest hand-picked, cherry-picked examples. And so the actual stuff—we've imagined this category called functions, we came up with that definition, and most of what's in that category is stuff that's much too weird for us to begin to picture.

EL: Yeah.

BO: Which says something about, I guess, our reach exceeding our grasp or something. I don't really know, but our definitions can really outrun our intuition.

EL: Yeah. So where did you first encounter this function?

BO: That's a good question. I feel like probably as a kind of folklore bit in maybe 12th grade math. I feel like when I was first learning calculus, it was sort of whispered about. You know, my teacher sort of mentioned it offhand. And that was very enticing, and in some sense, that's actually where my whole second book comes from: all these little bits of folklore, not exactly the thing you teach in class, but the little, I don't know, the thing that gets mentioned offhand. And you go, "Wait, what was that?" "Oh, well, don't worry. You'll learn about that in your real analysis class in four years." I don't want to learn about that in four years. Tell me about that now. I want to know about that weird function. And then I think the first proper reading I did was probably in William Dunham's book The Calculus Gallery, which is a nice book going through different bits of historical mathematics, beginning with the beginnings of calculus through the late 19th century. And he has a nice discussion of the function and its construction.

KK: So when we were preparing for this, you also mentioned there are connections to Brownian motion here. Do you want to mention those for our audience?

BO: Yeah, I love that this turns out—so I have some quotes here from right when this function was sort of debuted, right when it was introduced to the world. You have Émile Picard; his line was, “If Newton and Leibniz had thought that continuous functions do not necessarily have a derivative, the differential calculus would never have been invented.” Which I like. If Newton and Leibniz knew what you were going to do to their legacy, they would never have done this! They would have rejected the whole premise. And then Charles Hermite? [Pronounced “her might”; wonders if the pronunciation is correct]

KK: Hermite. [Pronounced “her meet”]

BO: That sounds better. Sounds good, sure. Right. His line was—and I don't know what the context was—but, “I turn away with fright and horror from this lamentable evil of functions that do not have derivatives.” Which is really layering it on. I like the way people spoke in the 19th century. There was a lot more flavor to their language.

EL: Yeah.

BO: And Poincaré also—he said that a hundred years prior to Weierstrass developing it, such a function would have been regarded as an outrage to common sense. Anyway, so I mention all those. You mentioned Brownian motion, right? The instinct when you see this function is that this is utterly pathological, that this is math just completely losing touch with physical reality and giving us these weird intellectual puzzles and strange constructions that can't possibly mean anything to real human beings. And then it turns out that that's not true at all. So Brownian motion—you look at pollen dancing around on the surface of some water, and it's jumping around in these really crazy aggressive ways. And it turns out our best models of that process, of any kind of Brownian motion—you know, coal dust in the air or pollen on water—our best model, to a pretty good approximation, has the same property. The path is so jagged and surprising and full of sudden turns from moment to moment that it's nowhere differentiable, even though the particle obviously sort of has to be continuous. It can't be discontinuous—that would mean it's jumping, like literally teleporting from one place to another, so that's not really the right model. But it is non-differentiable everywhere, which means, weirdly, that it doesn't have a speed, right? Like, a derivative is a velocity.

EL: So that means maybe it has an average speed, but not a speed at any given time.

BO: Yeah, well, actually, even—I think it depends how you measure. I'd have to look back at this, because what the model says is that between any two moments, between any two points in time, it's traversing an infinite distance. So I guess it could have an average velocity—over a given time interval, you can just take how far it travels in that time interval and divide by time—but the average speed, if you take the absolute value, the magnitude, I think winds up being infinite. But really, it's just that speed is no longer a meaningful notion. It's moving in such an erratic way that you can't even talk about speed.
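Ben's hedged claim here—that the measured speed of a Brownian path blows up as you sample it on finer and finer time grids—can be checked numerically. In the standard model, the increment over a time step dt is Gaussian with standard deviation √dt, so the apparent speed |Δx|/dt grows like 1/√dt as dt shrinks. This is a hypothetical sketch, not code from the episode:

```python
import math
import random

def mean_apparent_speed(n_steps, total_time=1.0, seed=1):
    """Average |dx|/dt along one simulated Brownian path.

    Each increment is drawn from N(0, dt), so |dx| is on the order of
    sqrt(dt), and the measured speed |dx|/dt grows like 1/sqrt(dt)
    as the time grid is refined.
    """
    rng = random.Random(seed)
    dt = total_time / n_steps
    sd = math.sqrt(dt)
    return sum(abs(rng.gauss(0.0, sd)) for _ in range(n_steps)) / (n_steps * dt)

# The finer we chop up time, the faster the particle appears to move:
speeds = {n: mean_apparent_speed(n) for n in (100, 10_000, 1_000_000)}
print(speeds)
```

Each hundredfold refinement of the grid multiplies the apparent speed by roughly ten, so in the limit the "speed" diverges, just as Ben suggests.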

KK: Well, because that tends to imply a direction. I mean, you know, it’s really velocity. That always struck me as that's the real problem, is that you can't figure out what direction it's going, because it's effectively moving randomly, right?

BO: Yeah, I think that's fair. Yeah. The only way I can build any intuition about it is to picture a single—imagine a baseball having a single non-differentiable moment. So like, you toss it up in the air. And usually what would happen is that it goes up in the air, it kind of slows down and slows down and slows down. There's that one moment when it's kind of not moving at all. And then it begins to fall. And so the non-differentiable version would be, like, you throw it up in the air, it's traveling up at 10 meters per second, and then a trillionth of a second later, it's traveling down at 10 meters per second. And what's happening at that moment? Well, it's just unimaginable. And now for Brownian motion, you've got to picture that that moment is every moment.

KK: Right. Yeah. Weird, weird world.

BO: Yeah.

KK: So another thing we like to do on this podcast is ask our guests to pair their, well in your case construction, with something. What does the Weierstrass function pair with?

BO: Yeah. So I have two things in mind, both of them constructions of new things that kind of opened up new possibilities that people could not have imagined before. So the first one—maybe I should have picked a specific dish, but I'm picturing basically just molecular gastronomy, this movement in cooking where you take—one example I just saw recently in a book was, I think it was WD-50, a sort of famous molecular gastronomy restaurant in New York. The dish comes to you and it looks like a small poppyseed bagel with lox. And then as it gets closer, you realize it's not a poppyseed bagel with lox, it's ice cream that looks almost identical to a poppyseed bagel with lox. So that's sort of weird enough already. And then you take a taste and you realize that actually, it tastes exactly like a poppyseed bagel with lox, because they've somehow worked all the flavors into the ice cream.

KK: Hmm.

BO: Anyway, so molecular gastronomy basically is about imagining very, very weird possibilities of food that are outside our usual traditions, much in the way that Weierstrass’s function kind of steps outside the traditional structures of math.

EL: Yeah, I like this a lot. It's a good one, partly because I'm a little bit of a foodie. When I lived in Chicago, we went to this restaurant that had this amazing, like, molecular gastronomy thing. I'm trying to remember—one of the things we had was this frozen sphere of blue cheese, and it was so weird and good. You'd get, like, puffs of air that are something, and there was, like, a ham sandwich, but the bread was only the crust—somehow there's, like, nothing inside. Yeah, it was all these weird things. A liquefied olive that was inside some little gelatin thing, so it was just, like, concentrated olive taste that bursts in your mouth. So good.

BO: That sounds awesome to me, the molecular gastronomy food. I have very little firsthand experience of it.

KK: So you mentioned a second possible pairing. What would that be?

BO: Yeah, so the other one I had in mind is music. It's a Beatles album, Revolver.

KK: Great album.

BO: One of my favorite albums. And much like molecular gastronomy shows that the foods we're eating are actually just a tiny subset of the possible foods that are out there, that's similarly what Revolver did for pop music in '65, or whenever it came out.

KK: ’66.

BO: Okay, '66. Alright, thank you for that.

EL: I am not well-versed in albums of The Beatles. You know, I am familiar with the music of the Beatles, don’t worry. But I don't know what's on what album. So what is this album?

BO: So Kevin and I can probably go track by track for you.

KK: I’d have to think about it, but it's got Norwegian Wood on it, for example.

BO: Oh, that's Rubber Soul, actually.

KK: Oh, that's Rubber Soul. You're right. Yeah, I lost my Beatles cred. That's right. My bad. I mean, some would argue—so Revolver was, some people argue, the first album. Before that, albums had just been collections of singles, even in the case of the Beatles, but Revolver holds together as a piece.

BO: Yeah, that's one thing. Which, again, there's probably some analogy to Weierstrass's function there. Also, it begins with this kind of weird countdown—I don't remember if it's John or George, but they're saying "one, two, three, four" in the intro to Taxman.

KK: Yeah. Into Taxman, which is—it's not my favorite Beatles song, but it's certainly among the top four. Right.

BO: Yeah. So that one—already right there, it's a pop song about taxes, so lyrically, we're exploring different parts of the possibility space than musicians were before. Track two is Eleanor Rigby, where the only instrumentation is strings, which again is something you didn't really hear in pop. You know, Yesterday had brought in some strings; that was sort of innovative, and other bands had done similar things. But the idea of a song that's all strings—and then I'm Only Sleeping as the third track, which has this backwards guitar: they recorded the guitar and just played it backwards. And then Yellow Submarine, which is, like, this weird Raffi song that somehow snuck onto a Beatles album. Yeah, and then For No One has this beautiful French horn solo. Every track is drawn from some distant corner of this space of possible popular music, these corners that had not been explored previously. Anyway, so my recommendation is: think about the Weierstrass function while eating, you know, a giant sphere of blue cheese and listening to Taxman.

EL: Great. Yeah. I strongly urge all of our listeners to go do that right now.

BO: Yeah, if anyone does it, it'll probably be the first time that that set of activities has been done in conjunction.

EL: Yeah. But hopefully not the last.

BO: Hopefully not the last. That's right. Yeah. And most experiences are like that, in fact.

KK: So we also like to let our guests plug things. You clearly have things to plug.

BO: I do. Yeah, I'm a peddler of wares. So the prominent thing is my blog, Math with Bad Drawings, and you're welcome to come read that. I try to post funny, silly things there. And then my two books are Math with Bad Drawings, which kind of explores how math pops up in lots of different walks of life—like, you know, thinking about lottery tickets, or the Death Star is another chapter—and then Change Is the Only Constant is my second book, and it's all about calculus, sort of calculus through stories. Yeah, that one just came out earlier this year, and I'm quite proud of it. So you should check it out.

KK: Yeah, so I own both of them. I've only read Math with Bad Drawings. I've been too busy so far to get to Change Is the Only Constant.

EL: And there have been a slew of good pop—or I assume good, because I haven't read most of them yet—pop math books that have come out recently, so yeah, I feel like my stack is growing. It's the fall of calculus or something.

BO: It's been a banner year. And exactly, calculus has been really at the forefront. Steve Strogatz's Infinite Powers was a New York Times bestseller, and then David Bressoud [Calculus Reordered] and others who I'm blanking on right now have had books. There was another graphic, like, cartoon calculus that came out earlier this year. So yeah, apparently calculus is kind of having a moment.

EL: Well, and I just saw one about curves.

KK: Curves for the Mathematically Curious. It's sitting on my desk. Many of these books that you've mentioned are sitting on my desk.

EL: So yeah, great year for reading about calculus, but I think Ben would prefer that you start that reading with Change Is the Only Constant.

BO: It's very frothy, it's very quick and light-hearted. You can use it as your appetizer to get into the cheesier balls of the later books.

KK: But it's highly non-trivial. I mean, you talk about really interesting stuff in these books. It's not some frothy thing. I mean it's lighthearted, but it's not simple.

BO: I appreciate that. Yeah, in the early draft of the book, I was doing a pretty faithful march through the AP Calculus curriculum. And that draft wasn't really working. And I realized that part of what I wasn't doing that I should be doing was—since I'm not teaching you how to execute calculus maneuvers, I'm not teaching how to take derivatives, I can talk about anything as long as I can explain the ideas. So we've got Weierstrass's function in there, and there's a little bit even on Lebesgue integration, and some stuff on differential equations crops up. So since I'm not actually teaching a calculus course and I don't need to give tests on it, I just got to tell stories.

EL: Well, yeah, I hope people will check that out. And thanks for joining us today.

BO: Yeah, thanks so much for having me.

KK: Yeah. Thanks, Ben.

[outro]

Our guest on this episode, Ben Orlin, is a high school math teacher best-known for his blog and popular math books. He told us about Weierstrass’s construction of a function that is continuous everywhere but differentiable nowhere. Here is a short collection of links that might be interesting.

Ben’s Blog, Math with Bad Drawings

Math with Bad Drawings, the book

Change Is the Only Constant

Episode 51 - Carina Curto

Evelyn Lamb: Hello, and welcome to My Favorite Theorem, the math theorem with no test at the end. I think I decided I liked that tagline. [Editor’s note: Nope, she really didn’t notice that slip of the tongue!]

Kevin Knudson: Okay.

EL: So we’re going to go with that. Yeah. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is the other host.

KK: I’m Kevin Knudson, a professor of mathematics at the University of Florida. How are you doing?

EL: I'm doing well. Yeah, not anything too exciting going on here. My mother-in-law is coming to visit later today. So the fact that I have to record this podcast means my husband has to do the cleaning up to get ready.

KK: Wouldn’t he do that anyway? Since it’s his mom?

EL: Yeah, probably most of it. But now I've got a really good excuse.

KK: Yeah, sure. Well, Ellen and I had our 27th anniversary yesterday.

EL: Oh, congratulations.

KK: Yeah, we had a nice night out on the town. Got a hotel room just to sit around and watch hockey, as it turns out.

EL: Okay.

KK: But there's a pool at the hotel. And you know, it's hot in Florida, and we don't have a pool. And this is absurd—which Ellen reminds me of every day, that we need a pool—and I just keep telling her that we can either send the kid to college or have a pool. Okay.

EL: Yeah.

KK: I mean, I don't know. Anyway, we're not here talking about that, we're talking about math.

EL: Yes. And we're very excited today to have Carina Curto on the show. Hi, Carina, can you tell us a little bit about yourself?

Carina Curto: Hi, I'm Carina, and I'm a professor of mathematics at Penn State.

EL: Yeah, and I think I first—I don't think we've actually met. But I think the first time I saw you was at the Joint Meetings a few years ago. You gave a really interesting talk about, like, the topology of neural networks, and how your brain has these, like, basically kind of mental maps of spaces that you interact with. It was really cool. So is that the kind of research you do?

CC: Yeah, so that was—I remember that talk, actually, at the Joint Meetings in Seattle. So that was a talk about the uses of topology for understanding neural codes. And a lot of my research has been about that. And basically, everything I do is motivated in some way by questions in neuroscience. And so that was an example of work that's been motivated by neuroscience questions about how your brain encodes geometry and topology of space.

KK: Now, there's been a lot of TDA [topological data analysis] moving in that direction these last few years. People have been finding interesting uses of topology in neuroscience, studying the brain and imaging, stuff like that. Very cool stuff.

CC: Yeah.

EL: And did you come from more of a neuroscience background? Or have you been kind of picking that up as you go, coming from a math background?

CC: So I originally came from a mathematical physics background.

EL: Okay.

CC: I was actually a physics major as an undergrad. But I did a lot of math, so I was effectively a double major. And then I wanted to be a string theorist.

KK: Sure, yeah.

CC: I started grad school in 2000. So this is, like, right after Brian Greene’s The Elegant Universe came out.

EL: Right. Yeah.

CC: You know, I was young and impressionable. And so I kind of went that route because I loved physics, and I loved math. And it was kind of an area of physics that was using a lot of deep math. And so I went to grad school to do mathematical string theory in the math department at Duke. And I worked on Calabi-Yaus and, you know, extra dimensions and this kind of stuff. And the math was mainly algebraic geometry, which is what my PhD thesis was in. So this had nothing to do with neuroscience.

EL: Right.

CC: Nothing. And so basically about halfway through grad school—I don't know how better to put it—I got a little disillusioned with string theory. People laugh now when I say that, because everybody is.

KK: Sure.

CC: But I started kind of looking for other—I always wanted to do applied things, interdisciplinary things. And so neuroscience just seemed really exciting. I kind of discovered it randomly and started learning a lot about it and became fascinated. And so then when I finished my PhD, I actually took a postdoc in a neuroscience lab that had rats and, you know, was recording from the cortex and all this stuff, because I just wanted to learn as much neuroscience as possible. So I spent three years working in a lab. I didn't actually do experiments. I did mostly computational work and data analysis. But it was kind of a total cultural immersion sort of experience, coming from more of a pure math and physics background.

EL: Right. Yeah, I bet that was a really different experience.

CC: It was really different. So I kind of left math in a sense for my first postdoc, and then I came back. So I did a second postdoc at Courant at NYU, and then started getting ideas of how I could tackle some questions in neuroscience using mathematics. And so ever since then, I've basically become a mathematical neuroscientist, I guess I would call myself.

KK: So 2/3 of this podcast is Duke alums. That's good.

CC: Oh yeah? Are you a Duke alum?

KK: I did my degree there too. I finished in ’96.

CC: Oh, yeah.

KK: Okay. Yeah.

CC: Cool.

EL: Nice. Well, so what is your favorite theorem?

CC: So I have many, but the one I chose for today is the Perron-Frobenius theorem.

KK: Nice.

EL: All right.

CC: And so you want to know about it, I guess?

KK: We do. So do our listeners.

CC: So it's actually really old. I mean, there are older theorems, but Perron proved it, I think, in 1907 and Frobenius in 1912, so it carries both of their names. So it's over 100 years old. And it's a theorem in linear algebra. So it has to do with eigenvectors and eigenvalues of matrices.

KK: Okay.

CC: And so I'll just tell you quickly what it is. So, if you have a square matrix, like an n×n square matrix—and there are many variations of the theorem; I'm going to tell you the simplest one—if all the entries of your matrix are positive, then you are guaranteed that your largest eigenvalue is unique and real and positive. Now, eigenvalues can be complex. They can come in complex conjugate pairs, for example, but when we talk about the largest one, we mean the one that has the largest absolute value.

EL: Okay.

KK: All right.

CC: And so one part of the theorem is that that eigenvalue is unique and real and positive. And the other part is that you can pick the corresponding eigenvector for it to be all positive as well.
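Both parts of the statement are easy to check numerically. Here is a minimal pure-Python sketch on a made-up 2×2 positive matrix (the numbers are chosen for illustration, not taken from the episode), with the eigenvalues computed from the quadratic formula:

```python
import math

# Hypothetical 2x2 matrix with all-positive entries (made up for illustration).
a, b, c, d = 2.0, 1.0, 1.0, 2.0

# Eigenvalues from the characteristic polynomial x^2 - (a+d)x + (ad - bc).
tr, det = a + d, a * d - b * c
disc = tr * tr - 4 * det
lam1 = (tr + math.sqrt(disc)) / 2   # the leading eigenvalue
lam2 = (tr - math.sqrt(disc)) / 2

# Part one of the theorem: the leading eigenvalue is real and positive
# and strictly dominates the other one in absolute value.
assert disc > 0 and lam1 > 0 and lam1 > abs(lam2)

# Part two: we can choose an all-positive eigenvector for it. Solving
# (A - lam1*I) v = 0 gives v = (b, lam1 - a) whenever b != 0.
v = (b, lam1 - a)
assert v[0] > 0 and v[1] > 0

# Sanity check: A v really equals lam1 * v.
Av = (a * v[0] + b * v[1], c * v[0] + d * v[1])
assert all(abs(Av[i] - lam1 * v[i]) < 1e-9 for i in range(2))
```

Here the eigenvalues come out to 3 and 1, and the Perron eigenvector for 3 is (1, 1), all positive as promised.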

EL: Okay. And we were talking before we started taping that I'm not actually remembering for sure whether we've used the words eigenvector and eigenvalue yet on the podcast, which, I feel like we must have because we've done so many episodes, but yeah, can we maybe just say what those are for anyone who isn't familiar?

CC: Yeah. So when you have a matrix, like a square matrix, you have these special vectors. So the matrix operates on vectors. And so a lot of people have learned how to multiply a matrix by a vector. And so when you have a vector, so say your matrix is A and your vector is x, if A times x gives you a multiple of x back—so you basically keep the same vector, but maybe scale it—then x is called an eigenvector of A. And the scaling factor, which is often denoted λ, is called the eigenvalue associated to that eigenvector.

KK: Right. And you want x to be a nonzero vector in this situation.

CC: Yes, you want x to be nonzero, yes, otherwise it's trivial. And so I like to think about eigenvectors geometrically, because if you think of your matrix operating on vectors in some Euclidean space, for example, then what the matrix will do is pick up a vector and move it to some other vector, right? So there are operations that take vectors to vectors, called linear transformations, that are manifested by the matrix multiplication. And so when you have an eigenvector, the matrix keeps the eigenvector on its own line and just scales it, or it can flip the sign. If the eigenvalue is negative, it can flip it to point the other direction, but it basically preserves that line, which is called the associated eigenspace. So it has a nice geometric interpretation.
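In code, the definition is just the equation Ax = λx. A tiny sketch, with a matrix and a claimed eigenpair made up for illustration:

```python
# Made-up 2x2 matrix with a claimed eigenpair: check the defining equation.
A = [[2.0, 1.0],
     [0.0, 3.0]]
x = [1.0, 1.0]   # claimed eigenvector (must be nonzero)
lam = 3.0        # claimed eigenvalue

def matvec(M, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# A x should be the same vector scaled by lam: the matrix keeps x on its line.
Ax = matvec(A, x)
assert all(abs(Ax[i] - lam * x[i]) < 1e-12 for i in range(2))
```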

EL: Yeah. So the Perron-Frobenius theorem, then, says that if your matrix only has positive entries, then there's some eigenvector that's stretched by a positive amount.

CC: So yeah, so it says there's some eigenvector where the entries of the vector itself are all positive, right, so it lies in the positive orthant of your space, and also that the corresponding eigenvalue is actually the largest in terms of absolute value. And the reason this is relevant is because there are many kinds of dynamic processes that you can model by iterating a matrix multiplication. So, you know, one simple example is things like Markov chains. So if you have, say, different populations of something, whether it be, say, animals in an ecosystem, then you can have these transition matrices that will update the population. And whatever the leading eigenvalue of the matrix that's updating your population is, it's going to control somehow the long-term behavior of the population. So that top eigenvalue, the one with the largest absolute value, is really controlling the long-term behavior of your dynamic process.

EL: Right, it kind of dominates.

CC: It is dominating, right. And you can even see that just by hand when you sort of multiply, if you take a matrix times a vector, and then do it again, and then do it again. So instead of having A times x, you have A squared times x or A cubed times x. So it's like doing multiple iterations of this dynamic process. And you can see what's going to happen to the vector if it's the eigenvector. Well, if it's an eigenvector, what's going to happen is when you apply the matrix once, A times x, you're going to get λ times x. Now apply A again. So now you're applying A to the quantity λx, but the λ comes out, by the linearity of the matrix multiplication, and then you have Ax again, so you get another factor of λ, so you get λ^2 times x. And so if you keep doing this, you see that if I do A^k times x, I get λ^k times x. And so if that λ is bigger than 1, right, my process is going to blow up on me. And if it's less than 1, it's going to converge to zero as I keep taking powers. And so anyway, the point is that that top eigenvector is really going to dominate the dynamics and the behavior. And so it's really important whether it's positive, and also whether it's bigger or less than 1, and the Perron-Frobenius theorem basically gives you control over what that top eigenvalue looks like and moreover associates it to an all-positive eigenvector, which is then a reflection of maybe the distribution of a population. So it's important that that be positive too, because lots of things we want to model are positive, like populations of things.
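The iteration described here, applying A over and over and watching the growth factor, can be sketched directly (the matrix and starting vector are made up for illustration):

```python
# Repeatedly apply an all-positive matrix to an arbitrary starting vector.
# The per-step growth factor approaches the top eigenvalue (here 3), and
# the normalized iterate approaches the all-positive Perron eigenvector.
A = [[2.0, 1.0],
     [1.0, 2.0]]   # all entries positive; eigenvalues are 3 and 1

def matvec(M, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

x = [1.0, 0.0]     # arbitrary nonzero, nonnegative start
for _ in range(60):
    y = matvec(A, x)
    growth = max(abs(t) for t in y) / max(abs(t) for t in x)
    x = [t / max(abs(s) for s in y) for t in y]   # rescale to avoid overflow

assert abs(growth - 3.0) < 1e-9   # growth factor -> dominant eigenvalue
assert all(t > 0 for t in x)      # direction -> all-positive eigenvector
```

The rescaling each step is exactly the "blow up or die out" point: without it, the iterates would grow like 3^k.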

KK: Negative populations aren't good, yeah.

CC: Yes, exactly. And so this is one of the reasons it's so useful: a lot of the things we want to model are—that vector that we apply the matrix to is reflecting something like populations, right?

KK: So already this is a very non-obvious statement, right? Because if I hand you an arbitrary matrix, I mean, even like a 2×2 rotation matrix, it doesn't have any eigenvalues, any real eigenvalues. But the entries aren't all positive, so you’re okay.

CC: Right. Exactly.

KK: But yeah, so a priori, it's not obvious that if I just hand you an n×n matrix with all real entries that it even has a real eigenvalue, period.

CC: Yeah. It's not obvious at all, and let alone that it's positive, and let alone that it has an eigenvector that's all positive. That's right. And the positivity of that eigenvector is really important, too.

EL: Yeah. So it seems like if you're doing some population model, just make sure your matrix has all positive entries. It’ll make your life a lot easier.

CC: So there's an interesting one. Do you know what the most famous application of the Perron-Frobenius theorem is?

EL: I don't think I do.

KK: I might, but go ahead.

CC: You might, but I’ll go ahead?

KK: Can I guess?

CC: Sure.

KK: Is it Google?

CC: Yes. Good. Did you Google it ahead of time?

KK: No, this is sort of in the dark recesses of my memory that essentially they computed this eigenvector of the web graph.

CC: Right. Exactly. So back in the day, in the late ‘90s, when Larry Page and Sergey Brin came up with their original strategy for ranking web pages, they used this theorem. The original PageRank algorithm is based on this theorem, because they again have the Markov process where they imagine some web—some animal or some person—crawling across the web. And so you have this graph of websites and edges between them. And you can model the random walk across the web as one of these Markov processes, where there's some matrix that reflects the connections between web pages that you apply over and over again to update the position of the web crawler. And so now if you imagine a distribution of web crawlers, and you want to find out in the long run what pages they end up on, or what fraction of web crawlers end up on which pages, it turns out that the Perron-Frobenius theorem gives you precisely the existence of this all-positive eigenvector, which is a positive probability, for every website, of ending up there. And so if you look at the eigenvector itself that you get from your web matrix, that will give you a ranking of web pages. So the biggest value will correspond to the most, you know, trafficked website. And smaller values will correspond to less popular websites, as predicted by this random walk model.
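As a toy version of that random-surfer picture (a hypothetical three-page web, not Google's actual algorithm or data), with the classic damping trick making every matrix entry positive so Perron-Frobenius applies directly:

```python
# Hypothetical 3-page web: page 0 is linked to by pages 1 and 2, and so on.
links = {0: [1], 1: [0, 2], 2: [0]}   # page -> pages it links to
n, d = 3, 0.85                        # damping factor, as in classic PageRank

# Column-stochastic transition matrix with damping: every entry is positive,
# so the theorem guarantees a unique all-positive stationary distribution.
M = [[(1 - d) / n + (d / len(links[j]) if i in links[j] else 0.0)
      for j in range(n)] for i in range(n)]

rank = [1.0 / n] * n                  # start the surfers spread uniformly
for _ in range(100):                  # iterate the Markov process
    rank = [sum(M[i][j] * rank[j] for j in range(n)) for i in range(n)]

assert abs(sum(rank) - 1.0) < 1e-9    # still a probability distribution
assert all(r > 0 for r in rank)       # the Perron eigenvector is positive
assert rank[0] == max(rank)           # the most linked-to page ranks first
```

Page 0, which both other pages link to, ends up with the largest component of the stationary eigenvector, i.e., the top rank.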

EL: Huh.

CC: And so it really is the basis of the original PageRank. I mean, they do fancier things now, and I'm sure they don't reveal it. But the original PageRank algorithm was really based on this. And this is the key theorem. So I think it's kind of a fun thing. When I teach linear algebra, I always tell students about this.

KK: Linear Algebra can make you billions of dollars.

CC: Yes.

KK: That’ll catch students’ attention.

CC: Yes, it gets students’ attention.

EL: Yes. So where did you first encounter the Perron-Frobenius theorem?

CC: Probably in an undergrad linear algebra class, to be honest. But I also encountered it many more times. So I remember seeing it in more advanced math classes as a linear algebra fact that becomes useful a lot. And now that I'm a math biologist, I see it all the time because it's used in so many biological applications. And so I told you about a population biology application before, but it also comes up a lot in neural network theory that I do. So in my own research, I study these competitive neural networks. And here I have matrices of interactions that are actually all negative. But I can still apply the theorem. I can just flip the sign.

EL: Oh, right.

CC: And apply the theorem, and I still get this, you know, dominant eigenvalue and eigenvector. But in that case, the eigenvalue is actually negative, and I still have this all-positive eigenvector that I can choose. And that's actually important for proving certain results about the behavior of the neural networks that I study. So it's a theorem I actually use in my research.

EL: Yeah. So would you say that your appreciation of it has grown since you first saw it?

CC: Oh for sure. Because now I see it everywhere.

EL: Right.

CC: It was one of those fun facts, and now it’s in, you know, so many math things that I encounter. It's like, oh, they're using the Perron-Frobenius theorem. And it makes me happy.

EL: Yeah, well, when I first read the statement of the theorem, it's not like it bowled me over, like, “Oh, this is clearly going to be so useful everywhere.” So probably, as you see how many places it shows up, your appreciation grows.

CC: Yeah, I mean, that's one of the things that I think is really interesting about the theorem, because, I mean, many things in math are like this. But you know, surely when Perron and Frobenius proved it over 100 years ago, they never imagined what kinds of applications it would have. You know, they didn't imagine Google ranking web pages, or the neural network theory, or anything like this. And so it's one of these things where it's so basic. Maybe it could look initially like a boring fact of linear algebra, right? If you're just a student in a class and you're like, “Okay, there's going to be some eigenvector, eigenvalue, and it's positive, whatever.” And you can imagine just sort of brushing it off as another boring fact about matrices that you have to memorize for the test, right? And yet, it's surprisingly useful. I mean, it has applications in so many fields of applied math and in pure math, and so it's just one of those things that gives you respect for even seemingly simple facts. It doesn't bowl you over, right? You can see the statement and you're not like, “Wow, that's so powerful!” But it ends up being the key thing you need in so many applications. And so, you know, it's earned its place over time. It's aged nicely.

EL: And do you have a favorite proof of this theorem?

CC: I mean, I like the elementary proofs. I mean, there are lots of proofs. So I think there's an interesting proof by Birkhoff. There are some proofs that involve the Brouwer fixed point theorem, which is something maybe somebody has chosen already.

EL: Yes, actually. Two people have chosen it!

CC: Two people have chosen the Brouwer fixed point theorem. Yeah, I would imagine that's a popular choice. So, yeah, there are some proofs that rely on that, which I think is kind of cool. So those are more modern proofs of it. That's the other thing I like about it, is that it has kind of old-school elementary proofs that an undergrad in a linear algebra class could understand. And then it also has these more modern proofs. And so it's kind of an interesting theorem in terms of the variety of proofs that it admits.

KK: So one of the things we like to do on this podcast is we like to invite our guests to pair their theorem with something. So I'm curious, I have to know what pairs well with the Perron-Frobenius theorem?

CC: I was so stressed out about this pairing thing!

KK: This is not unusual. Everybody says this. Yeah.

CC: What is this?

KK: It’s the fun part of the show!

CC: I know, I know. And so I don't know if this is a good pairing, but I came up with this. So I went to play tennis yesterday. And I was playing doubles with some friends of mine. And I told them, I was like, I have to come up with a pairing for my favorite theorem. So we chatted about it for a while. And as I was playing, I decided that I will pair it with my favorite tennis shot.

EL: Okay.

CC: So, my favorite shot in tennis is a backhand down the line.

KK: Yes.

CC: Yeah?

KK: I never could master that!

CC: Yeah. The backhand down the line is one of the basic ground strokes. But it's maybe the hardest one for amateur players to master. I mean, the pros all do it well. But, you know, for amateurs, it's kind of hard. So usually people hit their backhand cross-court. But if you can hit that backhand down the line, especially when someone's at the net, like in doubles, and you pass them, it's just very satisfying; you kind of, like, win the point. And for my tennis game, when my backhand down the line is on, that's when I'm playing really well.

EL: Nice.

CC: And I like the linearity of it.

EL: Right, it does seem like, you know, you're pushing it down.

CC: Like I'm pushing that eigenvector.

KK: It’s very positive, everything's positive about it.

CC: Everything’s positive. The vector with the tennis ball, just exploding down the line. Maybe it's a stretch, but that's kind of what I decided.

EL: A…stretch? Like with an eigenvalue and eigenvector?

CC: Right, exactly. I needed to find a pairing that was a stretch.

EL: I think this is a really great pairing. And you know, something I love about the pairing thing that we do—other than the fact that I came up with it, so of course, I'm absurdly proud of it—is that I think, for me at least it's built all these bizarre connections with math and other things. It's like, now when I see the mean value theorem, I'm like, “Oh, I could eat a mango.” Or like, all these weird things. So now when I see people playing tennis, I'll be like, “Oh, the Perron-Frobenius theorem.”

CC: Of course.

EL: So are you a pretty serious tennis player?

CC: I mean, not anymore. I played in college for a little bit. So when I was a junior, I was pretty serious.

EL: Nice. Yeah, I’m not really a tennis person. I've never played or really followed it. But I guess there's, like, some tennis going on right now that's important?

CC: The French Open?

EL: That’s the one!

KK: Nadal really stuck it to Federer this morning. I played obsessively in high school, and I was never really any good, and then I kind of gave it up for a long time, and I picked up again in my 30s and did league tennis when I lived in Mississippi. And my team at our level—we were just sort of very intermediate players, you know—we won the state championship two years in a row.

CC: Wow.

KK: And then I gave it up again when I moved to Florida. My shoulder can't take it anymore. I was one of these guys with a big booming serve and a pretty good forehand and then nothing else, right?

CC: Yeah.

KK: So you know, if you work my backhand enough you're going to destroy me.

EL: Nice. Oh, yeah, that's a lot of fun. And I hope our tennis-appreciating listeners will now have an extra reason to enjoy this theorem too. So yeah, we also like to give our guests a chance, if they have a website or book or anything they want to mention—you know, if people want to find them online and chat about tennis or linear algebra—is there anything you want to mention?

CC: I mean, I don't have a book or anything that I can plug, but I guess I wanted to just plug linear algebra as a subject.

KK: Sure.

CC: I feel like linear algebra is one of the grand achievements of humanity in some ways. And it should really shine in the public consciousness at the same level as calculus, I think.

EL: Yeah.

KK: Maybe even more.

CC: Yeah, maybe even more. And now, everybody knows about calculus. Every little kid knows about calculus. Everyone is like, “Oh, when are you going to get to calculus?” You know, calculus, calculus. And linear algebra—it also has kind of a weird name, right, so it sounds very elementary somehow, linear and algebra—but it's such a powerful subject. And it's very basic, like calculus, and it's used widely, and so I just want to plug linear algebra.

EL: Right. I sometimes feel like there are basically—so math can boil down to, like, doing integration by parts really well or doing linear algebra really well. Like, I joked with somebody that I didn't end up doing a PhD in a field that used a lot of linear algebra; I sort of got my PhD in applied integration by parts. It's just like, “Oh, yeah. Figure out an estimate based on doing this.” And I think linear algebra, especially now with how important social media and the internet are, really is an important field that, I agree, more people should know about. It's one of the classes that, when I took it in college—at that time, I was trying to get enough credits to finish my math minor—I was like, “Oh, yeah, actually, this is pretty cool. Maybe I should learn a little more of this math stuff.” So, yeah, great class.

CC: And you know, it's everywhere. And there are all these people—almost more people have heard of algebraic topology than linear algebra, outside, you know, because it's this fancy topology or whatever. But when it comes down to it, it's all linear algebra tricks, with some vision of how to package them together, of course; I’m not trying to diminish the field. But somehow linear algebra doesn't get its—it’s the workhorse behind so much cool math and, yeah, doesn't get its due.

EL: Yes, definitely agree.

KK: Yeah. All right. Well, here's to linear algebra.

EL: Thanks a lot for joining us.

CC: Thank you.

KK: It was fun.

[outro]

Our guest on this episode, Carina Curto, is a mathematician at Penn State University who specializes in applications in biology and neuroscience. She talked about the Perron-Frobenius theorem. Here are some links you may find useful as you listen to this episode.

Curto’s website
A short video of Curto talking about how her background in math and physics is useful in neuroscience and a longer interview in Quanta Magazine
An article version of Curto’s talk a few years ago about topology and the neural code
Curto ended the episode with a plug for linear algebra as a whole. If you’re looking for an engaging video introduction to the subject, check out this playlist from 3blue1brown.

Episode 50 - aBa

Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer, usually in Salt Lake City, Utah, currently in Providence, Rhode Island. And this is your other host.

Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics, almost always at the University of Florida these days. How's it going?

EL: All right. We had hours of torrential rain last night, which is something that just doesn't happen a whole lot in Utah but happens a little more often in Providence. So I got to go to sleep listening to that, which always feels so cozy, to be inside when it's pouring outside.

KK: Yeah, well, it's actually finally pleasant in Florida. Really very nice today and the sun's out, although it's gotten chilly—people can't see me doing the air quotes—it’s gotten “chilly.” So the bugs are trying to come into the house. So the other night we were sitting there watching something on Netflix and my wife feels this little tickle on her leg and it was one of those big flying, you know, Florida roaches that we have here.

EL: Ooh

KK: And our dog just stood there wagging at her like, “This is fun.” You know?

EL: A new friend!

KK: “Why did you scream?”

EL: Yeah, well, we’re happy today to invite aBa to the show. aBa, would you like to introduce yourself?

aBa Mbirika: Oh, hello. I’m aBa. I'm here in Wisconsin at the University of Wisconsin-Eau Claire. And I have been here teaching now for six years. Shall I tell them where I'm from?

EL: Yeah.

KK: Sure.

aM: Okay. I am from, I was born and raised in New York City. I prefer never to go back there. And then I moved to San Francisco, lived there for a while. Prefer never to go back there. And then I went up to Sonoma County to do some college and then moved to Iowa, and Iowa is really what I call home. I'm not a city guy anymore. Like Iowa is definitely my home.

EL: Okay.

KK: So Southwestern Wisconsin is also okay?

aM: Yeah, it's very relaxing. I feel like I'm in a very small town. I just ride my bicycle. I still don't know how to drive, like all my friends from New York and San Francisco. But I don't need a car here. There's nowhere to go.

EL: Yeah.

aM: But can I address why you just called me aBa, as I asked you to?

EL: Yeah.

aM: Yeah, because maybe I'll just put this on the record. I mean, I don't use my last name. I think the last time I actually said some version of my last name was grad school, maybe? The year 2008 or something, like 10 years ago, was the last time anyone's ever heard it said. And part of the issue is that it's pronounced differently depending on who's saying it in my family. And actually it's spelled differently depending on who’s in the family. Sometimes they have different letters. Sometimes there's no R. Sometimes it’s—so in any case, if I start to say one pronunciation, I know Americans are going to go to town and say this is the pronunciation. And that's not the case. I can't ask my dad. He's passed now, but he didn’t have a favorite. He said it five different ways my whole life, depending on context. So he doesn't have a preference, and I'm not going to impose one. So I'm just aBa, and I'm okay with that.

EL: Yeah, well, and as far as I know, you're currently the only mathematician named aBa. Or at least spelled the way yours is spelled.

aM: Oh yeah, on the arXiv. Yeah, and on MathSciNet, I'm the only one there. Recently someone invited me to a wedding and they were like, what's your address? And I said, “aBa and my address is definitely enough.”

EL: Yeah, so what theorem would you like to tell us about?

aM: Oh, okay, well, I was listening actually to a couple of your shows recently, and Holly Krieger didn’t have a favorite theorem. I'm exactly the same way. I don't even have a theorem of, like, the week. She was lucky to have that. I have a theorem of the moment. I would like to talk about something I discovered when I was in college; that’s kind of the reason. But can I briefly say some of my, like, top hits just because?

EL: Oh yeah.

KK: We love top 10 lists. Yeah, please.

aM: Okay. So I'm in combinatorics, loosely defined, but I have no reason—I don't know why people throw me in that bubble. But that's the bubble that I've been thrown in. But my thesis—actually, I don’t ever remember the title, so I have to read it off a piece of paper—Analysis of symmetric function ideals towards a combinatorial description of the cohomology ring of Hessenberg varieties.

KK: Okay.

aM: Okay, all those words are necessary there. But my advisor said, “You're in combinatorics.” Essentially, my problem was, we were studying an object in algebraic geometry, this thing called a Hessenberg variety. To study this thing we used topology. We looked at the cohomology ring of it, but that was very difficult. So we looked at this graded ring through the lens of commutative algebra. And I studied the algebra of this ring by looking at symmetric functions, ideals of symmetric functions, and hence that's where my advisor said, “You're in combinatorics.” So that was the main tool used to study a problem in algebraic geometry that we looked at through topology. Whatever, so I don't know what I am. But in any case, for top hits—not top 10, but—diagram chasing. Love it. Love it.

EL: Wow, I really don't share that love, but I’m glad somebody does love it.

aM: Oh, it's just so fun for students.

KK: So the snake lemma, right?

aM: The snake lemma, yes. It's maybe a little bit above the level of our algebra two class that I teach here for undergrads, but of course I snuck it in anyways. And the short five lemma. Those would be my favorites if the moment was, like, months ago. In number theory I have too many faves, but I’m going to limit it to the Euler-Fermat theorem: if a and n are coprime, then a to the power of the Euler totient function of n is congruent to 1 mod n. But that leads to Gauss’s epically cool, awesome theorem on the existence of primitive roots. Now, this is my current craze.
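The Euler-Fermat statement is easy to spot-check by brute force; a short sketch (the ranges are arbitrary):

```python
from math import gcd

def phi(n):
    """Euler's totient: how many of 1..n are coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# If gcd(a, n) == 1, then a**phi(n) is congruent to 1 mod n.
for n in range(2, 60):
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, phi(n), n) == 1

# A concrete instance: phi(9) == 6, and 2**6 == 64 == 7*9 + 1.
assert phi(9) == 6 and pow(2, 6, 9) == 1
```

The three-argument form of `pow` does the modular exponentiation efficiently.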

EL: Okay.

aM: And this is just looking at the group of units in Z mod nZ, or more simply the multiplicative group of units of the integers modulo n. When is this group cyclic? And Gauss said it's only cyclic when n is 2, or 4, or an odd prime to a k-th power, or twice an odd prime to some k-th power. And basically, those are very few. I mean, those are very few numbers in the broad spectrum of the infinity of the natural numbers. So this is very cool. In fact, I'm doing a non-class right now with a professor who retired maybe 10 years ago from our university, and I emailed him and said, “Want to have fun on my research day off?” And we’re studying primitive roots because I don't know anything about it. Like, my favorite things are things I know nothing about and I want to learn a lot about.
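Gauss's classification can also be verified by brute force for small n (my own sketch; the bound 200 is arbitrary):

```python
from math import gcd

def unit_group_is_cyclic(n):
    """Is the multiplicative group of units mod n cyclic?"""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    for g in units:                 # cyclic iff some g generates the group
        x, order = g, 1
        while x != 1:
            x, order = (x * g) % n, order + 1
        if order == len(units):
            return True
    return False

def gauss_form(n):
    """Is n equal to 2, 4, p**k, or 2 * p**k for an odd prime p?"""
    if n in (2, 4):
        return True
    m = n // 2 if n % 2 == 0 else n
    if m % 2 == 0:                  # leftover factor of 2: not of the form
        return False
    p = next((q for q in range(3, m + 1, 2) if m % q == 0), None)
    if p is None:                   # defensive: m == 1 only when n <= 2
        return False
    while m % p == 0:
        m //= p
    return m == 1                   # m was a power of the odd prime p

for n in range(2, 200):
    assert unit_group_is_cyclic(n) == gauss_form(n)
```

For instance, mod 7 the units are cyclic (3 is a primitive root), while mod 8 the units {1, 3, 5, 7} all square to 1, so the group is not cyclic, matching the classification.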

EL: Yeah, I don't think I've heard that theorem before. So yeah, I'll have to look that up later.

aM: Yes. And then the last one is from analysis, and I did hear Adriana Salerno talk about it, and in fact, I think also someone before her on your podcast: Cantor’s theorem on the uncountability of the real numbers.

EL: Yeah, that's a real classic.

aM: I just taught that two days ago in analysis, and it's like waiting for their heads to explode. And, I don't know, my students’ heads weren't all exploding. But I was like, “This is so exciting! Why are you not feeling the excitement?” So yeah, it was only my second time teaching analysis. So maybe I have to work on my sell.

EL: Yeah, you'll get them next time.

aM: Yeah. It's so cool! I even mentioned it to my class that’s for non-math majors, just looking at sets, basic set theory. And this is my non-math class. These students hate math. They're scared of math. And I say, “You know, the infinity you know is kind of small. I mean, you're not going to be tested on this ever. But can I please take five minutes to, like, share something wonderful?” So I gave them the baby version of Cantor’s theorem. Yeah, but that's it. I just wanted to throw those out there before I was forced to give you my favorite theorem.

EL: Yes. So now…

KK: We are going to force you, aBa. What is your favorite theorem?

EL: We had the prelude, so now this is the main event.

aM: Okay, main event time. Okay, you were all young once, and you remember—oh, we’re all young, all the time, sorry—but divisibility by 9. I guess when we're in high school—maybe even before that—we know that the number 108 is divisible by 9 because 1+0+8 is equal to 9. And that's divisible by 9. And 81 is divisible by 9 because 8+1 is 9, and 9 is divisible by 9. But not just that, the number 1818 is divisible by 9 because 1+8+1+8 is 18. And that's divisible by 9. So when we add up the digits of a number, and if that sum is divisible by 9, then the number itself is divisible by 9. And students know this. I mean, everyone kind of knows that this is true. I guess I was a sophomore in college. That was maybe a good 4 to 6 years after I started college because, well, that was hard. It's a different podcast altogether, but I made some choices to meet friends who made it really hard for me to go to school consistently in San Francisco—part of the reason why I'm kind of okay not going back there much anymore. Friends got into trouble too much.
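The divisibility-by-9 rule described here can be checked exhaustively for small numbers; a quick sketch:

```python
def digit_sum(n):
    """Sum of the base-10 digits of n."""
    return sum(int(d) for d in str(n))

# The examples from the episode:
assert digit_sum(108) == 9 and 108 % 9 == 0
assert digit_sum(1818) == 18 and 1818 % 9 == 0

# The general rule: n is divisible by 9 exactly when its digit sum is.
for n in range(1, 100000):
    assert (n % 9 == 0) == (digit_sum(n) % 9 == 0)
```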

But I took a number theory course and learned a proof for that. And the proof just blew my mind because it was very simple. And I wasn't a full-blown math major yet. I think I was in physics— I had eight majors, different majors through the time—I wasn't a math person yet. And I was on a bus going from—Oh, this is in Sonoma County. I went to Sonoma State University as my fourth or fifth college that I was trying to have a stable environment in. And this one worked. I graduated from there in 2004. It definitely worked. So I was on a bus to visit some of my bad friends in San Francisco—who I love, by the way, I'm just saying of the bad habits—and I was thinking about this theorem of divisibility by 9 and saying, what about divisibility by 7? No one talks about that. Like, we had learned divisibility by 11. Like the alternating sum of the digits, if that's divisible by 11, then the number is divisible by 11. But what about 7? You know, is that doable? Or why is it not talked about?

EL: Yeah.

aM: So it was an hour and a half bus ride. And I figured it out. And it was exactly, like, the same proof as the divisibility by 9, but boiled down to one tiny little change. But it's not so much that I love this theorem. I actually haven't even told it to you yet. But that I did the proof, that it changed my life. I really—that’s the only thing I can go back to and say why I am an associate professor at a university in Wisconsin right now. It was the life-changing event. So let me tell you the theorem.

EL: Yeah.

aM: It’s hardly a theorem, and this is why I don't know if it even belongs on this show.

EL: Oh, it totally does!

aM: Okay, so I don't even think I had calc 2 yet when I discovered this little theorem. All right, so here we go. So look at the decimal representation of some natural number. Call it n.

EL: I’ve got my pencil out. I'm writing this down.

aM: Oh, okay. Oh, great. Okay, I'm reading off a piece of paper that I wrote down.

EL: Yeah, you said something about it to us earlier. And I was like, “I'm going to need to have this written down.” It’s funny that I do a podcast because I really like looking at things that are written down. That helps me a lot. But let's podcast this thing.

aM: Okay, so say we have a number with k+1 digits. And I'm saying k+1 because I want to enumerate the digits as follows: the units digit I'm going to call a_0, the tens digit I'll call a_1, the hundreds place digit a_2, etc., etc., down to the (k+1)st digit, which we'll call a_k. So read right to left, like in Hebrew: a_0, a_1, a_2, … (or \cdots, you LaTeX people), a_{k-1}, then the last, far-left digit a_k.

EL: Yeah.

aM: So that is a decimal representation of a number. I mean, we're just, you know, like the number 1008. That would be: a_0 is the number 8, a_1 is the number 0, a_2 is the number 0, a_3 is the number 1. So we just read right to left. So we can represent this number, and everybody knows this when you're in junior math, I guess in elementary school, that we can write the number—now I'm using a pen—123 as 3 times 1 plus—how many tens do we have? Well, we have two tens. So 2 times 10. How many hundreds do we have? Well, we have one of those. So 1 times 100. So we're just talking about the mathematics of the place value system in base 10. No surprise here. But a nicer way is to write it as a fat sum, where i, the index, goes from 0 to k, of a_i times 10^i.

EL: Yeah.

aM: That’s how we in our little family of math nerds compactly write that. So when we think about when this number is divisible by 7, it suffices to think about what the remainder is when we divide each of these summands by 7, and then add up all those remainders and take that modulo 7. So the key and crux of this argument is: what is 10 congruent to mod 7? Well, 10 leaves a remainder of 3 when you divide by 7. In the great language of congruences—thank you, Gauss—10 ≡ 3 mod 7. So now we can look at all of these tens we have. We have a_0 × 10^0 + a_1 × 10^1 + a_2 × 10^2, etc., etc. When we divide this by 7, this number really is now a_0 × 3^0—because I can replace my 10^0 with 3^0—plus a_1 × 3^1—because 10^1 is the same as 3^1 in modulo 7 land—plus a_2 × 3^2, etc., etc., … to the last one, a_k × 3^k. Okay, here I am on the bus thinking, “This is only cool if I know all my powers of 3.”

EL: Yeah. Which are not really that much easier than figuring it out in the first place.

aM: Okay, but I'm young mathematically and I'm just really super excited. So one little example. I can't remember what I did on the bus, but 1008 is a number that's divisible by 7. And let's just perform this check on this number. So is 1008 really divisible by 7? What we can do, according to this, is take the far right digit, the units digit, and that's 8 × 3^0, so that's just the number 8, 8 × 1. Plus 0 × 3^1 for the tens place. Well, that's just 0, thankfully. Then the hundreds place, that's 0 × 3^2. So that's just another 0. And then lastly, the thousands place, 1 × 3^3, and that's 27. Add up now my numbers: 8 + 0 + 0 + 27, and that's 35. And divisibility is easy to check there: 7 divides 35, and thus 7 divides 1008. And, yeah, I don't know, I'm traveling back in time, and this is not a marvelous thing. But everybody, unfortunately, who I saw in San Francisco that day, and the next day, learned this. I just had to teach all my friends because I was like, “Well, this is not what I'm doing for college. This is something I figured out on the bus. This math stuff is great.”
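For readers following along at home, aBa's digit test is easy to sketch in code. This is an illustration, not his code: it reduces each power of 3 mod 7 as it goes (so 27 becomes 6), which he didn't bother doing on the bus.

```python
def divisible_by_7(n):
    """Digit test for divisibility by 7: since 10 ≡ 3 (mod 7),
    replace each power of 10 in the decimal expansion by the
    matching power of 3 and check the total mod 7."""
    total = 0
    power = 1                        # 3^0 mod 7
    while n > 0:
        total += (n % 10) * power    # units digit first: a_0, then a_1, ...
        power = (power * 3) % 7      # next power of 3, reduced mod 7
        n //= 10
    return total % 7 == 0

divisible_by_7(1008)   # True: 8·1 + 0·3 + 0·2 + 1·6 = 14, and 7 divides 14
```

The running sum here is 14 rather than the 35 in the episode only because the powers of 3 are reduced mod 7 along the way; the verdict is the same.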

EL: Yeah, just the fact that you got to own that.

aM: Yeah. And that also it wasn't in the book, and actually it wasn't in subsequently any book I've ever looked in ever since. But it's still just cute. I mean, it's available. And what it did, I guess it just touched me in a way, where I guess I didn't know about research, I didn't know about a PhD program. My end goal was to get a job, continue at the photocopy place that was near the college, where I worked. I really told my boss that, and I really believed that I was going to do that. And our school never really sent people to graduate programs. I was one of the first. And I don't know, it just changed me. And there were a lot of troubles in my life before then. And this is something that I owned. And that's my favorite theorem on that bus that day.

KK: It’s kind of an origin story, right?

aM: Yes, because people ask me, how did you get interested in math? And I always say the classic thing and forget this story, but then I'm also not speaking to math people. My usual answer is the rave scene. I mean, that was what I was involved in in San Francisco. I don't know if you know what that is, but electronic dance music parties that happen on beaches and in fields and farms and houses.

EL: What, you don’t think we go to a lot of raves?

aM: I don’t know if raves still happen!

EL: You have accurately stereotyped me.

aM: Okay. Now, I have to admit my parents were worried about that. And they said, “Ecstasy! Clubs!” and I was like, “No, Mom. That's a different rave. My people are not indoors. We're outdoors, and we're not paying for stuff, and there's no bar, and there's no drinking. We're just dancing, and it's daytime.” It was a different thing. But that's really why I got involved in this math thing. In some sense, I wanted to know how all of that music worked, and that music was very mathematical.

EL: Oh.

aM: But then I kind of lost interest in studying the math of that because I just got involved in combinatorics and all the beautiful, theoretical math that fills my spirit and soul. But the origin story is a little bit rave, but mostly that bus.

EL: Yeah. A lot of good things happen on buses.

aM: You guys know about the art gallery theorem? Guarding a museum.

EL: Yeah. Yeah.

aM: What’s the minimum number of guards? Okay, I took the seat of someone—my postdoc was at Bowdoin College, and sadly the person who passed away shortly before I got the job was a combinatorialist named Steve Fisk (I hope I've got the name right). In any case, he's in Proofs from THE BOOK for coming up with a proof of that art gallery theorem. You know, the famous Proofs from THE BOOK, the idea that all the beautiful proofs are in some book? But yeah, guess where he came up with it, as he told the chair of the math department when I started there: on a bus! He was somewhere in Eastern Europe on a bus, and that's where he came up with it. And it's just like, yeah, things can happen on a bus, you know?

EL: Yeah. Now I want our listeners to, like, write in with the best math they've ever done on a bus or something. A list of bus math.

aM: You also have to include trains, I think, too.

EL: Yeah. Really long buses.

aM: All public transportation.

EL: Yeah. So something that we like to do on this podcast is ask our guests to pair their theorem with something. So what have you chosen to pair with your favorite theorem?

aM: Oh my gosh, I was supposed to think about that. Yes. Okay. Oh, 7.

EL: I feel like you have so many interests in life. You must have something you can think of.

aM: Oh, no, it's not a problem. I currently do a lot of mathematics. I'm in my office, sadly, a lot of hours of the day, but sometimes I leave my office and go to the pub down the road. And I call it a pub because it's really empty and brightly lit and not populated by students. It's kind of like a grown-up bar. But I do a lot of recreational math there, especially on primitive roots recently. So I think I would pair my 7 theorem with seven sips of Michelob Golden Draft Light. It's just a boring domestic beer. And then I would go across the street to the pizza place that's across from my tavern, and I would eat seven bites of a pizza with pepperoni, sausage, green pepper, and onion.

EL: Nice.

aM: I have a small appetite. So people would say, yes, he can probably do seven bites before he's full and needs to take a break.

EL: Or you could share it with seven friends.

aM: Yes. Oh, I'm often taking students down there and buying pizza for small sections of research students or groups of seven. Yes.

EL: Nice. So I know you wanted to share some other things with us on this podcast. So do you want to talk about those? Or that? I don't know exactly what form you would like to do this in.

aM: Oh, I wrote a poem. Yeah, I just want to share a poem that I wrote that maybe your listeners might find cute.

EL: Yeah. And I'd like to say I think the first time—I don't think we actually met in person that time, but the first time I saw you—was at the poetry reading at a Joint Math Meeting many years ago.

aM: Oh my gosh! I did this poem, probably.

EL: You might have. I remember you. Many people might have seen you because you do stand out in a crowd. You know, you dress in a lot of bright colors, and you have very distinctive glasses and hair and everything. So you were very memorable at the time. Yes, right now it's pink, red, and, yeah, maybe just different shades of pink.

aM: Yes.

EL: But yeah, I remember seeing you do a poem at this joint math poetry thing, and then kept seeing you at various things, and then we met, you know, a few years ago when I was at Eau Claire, I guess, we actually met in person then. But yeah, go ahead, please share your poem with us.

aM: Okay, this is part of the origin story again. This was just shortly after this seven thing from the bus. I was in a proofs class, and they were teaching bijective functions. And I really didn't get the book (it was written by one of my teachers), and I was like, you know, I wrote a poem about it, and I think I understand my poem a little bit more than what you wrote in your book. And they actually sing this song now: the teachers at Sonoma State recite it each year to students who are taking this same course. But here it is. I think it's sometimes called a rap because I kind of dance around the room when I sing it. So it's called the Bijection Function Poem. And here you go. Are you ready?

EL: Yes.

KK: Let’s hear it.

aM: All right.

And it clearly follows that the function is bijective
Let’s take a closer look and make this more objective
It bears a certain quality – that which we call injective
A lovin’ love affair, Indeed, a one-to-one perspective.
Injection is the stuff that bonds one range to one domain
For Mr. X in the domain, only Miss Y can take his name
But if some other domain fool should try to get Miss Y’s affection,
The Horizontal Line Police are here to check for 1 to 1 Injection.

(Okay, that’s a little racy.)

Observe though, that injection does not alone grant one bijection
A function of this kind must bear Injection AND Surjection
Surjection!? What is that? Another math word gone surreal
It’s just a simple concept we call “Onto”. Here’s the deal:
If for EVERY lady ‘y’ who walks the codomain of f
There exists at least one ‘x’ in the Domain who fancies her as his sweet best.
So hear the song that Onto sings – a simple mathful melody:
“There ain’t a Y in Codomain not imaged by some X, you see!”
So there you have it 2 conditions that define a quality.
If it’s injective and surjective, then it’s bijective, by golly!

(So this is the last verse. And there's some homework problems in my last verse, actually.)

Now if you’re paying close attention to my math-poetic verse
I reckon that you’ve noticed implications of Inverse
Inverse functions blow the same tune – They biject oh so happily
By sheer existence, inverse functions mimic Onto qualities (homework problem 1)
And per uniqueness of solution, another inverse golden rule (homework problem 2)
By gosh, that’s one-to-one & Onto straight up out the Biject School!
Word!

aM: Yeah, I never tire of that one. I love teaching a proofs class.

EL: Yeah. And you said you use it in your class every time you teach it?

aM: Every time I have to say bijection. I mean, the song works, though. My only drawback in recent times is my wording long ago for “Mr. X in the domain” and “Miss Y can take his name” and the whole binary that this thing is doing. So I do have versions: I have a homosexual version, I have this version—this is the hetero version—and then I have the yet-to-be-written binary-free version, which I don't know how to make yet, because I was thinking “For Person X in the domain, only Person Y can take his name,” but you know, “person” doesn't work. It's too long syllabically, so I'm working on that one.

EL: Yeah.

aM: I’m working on that one.

EL: Well, yeah, modernize it for for the times we live in now.

aM: Yes. I kind of dread reading and reciting this purely hetero version, you know? And also there's not necessarily only one Miss Y that can take Mr. X’s name. I mean, you know, there are whole different relation groups these days.

EL: Yeah.

aM: But I'm talking about the injection and surjection.

EL: Yeah, the polyamorous functions are a whole different thing.

KK: Those are just relations, they’re not functions. It’s a whole thing.

aM: Oh, yes, relations aren't necessarily functions, but certain ones can be called that, right?

EL: Yeah. Well, thank you so much for joining us. Is there anything else you would like to share? I mean, we often give our guests ways to find—give our listeners ways to find our guests online. So if there's anything, you know, a website, or anything you’d like to share.

aM: Can you just link my web page, or should I tell you it? [Webpage link here] Actually, just google “aBa UWEC math.” That's all it takes. UWEC aBa math. Whenever students can't find our course notes, I just say, like, “I don't know, Google it. There's no way you cannot find our course notes if you remember the name of your school, what you're studying, and my name.” Yeah.

EL: We’ll put a link to that also in the show notes for people.

aM: Yeah, one B, aBa, for the listeners.

EL: Yes, that's right. We didn't actually—I said it was the only one spelled that way but we didn't spell it. It's aBa, and you capitalize the middle, the middle and not the first letter, right?

aM: No, yes, that's fine. It looks more symmetric that way.

EL: Yeah. You could even reverse one of them.

aM: I usually write the B backwards. Like the band. But I can't usually do that, though; I don't want to overdo it with the people that I work around. But yes, at the bottom of my webpage, I have links to videos of me singing various songs to students: complex analysis raps, PhD level down to undergraduate level, just different raps that I wrote for fun.

And I wanted to plug one thing at JMM. I mean, not that it's hard to find it in the program, but I'm an MAA invited speaker this time, and I'm actually scared pooless a little bit to be speaking in one of those large rooms. I don't know how I got invited. But I said yes.

KK: Of course you said yes!

aM: Well, I'm excited to share two research projects that I've been doing with students. Because I like doing research just for the sheer joy of it. And I think the topic of my talk is “A research project birthed out of curiosity and joy” or something like that, because one of the projects I'm sharing wasn't even a paid research project. I just had a student that got really excited to study something I noticed in Pascal's triangle, and these tridiagonal real symmetric matrices. I mean, it was finals week, and I was like, “You want to have fun?” And we spent the next year and a half having fun, and now she's pursuing graduate school, and it's great. It's great, research for fun. But one thing I'm talking about that I'm really excited about is the Fibonacci sequence. And I know that's kind of overplayed at times, but I find it beautiful. And we're looking at the sequence modulo 10. So we're just looking at the last, the units digits.

EL: Yeah, last digits.

aM: And whenever you take the sequence mod anything, it's going to repeat. And that's an easy proof to do. And actually Lagrange knew that long, long ago. But more recently, in 1960, a paper came out studying these Fibonacci sequences modulo some natural number, and proved the periodicity bit—there are tons of papers in the Fibonacci Quarterly related to this thing. But what I'm looking at in particular is a connection to astrology—which actually might clear the room, but I'm hoping not—but the sequence mod 10 has a period of length 60. So if you lay that in a circle, it repeats, and every 15th Fibonacci number ends in 0. That's something you can see with the sequence itself, but it's a lot easier to see when you're just looking at it mod 10, and that's something people probably didn't know until now. Every 15th Fibonacci number ends in 0.
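The two facts aBa just stated, that the Fibonacci sequence mod 10 has period 60 and that its zeros land exactly at every 15th index, are quick to verify. A sketch in Python (my code, not his):

```python
def pisano_period(m):
    """Length of the period of the Fibonacci sequence mod m."""
    a, b, k = 0, 1, 0
    while True:
        a, b = b, (a + b) % m
        k += 1
        if (a, b) == (0, 1):   # the pair (F_k, F_{k+1}) has cycled back to the start
            return k

def fib_mod(m, n_terms):
    """First n_terms Fibonacci numbers reduced mod m, starting from F_0 = 0."""
    seq = [0, 1]
    while len(seq) < n_terms:
        seq.append((seq[-1] + seq[-2]) % m)
    return seq[:n_terms]

period = pisano_period(10)        # 60, as aBa says
last_digits = fib_mod(10, 120)
zeros = [i for i, d in enumerate(last_digits) if d == 0]
# zeros comes out as [0, 15, 30, 45, 60, 75, 90, 105]:
# a Fibonacci number ends in 0 exactly when its index is a multiple of 15
```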

KK: No, I didn't know that.

aM: And if it ends in 0, it's a 15th Fibonacci number. And so it's an if and only if. And every 5th Fibonacci number is a multiple of five. So in astrology, we have the cardinal signs: Aries, Cancer, Libra and Capricorn. And you lay those on the zeros. Those are the zeros. And then the fixed and mutable signs, like Taurus, Gemini, etc., etc. As you move through the astrological seasons, those ones lay on the fives, and then you can look at aspects between them. Actually, I'm not going to say much about astrology in this talk, by the way. So people who are listening, please still come. It's only math! But I'm going to be looking at sub-sequences, and it got inspired by some videos online that I saw by a certain astrologer. And there was no mathematics in the videos, and I was like, “Whoa, I can fill these gaps.” And it's just beautiful. Certain sub-sequences in the Fibonacci sequence mod 10 give the Lucas sequence mod 10. The Lucas sequence, and I don't know if your listeners or you guys know what the Lucas sequence is, but it's the Fibonacci sequence, but the starting values are 2 and then 1.

KK: Right.

aM: Instead of zero and one.

EL: Yeah.

aM: And Edward Lucas is the person, actually, who named the Fibonacci sequence the Fibonacci sequence! So this is a big player. And I am really excited to introduce people to these beautiful sub-sequences that exist in this Fibonacci sequence mod 10. It's like, just so sublime, so wonderful.
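aBa's particular subsequence result isn't spelled out in the episode, but one classical bridge between the two sequences, the identity L_n = F_{n-1} + F_{n+1}, survives reduction mod 10 and is easy to check. A sketch (my code, not his):

```python
def sequence_mod10(a, b, n_terms):
    """A Fibonacci-rule sequence mod 10 from the given starting values."""
    seq = [a % 10, b % 10]
    while len(seq) < n_terms:
        seq.append((seq[-1] + seq[-2]) % 10)
    return seq

fib = sequence_mod10(0, 1, 102)   # Fibonacci mod 10: 0, 1, 1, 2, 3, 5, 8, 3, ...
luc = sequence_mod10(2, 1, 102)   # Lucas mod 10:     2, 1, 3, 4, 7, 1, 8, 9, ...

# The classical identity L_n = F_{n-1} + F_{n+1} holds digit by digit mod 10
identity_holds = all(luc[n] == (fib[n - 1] + fib[n + 1]) % 10
                     for n in range(1, 101))
```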

EL: I guess I never thought about last digits of Fibonacci numbers before, but yeah, I hope to see that, and we'll put some information about that in the show notes too. Yeah, have a good rest of your day.

aM: All right, you too, both of you. Thank you so much for this invitation. I’m happy to be invited.

EL: Yeah, we really enjoyed it.

KK: Thanks, aBa.

aM: All right. Bye-bye.

On this episode of My Favorite Theorem, we talked with aBa Mbirika, a mathematician at the University of Wisconsin Eau Claire. He told us about several favorite theorems of the moment before zeroing in on one of his first mathematical discoveries: a way to determine whether a number is divisible by 7.

Here are some links you may find interesting after listening to the episode.

aBa’s website at UWEC
Snake lemma
Short five lemma
Euler-Fermat’s theorem
Gauss’s primitive roots
Adriana Salerno’s episode of the podcast
Steve Fisk’s “book proof” of the art gallery theorem
Information on aBa’s MAA invited address at the upcoming Joint Mathematics Meetings

Episode 49 - Edmund Harriss

Kevin Knudson: Welcome to My Favorite Theorem, math podcast and so much more. I'm Kevin Knudson, professor of mathematics at the University of Florida, and I am joined today by your other host.

Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer, usually based in Salt Lake City, but today coming from the Institute for Computational and Experimental Research in Mathematics at Brown University in Providence, Rhode Island, where I am in the studio with our guest, Edmund Harriss.

KK: Yeah, this is great. I'm excited for this new format, where there are only two feeds to keep up with instead of three.

EL: Yeah, he even had a headphone splitter available at a moment's notice.

KK: Oh, wow.

EL: So yeah, this is—we’re really professional today.

KK: That’s right.

EL: So yeah, Edmund, will you tell us a little bit about yourself?

Edmund Harriss: I was going to say I'm the consummate unprofessional. But I'm a mathematician at the University of Arkansas. And as Evelyn was saying, I'm currently at ICERM for the semester working on illustrating mathematics, which is an amazing program that's sort of—both a delightful group of people and a lot of very interesting work trying to get these ideas from mathematics out of our heads, and into things that people can put their hands on, people can see, whether they be research mathematicians or other audiences.

EL: Yeah. I figured before we actually got to your theorem, maybe you could say a little bit about what the exact—or some of the mathematical illustration that you yourself do.

EH: So, yeah, well, one of the big pieces of illustration I've done will come up with the theorem.

EL: Great.

EH: But I consider myself a mathematician and artist. And a part of the artistic aspect, the medium—well, both the medium but more than that, the content—is mathematics. And so thinking about mathematical ideas as something that can be communicated within artwork. And one of the main tools I've used for that is CNC machines. So these are basically robots that control a router, and they can move around, and you can tell it the path to move on and carve anything you like. So even controlling the machine is an incredibly geometric operation with lots of exciting mathematics to it. So one of the sorts of machines you can have is called a five-axis machine. That's where you control both the position, but also the direction that you're cutting in. So you could change the angle as it's cutting. And so that really brings in a huge amount of mathematics. And so when I first saw one of these machines, I did the typical mathematician thing, and sort of said, “Well, I understand some aspects of how this works really well. How hard can the stuff I don't understand be?” It took me several years to work out just how hard some of the other problems were. So I've written software that can control these machines and turn—in fact, even turn a hand-drawn path into something the machine can cut. And so to bring it back to the question, which was about illustrating mathematics: One of the nice things about that idea is it takes a sort of hand-drawn path—which is something that's familiar to everyone, especially people in architecture or art, who are often wanting to use these machines, but not sure how—and the mathematics comes from the notion that we take that hand-drawn path, and we make a representation of that on the computer.
And so you've got a really interesting function there, going from the hand-drawn path through to the computer representation. You can then potentially manipulate it on the computer before passing it back to the machine. And so now the output of the machine is something in the real world. The initial hand-drawn path was in the real world, and we sort of see this process of mathematics in the middle.

Amongst other things, I think this is a really interesting view on a mathematical model. You have something in the real world, you pull it into an abstract realm, and then you take that back into the world and see what it can tell you. In this case, it's particularly nice because you get a sense of really what's happening. You can control things, both in the abstract and in the world. And I think, you know, to me that really speaks to the power of thinking and abstraction in mathematics. Of course, controlling these machines also allows you to make mathematical models and objects. And so a lot of my work is sort of creating mathematical models through that, but I think the process is, in many ways, a more interesting mathematical idea, a more interesting illustration of mathematics, than the objects that come out.

KK: Okay, pop quiz. What's the configuration space of this machine? Do you know what it is?

EH: Well, it depends on which machine.

KK: The one you were describing, where you can where you can have the angles changing. That must affect the topology of the configuration space.

EH: So it’s R^3 crossed with a torus.

KK: Okay.

EH: And so when you're changing the angle of the bit, you really need to think about a torus. It's really also a subset of a torus, because you can't reach all angles.

KK: Sure, right.

EH: But it is a torus and not a sphere.

KK: Yeah. Okay.

EH: So if you think about how to get from one position of the machine to another, you really want to—if you think about moving on a sphere, it's going to give you a very odd movement for the machine, whereas moving along a torus gives the natural movement.

KK: Sure, right. All right. So, what's your favorite theorem?

EH: So my favorite theorem is the Gauss-Bonnet.

KK: All the way with Gauss-Bonnet!

EL: Yes. Great theorem. Yeah.

EH: And I think in many ways, because it speaks to what I was saying earlier about the question: as we move to abstraction, that starts to tell us things about the real world. And so the Gauss-Bonnet theorem comes at this sort of period where mathematics is becoming a lot more abstract. And it's thinking about how space works, how we can work with things. You're not just thinking about mathematics as abstracted from the world, but as sort of abstraction in its own right. On the artist side, a bit later you have discussion of concrete art, which is the idea that abstract art starts with reality and then strips things away until you get some sort of form, whereas concrete art starts from nothing and tries to build form up. And I think there's a huge, nice intersection with mathematics. And in the 19th century, you've got that distinction where people were starting to think about objects in their own right. And as that happens, suddenly this great insight, which is something that can really be used practically—you can think about the Gauss-Bonnet theorem, and it's something that tells you about the world. So I guess I should now say what it is.

EL: Yeah, that would be great. Actually, I guess it must have been almost two years ago at this point, we had another guest who did choose the Gauss-Bonnet theorem, but in case someone has not religiously listened to every single episode—

KK: Right, this was some time ago.

EL: Yeah, we should definitely say it again.

EH: So the Gauss-Bonnet theorem links the sort of behavior of a surface to what happens when you walk around paths on that surface. So the simplest example is this: I start off on a sphere, and I start at the North Pole and I walk to the equator. At the equator, I turn 90 degrees, I walk a quarter of the way around the Earth, I turn 90 degrees again, and I walk back to the North Pole. And if I turn a final 90 degrees, I'm now back where I started, facing in the same direction that I started. But if I look at how much I turned, I didn't go through 360 degrees. So normally, if we go around a loop on a nice flat sheet and come back to where we started pointing in the same direction, we've turned through 360 degrees. So in this path that I took on the sphere, I turned through 270 degrees; I turned through too little. And that tells me something about the surface that I'm walking on. So even if I knew nothing about the surface other than this particular loop, I would then know that the surface inside must be mostly positively curved, like a sphere.
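A back-of-envelope check of that 270-degree walk: the path bounds one octant of a sphere of radius R, which has constant Gaussian curvature K = 1/R², so the curvature it encloses is

```latex
K = \frac{1}{R^2}, \qquad
\text{area of octant} = \frac{4\pi R^2}{8} = \frac{\pi R^2}{2}, \qquad
\iint_R K \, dA = \frac{\pi}{2} = 90^\circ.
```

The walker turns through 270°, and 270° + 90° = 360°: the "missing" turning is exactly the curvature enclosed by the loop.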

And similarly, if I did the same trick, but instead of doing it on the sphere, I took a piece of lettuce and started walking around the edge of the piece of lettuce, I'd find that when I got back to where I started, I'd turned a couple of hundred times round, instead of just once, or less than once, as in the case of the sphere. And so in that case, you've got too much turning. And that tells you that the surface inside is made up of a lot of saddles. It's a very negatively curved surface. And one of the motivations of this theorem for Gauss, I believe—I always find it dangerous to talk about history of mathematics in public because you never know what the apocryphal stories are—one of the questions Gauss was interested in was whether or not the earth was a sphere. So not whether or not it was round, or topologically a ball, but whether it was geometrically really a perfect sphere. And now we can go up into space and have a look back at the earth, and so we can sort of do a three-dimensional version of that, regard the earth as a three-dimensional sphere, but Gauss was stuck on the surface of the earth. So he really had this sort of two-dimensional picture. And what you can do is create different triangles and ask, for those triangles, what's the average amount of curvature? So I look at that turning, I look at the total area, the size of the triangle, and ask: does that average amount of curvature change as I draw triangles in different places around the earth? And at least to Gauss's measurements—again, in the potentially apocryphal story I heard—the earth appeared to be a perfect sphere up to the level of measurement they were able to do then. I think now, we know that the earth is an oblate spheroid, in other words, going between the poles is a slightly shorter distance than across the equator.

KK: Right.

EH: I believe that it was only a couple of years ago that we managed to make spheres that were more perfect than the Earth. So it was sort of, yeah, the Earth is one of the most perfect spheres that anyone has experience of, but it's not quite a perfect sphere when your measurements are fine enough.

KK: So what's the actual statement of Gauss-Bonnet?

EH: So, the statement is that the holonomy, which is a fancy word for the amount of turning you do as you go around a path on the surface, is equal to—now I’m forgetting the precise details—so that turning is closely related to the integral of the Gaussian curvature as you go over the whole surface.

KK: Right.

EH: So it's relating going around that boundary—which is a single integral, because you're just moving around a path—to the double integral, which is going over every point in the surface. And the Gaussian curvature is the notion of whether you're like a sphere, whether you're flat, or whether you're like a saddle at each individual point.

KK: And the Euler characteristic pops up in here somewhere if I remember right.

EH: Yeah. So the version I was giving was assuming that you’re bounding a disk in the surface, and you can do a more powerful version that allows you to do a loop around something that contains a donut.
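For reference, one standard way to write the statement being described, for a region R with piecewise-smooth boundary:

```latex
\iint_R K \, dA \;+\; \oint_{\partial R} \kappa_g \, ds \;+\; \sum_i \theta_i \;=\; 2\pi \, \chi(R),
```

where K is the Gaussian curvature, κ_g is the geodesic curvature of the boundary, the θ_i are the exterior angles at corners, and χ(R) is the Euler characteristic (1 for a disk). For a closed surface S the boundary terms vanish, leaving ∬_S K dA = 2πχ(S).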

EL: Yeah, and it relates the topology of a surface, which seems like this very abstract thing, to geometry, which always seems more tangible.

EH: Yeah. Yeah, the notion that the total amount of curvature doesn't change as you shift things topologically.

EL: Right.

EH: Even though you can push it about locally.

KK: Yeah. So if you're pushing it in somewhere, it has to be pooching out somewhere else, right? That's essentially what's going on, I guess.

EH: Yeah. You know, another thing that's really nice about the Gauss-Bonnet theorem: it links back to the Euler characteristic and that early topological work, and sort of pulls the topology in this lovely way back into geometric questions, as Evelyn said. And then the Euler characteristic has echoes back to Descartes. So you're seeing this sort of long development of the mathematics that's coming out. It's not something that came from nowhere. It was slowly developed, insight after insight, through lots of different thinking on the nature of surfaces and polyhedra and objects like that.

EL: Yeah. And so where did you first encounter this theorem?

EH: So this is rather a confession, because when I was an undergraduate, I absolutely hated my differential equations course. And I swore that I would never do any mathematics involving differential equations. And I had a very wise PhD advisor who said, “Okay, I'm not going to argue with you on this, but I predict that at some point, you will give me a phone call and say you were wrong. And I don't know when that will be. But that's my prediction.”

KK: Okay.

EH: It did take several years. And so yes, many years later, I'd learned a lot of geometry, and I wanted to get better control over the geometry. So I sort of got into doing differential geometry not through the normal route—which is that you sort of push on through calculus—but through first understanding the geometry and then wanting to really control it—specifically, thinking about surfaces whose geometry was not that of the sphere, the plane, or the hyperbolic plane. Those are three geometries that you can look at without these tools. But when you want to have surfaces that have saddles somewhere and positive curvature elsewhere—I mean, this relates back to the CNC, because you're needing to understand paths on surfaces there in order to take our tool and produce surfaces.

And so I realized that the answers to all my questions lay within differential equations, and actually differential equations were geometric, so I was foolish to dislike them. And I did call up my advisor and say, “Your prediction has come true. I'm calling you to say I was wrong.”

EL: Yeah.

EH: So basically, I came to it from looking at geometry and trying to understand paths on surfaces, and realizing from there that there was this lovely toolkit that I had neglected. And one of the real gems of this toolkit was this theorem. And I think it's a real shame that it's not something that's talked about more. I’ve said this is a bit like the Sistine Chapel of mathematics. You know, most people have heard of the Sistine Chapel.

KK: Sure.

EH: Quite a lot of people can tell you something that's actually in it.

EL: Right.

EH: And only a few people have really seen it. And certainly very few people have studied it and really looked and can tell you all the details. But in mathematics, we tend to keep everything hidden until people are ready to hear the details. And so I think this is a theorem that you can really play with and see in the world. I mean—there are some models and things you can build that are not great for podcasts, but it's something you can really see in the world. You can put items related to this theorem into the hands of people who are, you know, eight or nine years old, and they can understand it and do something with it and see what happens, because all you have to do is give people strips of paper and ask them to start connecting them together, just controlling how the angles work at the corners.

And depending on whether those angles add up to less than 360 degrees—well, not the angles at the corner—depending on whether the turning gives you less than 360, exactly 360, or more than 360, you're going to get different shapes. And then you can start putting those shapes together, and you build out different surfaces. And so you can then explore and discover a lot of stuff in a sort of naive way. You certainly don't need to understand what an integral is in order to have some experience of what the Gauss-Bonnet theorem is telling you. And so it's that aspect: this is something that was always there in the world. The sort of experiments, the sort of geometry you can look at through differential geometry and things like Gauss-Bonnet—that was available to the whole history of mathematics, but we needed to make a break from geometry as just a representation of the world to then sort of step back and look at this result, which is a very practical, hands-on one.
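The paper-strip experiment Edmund describes can be sketched numerically. At each corner, you add up the angles of the pieces that meet there, and the "defect" from 360 degrees tells you the sign of the curvature you've built. (The polygon counts below are editorial examples, not taken from the episode.)

```python
# The paper-strip experiment in numbers: at each corner, sum the
# angles of the pieces that meet there. The angle defect
# (360 degrees minus that sum) gives the sign of the curvature.
# These example counts are illustrative, not from the episode.

def angle_defect(angles_deg):
    """Return 360 minus the total angle meeting at a corner."""
    return 360 - sum(angles_deg)

print(angle_defect([60] * 5))  # 60: five triangles meet -> positive curvature, sphere-like
print(angle_defect([60] * 6))  # 0: six triangles meet -> flat plane
print(angle_defect([60] * 7))  # -60: seven triangles meet -> saddle, negative curvature
```

Joining many corners with positive defect closes up into a sphere-like surface (the defects total 720 degrees, matching Euler characteristic 2); zero defect tiles the plane; negative defect produces the wrinkly, hyperbolic shapes Edmund mentions.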

You know, if you really want to control things, then you do need to have solid multivariate calculus. So generally, the three-semester course of calculus is often meant to finish with Gauss-Bonnet, and it's the thing that's dropped by most people at the end of the semester, because you don't quite have time for it, and there's not going to be a question on the test. But it's one of those things that you could sort of put out there and have a greater awareness of in mathematics, just as an interesting, beautiful result. I would say, you know, it's one of humanity's greatest achievements, to my mind. You don't have to be able to understand it perfectly in order to appreciate it. You certainly—as I proved to you—can appreciate it without being able to state it exactly.

EL: Yeah, well, you've sold me—although, as we've learned on this podcast, I'm extremely open—susceptible to suggestion.

KK: That’s true. Evelyn's favorite theorem has changed multiple times now. That's right.

EL: Yeah. And I think you brought it back to Gauss-Bonnet. Because when we had Jeanne Clelland on earlier, who said Gauss-Bonnet, I was like, “Well, yeah, I guess the uniformization theorem is trash now”—my previous favorite theorem—but then it had been pulled over to Cantor again, and now you’ve brought it back.

KK: Excellent. All right, so that's another thing we do on this podcast is ask our guest to pair their theorem with something. So Edmund, what pairs well with Gauss-Bonnet?

EH: Well, I have to go with a walnut and pear salad.

KK: Okay.

EL: All right.

KK: I’m intrigued.

EH: Well, I think I've already mentioned lettuce.

EL: Yes.

EH: Lettuce is an incredibly interesting curved surface. Yeah. And then you've got pears, which gives you—

KK: Spheres.

EH: A nice positively curved thing. But they're not just boring spheres.

EL: Yeah.

EH: They have some nice interesting changes of curvature. And then walnuts are also something with very interesting changing curvature. They have very sharply positively curved pieces where they're sort of coming in but then they've got all these sort of wrinkly saddley parts. In fact, one of the applications of the Gauss-Bonnet theorem in nature is how do you create a surface that sort of fits onto itself and fills a lot of space—or doesn't fill that much space but gives you a very high surface area to volume ratio. So walnut is an example—or brains or coral—you see the same forms coming up. And the way many of those things grow is by basically giving more turning as you grow to your boundary.

KK: Right.

EH: And that naturally sort of forces this negatively-curved thing. So I think the salad really shows you different ways in which this surface can—the theorem can affect the behaviors of the surfaces.

EL: Yeah, well, what I want now is something completely flat to put in the salad. Do you have any suggestions?

KK: Usually you put goat cheese in such a thing, but that doesn't really work.

EL: That’s—well, parmesan. You could shave parmesan.

EH: Yeah, shavings of parmesan. Or maybe some thin-cut salami.

EL: Okay.

EH: And so even though those things would bend over—I mean, we’re now on to a different theorem of Gauss, and I don’t mean to corrupt Evelyn away—but you know, when you thinly cut the salami, it can bend, but it doesn't actually change its curvature.

KK: Right.

EH: Your loops on that salami are going to have the same behavior that they had before. And I guess I should also say that I did create a toy that makes that paper model that I talked about easier to use. You don't have to use tape. You can hook together pieces. And so the toy is called Curvahedra.

KK: I was going to say, you should promote your toy. Yeah.

EH: I’m terrible at self-promotion, yes.

EL: We will help you. Yes, this is a very fun toy. I actually got to play with it for the first time a few weeks ago when you did a little short thing, and I think when I had seen pictures of it before, I thought it was not going to be as sturdy as it is. But this is—yeah, it's called Curvahedra—look it up. It’s these quite sturdy pieces—you know, you don't need to worry about ripping them as you put them together—but you can create these things that look really intricate, and you can create positive curvature, or flat things, or negative curvature in all these different conformations. It's a very fun thing to play with.

EH: And it is a sort of physical version of exactly the Gauss-Bonnet theorem. As you hook together pieces, you're controlling what happens on a loop. And then as you put more of those loops together, you can get a variety of different surfaces, from hyperbolic planes to spheres to—of course, kids have made animals and creatures with it. So you get this sort of control. In fact, it's one of those things that, when you put it into the hands of kids, they do things that you didn't think were really possible with it, because their ability to play with these ideas and be free is always so inspiring. So that's why I said this is a theorem that people can understand as something in the real world. And then you can tell the story of how this understanding of the world is linked directly back to abstract, esoteric mathematics of the most advanced sort.

KK: Right. One of my favorite things about Curvahedra, though, is the video that you put online somewhere—I think was on Twitter—of it popping out of your suitcase, like you compressed it down into your suitcase to travel home one time?

EH: Yes, I have a model that's about a two-foot cube. And so you can’t travel with that easily, but it can compress very small. And that same object has been in my suitcase and other things several times, and it's now sitting in my office here.

KK: That’s great fun. And also you've made similar models out of metal, correct?

EH: Yes. So the basic system—not the big one you can crush down to put into suitcases.

KK: No, certainly not.

EH: I’ve made a couple of the spheres. And we're currently working on a proposal for a piece to go outside the Honors College at the University of Arkansas. That grew out of a course I taught with Carl Smith, who is a landscape architect in our landscape architecture school—the design was created from Curvahedra and other inspirations. And so hopefully at some point there's going to be a 12-foot-tall Curvahedra-style model outside the Honors College at the University of Arkansas.

KK: Very nice.

EL: Nice.

KK: Yeah, this has been great fun. Anything else we want to talk about?

EL: Yeah, well, do you want to say a website or Twitter account or anything where people can find you online?

EH: So I’m actually @Gelada on Twitter, and there is @Curvahedra, and my blog, which is very rarely updated but has some nice stuff, is called Maxwell’s Demon.

EL: Yeah, and can you spell your Twitter?

EH: Yes, so Gelada is spelled G-E-L-A-D-A. They are baboons in Ethiopia, or it’s a cold beer in Brazil. I discovered that latter one after being on Twitter, and I regularly get @-ed by people in Brazil, who were not wanting to talk to me at all, but they're asking each other out for beers.

EL: Ah.

EH: And yeah, so then there's also curvahedra.com, where you can get that toy.

EL: Cool. Thanks for joining us.

KK: Yeah, thanks Edmund.

EH: Thank you.

[outro]

On today’s episode, we were pleased to talk with Edmund Harris, a mathematician and mathematical artist at the University of Arkansas, who is our second guest to sing the praises of the Gauss-Bonnet theorem. Below are some links you might find useful as you listen to the episode.



Edmund’s Twitter account, @Gelada

His blog, Maxwell’s Demon


The website and Twitter account for Curvahedra, the toys he makes that help you explore the Gauss-Bonnet theorem and just have a lot of good fun with geometry


Our episode with Jeanne Clelland, who also chose the Gauss-Bonnet theorem


Edmund and Evelyn both attended the Illustrating Mathematics program at the Institute for Computational and Experimental Research in Mathematics (ICERM). The program website, which includes videos of some interesting talks at the intersection of math and art, is here.

Episode 48 - Sophie Carr

Kevin Knudson: Welcome to My Favorite Theorem, a math podcast and so much more. I'm one of your hosts, Kevin Knudson. I'm a professor of mathematics at the University of Florida. And here is your other host.

Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance writer, usually based in Salt Lake City, but currently coming to you from Providence, Rhode Island.

KK: Hooray! Yeah, you're at ICERM.

EL: Yes. The Institute for Computational and Experimental Research in Mathematics, an acronym that I am now good at remembering.

KK: I’m glad you told me. I was trying to remember what it stood for this morning because I'm going next week. We'll be in the same place for, like, only the second time ever.

EL: Yeah.

KK: And the universe didn't implode the first time. So I think we're safe.

EL: Yeah.

KK: So the ICERM thing is visualizing mathematics, I mean, we're sort of doing like—next week is about geometry and topology, which since both of us are nominally that, that's just the right place for us to be.

EL: Yeah, it's going to be a fun semester. I'm also very excited because I recently turned in—it feels weird to call it a manuscript, but it is being published by a place that publishes books—the final draft of a page-a-day calendar about math. And I hope that by the time we air this, I will be able to have a link where people can purchase it and give it to themselves or to their favorite mathematician.

KK: Yeah.

EL: So that's just, every day you can have a little morsel of math to start your morning.

KK: I’m looking forward to that. That’s really exciting. Yeah, that's that's great. All right, so we're continuing a tradition in this episode.

EL: Yes.

KK: So Christian Lawson-Perfect organizes this thing through the Aperiodical called the Great Internet Math-Off [Editor’s note: Whoops, it’s called the Big Internet Math-Off!] of which you were a participant in the first one but not this one, not the second go-around. And we had the first winner on. The winner gets named the World's Most Interesting Mathematician (among those people who Christian could round up and who were free in July). And so we wanted to keep this trend going of getting the most interesting mathematicians in the world on this podcast. And we are pleased to welcome this year's winner, Sophie Carr. Sophie, you want to introduce yourself, please?

SC: Oh, hello, thank you very much. Yeah, I'm Sophie Carr. I studied Bayesian networks at university, and now I own and run a data analytics company.

EL: Yeah, and you’re the most interesting mathematician!

SC: I am! For this year, I am the most interesting mathematician in the world. It's entirely Nira’s fault that I entered, because he suggested it and put me forward.

KK: That’s right. Nira Chamberlain was last year's winner. And so when we interviewed him he was sitting in his attic wearing a winter coat. It was wintertime and it seemed very cold where he was. You look very comfortable. It looks like you have a very lovely home in the background.

SC: Yes, I mean, I am in two jumpers. Autumn has definitely arrived. Summer has gone, and it's a little chilly at the moment.

KK: I can only dare to dream. Yeah.

EL: Yeah, Florida and UK have slightly different seasons.

KK: Just a little bit. So you own a consulting company? That’s correct?

SC: Yeah, I do. I set it up 10 years ago now. There’s me and two other people who work with me. We just have an awful lot of fun finding patterns in numbers. I still find it amazing that we're still going. It's just the best fun ever. We get to go and work on all sorts of different problems with all sorts of different people. It's fantastic.

KK: Yeah, that's great. I mean, I'm glad companies are starting to come around to the idea that mathematicians might actually have something to tell them. Right?

SC: Yes. It really is. When you explain to them, you're not going to do magic and it's not a black box, and you can tell them how it works and how it can really make a difference, they are coming around to that.

KK: That’s fantastic. All right, so we're here to talk about theorems.

EL: Yeah. What is your favorite theorem?

SC: My favorite theorem in the whole world is Bayes’ Theorem.

EL: Yay, I'm so glad that someone will be talking about this! Because I know that this is a great theorem and—confession: I just, I don't appreciate it that much.

KK: You know, same.

EL: I need to be told why it's great.

KK: Yeah, I taught probability one time and I said, “Okay, here's Bayes’ theorem.” And I kind of went, all right, fine—but of course the question is, what's the prior, Mr. Bayes? So tell us. Tell us, please.

EL: Yeah, preach!

KK: Preach for Reverend Bayes.

SC: You know, I don't think there's any preaching needed, because I always say this: there are two bits of statistics, the frequentist and the Bayesian. And I always liken it to rugby union and rugby league, which are two types of rugby in England. It's different codes, but it's the same thing. So to me, Bayes’ theorem is just the way that we naturally think. And it's beautifully simple, and all it does is let you take everything that you know and every piece of information that you have, and use that to update the overall outcome. And you're right that the really big arguments come about from what the prior is. What is the background information that we have, and can we actually, genuinely have a true prior? And some people say no, because you might not have any information. But that's the great bit! Because then you can go and find out what the prior is. You have to be absolutely open about what you're putting in there. I think the really big debate comes around whether people are happy with uncertainty. Are they happy for you to not give an exact answer? If you go and you say, well, this is the prior, and this is what we think the information is as well, and we combine these all together, and this is the answer—let's have a debate. Let's start talking about what we can have. Because at its simplest, you've got two things you’re timesing together. Just two numbers. Something that runs your mobile phone. I mean, that’s quite nifty.

KK: So can we can we remind our listeners what Bayes’ theorem actually says?

SC: Okay, so Bayes’ theorem takes two things. It takes the initial, or the prior, distribution. Okay, and that's the bit where the argument is. And that might be just: what's the chance of something happening? What do you think the probability is of something happening? And you combine that with something called the likelihood ratio. And it's really simple. The likelihood ratio is just the ratio of the probability of the information, or the evidence you have, assuming one hypothesis, divided by the probability of that information assuming another hypothesis. So you just have to have those two values.

And then all you have to do is times them together! That really is it. And when you start to say to people, it's just two numbers—now, you can turn that into three numbers if you want. You can turn the likelihood ratio bit into its two separate parts. And you can show Bayes’ theorem very, very simply with decision trees, and that was part of the reason I used decision trees in the Math-Off: just to show the power of something that is really quite simple, that can drive so, so far. And that's what I love about Bayes’ theorem. I always describe it as something that is stunningly elegant, but unbelievably powerful. And I always liken it to Audrey Hepburn. I think if it were to be a person, it would be Audrey Hepburn. Quite small! I'd say it's this amazing little thing that has two simple numbers. But goodness me, getting those numbers—well, I mean, you can just have so much fun! I think you can.
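The two-number update Sophie describes is the odds form of Bayes' theorem: posterior odds equal prior odds times the likelihood ratio. A minimal sketch, with made-up numbers chosen purely for illustration:

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# The numbers below are invented for illustration only.

def posterior_odds(prior_odds, likelihood_ratio):
    """Update prior odds by multiplying by the likelihood ratio."""
    return prior_odds * likelihood_ratio

# Say we think a hypothesis is 4-to-1 against (prior odds 0.25),
# and new evidence is 8 times more likely if the hypothesis is true
# than if it is false (likelihood ratio 8).
post = posterior_odds(0.25, 8)   # posterior odds: 2.0, i.e. 2-to-1 in favor
prob = post / (1 + post)         # convert odds to a probability: 2/3
print(post, prob)
```

This is exactly the "just times them together" step; the argument, as Sophie says, is all about where the 0.25 and the 8 come from.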

And maybe it's just me that likes finding the patterns in the numbers and finding those distributions. Coming up with the priors. So come on, Kevin, you said, you sat there and your class said, “Well, what's the prior?”

KK: Yeah.

SC: What do you say? How would you tell people to go about finding a prior? Are they going to use their subjective opinion? Are they going to try and find it from data?

KK: Well, that that is the question, isn't it? Right? So, I mean, often, the problem with probability sometimes is that—at least, like, in political forecasting, right—people tend to round up probabilities to 1 or lop them off to zero. Right? So for example, when, you know, when Trump won the election in 2016, everybody thought it was a huge shock. But you know, 538 had it as, you know, Hillary Clinton was a two-to-one favorite. But two-to-one favorites lose all the time, right?

SC: Yeah.

KK: And so the question then is—yeah, people like to think about one-off events. And then the question is, how do you estimate the probability of a one-time event? And you have to make some guess, right, at the prior. And that’s—I think that's where people get suspicious of Bayes’ theorem, or Bayesian statistics: how do you make this estimate? So how do you make estimates in your daily work as a consultant?

SC: Okay, so we do it in a variety of different ways. So if we're really lucky, there’s some historical data we can go looking at.

KK: Sure.

SC: And often just mining that historical data gives you a good starting point. I always get slightly suspicious of flat distributions. Because if we really, really don't know anything other than that, I think maybe a bit of research before where you find the prior is always a good thing. My favorite priors are when we go and talk to people and start to get out of them their subjective opinion. Because I like statistics, I genuinely love statistics, because of the debate that goes on around it. And I think one of the things that people forget about math is that it's such a living subject. And there are so many brilliant debates—and you can call some of them arguments— people are prepared to go and say, “Look, this is my opinion and this is what I think the shape is.” And then we can do the analysis. Inevitably somebody will stand up and go, “Well, that bit is wrong.” Okay, so tell me why!

EL: Yeah.

SC: What evidence have you got for us to change the shape, or why do you think it should be skewed, or Poisson, or whatever we're using? And sometimes, if we haven't got time to do that, we can start to put in flat distributions. We can say, “Well, we think it's about normal.” Or, “We think on average it'll be shoved a little bit to the right or a little bit to the left.” Those are the three main ways we go about doing it. And I think the ability to be absolutely open and up front about what you know and what you don’t know helps you find that prior. And I don't really understand why people would be scared of that, or run away from it—why you would not want to say what the uncertainty is or what you're not sure about. But that might go back a long way, to people thinking that math is certain.

EL: Yeah.

SC: That when you say the answer is 12, well it’s 12. And not, “Well, it’s 12 because we kind of do it like this, and actually if something changes, that number might change.” And I think getting comfortable with uncertainty and being uncomfortable, is really the crux for developing those priors.

EL: Yeah. Well, I guess for me, it's hard to reason about statistics in a non-frequentist way. Meaning—you know, I'm comfortable with non-frequentist statistics to a certain degree. But just, like you were saying, what does a 30% chance mean if it's not that we could do this 10 times and have it happen three times? But you can't have a presidential election—the same election—10 times, or you can't run Monday’s weather 10 times, or something like that. It's just hard for me to interpret what it means if there isn't a frequentist interpretation.

SC: Yeah. One of the things we found that works really well is if you start showing patterns—and that's why I always talk about patterns, that we find patterns. When you're doing Bayesian stats with priors, if you start to show the changes as curves—and I don't mean the distribution, but just that rising and falling of numbers—people start to understand what's driving the priors, what assumptions are changing those priors. And then you start to see the impact of that, how the final answer changes. That can be incredibly powerful. Often people don't want that set answer. They want to know what the range is; they want to understand how that changes. And showing that impact as a shape—because I think most people are visual. When you show somebody a surface or, you know, a graph, or whatever it is, that's something you can really get a grip on. And actually I come from a Bayesian belief network background. So I kind of found out about Bayes’ theorem by chance. I never set off to learn Bayes’ theorem. I set off to design [unintelligible]. That’s what I grew up wanting to do. But I ended up working on Bayesian networks. That’s the short version of what happened.

EL: So, how—was this a “love at first sight” theorem? Or what was your initial encounter with this theorem? And how did you feel about it? Since this is all about subjective feelings anyway!

SC: Well, my PhD was part-time. I spent eight years collecting subjective opinions. So I started a PhD in Bayesian networks, and there was this brilliant representation of a great big probability table. And this is a while ago now. And I’ve moved on a lot into [unintelligible]. But I had this Bayesian network, and my supervisor said, “Here we go,” and I went, “Ah, it’s just lots of ovals connected with arrows.”

And I went, “There must be something more to this.” And he went, “There’s this thing called Bayes’ theorem that underpins it and look at how it flows. It’s how the information affects it.” And I went, “Okay!” And so, as with all PhDs, you have this pile of reading, which is apparently going to be really, really good for you.

So I got my pile of reading. I went, “Okay.” And genuinely I just thought, “Yeah, it's just kind of how we all work, isn't it?” And I really had not liked statistics at university at all, because I’d only really done frequentist statistics. And it’s not like I dislike frequentist statistics; I just didn’t fall in love with it. But when there was something I could see—and I genuinely think it’s because it's visual: I could see the shapes move, I could see the numbers flow, I could see the information flow—I thought, “Oh, this is cool stuff. I understand this. I can get my head around this.” And I could start to see how to put things in and how they changed. And I think also I've got, at times, a very short attention span. So running millions of replicates never really did it for me.

EL: Yeah.

SC: So I had a bit of an issue with frequentist statistics, where we just have to run lots and lots and lots of replicates.

EL: Right.

SC: Can we not assume it's kind of like this shape and see what happens? Then change that shape. Look, that’s great. That's much better for me.

EL: Yeah. So it was kind of a conversion experience there.

SC: I think, for people my age, probably. Because I don’t think Bayesian statistics, years ago, was taught that commonly. It's only really in the past decade or so that I think it's become really mainstream and been taught in the way it is now, certainly with its wide applications. That's when, I think, people just go—something that they've never heard of is now all in the AI world, and it’s in your mobile phone, and it's in your medicine, and it's in your spam filters. And when it suddenly becomes really popular, people start to see what it can do. That's when it's taught more. And then you get all these other debates.

KK: So the other fun thing we like to do on this podcast is ask our guests to pair their theorem with something. So what pairs well with Bayes’ theorem?

SC: So this caused a lot of debate in our household.

KK: It always does.

SC: Yeah. And I am going to pair Bayes’ theorem with my favorite food, which is risotto, because risotto only takes three things. It only needs rice and onions and a good stock.

KK: Yes.

SC: And Bayes’ theorem is classically taught with three numbers. And it’s really powerful and gorgeous. And risotto only takes three ingredients, and it’s really gorgeous.

KK: And also, the outcome is uncertain sometimes, right?

SC: Oh, frequently uncertain. And if you change those prior proportions, you will get a very different outcome.

KK: That’s right. You might get soup, or it might might burn.

SC: So, I am going to say that Bayes' theorem is like a risotto.

EL: And you mentioned Audrey Hepburn earlier so maybe it’s even more like sharing a risotto with Audrey Hepburn.

SC: That would be brilliant. How cool would that be?

EL: I know!

SC: I will have my Bayes’ theorem discussion with Audrey Hepburn over risotto. That would be a pretty good day.

EL: Yeah, you could probably get a cardboard cutout. Just, like, invite her to dinner.

SC: Yeah, I'll do that. I'll try and set up a photo, superimpose them.

EL: Yeah.

KK: But Audrey Hepburn should be breakfast somewhere right?

EL: But you can eat risotto for breakfast.

SC: Yeah, you can eat risotto any time of the day.

KK: Sure.

SC: There’s never a bad time for risotto.

KK: No, there isn't. Yeah. My wife actually doesn't like risotto very much, so I never make it.

EL: So is that one of your restaurant foods? We have this whole category of foods that you tend to order at a restaurant because your partner doesn't like them. Like, I don't really like mushrooms, so my partner often will order a mushroom thing at a restaurant.

KK: Yeah, so for me, I don't go out for Italian food because I can make it at home.

EL: Okay.

KK: So I just don't eat Italian out. There’s kind of no point, I think.

SC: So you’re right that risotto is my restaurant food because my husband doesn't like it.

KK: Oh.

EL: Aw.

SC: It's my most favorite thing in the world, so yeah, every time we go out, the kids go, “Mom, just don't get the menu. There’s no point. We know what you’re getting.”

EL: Yeah. So you said this caused a debate. Did he have a different opinion about what your pairing should be?

SC: Well, there were discussions about whether it was my favorite drink with [a bag of crisps?], and what things could be combined together. And I said, “No, it just has to be risotto.”

KK: Okay. Excellent.

EL: Yeah, we do make that at home. And actually the funny thing is I don't really like mushrooms, but I do like the mushroom risotto that we make.

SC: Oh.

EL: Yeah.

SC: So you've not got a flat prior. You've actually got a little bit of a skew on there.

EL: Yeah, I guess. I’m trying to figure out how to quantify this. Yeah, like my prior distribution for mushroom preference is going to depend on whether it is cooked with arborio rice or not.

SC: See, there we go and you don’t have to worry about numbers you just draw a shape.

EL: Yeah, nice.

KK: Cool. So we also like to give our guests a chance to plug anything they want to plug. Do you have things out there in the world that you want people to know about?

SC: So the only thing I think that's worth mentioning is that I do some Royal Institution maths masterclasses, where we go out and take our favorite bit of math to students who are between the ages of about 14 and 17. That's really what I'm doing in the near future, and they are a brilliant way for lots of people to engage with maths.

EL: Oh, nice.

KK: That’s very cool.

SC: Yeah. They are really good fun.

KK: Have you been doing that for very long?

SC: I’ve been doing them for about two years now. And the first one I ever did was on Bayes’ theorem. And I've never been so terrified, because I don’t teach. And then you have this group of students, and they come up with just the best and most fantastic questions. Every time you do it, you go, “I hadn’t thought of that.”

KK: Yeah.

SC: “And I don't know how to answer that question straight away.” So it's brilliant, and I love doing them. So that's kind of what we've got coming up. And you know, work is just going to be keeping me nicely busy.

EL: Nice.

SC: Yeah.

KK: Well, this has been great fun. Thank you for joining us, and congratulations on being the world's most interesting mathematician for this year.

EL: Yes. Yeah, thanks a lot.

SC: Thank you. I’ve been so excited to do this. I've been listening to your podcast for quite a long time, and I couldn't believe it when you emailed.


On this episode, we had the pleasure of talking with Sophie Carr, a statistics consultant and winner of Christian Lawson-Perfect’s Big Internet Math-Off last summer. Here are some links you may enjoy as you listen to this episode.

As we mentioned at the top of the show, Evelyn’s math page-a-day calendar is available for purchase in the AMS bookstore!
Sophie Carr’s Twitter account
The Big Internet Math-Off at the Aperiodical
Royal Institution Masterclasses
Sophie Carr is this year’s World’s Most Interesting Mathematician. We also had last year’s World’s Most Interesting Mathematician, Nira Chamberlain, on the show in January. Find his episode here.

Episode 47 - Judy Walker

Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics and all kinds of crazy stuff, and I have no idea what it's going to be today. It is a tale of two very different weather formats today. So I am Kevin Knudson, professor of mathematics at the University of Florida. Here's your other host.

Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a math and science writer in Salt Lake City, Utah, where I am using the heater on May 28.

KK: Yes, and it's 100 degrees in Gainesville today, and I'm miserable. So this is bad news. Anyway, so today, we are pleased to welcome Judy Walker. Judy, why don't you introduce yourself?

Judy Walker: Hello. Thank you for having me. I'm Judy Walker. I'm a professor of mathematics at the University of Nebraska.

KK: And what else? You're like—

JW: And I am Associate Vice Chancellor for faculty and academic affairs, so that’s, like, Vice Provost for faculty.

KK: That sounds—

EL: Yeah, that does sound very official!

JW: It does sound very official, doesn't it?

KK: That’s right. Like you're weighing T & P decisions in your hands. It's like, you're like Caesar, right? With the thumbs up and the—

JW: I have no official power whatsoever.

KK: Right.

JW: So yes.

KK: But, well, your power is to make sure procedures get followed, right?

JW: Yes. And I have a lot of influence on other things.

KK: Yeah. Right. Yeah. That sounds like a very challenging job.

JW: And for what it's worth, I will add that it is cloudy and windy today. But I think we're supposed to be, like, 67 degrees. So right in the middle.

KK: All right. Great.

EL: Okay, perfect.

KK: So if we could see the map of the US, there'd be these nice isoclines. And here we are. Mine is very hot; mine's red. So we're good. Anyway, we came to talk about math. You’re excited to talk about math for once, right?

JW: Exactly. I guess I'm kind of going to be talking about engineering, too. So—

EL: Great.

KK: That’s cool. We like it all here. So what's your favorite theorem?

JW: So my favorite theorem is the Tsfasman-Vladut-Zink theorem.

KK: Okay, that's a lot of words.

JW: It is—well, it’s a lot of names. It's three names. And it's a theorem that is in error-correcting codes, algebraic coding theory. And it's my favorite theorem, because it solves a problem, or maybe not solves a problem, but shows that something's possible that people didn't think necessarily was possible. And the way that it shows that it's possible is by using some pretty high-powered techniques from algebraic geometry, which had not previously been brought into the field at all.

EL: So what is the basic setting? Like what kind of codes can you correct with this theorem?

JW: Right. So the codes are what does the correcting. We don't correct the codes, we use the codes to correct. So I used to tell my — actually, my advisor told me and then I've told all my PhD students — that you have to have a sentence that you start everything with. And so my sentence is: whenever information is transmitted across a channel, errors are bound to occur. So that is the setting for coding theory. You've got information that you're transmitting. Maybe it's pictures from a satellite, or maybe it's just storing things on a computer, or whatever, but you're storing this information. Or you're transmitting this information, and then on the other end, or when you retrieve it, there's going to be some mistakes. And so it's the goal of coding theory to add redundancy in such a way that you can find those mistakes and fix them. Okay?

And we don't actually consider it an error if you fix the mistake. So an error is when so many mistakes happened in the transmission or in the storage and retrieval, that what you think was sent was not what was actually sent, if that makes sense.

KK: Sure. Okay.

JW: So that's the basic setting for coding theory, and coding theory kind of started in 1948 with Shannon's theorem.

KK: Right.

JW: So Shannon's theorem says that reliable communication is possible. So what it says really, is that whatever your channel is, whether it's transmitting satellite pictures, or storing data, or whatever—whatever your channel is, there is a kind of maximum efficiency that's possible on the channel. And so what Shannon’s theorem says is that for any efficiency up to that maximum, and for any epsilon greater than zero, you can find a code that is that efficient and has less than epsilon probability of error, meaning the probability that what you sent is not what you think was sent at the end. Okay?
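For readers following along: the episode doesn't give a formula, but for one concrete channel, the binary symmetric channel that flips each bit with probability p, the "maximum efficiency" Shannon's theorem refers to has the closed form C = 1 − H₂(p). A minimal Python sketch (an editorial illustration, not code from the episode):

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits; h2(0) = h2(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity (the maximum achievable rate R with vanishing error
    probability) of a binary symmetric channel with crossover p."""
    return 1 - h2(p)

print(bsc_capacity(0.0))   # noiseless channel: capacity is 1
print(bsc_capacity(0.5))   # pure noise: capacity is 0
```

Any rate below `bsc_capacity(p)` is achievable with arbitrarily small error probability; Shannon's proof just doesn't say which code achieves it.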

So that's Shannon's theorem. Right? So that's a great theorem.

EL: Yeah.

JW: It’s not my favorite theorem. It’s not my favorite theorem because it actually kind of bothers me.

KK: Why does it bother you?

JW: Yeah, so the reason it bothers me—there are two reasons it bothers me. One is that it doesn't tell us how to find these codes. It says good codes exist, but it doesn't tell us how to find them, which is kind of useless if you're actually trying to transmit data in a reliable way. But it's actually even worse than that. It's a probabilistic proof. And so it doesn't just say that good codes exist, it says they're everywhere, but you can't find them. Right? So it's like it's taunting us. So I just—. So yeah. So that's Shannon's theorem. And that's why it's not my favorite theorem. But why it's a really great theorem is that it started this whole field. So the whole field of coding theory—or of channel coding, at least, which is what we've been talking about—has been to find those codes, and not just find them, but find them along with efficient decoding algorithms for them. And so that's Shannon's challenge: to find the good codes with efficient decoding algorithms for those good codes. That's 1948, when that started. Right? Okay.

So just as a digression, let me say that most mathematicians and engineers will agree that at this point in time — so a little more than 70 years after Shannon's theorem — Shannon's challenge has been met, so that we can find these good codes. They're not going to agree on how it's been met. But they'll all agree that it has been met. So on the one hand, in the late ‘90s — mid-to-late ‘90s — engineers found turbo codes, and they rediscovered low-density parity check codes. And these are codes that in simulations come very, very close to meeting Shannon's challenge. The theory around these codes is still being developed. So the understanding of why they meet Shannon's challenge is still being worked out. But the engineers will say that it's solved, that Shannon's challenge is met, because they've got these simulations, and they're so confident about it that these codes are actually being used in practice now.

EL: So I have a naive question, which is, like, does the existence of us talking over the internet on this call sort of demonstrate that this has been met? Like, we are hearing each other — I mean, not with perfect fidelity, but we're able to transmit messages. Is that—or is that just not even in the same realm?

JW: No, that's exactly what we're talking about, exactly what we're talking about. And not only that, but I don't know if you've noticed, but every once in a while, Kevin gets a little glitchy, and he doesn't move for a while. That's the code catching up and fixing the errors.

KK: Yeah, the irony is this call has been very glitchy for me.

JW: Right.

KK: Which is why we each record our own channel.

EL: Yeah.

JW: Exactly. So in fact, low-density parity-check codes and turbo codes are being used now in mobile phones, in satellite communications, in digital TV, and in Wi-Fi. So that's exactly what we're using.

EL: Okay.

JW: But the mathematicians will say, “Well, it's not really—we’re not really done. Because we don't know why. We don't really understand these things. We don't have all the theoretical underpinnings of what's going on.” A lot of work has been done, and a lot of that is there. But it's still a work in progress. About 10 years ago, kind of on the flip side, polar codes were discovered. And polar codes are the first family of codes to provably achieve capacity. So they actually provably meet Shannon's challenge. But at this moment, they are unusable. There's just still a lot of work to understand how we can actually use polar codes. So the mathematicians say, “We've met the challenge, because we've got polar codes, but we can't use them.” And the engineers say, “We've met the challenge because we've got turbo codes and LDPC codes, but we don't know why.” Right? And that's an oversimplification, but that's kind of the current state. And so different people are working on different things now. And of course, there are other kinds of coding that aren't really channel coding. There are still all kinds of unsolved problems. So if anybody tells you that coding theory is dead, tell them they're wrong.

EL: Okay!

JW: It’s still very much alive. Okay, so we talked about Shannon's theorem from 1948. And we talked about the current status of coding theory. And my favorite theorem, this Tsfasman-Vladut-Zink, is from 1982. So in the middle.

EL: Halfway in between.

JW: Yes, yes. Just like my weather being halfway in between. Yes. So around this time, in the early ‘80s, and preceding that, the way that mathematicians were approaching Shannon's challenge was through the study of linear codes. So linear codes are just subspaces, and we might as well think of—in a lot of applications, the data is zeros and ones. But let's go to Fq instead of F2, so q is any prime power.

KK: Okay, so we're doing algebraic geometry now, right?

JW: We’re not yet. Right now, we’re just talking about finite fields.

KK: Okay.

JW: We will soon be doing algebraic geometry, but not yet. Is that okay?

EL: You’re just trying to transmit some finite set of characters.

JW: Yes, some finite string of characters. Order matters, right? So it's a string. And so the way that we think about it, we can think about it as a systematic code. So the first k characters are information, and then we're adding on n−k redundancy characters that are computed based on the first k.
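As an illustration of the systematic picture Walker describes — k information characters followed by n−k redundancy characters computed from them — here is a toy single-parity-check code over F2. This is an editorial sketch, not a code discussed in the episode:

```python
def encode_parity(info_bits):
    """Toy systematic code over F2: the k information bits are sent
    as-is, followed by one redundancy bit (their XOR), so n = k + 1."""
    parity = 0
    for b in info_bits:
        parity ^= b
    return list(info_bits) + [parity]

def check(word):
    """A received word is a valid codeword iff all its bits XOR to 0.
    This detects (but cannot fix) any single flipped bit."""
    s = 0
    for b in word:
        s ^= b
    return s == 0

cw = encode_parity([1, 0, 1, 1])  # -> [1, 0, 1, 1, 1]
```

Real codes add more redundancy so that mistakes can be corrected, not just detected; the rate here is k/n = 4/5.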

KK: Okay.

JW: So if we're in a linear setting, then this collection of code words that includes the information and the redundancy, that collection of code words is a subspace, say it's a k-dimensional subspace, of Fq^n. So that's a linear code. And we can think about that ratio, k/n, as a measure of how efficient the code is.

KK: Right.

JW: Because it's the number of information bits divided by the total number of bits, or symbols, or characters. So, let's call that ratio R, for rate, right? k/n, we’ll call it R. And then how many errors can the code correct? Well, if you look at the Hamming distance—so that's the number of positions in which two code words differ—then the bigger that distance, the more mistakes you can make and still be closest to the code word that was sent. So then that's not really an error. Right? So maybe we say the number of mistakes goes up.
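Hamming distance is easy to compute, and the standard fact behind "still be closest to the code word that was sent" is that a code of minimum distance d corrects up to ⌊(d−1)/2⌋ mistakes. A small editorial sketch:

```python
def hamming_distance(u, v):
    """Number of positions in which two equal-length words differ."""
    assert len(u) == len(v)
    return sum(1 for a, b in zip(u, v) if a != b)

def correctable(d_min: int) -> int:
    """With minimum distance d_min, up to floor((d_min - 1) / 2) flipped
    symbols still leave the received word closer to the sent codeword
    than to any other codeword."""
    return (d_min - 1) // 2

d = hamming_distance([1, 0, 1, 1], [1, 1, 0, 1])   # differs in 2 places
```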

EL: Yeah.

JW: So again, let's normalize that minimum distance of the code by dividing by the length of the code. So we have a ratio, let's call that δ. So that's our relative minimum distance for the code. So one way to phrase this is: if we want a certain error-correcting capability, so a certain δ, how efficient can the code be? How big can R be? Okay, so there are a lot of bounds relating R and δ, our information rate and our error-correcting capability, or our relative minimum distance. So the one I want to tell you about is the Gilbert-Varshamov bound.

So the Gilbert-Varshamov bound is from 1952. And it says that there's a sequence of codes, or a family of codes if you want, of increasing length, increasing dimension, increasing minimum distance, so that the rate converges to R and the relative minimum distance converges to δ, and R is at least 1−Hq(δ), where Hq is this entropy function. So you may have heard of the binary entropy function; there's a q-ary entropy function, and that's what Hq(δ) is. So one such sequence is the so-called classical Goppa codes, and I want to say that that's from 1956, so just a little bit later. And those codes were the best-known codes from this point of view for about 30 years. Okay, so let me just say that again. So the Gilbert-Varshamov bound says that there's a sequence of codes with R at least 1−Hq(δ). The Goppa codes satisfy R = 1−Hq(δ). And for 30 years, we couldn't find any codes with R greater than.
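For readers who want to experiment with the bound, the q-ary entropy function and the Gilbert-Varshamov rate are a few lines of Python (an editorial illustration, not code from the episode; function names are ours):

```python
import math

def hq(q: int, x: float) -> float:
    """q-ary entropy function, defined for 0 <= x <= 1 - 1/q."""
    if x == 0.0:
        return 0.0
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def gv_rate(q: int, delta: float) -> float:
    """Gilbert-Varshamov: codes exist with rate R >= 1 - Hq(delta)."""
    return 1 - hq(q, delta)

print(gv_rate(2, 0.1))    # binary codes with relative distance 0.1
print(gv_rate(49, 0.5))   # the q = 49 case discussed below
```

At δ = 1 − 1/q the entropy reaches 1, so the guaranteed rate drops to 0, matching the curve's horizontal intercept described later in the episode.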

EL: That were better than that.

JW: Right. That were greater than this 1−Hq(δ).

KK: Okay.

JW: So people at this point were starting to think that maybe the Gilbert-Varshamov bound wasn't a bound as much as it was the true value of how good R can be given δ, how efficient codes can be given their relative minimum distance. So this is where this Tsfasman-Vladut-Zink theorem comes in. So in 1978—and Kevin, now we can talk about algebraic geometry. I know you’ve been waiting for that.

KK: All right, awesome.

JW: Yes. Right. So in 1978, Goppa defined algebraic geometry codes. So the way that definition works: remember, a code is just a subspace of Fq^n, right? So how are we going to get a subspace of Fq^n? Well, what we're going to do is we're going to take a curve defined over Fq that has a lot of rational points, Fq-rational points, right? So we're going to take one of those points and take a multiple of it and call that our divisor on the curve. And then we're going to take the rest of them. And we're going to take the rational functions in this space L(D). D is our divisor, right? So these are the functions that only have poles at this chosen point, with multiplicity at most the degree that we've chosen.

KK: Okay.

JW: And we're going to evaluate all those functions at all the rest of those points. So remember, those functions form a vector space, and evaluation is a linear map. So what we get out is a vector space. So that's our code. And if we make some assumptions—so if we assume that the degree of that divisor, so that multiplicity that we've chosen, is greater than twice the genus of the curve minus 2—then Riemann-Roch kicks in, and we can compute the dimension of L(D). But if we also assume that that degree is less than the number of points that we're evaluating at, then the map is injective. And so we have exactly what the dimension of the code is. The dimension of the code is the degree of the divisor, so that multiplicity that we chose, plus 1 minus the genus. And the minimum distance, it turns out, is at least n minus the degree of the divisor. So lots of symbols, lots of everything.
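The parameter bookkeeping just described can be packaged in a few lines. This is an editorial sketch of the arithmetic only (it doesn't construct the code itself); the genus-0 case on the projective line recovers the parameters of Reed-Solomon codes:

```python
def ag_code_params(n: int, g: int, deg_d: int):
    """Parameters of a Goppa algebraic geometry code: evaluate L(D) at n
    rational points of a genus-g curve, where deg(D) > 2g - 2 (so
    Riemann-Roch gives the dimension of L(D) exactly) and deg(D) < n
    (so the evaluation map is injective)."""
    assert 2 * g - 2 < deg_d < n
    k = deg_d + 1 - g      # dimension of the code
    d_min = n - deg_d      # minimum distance is at least this
    return k, d_min

# Genus 0 (the projective line): these are Reed-Solomon parameters.
k, d = ag_code_params(n=16, g=0, deg_d=10)   # k = 11, d >= 6
```

Note that k + d = n + 1 − g, so every unit of genus costs one unit in the Singleton-style tradeoff; the payoff is that higher-genus curves can have many more rational points, hence much longer codes.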

EL: Yeah, trying to hold this all in my mind, without you writing it on the board for me!

JW: I know, I’m sorry. But when you put it all together, and you normalize out by dividing by the length, what you get is that if you have a family of curves with increasing genus, and an increasing number of rational points, then we can end up with a family of codes, so that in the limit, R, our information rate, is at least 1−δ—that’s that relative minimum distance—minus the limit of the genus divided by the number of rational points. Okay. So g [the genus] and n are both growing. And so what's that limit? So that was Goppa’s contribution. I mean, not his only contribution. But that's the contribution of Goppa I want to talk about, just that definition of algebraic geometry code. So it's a pretty cool definition. It’s a pretty cool construction. It’s kind of brand new in the sense that nobody was using algebraic geometry in this very engineering-motivated piece of mathematics.

EL: Right.

JW: So here is algebraic geometry, here is a way of defining codes, and the question is, are they any good? And it really depends on what—how fast can the number of points grow, given how fast the genus is growing? So what Drinfeld and Vladut proved—so this is not the TVZ theorem, not my favorite theorem, but one more theorem to get there—Drinfeld and Vladut proved that if you define Nq(g) to be the maximum number of Fq-rational points on any curve over Fq of genus g, then as you let g go to infinity, and for a fixed q, the limit superior, the lim sup, of the ratio Nq(g)/g is at most √q − 1. In other words, that limit of g/Nq(g) can never be smaller than 1/(√q − 1). Okay, fine. Why do we care? Well, the reason we care is that the Tsfasman-Vladut-Zink theorem, which is again my favorite theorem, it says—so actually, my favorite theorem is a corollary of the Tsfasman-Vladut-Zink theorem. So the Tsfasman-Vladut-Zink theorem says that if q is a square prime power, then there's a sequence of curves over Fq of increasing genus that meets the Drinfeld-Vladut bound.

EL: Okay.

JW: Okay, so the Drinfeld-Vladut bound said you can be at most this good. And Tsfasman-Vladut-Zink says, hey, you can do that.

EL: Yeah, it's sharp.

JW: So if we put it all together, then the Gilbert-Varshamov bound gave us this curve, right? So it was a concave-up curve that intersects the vertical axis, which is the R-axis, at 1 and the horizontal axis, which is the δ-axis, at 1−1/q. So it's this concave-up thing that's just kind of curving out. Then the Tsfasman-Vladut-Zink line—the theorem gives you a line that looks like R = 1 − δ − 1/(√q − 1). Right? So it's just a line of slope −1, right, with y-intercept 1 − 1/(√q − 1). So the question is, does that line intersect that curve? And it turns out that if you have a square prime power q at least 49, then the line intersects the curve in two points.
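The intersection claim is easy to check numerically. A self-contained editorial sketch (the entropy helper is restated so the snippet stands alone; function names are ours):

```python
import math

def hq(q, x):
    """q-ary entropy function for 0 < x <= 1 - 1/q."""
    return (x * math.log(q - 1, q) - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def gv_rate(q, delta):
    """The Gilbert-Varshamov curve R = 1 - Hq(delta)."""
    return 1 - hq(q, delta)

def tvz_rate(q, delta):
    """The Tsfasman-Vladut-Zink line: slope -1,
    y-intercept 1 - 1/(sqrt(q) - 1)."""
    return 1 - delta - 1 / (math.sqrt(q) - 1)

# For q = 49 the line pokes above the GV curve in an interval around 1/2:
print(tvz_rate(49, 0.5) > gv_rate(49, 0.5))   # True
# For the smaller square q = 25 it does not (checked at delta = 1/2):
print(tvz_rate(25, 0.5) > gv_rate(25, 0.5))   # False
```

Wherever the line sits above the curve, the TVZ codes are strictly better than anything Gilbert-Varshamov guarantees, which is exactly the 30-year surprise Walker describes next.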

EL: Okay.

JW: So what that is really doing for us is it's telling us that in that interval between those two points, we have an improvement on the Gilbert-Varshamov bound. We have better codes than we thought were possible for 30 years.

EL: Wow!

JW: Yes. So that's my, that's my favorite theorem.

KK: I learned a lot.

EL: And where did you first encounter this theorem?

JW: In graduate school? Okay, in graduate school, which was not in 1982. It was substantially after that, but it was said to me by my advisor, “I think there's a connection between algebraic geometry and coding theory, go learn about that.”

KK: Oh.

JW: And I said, “Okay.”

KK: And so two years later.

JW: Right. Right, right. Actually, two years later, I graduated.

KK: Okay. All right. So you’re much faster than I am.

JW: Well, there was four years before that of doing other things.

EL: So was it kind of a love-at-first-sight theorem?

JW: Very much so. Because I mean, it's just so beautiful, right? Because here's this problem that nobody knew how to solve, or maybe everybody thought was solved. Because nobody had any techniques that could get any better than the Gilbert-Varshamov bound. And then here's this idea, just way out of left field saying, hey, let's use algebraic geometry to find some codes. And then, hey, let's look at curves with many points. And hey, that ends up giving us better codes than we thought were possible. It's really, really pretty. Right? It's why mathematicians are better than electrical engineers.

EL: Ooh, shots fired!

JW: Gauntlet thrown. I know.

EL: But it does make you wonder how many other things in math will eventually find something like this—like, will we find for these problems—you know, factoring integers or things like this—that we think are difficult, will someone swoop in with some completely new thing and turn it on its head?

JW: Yes. Exactly. I mean, I don't know anything about it. Maybe you do. But the idea that algebraic topology, right, is useful in big data.

KK: Yeah, sure. That's what I've been working on lately. Yeah. Right.

JW: I love that.

KK: Yeah. Sure.

JW: I love that. I don't know anything about it. But I love it.

KK: Well, the mantra is data has shape. Right? So let me just, you know, smack the statisticians here. So they want to put everything on a straight line, right? But a circle isn't a straight line. So what if your data’s a circle? So topology is very good at finding circles.

JW: Nice.

KK: Well, that's the mantra, at least. So yeah. All these unexpected connections really do come up. I mean, it's really—that’s part of why we keep doing what we're doing, right? I mean, we love it. But we never know what's out there. It's, you know, to boldly go where no one has gone before. Right?

JW: Exactly. And Evelyn, it's funny that you should bring up factoring integers, because you know that the form of cryptography that we use today to make it safe to use our credit cards on the internet, that’s very much at risk when quantum computers are developed.

EL: Right.

JW: And so, it turns out that algebraic geometry codes are not being used in practice, because LDPC codes and turbo codes are much more easily implementable. However, one of the very few known so far unbreakable methods for post-quantum cryptography is based on algebraic geometry codes.

KK: Excellent.

EL: Nice.

JW: So even if we can factor integers,

KK: I can still buy dog food at Amazon. Right?

JW: You can still shop at Amazon because of algebraic geometry codes.

EL: Yeah, the important things.

KK: That’s right.

EL: Well, so another thing we like to do on this podcast is invite our guests to pair their theorem with something, the way we would pair food with fine wines. So what have you chosen for this theorem?

JW: So that was very hard. Yeah. I mean, it's just kind of the most bizarre request.

EL: Yeah.

JW: So I mean, I guess the way that I think about this Tsfasman-Vladut-Zink theorem, I was looking for something that was just, you know, unexpected and exciting and beautiful. But I couldn't come up with anything. And so instead, what I'm going with is lemon zest.

KK: Okay.

EL: Okay.

JW: Which I guess can be unexpected and exciting in a dessert, but also because of the way that you just kind of scrape it off that curve of the lemon. And that's what the Tsfasman-Vladut-Zink theorem is doing, is it’s scraping off a little bit of that Gilbert-Varshamov curve.

KK: This is an excellent visual. I've got it. I zest lemons all the time. I understand now. This is it.

EL: Yeah.

JW: There you go.

KK: So all right. Well, we also like to give our guests a chance to plug anything. You wrote a book once. Is that still right? I have it on my shelf.

JW: Yeah. I did write a book once. So that book actually was—Yeah, so I wasn't going to plug anything, but I will plug the book a little bit, but more I'm going to plug a suite of programs. So the book is called, I think, Codes and Curves.

KK: That sounds right.

JW: You would think I would know that.

KK: I’d have to find it. But it is on my shelf.

JW: Yes. It's on mine too, surprisingly, which is right behind me, actually, if you have the video on.

So that book really just grew out of lecture notes from lectures I gave at the Program for Women and Mathematics at the Institute for Advanced Study. Okay, so I will take my opportunity to plug something to plug that program, to plug EDGE, to plug the Carleton program, to plug the Smith post-bac program, and to plug the Nebraska Conference for Undergraduate Women in Mathematics. So what do all these programs have in common? They have in common two things that are closely related. One is that they are all programs for women in mathematics. And the other is that they were all the subject of study of a recent NSF grant that I had with Ami Radunskaya and Deanna Haunsperger and Ruth Haas that studied what are the most important or effective aspects of these programs and how can we scale them?

EL: Oh, nice.

JW: Yes. And some of the results of that study, along with a lot of other information, are on our website. That is womendomath.org.

EL: I will be visiting it as soon as we get off this phone call.

JW: Right. Awesome. I hope it's functioning.

KK: And because Judy won't promote herself, I will say, you know, she's been a significant leader in promoting programs for women in mathematics through the University of Nebraska’s math department there. There's a picture of her shaking Bill Clinton's hand somewhere.

JW: Well, that's also on my shelf. Okay. Yeah, I think it's online somewhere, too.

KK: Right. Their program won a national excellence award from the President. Really excellent stuff there at the University of Nebraska. Really a model nationally.

EL: Yeah, I’m familiar with that as one of the best graduate math programs for women.

JW: Thank you.

EL: Yeah. Great job!

EL: Yeah, well, we'll have links to all of those programs on the website. So if you didn't catch one, and you're listening, you can go to the website for the podcast and find all those. Yeah. Well, thank you so much for joining us, Judy.

JW: Thank you for the opportunity.

KK: Yeah, this has been great fun. Thanks.

JW: All right. Thank you.

On this episode, we were happy to talk with Judy Walker, who studies coding theory at the University of Nebraska. She told us about her favorite theorem, the Tsfasman-Vladut-Zink theorem. Here are some links to more information about topics we mentioned in the episode.


Goppa (algebraic geometry) code

Hamming distance

Gilbert-Varshamov bound

Judy Walker’s book Codes and Curves

The Program for Women and Mathematics at the Institute for Advanced Study

EDGE 

The Carleton Summer Mathematics Program for women undergraduates

The Smith College post-baccalaureate program for women in math

The Nebraska Conference for Undergraduate Women in Mathematics (Evelyn will be speaking at the conference in 2020)

WomenDoMath.org

Episode 46 - Adriana Salerno

Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast where there's no quiz at the end. I’m coming up with a new tagline for it.

Kevin Knudson: Good.

EL: I just thought I'd throw that in. Yeah, so I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer from Salt Lake City—or in Salt Lake City, Utah, not originally from here. And here's your other host.

KK: I’m Kevin Knudson, professor of mathematics at the University of Florida in Gainesville, but not from Gainesville. This is part of being a mathematician, right? No one lives where they're from.

EL: Yeah, I guess probably a lot of professions could say this, too.

KK: Yeah, I don’t know. It’s also a sort of a generational thing, right? I think people used to just tend to, you know, live where they grew up, but now not so much. But anyway.

EL: Yeah.

KK: Oh, well, it's okay. I like it here.

EL: Yeah. I mean, it's great here right now it's spring, and I've been doing a ton of gardening, which always seems like such a chore and then I'm out smelling the dirt and looking at earthworms and stuff, and it's very nice.

KK: I’m bird watching like crazy these days. Yesterday, we went out and we saw the bobolinks were migrating through. They're not native here, they just come through for, like, a week, and then they're gone.

EL: The what?

KK: Bobolinks, B-O-B-O-L-I-N-K. They kind of fool you, they look a little bit like an oriole, but the orange is on the wrong side. It's on the back of the neck instead of underneath.

EL: Okay, I'll have to look up a picture of that later.

KK: And then this morning for the first time ever, we had a rose-breasted grosbeak at our feeder. Never seen one before and they're not native around here, they just migrate through. So this is

EL: Very nice. Yes.

KK: This is what I'm doing in my late middle age. This is what I do. I just took up bird watching, you know?

EL: Yeah. Well, I can see the appeal.

KK: Yeah, it's great.

EL: Yes. But we are excited today to be talking with Adriana Salerno. Do you want to introduce yourself?

Adriana Salerno: Hi. Yeah, I'm Adriana Salerno. Now I am an associate professor of math at Bates College in Maine. And I am also not from Maine. I live in Maine. I'm originally from Caracas, Venezuela, so quite a ways away.

EL: Yeah.

AS: Again, you don't choose where you live, but maybe you get to choose where you work. So that's nice.

EL: Yeah. And you're not only a professor there, but you're also the department chair right now, right?

AS: Oh, yeah. Yeah, I'm trying to forget. No, I’m kidding.

EL: Sorry!

KK: You know, speaking of, before we started recording here, I spent my afternoon writing annual faculty evaluations. I'm in my first year as chair. I have 58 of them to write.

AS: Oh, I don't have to do those, which I'm very happy about. But we are hiring a staff position, and I'm in charge of that. And that's been a lot.

EL: And we actually met because both of us have done this mass media fellowship for people interested in math or science and writing. And so you've done a lot of writing not for mathematicians as well, throughout your career path.

AS: Yeah, yeah. I mean, I did the mass media fellowship in 2007. And since then, I've been trying to write more and more about mathematics for a general audience. These days, I mostly spend time writing for blogs for the AMS. And right now I'm editing and writing for inclusion/exclusion. I wish I had more time to write than I do. It's one of those things that I really like to do, and I don't think I do enough of, but these opportunities are great because I get to use those—or scratch that itch, I guess, by talking to you all.

EL: Yes.

KK: Well, so speaking of, we assume you have a favorite theorem that you want to tell us about. What is it?

AS: Well, so it's always hard to decide, right? But I guess I was inspired by a conversation I had with Evelyn at the Joint Math Meetings. So I've decided my favorite theorem is Cantor's diagonalization argument that the real numbers do not have the same cardinality as the natural numbers.

EL: Yes, and I’m so excited about this! Ever since we talked at the Joint Meetings, I’ve been very excited about getting you to talk about this.

AS: Good. Good.

EL: Because really, it’s such a great theorem.

AS: Yeah. Well, I was thinking about it today. And I'm like, how am I going to explain this? But I have chosen that, and I'm sticking with it. Yeah.

EL: Yes.

KK: Good.

AS: So yeah, it’s—one of the coolest things about it is that it’s sort of this first experience that you have, as a math student—at least it was for me—where you realize that there are different sizes of infinity. And so another way of saying that is that this theorem shows, without a doubt, I believe—although some students still doubt me after we go over it—that you can have different sizes of infinity. And so the first step, even, is to say, “How do you decide if two things have the same size of infinity?” Right? And so it's a very, very lovely sort of succession of ideas. And so the first thing is, how do you decide that two things are the same size? Well, if they're finite, you count them, and you see that you have the same number of things. But even when things are finite—and say, you're a little kid, and you don't know how to count—another way of saying there's the same number of things is if you can match them up in pairs, right? So you know, if you want to say I have the same number of crayons as I have apples, you can match a crayon to an apple and see that you don't have anything left over, right?

EL: Yeah.

AS: And so it's just a very natural idea. And so when you think about infinite sets—or not even infinite sets—you can think of this idea of size by saying two things are the same size if I can match every element in one set to every element in another set, just one by one. And so I really like—I'm borrowing from Kelsey Houston-Edwards’ PBS show—what I really liked is that she said you have two sets, and every element has a buddy, right? I love that language, and so I'm borrowing it from her. That works for finite sets, but you can extend it to infinite sets. You can say, for example, that two infinite sets are the same size if I can find a matching between every element in the first set and every element in the second set. It’s very hard to picture in your head, I think, but we're going to try to do this. So for example, you can say that the natural numbers, the counting numbers, 1, 2, 3, 4, etc., have the same size as the even numbers, because you can make a matching where you say, “Match the number 1 with the number 2 on the other side. And then the number 2 with the number 4 on the other side.” And you have all the counting numbers, and for every counting number, you have two times that number as the even buddy.
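The matching Salerno describes can be sketched in a few lines of Python (a toy illustration, not anything from the episode): pair each counting number n with the even number 2n. The pairing is reversible, so every element on each side has exactly one buddy.

```python
# Pair each natural number n with the even number 2n.
def buddy(n):
    return 2 * n

# The matching is reversible, so every even number also has a buddy.
def buddy_inverse(m):
    assert m % 2 == 0
    return m // 2

# Check the pairing on the first few naturals: nothing is left over.
pairs = [(n, buddy(n)) for n in range(1, 6)]
print(pairs)  # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
assert all(buddy_inverse(e) == n for n, e in pairs)
```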

EL: Yeah. And I think this is, it's a simple example that you started with, but it even hints at the weirdness of infinity.

AS: Yeah.

EL: You’ve got this matching, but the even numbers are also a subset of the natural numbers. Ooh, things are going to get a little weird here.

KK: Clearly, there aren’t as many even numbers, right?

AS: Yeah.

KK: This is where you fight with your students all the time.

AS: That’s exactly—so when you're teaching this, the first thing you do is talk about things that have the same cardinality. And then everybody—it can take a while, you know, like, infinity is so weird that you can actually do these matchings. So Hilbert’s infinite hotel is a really great way of doing this sort of more conceptually. So you have infinitely many rooms. And so for example, suppose the rooms are numbered 1, 2, 3, and so on. Yes, you have to be careful because infinity is not a number. You have to be careful with that. But say that all the rooms are occupied. And so then, you know, say someone shows up in the middle of the night, and they say, “I need a room.” And so what you do if you're the hotel manager is you tell everyone to move one room over. And so everyone moves one room over, and you put this person in room number one. And so that's another way of seeing that. So the one-to-one pairing, or the matching here, is every person has a room. And so the number of rooms and the number of people are the same—the word is cardinality, because you don't want to say number, because you can't count that.
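Hilbert's hotel trick, sketched in Python for the first few rooms (a finite toy, of course; the real hotel is infinite): every guest moves from room n to room n + 1, which frees up room 1.

```python
# Hilbert's hotel: every room n is occupied, and a new guest arrives.
# Each guest moves from room n to room n + 1, freeing room 1.
def new_room(old_room):
    return old_room + 1

# Simulate the first few rooms of the (infinite) hotel.
guests = {room: f"guest {room}" for room in range(1, 6)}
after = {new_room(room): g for room, g in guests.items()}
after[1] = "new guest"
print(sorted(after.items()))
assert after[1] == "new guest" and after[2] == "guest 1"
```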

KK: Right.

AS: And so you say cardinality instead. But it's really weird, right? Because the first time you think about this, you say, “Well, you know, there's infinity, and there's infinity plus one.” That's the kind of thing that you would say as a kid, right? And they're the same! When you have the natural numbers and the natural numbers plus one extra thing, or like with zero, for example—unless you're in the camp that says zero is a natural number—but we're not going to get to that discussion right now.

KK: I’m camp zero is a natural number.

AS: Okay. I feel like I know maybe half the people who say zero is a natural number, and the other half say it's not. And I don't think anyone has good arguments other than, ah, it must be true! And so then the cool thing is, once you start doing that, then you start seeing, for example—and these are kind of tricky examples, it can get tricky—like, you can say that the integers—the positive whole numbers, the negative whole numbers, and zero—that also has the same cardinality as the natural numbers. Because you can just start with zero—I mean, basically, when you want to say that something has the same cardinality as the natural numbers, what you're really trying to do is to find a buddy, so you're trying to pair someone with one, or two, or three. But really, what you can do is just list them in order, right? Like you can have the first one, the second one, the third one, the fourth one, and you know that that's a good matching. It's like the hotel. You can put everyone in a room. And then you know they're the same number. Everyone has a room. So with the integers, for example, the whole numbers, positive, negative, and zero, then you can say, “Okay, put zero first, then one, then negative one, then two, then negative two, then three, then negative three,” and then they're the same size, right? And so once you start thinking about this—I remember this pretty clearly from college—once you start thinking about this, then you're like, “Well, obviously, because infinity is infinity.” That’s the next step. So the first step is like, well, no, infinity plus one and infinity are different. But then you get convinced that there is a way of matching things where you can get things that seem pretty different, or a subset of a set, and they have the same cardinality.
And then you go the other direction, which is “Well, of course, anything infinite is going to be the same size as anything else that's infinite.” And so then it turns out that even the rationals are the same size as the natural numbers. And that's way more complicated than we have time for. But if you add real numbers, meaning irrationals as well, then you have a whole different situation.
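The back-and-forth listing of the integers (0, 1, −1, 2, −2, …) can be written as an explicit rule, sketched here in Python: position 2k on the list holds k, and position 2k + 1 holds −k, so every integer gets exactly one position.

```python
# List the integers in the order 0, 1, -1, 2, -2, 3, -3, ...
# so that every integer gets a position among the natural numbers.
def integer_at(position):  # position = 1, 2, 3, ...
    half, odd = divmod(position, 2)
    return half if odd == 0 else -half  # even positions -> k, odd -> -k

print([integer_at(k) for k in range(1, 10)])  # [0, 1, -1, 2, -2, 3, -3, 4, -4]

# The first 100 positions hit 100 distinct integers: no repeats, no one left out.
assert len({integer_at(k) for k in range(1, 101)}) == 100
```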

KK: You do indeed.

AS: It’s mind-blowing, right? And so if you just think about the real numbers between zero and one—so let's go real simple. I mean, small, relatively. So you're just looking at decimal expansions. And so if those numbers had the same cardinality as the natural numbers, then you should be able to have a first one and a second one and a third one and a fourth one. Or you can pair one number with the number 1, one number with the number 2, etc. And that list should be complete, and in the words of Kelsey Houston-Edwards, everyone should have a buddy. And so then, here's the cool thing. This is a proof that these two sizes of infinity are not the same, and it's a proof by contradiction, which is, again, your favorite proof when you are learning how to prove things. I mean, when I was learning proofs, I wanted to do everything by contradiction. So proving something by contradiction means you want to assume, “Well, what if we can list all the real numbers?” There’s a first one, a second one, a third one, etc. So Cantor’s amazing insight was that you can always find a number that was not on that list. Every time you make this list—a first one, a second one, a third one—there is some missing element.

And so you line up all your decimals. So you have the first number in decimal. And so you have, like, you know, 0.12345… or something like that. And then you have the next one. And the next one. And, I mean, this is really hard to do verbally, but we're going to do it. And so you sort of line them up, and you have infinite decimals. So you have point, a whole bunch of decimals, point, a whole bunch of decimals. And so you can make a missing number by taking that first number in the first decimal place and just changing that number. Okay, so if it was a 1, you write down a 2. And so you know—because we’ve known how to compare decimals since we were little kids—that what you need to compare is decimal place by decimal place. So these are different because they're different in this one spot, right? And then you go to the second number, and the second decimal place. And then you say, “Well, whatever number I see there, I'm going to make the second decimal place of my new number different.” So if you had a 3, you change it to a 4—whatever it is, as long as it's not the original number. And this is why it's called the diagonalization argument, or the diagonal argument, because you have lined all those numbers up, and you can go through the diagonal, and at each decimal place, you just change the value. And what you're going to get is a number—another real number, infinitely many decimals—and it's going to be different from every number on your list, just by virtue of how you made it. And so then, what that shows is that the answer to the “what if” is: you can’t. The “what if” is, if you have a list of all real numbers, it's not complete. So there is never going to be a way that you can make that list complete. And this is the part where every time I tell my students, at some point, they're like, “Wait, there are different sizes of infinity?
What?” Then—and that’s sort of lovely, because it's just this mind-blowing moment where you've convinced yourself, by the way, that infinity is infinity, and then you realize that there's something bigger than the cardinality of the natural numbers. And then it's really fun when you tell them, “Well, is there something in between?” They’re like, “Of course! There must be!” And then you're like, “Wait, no one knows.”
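The diagonal construction itself is mechanical enough to write down, here in Python on a finite prefix of a purported list (the real argument runs down the whole infinite diagonal): change the k-th digit of the k-th number, avoiding 0 and 9 to sidestep the 0.0999… = 0.1000… ambiguity.

```python
# Cantor's diagonal trick: given (a finite prefix of) a list of decimal
# expansions, build a number that differs from the k-th entry in its
# k-th decimal place, so it cannot appear anywhere on the list.
def missing_number(digit_rows):
    # digit_rows[k] is the list of decimal digits of the k-th number.
    new_digits = []
    for k, row in enumerate(digit_rows):
        d = row[k]
        new_digits.append(2 if d == 1 else 1)  # any choice != d works; avoid 0 and 9
    return new_digits

rows = [
    [1, 2, 3, 4],
    [5, 5, 5, 5],
    [0, 0, 0, 0],
    [9, 8, 7, 6],
]
diag = missing_number(rows)
print(diag)  # [2, 1, 1, 1] -- differs from row k in place k
assert all(diag[k] != rows[k][k] for k in range(len(rows)))
```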

KK: Maybe not.

EL: Yeah.

AS: So yeah, I just love that argument. And I love how simple it is. And at the same time, it's, simple, but it's very, very deep, right? You really have to understand how these numbers match up with each other. And it requires a big leap of imagination to just think of doing this and realizing that you could make a number that was not on this infinite list by just doing that simple trick.

EL: Yeah.

AS: And so I just think it's a really, really beautiful theorem. And then I also have a really personal connection to this theorem. But it's one of my favorite things to teach. And I'm going to be teaching it this term, and I’m really looking forward to seeing how that lands. Sometimes it lands really well. Sometimes people are like, “Eh, you’re just making stuff up.” Yeah.

EL: Yeah.

KK: Well, then you can really blow their minds then when you show them the Cantor set, right?

AS: Yeah, yeah.

KK: And say, “Well, look, I mean, here's this subset of the reals that has the same cardinality, but it's nothing.”

AS: Exactly. Yeah, there's nothing there. Yeah.

EL: Yeah. I remember, then, when I first saw this argument, really carefully talking myself through, “Like, okay, but what if I just added that number I just made to the end of the list? Why wouldn't that work?” And trying to go through, like, “Why can't I—Oh, and then there must be other numbers that don’t fit on the list either.” It's not like we got within 1 of being the right cardinality.

AS: Right.

EL: For these infinite number. So yeah, it's a really cool idea. But you said you had some personal connections to this. So do you want to talk more about those?

AS: Sure. So I am from Venezuela, and I went to college there. And I liked college, it was fine. I knew—well, one thing that you do have to decide when you're a student in high school is—you don't really apply to college, you apply to a major within the college. And so then I knew I wanted to do math. And I signed up for math at a specific university. And so then the first year was very similar to what you would do in the States, which is sort of this general year where everybody's taking calculus, or—you have some subset of things that everybody takes. And then your second year, you start really going into the math major. And so this was my first real analysis class. This was my first serious proof-y class in my university. And we learned Cantor’s diagonalization argument, which was pretty early. But I loved this argument. I felt so mind-blown. You know, I was like, “This is why I want to do math.” I was just so excited. And I knew I understood everything. And so I took the exam, and I got a horrible grade. And in particular, I got zero points on the “prove that the reals and the natural numbers don't have the same cardinality.” And so I saw my exam, and I was really confused. And I went to the professor, and I said, “I really don't understand what's wrong with this problem. Could you help me understand?” Because I thought I understood this. And then—you know, that's a typical thing. I probably said it in a more obnoxious way than I remember now. But I felt like I was being pretty reasonable. I was not the kind of kid that would go up to my professors too often to ask for points. I really was like, “I don't know what I did wrong.” And especially because I felt like I really got it.

EL: Right.

AS: And so then he just looked at me and said, “If you don't understand what's wrong with this problem, you should not be a math major.” And that was it. That was the end of that conversation. Well, I still don't know what's wrong with this problem, and now you've just told me I need to do something else, to go do something else at a different school, right? And I mean, I don't know that that was particularly sexist. But I do know that I was the only woman in that class, and I know that I felt it a lot. I really do think that he in particular would have said that to any student. I don’t think it was just me being female that affected that at all. But I do think that if I had been less stubborn about my math identity, I might have taken him up on that. But I was just like, “No, I'm going to show you!” And eventually I got an A in his class. He taught real analysis every semester, so I had to take the class with him every time, and at some point I cracked his code. And at some point he respected me and thought I deserved to be there. But he was just very old-fashioned. You know, I don't think it's even sexism. It's just very, very, like, this is how we do things. And then eventually, I did talk to someone. I think it was a teaching assistant. And I was like, “I don't know what's wrong with this problem.” And he looked at it. And he said, “Well, here's the problem. When you were listing—so you needed to list all these generic numbers and their decimal expansions.” And I did: “Okay, the first number is point A1, A2, A3, etc. The second number is point B1, B2, B3, etc. The third one is point C1, C2, C3, etc., dot dot dot,” right? And he said, “You have listed 26 numbers. And that's not going to be an infinite list.” Right?

KK: That’s cheap.

AS: And I was just like, “Okay, but I got the idea, right?” I was like, “Okay, it's true.” He’s like, “The way you wrote it is incorrect.” And I'm like, sure.

EL: Sort of.

KK: I’ve written that same thing on a chalkboard.

AS: You know, this shows you—like, fine, you can be more careful, you can be more precise, but from this, you shouldn't be a math major? That’s pretty intense.

EL: Yeah.

AS: And I knew the mechanics, I knew what was supposed to be happening, I knew how to make the missing number, right? Like you just need A1: you change it to some other number. B2: you change it to some other number. C3: you change it to some other number. And so, I just thought—I mean, that was a moment where I was just literally told I should not be in math because I made a silly mistake. And it was a moment where I realized that—now looking back, I realize my math identity was pretty strong, because I just said, “Well, I'll ask someone else to see what was wrong, and I'm not going to ask this guy anymore, because it's clear what he thinks.”

EL: Yeah.

AS: And sort of the stubbornness of, “Well, I’ll show him that I do deserve to be here.” But I think of all the students who might have taken classes with him, who would have heard that and then been like, “Yeah, maybe I need to do something else.” I mean, it just makes me really sad to hear, especially now that I'm a professor, and teaching these kinds of things. It just makes me sad to see which people were just scared away by someone like that, you know?

EL: Yeah.

AS: So that was a big moment for me. Yeah.

EL: Yeah. Quite a disproportionate response to what’s basically a bookkeeping difficulty.

AS: Yeah.

EL: So, you know, we like to get our mathematicians to pair their theorems, with something on this show. And what have you chosen as your pairing for Cantor's diagonalization argument?

AS: Well, now that you suggested music and other things, I'm maybe changing my mind.

EL: You could pair more than one thing.

AS: I was trying to find something that was just like—I need to sort of express the sort of mind-blowing nature of this, right? And so I was like, a tequila shot! You know, really just strong. And like, “Whoa, what just happened?” And so that was one thing that I thought about. And then—I don't know, just mind-blowing experiences, like, when I saw the Himalayas from an airplane, or when—you know, there are some moments where you're just like, “I can't believe this exists.” I can't believe this is a thing that I get to experience. So I guess, you know, there's been—most of these have been with traveling, where you just see something that you're just like, “I can't believe that I get to experience this.” And so I think Cantor's diagonalization argument is something like that, like seeing this amazing landscape where you're just like, “How does this even exist?”

EL: Yeah, I like that. I mean, I've had that experience looking out of airplane windows too. One time we were flying by the coast of Greenland, and there are these fjords there. Of course, an airplane window is tiny, and it's not exactly high-definition picture quality out of the thick plastic there, but it just took my breath away.

AS: Yeah.

EL: Yeah, I like that. And we can even invite our listeners to think of their own mind-blowing favorite experiences that they've had. Hopefully legal experiences in their jurisdiction.

KK: Well, oh wait, it's not 4/20 anymore. Oh, well. So we also like to invite our guests to plug anything they want to plug. So you write for the AMS, the inclusion/exclusion blog, are there other places where we might find your mathematical writing for the general public?

AS: Well, that's my main plug and outlet right now. But I do write for the MAA Focus magazine sometimes, and sometimes the AWM newsletter. So you might find some of my writing there. And the blog. I mean, again, now that I'm chair and doing a lot of other things, I'm not writing as much, but I definitely like to—I’ve gotten really into—maybe this is a weird plug, but I've gotten really into storytelling.

EL: Oh yeah, you’ve been on Story Collider?

AS: Yeah, I was on one Story Collider. I've done some of the local stuff. But you can find me on the internet telling stories about being a mathematician. Some of them about some pretty fantastic experiences, and some not so great experiences.

EL: Yeah. Okay. Yeah. Well, we'll link to your Twitter, and that can help people find you too.

AS: Oh, yeah. Cool.

EL: Thanks a lot for joining us.

AS: Yeah. Thanks for having me and listening to me ramble about infinity.

EL: Oh, I just love this theorem so much.

KK: Yeah, we could talk about infinity all day. Thanks, Adriana.

AS: Yeah. Thank you so much.

We were excited to have Bates College mathematician Adriana Salerno on the show. She is also the chair of the department at Bates and a former Mass Media Fellow (just like Evelyn). Here are some links you might enjoy along with this episode.


Salerno's website

Salerno on Twitter
AAAS Mass Media Fellowship for graduate students in math and science who are interested in writing about math and science for non-experts
Hilbert’s Infinite Hotel
Evelyn’s blog post about the Cantor set
Salerno’s StoryCollider episode
The inclusion/exclusion blog, an AMS blog about diversity, inclusion, race, gender, biases, and all that fun stuff

Episode 45 - Your Flash Favorite Theorems

Kevin Knudson: 1-2-3

Kevin Knudson and Evelyn Lamb: Welcome to My Favorite Theorem!

KK: Okay, good.

EL: Yeah.

[Theme music]

KK: So we’re at the JMM.

EL: Yeah, we’re here at the Joint Math Meetings. They’re in Baltimore this year. The last time I was at the Joint Meetings in Baltimore I got really sick, but so far I seem to not be sick.

KK: That’s good. You’ve only been here a couple of days, though.

EL: Yeah. There’s still time.

KK: Yeah, so I’ve only been to the Joint Meetings one other time in my life, 20 years ago as a postdoc in Baltimore. I’ve just got a thing for Baltimore, I guess.

EL: Yeah, I guess so.

KK: So people may have seen this on Twitter. Fun fact: this is our first time meeting in person.

EL: Yeah.

KK: And you’re every bit as charming in real life as you are over video.

EL: And you’re taller than I expected because my first approximation of all humans is that they are my height, and you are not my height.

KK: But you’re not exceptionally short.

EL: No.

KK: You’re actually above average height, right?

EL: I’m about average for a woman, which makes me below average for humans.

KK: Well, if we’re going to the Netherlands, for example, I’m below average for the Netherlands.

EL: Yes.

KK: So I’m actually leaving today. I was only here for a couple of days. I was here for the department chairs workshop. You’re here through when?

EL: I’m leaving on Friday, tomorrow. Yeah, while we’ve been here we’ve been collecting flash favorite theorems where people have been telling us about their favorite theorem in a small amount of time. So yeah, we’re excited to share those with you.

KK: Yeah, this is going to be a good compilation. I’m going to try to get a couple more before I leave town. We’ll see what happens.

EL: Yeah. All right.

KK: Enjoy.

EL: I am here with Eric Sullivan. Can you tell us a little bit about yourself?

Eric Sullivan: Yeah, I'm an associate professor at Carroll College in Helena, Montana, lover of all things mathematics.

EL: And here with me in the Salt Lake City Airport, I assume catching a connecting flight to the Joint Math Meetings.

ES: You got it.

EL: All right, and what is your favorite theorem, or the favorite theorem you'd like to tell me about right now?

ES: Oh, I have many favorite theorems, but the one that's really coming to mind right now, especially since I'm teaching complex analysis this semester, are the Cauchy-Riemann equations.

EL: Very nice.

ES: Giving us a beautiful connection between analytic functions, and ultimately, harmonic functions. Really lovely. And it seems like a mystery to my students when they first see it, but it's beautiful math.

EL: Yeah, it is. They are kind of mysterious, even after you've seen them for a while. It's like, why does this balance so beautifully?

ES: Right? And the way you get there with the limit: so I'm just going to take the limit going one way, then I’ll take the limit going the other way, and voilà, out come these beautiful partial differential equations.
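For readers following along on paper, the equations Sullivan is describing are these (writing f(x + iy) = u(x, y) + iv(x, y), and taking the two limits he mentions, one along the real axis and one along the imaginary axis):

```latex
% Limit along the real axis:       f'(z) = u_x + i\,v_x
% Limit along the imaginary axis:  f'(z) = v_y - i\,u_y
% Equating real and imaginary parts gives the Cauchy-Riemann equations:
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y},
\qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.
```

Differentiating once more and combining the two equations gives u_xx + u_yy = 0 (and likewise for v), which is the connection to harmonic functions he mentions.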

EL: Yeah, very lovely. And I know I'm putting you on the spot. But do you have a pairing for this theorem?

ES: Ooh, a pairing? Oh boy, something with a very complex taste. Maybe chili.

EL: Okay.

ES: I’ll say chili because there's all sorts of flavors mixed in with chili, and complex analysis seems to mix all sorts of flavors together.

EL: All right, I like it. Well, thank you. This is the first lightning My Favorite Theorem I'm recording so far at the Joint Meetings, or even before, on the way, so yeah, thanks for joining me.

Courtney Gibbons: I'm Courtney Gibbons. I'm a professor at Hamilton College in upstate New York. And my favorite theorem is Hilbert’s Nullstellensatz, which translates to “zero point theorem,” but if you run it through Google Translate, it's actually quite beautiful. It's like the “empty star theorem” or something like that. It's very astronomical. And I love this theorem because it's one of those magical theorems that connect one area that I love, algebra, to another area that I don't really understand, but would like to love, geometry. And I find that in my classes, when I ask someone, “What's a parabola?” I have a handful of students who do some sort of interpretive dance. And I have a handful of students who are like, “Oh, it's like y equals some x squared stuff.” And I'm like, “I'm with you.” I think of the equation. And some people think of the curve, the plot, and that's the geometric object, and the Nullstellensatz tells you how to take ideals and relate them to varieties. So it connects algebra and geometry. And it's just gorgeous, and the proof is gorgeous, and everything about it is wonderful, and David Hilbert was wonderful. And if I were going to pair it with something, I’d probably pair it with a trip to an observatory, so that you could go appreciate the beauty of the stars, and think about the wonderful connectedness of all of mathematics and the universe. And maybe you should have, like, a beer or something too.
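In symbols (the standard statement, not from the episode): over an algebraically closed field k, the Nullstellensatz says that for any ideal J in k[x_1, …, x_n],

```latex
% The ideal of the vanishing set of J recovers J up to radicals:
I\bigl(V(J)\bigr) = \sqrt{J}
```

so the algebraic side (ideals, up to radical) and the geometric side (varieties) determine each other, which is exactly the ideals-to-varieties dictionary Gibbons describes.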

EL: Why not?

CG: Yeah. Why not? Exactly.

EL: Good. Well, thank you. Absolutely.

KK: All right, JMM flash theorem time. Introduce yourself, please.

Shelley Kandola: Hi. My name is Shelly Kandola. I'm a grad student at the University of Minnesota.

KK: And it’s warmer here than where you are usually.

SK: Yeah, it's 15 degrees in Minnesota right now.

KK: That’s awful.

SK: Yeah.

KK: Well anyway, we’ve got to be quick here. What's your favorite theorem?

SK: The Banach-Tarski paradox.

KK: This is an amazing result that I still don't really understand and I can't wrap my head around.

SK: Yeah, you've got a solid sphere, a filled-in S2, and you can cut it into four pieces using rigid motions, and then put them back together and get two solid spheres that are the same size as the original.

KK: Well, theoretically, you can do this, right? This isn't something you can actually do, is it?

SK: Physically no, but with the power of group theory, yes.

KK: With the power of group theory.

SK: The free group on two generators.

KK: Why do you like this theorem so much?

SK: So I like it because it was the basis of my senior research project in college.

KK: It just seems so weird—is that why it was something you should think about?

SK: Yeah, it intrigued me. It's a paradox. And it's the first theorem I dove really deep into, and we found a way to generalize it to arbitrarily many dimensions with one tweak added.

KK: Cool. So what does one pair with the Banach-Tarski paradox?

SK: One of my favorite Futurama episodes. There's this one episode where there's a Banach-Tarski duplicator, and Bender jumps into the duplicator, and he makes two more, and he wants to build an army of himself.

KK: Sure.

SK: But every time he jumps in, the two copies that come out are half the size of the original. He ends up with an army of nanobots. It contradicts the whole statement of the paradox, that you're getting two things back that are the same size as the original.

KK: Although an army of Benders might be fun.

SK: Yeah, they certainly wreak havoc.

KK: Don’t we all have a little inner Bender?

SK: Oh yeah. He's powered by beer.

KK: Well, thanks for joining us. You gave a really good talk this morning.

SK: Thanks.

KK: Good luck.

SK: Thank you for having me.

KK: Sure.

David Plaxco: My name is David Plaxco. I'm a math education researcher at Clayton State University. And my favorite theorem is really more of an exercise, I think most people would think. It's proving that the set of all elements in a group that conjugate with a fixed element is a subgroup of the group. I'll tell you why. Because in my dissertation, that exercise was the linchpin in understanding how students can learn by proving.

EL: Okay.

DP: So I was working with a student. He had read ahead in the textbook and knew that not all groups are commutative, so you can't always commute any two elements you feel like. And he generalized this to thinking about inverses. He didn’t think that every inverse was necessarily two-sided, which in a group they are. Anyway, so he was trying to prove that that set was a subgroup and came to this impasse, because he wanted to left cancel and right cancel with inverses and could only do them on one side. And then he started to question, like, maybe I'm just crazy, like maybe you can use the same inverse on both sides. And then he proved it himself using associativity. So he made—I call it John’s lemma—he came up with this kind of side proof to show that, well, if you're associative and you have a right inverse and a left inverse, then those have to be the same. And then he came back and was able to left and right cancel at will with any inverse, and then proved that it was a subgroup. So through his own proof activity, he was able to change his own conceptual understanding about what it means to be an inverse, like how groups work, all these things, and it gave him so much more power moving forward. So that's how that theorem became my favorite theorem: because it gave me insight into how individuals can learn.
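The side proof Plaxco calls "John's lemma" fits in one line: if e is the identity and la = e = ar (a left inverse and a right inverse for a), associativity forces them to coincide.

```latex
% Associativity is the only ingredient:
l = l e = l (a r) = (l a) r = e r = r.
```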

EL: Nice. And do you have a pairing for this theorem?

DP: My diploma. Because it helped me get it.

EL: That seems appropriate. Thanks.

DP: Thanks.

Terence Tsui: So I'm Terence, and I'm currently a final-year undergraduate studying in Oxford. My favorite theorem is actually a really elegant proof of Euler’s identity on the Riemann zeta function. We all know that the Riemann zeta function is defined as the sum of 1/k^s, where k runs across all the natural numbers. But at the same time, Euler has given a really good other formulation: it says zeta of s is the same as the infinite product of 1/(1 − 1/p^s), where p runs across the primes. And then it's really interesting, because you see, on one hand, an infinite sum, and on the other hand, you have an infinite product. And it’s very rare that we see that infinite sums and infinite products actually coincide. And they do here because the zeta function actually converges for every s larger than 1. And that means that this beautiful, elegant identity actually holds for infinitely many values. And the most interesting thing about this theorem is that the proof of it can be done probabilistically, where we consider certain particular events, and we realize that the Riemann zeta series sum is actually equivalent to the probability of a certain intersection of infinitely many independent events, which is just an infinite product. And so we have the Riemann zeta function equalling a particular infinite product. And I think that is something that is really out of our imagination, because not only does it link two things—a sum to an infinite product—but at the same time, the proof comes from somewhere we could not even imagine, which is from probability. So if I need to pair this theorem with something, I would say it’s like a spider web, because you can see that there are very intricate connections and that things connect to each other, but in the most mysterious ways.
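The two sides of Euler's identity can be compared numerically in a few lines of Python (a toy check, not the probabilistic proof Tsui describes): at s = 2 the exact value is pi^2/6, and both a partial sum and a partial product over primes get close to it.

```python
import math

# Compare a partial sum of the zeta series with a partial Euler product
# at s = 2, where the exact value is pi^2 / 6.
s = 2
partial_sum = sum(1 / k**s for k in range(1, 100_000))

# Primes below 1000 by (slow but simple) trial division.
primes = [p for p in range(2, 1000) if all(p % q for q in range(2, p))]
partial_product = math.prod(1 / (1 - p**-s) for p in primes)

exact = math.pi**2 / 6
print(partial_sum, partial_product, exact)
assert abs(partial_sum - exact) < 0.01
assert abs(partial_product - exact) < 0.01
```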

EL: Cool. Well, thanks.

TT: Thank you.

Courtney Davis: So Hi, I'm Courtney Davis. I am an associate professor at Pepperdine University out in LA.

EL: Okay. And I hear that we have a favorite model, not a favorite theorem from you.

CD: Yes. So I'm a math biologist. So I'm going to say the obvious one, which is SIR modeling, because it is the entry way into getting to do this cool stuff. It’s the way that I get to show students how to write models. It's the first model I ever saw that had biology in it. And it's something that is ubiquitous and used widely. And so despite being the first thing everyone learns, it's still the first thing everyone learns. And that's what makes it interesting to me.

EL: Yeah. And and can you kind of just sum up in a couple sentences what this model is, what SIR means?

CD: Yeah. So SIR is: you are modeling the spread of disease from a susceptible (S) population, through infected (I), and into recovered (R) or immune, and you can change that up quite a lot. There are a lot of different ways to do it. It's not one fixed model. And it's all founded on the very simple premise that when two individuals run into each other in a population, that looks like multiplication. And so you can take multiplication, and with that build all the interactions that you really need in order to capture what's actually happening in a population that at least is well mixed, so that you have a big room of people moving around in it, for instance.
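A minimal version of what Davis describes can be sketched in Python (the parameter values here are hypothetical; real models are fit to data). The beta * S * I term is the "running into each other looks like multiplication" part.

```python
# A minimal SIR sketch with simple Euler time-stepping.
# beta: transmission rate, gamma: recovery rate (illustrative values only).
def sir_step(S, I, R, beta=0.3, gamma=0.1, dt=0.1):
    new_infections = beta * S * I * dt  # encounters look like multiplication
    recoveries = gamma * I * dt
    return S - new_infections, I + new_infections - recoveries, R + recoveries

S, I, R = 0.99, 0.01, 0.0  # fractions of a well-mixed population
for _ in range(1000):
    S, I, R = sir_step(S, I, R)

print(round(S, 3), round(I, 3), round(R, 3))
assert abs(S + I + R - 1.0) < 1e-9  # the population is conserved
```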

EL: Okay. And I'm going to spring something on you, which is that usually we pair something with our theorem, or in this case model, so we have our guests, you know, choose a food, beverage, piece of art, or anything. Is there anything that you would suggest that pairs well with SIR?

CD: With an SIR model, I would say, a paint gun.

EL: Okay.

CD: I don't know that that's what you're looking for.

EL: That’s great.

CD: Simply because running around and doing pandemic games or other such things is also a common way to get data on college campuses so that you can introduce students, and they can parameterize their models by paint guns or water guns or something like that.

EL: Oh, cool. I like it. Thank you.

CD: Absolutely. Thank you.

Jenny Kenkel: I’m Jenny Kenkel. I'm a graduate student at the University of Utah. I study commutative algebra. My favorite theorem is this isomorphism between a particular local cohomology module and an injective module: The top local cohomology of a Gorenstein ring is isomorphic to the injective hull of its residue field. But I was thinking that maybe it would pair really well with like, a dark chocolate and a sharp cheddar, because these two things are isomorphic, and you would never expect that. But then they go really well together, just in the same way that I think a dark chocolate and a sharp cheddar seem kind of like a weird pairing, but then it's amazing. Also, they're both beautiful.

EL: Nice, thank you.

JK: Thank you.

Dan Daly: My name is Dan Daly. And I am the interim chair of the Department of Mathematics at Southeast Missouri State University.

KK: Southeast—is that in the boot?

DD: That is close to the boot heel. It's about two hours south of St. Louis.

KK: Okay. I'm a Cardinals fan. So I'm ready, we’ve got something here. So what's your favorite theorem?

DD: So my favorite theorem is actually the classification of finite simple groups.

KK: That’s a big theorem.

DD: That is a very big, big,

KK: Like 10,000 pages of theorem.

DD: At least

KK: Yeah. So what draws you to this? Is it your area?

DD: So I am interested in algebraic combinatorics, and I am generally interested in all things related to permutations.

KK: Okay.

DD: And one of the things that drew me to this theorem is that it's such an amazing, collaborative effort and one of the landmarks of 20th century mathematics.

KK: Big deal. Yeah.

DD: And, you know, to me it just seems such an amazing result that we can classify these building blocks of finite groups.

KK: Right. So what does one pair this with?

DD: So I think since it's such a collaborative effort, I'm going to pair it with the Louvre museum.

KK: The Louvre, okay.

DD: Because it's a collection of all of these different results that are paired together to create something that is really, truly one of a kind.

KK: I’ve never been. Have you?

DD: I have. It’s a wonderful place. Yeah. It’s a fabulous place. One of my favorite places.

KK: I’m going to wait until I can afford to rent it out like Beyonce and Jay Z.

DD: Yeah, right.

KK: All right, well thanks, Dan. Enjoy your time at the Joint Math Meetings.

DD: All right, thank you much.

Charlie Cunningham: My name's Charlie Cunningham. I'm a visiting assistant professor at Haverford College. And my area of research originally is, or still is, geometric group theory. But the theorem that I want to talk about is a little bit closer to set theory: I want to talk about the existence of solutions to Cauchy’s functional equation.

EL: Okay. And what is Cauchy’s functional equation?

CC: So Cauchy’s functional equation is a really basic sort of thing you can ask about a function. It's asking: take the real numbers, and ask, what are the functions from the real numbers to the real numbers where if you add two numbers together and then apply the function, it's the same thing as applying the function to each of those numbers and then adding the results together?

EL: Okay. So kind of like your naive student's idea of how a function should behave.

CC: Yes. Right. So this would come up in a couple of places. If you’ve taken linear algebra, that's the first axiom of a linear function. It doesn't ask about the scaling part, just the additive part. And if you've done group theory, a fancy way to say it is that these are all the homomorphisms from the real numbers to themselves as an additive group. So first of all, there are some obvious ones: all the functions where you just multiply by a fixed number, all the linear functions you’d know from linear algebra, like 2 times x, 3 times x, or π times x, any real number times x. So the question is, are there any others? Or are those the only functions that exist that satisfy this equation? And it turns out that the answer depends on the fundamental axioms you take for mathematics.

EL: Wow. Okay.

CC: Right. So the answer, to use a little bit of set theory, is that if you are working in a set theory that has something called the axiom of choice in it, which most mathematicians do, then the answer is no: there are lots and lots and lots of other functions that satisfy this equation, other than those obvious ones, but they're almost impossible to think about or write down. They're not continuous anywhere, they're not differentiable anywhere. They're not measurable, if anyone knows what that means. Their graphs, if you tried to draw them, are dense in the entire plane, which means any little circle you draw in the plane intersects the graph somewhere. They still pass the vertical line test. They’re still functions that are well-defined. And I really like this theorem. One reason is because it's a really great place for math students to learn that there isn't always one right answer in math. Sometimes the answer to a very reasonably posed question isn't true or false. It depends on the fundamental universe we’re working in. It depends on what we all sit down and agree are the starting rules of our system. And it's the sort of question where you wouldn't realize that those sorts of considerations would come up. It also comes up when I've asked linear algebra students: it's equivalent to the statement, are both parts of the definition of a linear function actually necessary? We usually give them to you as two pieces: one, it satisfies this, and the other is that scalars pull out. Do we actually need that second part? Can we prove that scalars pull out just from the first part? And this is the only way to prove the answer's no. It's a good exercise to try yourself to prove, just from this axiom, that rational scalars pull out: any rational number has to pull out of that function. But real numbers, not necessarily. And these are the counterexamples.
So it's a good place at that level when you're first learning math, to realize that there are really subtle issues of what we really think truth means when we're beginning to have these conversations
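[Editor's note: one way to see the phenomenon Charlie describes concretely, without the full axiom of choice, is to work on the subfield Q(√2), where you can write down an additive function that is not multiplication by a constant. Extending it to all of R is where a Hamel basis, and hence choice, comes in. A minimal sketch using exact rational arithmetic; the representation and the sample points are my choices.]

```python
from fractions import Fraction

# Represent an element a + b*sqrt(2) of Q(sqrt(2)) as the pair (a, b).
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def f(x):
    # f(a + b*sqrt(2)) = a: project onto the rational coordinate.
    return x[0]

def times_sqrt2(x):
    # sqrt(2) * (a + b*sqrt(2)) = 2b + a*sqrt(2)
    return (2 * x[1], x[0])

x = (Fraction(1, 3), Fraction(2))    # 1/3 + 2*sqrt(2)
y = (Fraction(5), Fraction(-1, 2))   # 5 - (1/2)*sqrt(2)

# f satisfies Cauchy's functional equation:
assert f(add(x, y)) == f(x) + f(y)

# Rational scalars pull out (here, 3/4), as the exercise predicts:
q = Fraction(3, 4)
assert f((q * x[0], q * x[1])) == q * f(x)

# But the real scalar sqrt(2) does not: f(sqrt(2)*1) = 0, not sqrt(2)*f(1).
one = (Fraction(1), Fraction(0))
assert f(times_sqrt2(one)) == 0 and f(one) == 1
```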

EL: Nice. And what is your theorem pairing?

CC: My theorem pairing, I'm going to pair it with artichokes.

EL: Okay.

CC: I think that artichokes also had a bad rap for a long time. You should also look up the artichoke war, if you've never heard of it, a great piece of New York City history, and it took a long time for people to really understand that these prickly, weird-looking vegetables can actually be delicious if approached from the right perspective.

EL: Nice. Well, thank you.

Ellie Dannenberg: So I'm Ellie Dannenberg, and I am a visiting assistant professor at Pomona College in Claremont, California. And my favorite theorem is the Koebe-Andreev-Thurston circle packing theorem, which says that if you give me a triangulation of a surface, I can find you exactly one circle packing where the vertices of your triangulation correspond to circles, and an edge between two vertices says that those circles are tangent.

EL: Okay, so this seems kind of related to Voronoi things? Maybe I'm totally going in a wrong direction.

ED: So, I know that these are—so I don't think they're exactly related.

EL: Okay. Nevermind. Continue!

ED: Okay. But, right, it’s cool because the theorem says you can find a circle packing if I hand you a triangulation. But what is more exciting is you can only find one. So that's it.

EL: Oh, huh. Cool. All right. And do you have something that you would like to pair with this theorem?

ED: So I will pair this theorem with muhammara, which is this excellent Middle Eastern dip made from walnuts and red peppers and pomegranate molasses that is delicious and goes well with anything.

EL: Okay. Well, it's a good pairing. My husband makes a very good version. Yeah. Thank you.

ED: Thank you.

Manuel González Villa: This is Manuel González Villa. I'm a researcher at CIMAT [Centro de Investigación en Matemáticas] in Guanajuato, Mexico, and my favorite theorem is the Newton-Puiseux theorem. This is a generalization of the implicit function theorem, but for singular points of algebraic curves. That means you can parameterize a neighborhood of a singular point on an algebraic curve with a power series expansion, but with rational exponents, and the denominators of those exponents are bounded. The amazing thing about this theorem is that it’s very old. It goes back to Newton. But people still use it in research. I learned this theorem in Madrid, where I did my PhD, from a professor called Antonia Díaz-Cano. And I also learned from the topologist José María Montesinos how to apply this theorem. It has some high-dimensional generalizations for some types of singularities, which are called quasi-ordinary.

The exponents—so you get a power series, so you get an infinite number of exponents. But there is a finite subset of those exponents which are the important ones, because they codify all the topology around the singular point of the algebraic curve. And this is why this theorem is very important. And the book I learned it from is Robert Walker’s Algebraic Curves. And if you want a more recent reference, I recommend you to look at Eduardo Casas-Alvero’s book on singularities of plane curves. Thank you very much.

EL: Okay.

EL: Yeah. So can you introduce yourself?

JoAnne Growney: My name is JoAnne Growney. I'm a retired math professor and a poet.

EL: And what is your favorite theorem?

JG: Well, the last talk I went to has had me debating about it. What I was prepared to say an hour ago was that it was the proof by contradiction that the real numbers are uncountable, Cantor's diagonal proof. I like proofs by contradiction because I kind of like to think that way: on the one hand, and then the opposite. But I just returned from listening to a program on math and art. And I thought, wow, the Pythagorean theorem is something that I use every day. And maybe I'm being unfair to take something about infinity instead of something practical, but I like both of them.

EL: Okay, so we've got a tie there. And have you chosen something to pair with either of your theorems? We like to do, like, a wine and food pairing or, you know, but with theorems, you know, is there something that you think goes especially well, for example a poem, if you’ve got one.

JG: Well, actually, I was thinking of—the Pythagorean theorem, and it's probably a sound thing, made me think of a carrot.

EL: Okay.

JG: And oh, the theorem about infinity, it truly should make me think of a poem, but I don't have a pairing in mind.

EL: Okay. Well, thank you.

JG: Thank you.

Mikael Vejdemo-Johansson: I’m Michael Vejdemo-Johansson. I'm from the City University of New York.

KK: City University of New York. Which one?

MVJ: College of Staten Island and the Graduate Center.

KK: Excellent. All right, so we're sitting in an Afghan restaurant at the JMM. And what is your favorite theorem?

MVJ: My favorite theorem is the nerve lemma.

KK: Okay, so remind everyone what this is.

MVJ: So the nerve lemma says—well, it’s basically a family of theorems, but the original one as I understand it says that if you have a covering of a topological space where all the cover elements and all arbitrary intersections of cover elements are simple enough, then the nerve complex of the covering, which inserts a simplex for each nonempty intersection, is homotopy equivalent to the whole space.

KK: Right. This is extremely important in topology.

MVJ: It fuels most of topological data analysis one way or another.

KK: Absolutely. Very important theorem. So what pairs well, with the nerve lemma?

MVJ: I’m going to go with cotton candy.

KK: Cotton candy. Okay, why is that?

MVJ: Because the way that you end up collapsing a large and fluffy cloud of sugar into just thick, chewy fibers if you handle it right.

KK: That's right. Okay. Right. This pairing makes total sense to me. Of course, I’m a topologist, so that helps. Thanks for joining us, Mikael.

MVJ: Thank you for having me.

Michelle Manes: I’m Michelle Manes. I'm a professor at the University of Hawaii. And my favorite theorem is Sharkovskii’s theorem, which is sometimes called period three implies chaos. So the statement is very simple. You have a weird ordering of the natural numbers. So 3 is bigger than 5 is bigger than 7 is bigger than 9, etc, all the odd numbers. And then those are all bigger than 2 times 3 is bigger than 2 times 5 is bigger than 2 times 7, etc. And then down a row 4 times every odd number, and you get the idea. And then everything with an odd factor is bigger than every power of 2. And the powers of 2 are listed in decreasing order. So 2^3 is bigger than 2^2 is bigger than 2 is bigger than 1.

EL: Okay.

MM: So 1 is the smallest, 3 is the biggest, and you have this big weird array. And the statement says that if you have a continuous function on the real line, and it has a point of period n, for n somewhere in the Sharkovskii ordering, so put your finger down on n, it’s got a point of period everything less than n in that ordering. So in particular, if it has a point of period 3, it has points of every period, every integer. So I mean, I like the theorem, because the hypothesis is remarkable. The hypothesis is continuity. It's so minimal.
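[Editor's note: the ordering Michelle describes is easy to encode as a sort key. A small sketch, my own encoding rather than anything from the episode: factor n as 2^k times an odd part m; numbers with m > 1 come first, grouped by increasing k and then increasing m, and the pure powers of 2 come last in decreasing order.]

```python
def sharkovskii_key(n):
    # Factor n = 2^k * m with m odd.
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    m = n
    # Numbers with odd part m > 1 come first (odds, then 2*odds, then
    # 4*odds, ...); pure powers of 2 come last, in decreasing order.
    return (0, k, m) if m > 1 else (1, -k)

# "Biggest" element first: 3, 5, 7, 9, 2*3, 2*5, ..., 2^3, 2^2, 2, 1.
order = sorted(range(1, 11), key=sharkovskii_key)
assert order == [3, 5, 7, 9, 6, 10, 8, 4, 2, 1]
```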

EL: Yeah.

MM: And you have this crazy ordering. And the conclusion is so strong. And the proof is just really lovely. It basically uses the intermediate value theorem and pretty pictures of folding the real line back on itself and things like that.

EL: Oh, cool.

MM: So yeah, it's my favorite theorem. Absolutely.

EL: Okay. And do you have something that you would suggest pairing with this theorem?

MM: So for me, because when I think of the theorem, I think of the proof of it, which involves this, like stretching and wrapping and stretching and wrapping, and an intermediate value theorem, it feels very kinetic to me. And so I feel like it pairs with one of these kind of moving sculptures that moves in the wind, where things sort of flow around.

EL: Oh, nice.

MM: Yeah, it feels like a kinetic theorem to me. So I'm going to start with the kinetic sculpture.

EL: Okay. Thank you.

MM: Thanks.

John Cobb: Hey there, I’m John Cobb, and I'm going to tell you my favorite theorem.

EL: Yeah. And where are you?

JC: I’m at College of Charleston applying for PhD programs right now.

EL: Okay.

JC: Okay. So I picked one I thought was really important, and I'm surprised it isn't on the podcast already. I have to say it's Gödel’s incompleteness theorems, partly for personal reasons: I'm in a logic class right now covering the mechanics of the actual proof. But when I first heard it, I was becoming aware of the power of mathematics, and hearing about the power of math to talk about its own limitations, mathematics about mathematics, was something that really solidified my journey into math.

EL: And so what have you chosen to pair with your theorems?

JC: Yeah, I was unprepared for this question. So I’m making up on the spot.

EL: So you would say your preparation was…incomplete?

JC: [laughing] I would say that! Man. I'll go with the crowd favorite pizza for no reason in particular.

EL: Well pizza is the best food and it's good with everything.

JC: Yeah.

EL: So that's a reason enough.

JC: Awesome. Well, thank you for the opportunity.

EL: Yeah, thanks.

Talia Fernós: My name is Talia Fernós, and I'm an associate professor at the University of North Carolina at Greensboro. My favorite theorem is Riemann’s rearrangement theorem. And basically, what it says is that if you have a conditionally convergent series, you can rearrange the terms in the series so that the series converges to your favorite number.

EL: Oh, yeah. Okay, when you said the name of it earlier, I didn't remember, I didn't know that was the name of the theorem. But yes, that's a great theorem!

TF: Yeah. So the proof basically goes as follows. So if you do this with, for example, the series with terms (-1)^(n+1)/n, that looks like 1-1/2+1/3-1/4, and so on. So when you try to see why this is itself convergent, what you'll see is that you jump forward 1, then back a half, and then forward a third, back a fourth, so if you kind of draw this on the board, you get this spiral. And you see that it very quickly kind of zooms in, or spirals in, to whatever the limit is.

So now, this is conditionally convergent, because if you sum just 1/n, this diverges. And you can use the integral test to show that. So now, if you have a conditionally convergent series, you will have necessarily that it has infinitely many positive terms and infinitely many negative terms. And that each of those series independently also diverge. So when you want to show that a rearrangement is possible, so that it converges to your favorite number, what you're going to do is, let's say that you're trying to make this converge to 1, okay? So you're going to add up as many positive terms as necessary, until you overshoot 1, and then as many negative terms as necessary until you undershoot, and you continue in this way until you kind of have again, this spiraling effect into 1. And now the reason why this does converge is that the fact that it's conditionally convergent also tells you that the terms go to zero. So you can add sort of smaller and smaller things.
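[Editor's note: the greedy procedure Talia describes translates directly into code. A minimal sketch for the alternating harmonic series; the target value 1 and the step count are arbitrary choices of mine.]

```python
# Greedy rearrangement of 1 - 1/2 + 1/3 - 1/4 + ... : add positive terms
# until you overshoot the target, then negative terms until you undershoot.
def rearranged_partial(target, steps):
    pos = 1      # next positive term is 1/pos (pos runs over odd numbers)
    neg = 2      # next negative term is -1/neg (neg runs over even numbers)
    total = 0.0
    for _ in range(steps):
        if total < target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

# Because the terms shrink to zero, the overshoot shrinks too,
# so the rearranged series homes in on the target.
assert abs(rearranged_partial(1.0, 100000) - 1.0) < 1e-2
```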

EL: Yeah, and you don't run out of things to use.

TF: Right.

EL: Yeah. Cool. And what have you chosen to pair with this theorem?

TF: For its spiraling behavior, escargot, which I don't eat.

EL: Yeah, I have eaten it. I don't seek it out necessarily. But it is very spiraly.

TF: Okay. What does it taste like?

EL: It tastes like butter and parsley.

TF: Okay. Whatever it’s cooked in.

EL: Basically. It's a little chewy. I don't find it terribly unpleasant, but I don't…

TF: …think it's a delicacy.

EL: Yeah. But I'm not very French. So I guess that's fair. Well, thanks.

TF: Sure.

This episode of My Favorite Theorem is a whirlwind of “flash favorite theorems” we recorded at the Joint Mathematics Meetings in Baltimore in January 2019. We had 16 guests, so we’ll keep this brief. Below is a list of our guests and their theorems with timestamps for each guest in case you want to skip around in the episode. We hope you enjoy this festival of theorem love as much as we enjoyed talking to all of these mathematicians!


Episode 44 - James Propp

Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast. I'm Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: I’m Kevin Knudson, professor of mathematics at the University of Florida. How's it going?

EL: All right, yeah. Up early today for me. You know, you’re on the East Coast, and I'm in the Mountain Time Zone. And actually, when my husband is on math trips — sorry, if I'm on math trips on the East Coast, and he's in the mountain time zone, then we have, like, the same schedule, and we can talk to each other before we go to bed. I'm sort of a night owl. So yeah, it's early today. And I always complain about that the whole time.

KK: Sure. Is he a morning person?

EL: Yes, very much.

KK: So Ellen and I are decidedly not. I mean, I'd still be in bed, really, if I had my way. But you know, now that I'm a responsible adult chair of the department, I have to—even in the summer—get in here to make sure that things are running smoothly.

EL: But yeah, other than the ungodly hour (it’s 8am, so everyone can laugh at me), everything is great.

KK: Right. Cool. All right, I’m excited for this episode.

EL: Yes. And today, we're very happy to have Jim Propp join us. Hi, Jim, can you tell us a little bit about yourself?

Jim Propp: Yeah, I'm a math professor at UMass Lowell. My research is in combinatorics, probability, and dynamical systems. And I also blog and tweet about mathematics.

KK: You do. Your blog’s great, actually.

EL: Yeah.

KK: I really enjoy it, and you know, you're smart. Once a month.

EL: Yes. That was a wise choice. Most months, I think on the 17th, Jim has an excellent post at Math Enchantments.

KK: Right.

EL: So that’s a big treat. I think somehow I didn't realize that you did some things with dynamical systems too. I feel like I'm familiar with you in, like, the combinatorics kind of world. So I learned something new already.

KK: Yup.

JP: Yeah, I actually did my PhD work in ergodic theory. And after a few years of doing a postdoc in that field, I thought, “No, I'm going to go back to combinatorics," which was sort of my first love. And then some probability mixed into that.

KK: Right. And actually, we had some job candidates this year in combinatorics, and one of them was talking about—you have a list of problems, apparently, that's famous. I don't know.

JP: Oh, yes. Tilings. Enumeration of tilings.

KK: That’s right. It was a talk about tilings. Really interesting stuff.

JP: Yeah, actually, I should say, I have gone back to dynamical systems a little bit, combining it with combinatorics. And that's a big part of what I do these days, but I won't be talking about that at all.

EL: Okay. And what is your favorite theorem?

JP: Ah, well, I've actually been sort of leading you on a bit because I'm not going to tell you my favorite theorem, partly because I don't have a favorite theorem.

KK: Sure.

JP: And if I did, I wouldn't tell you about it on this podcast, because it would probably have a heavy visual component, like most of my favorite things in math, and it probably wouldn't be suited to the purely auditory podcast medium.

KK: Okay, so what are you gonna tell us?

JP: Well, I could tell you about one theorem that I like that doesn't have much geometric content. But I'm not going to do that either.

EL: Okay, so what bottom of the barrel…

JP: I’m going to tell you about two theorems that I like, okay, they’re sort of like twins. One is in continuous mathematics, and one is in discrete mathematics.

KK: Great.

JP: The first one, the one in continuous mathematics, is pretty obscure. And the second one, in discrete mathematics, is incredibly obscure. Like nobody’s named it. And I've only found it actually referred to, stated as a result in the literature once. But I feel it's kind of underneath the surface, making a lot of things work, and also showing resemblances between discrete and continuous mathematics. So these are, like, my two favorite underappreciated theorems.

EL: Okay.

KK: Oh, excellent. Okay, great. So what have we got?

JP: Okay, so for both of these theorems, the underlying principle, and this is going to sound kind of stupid, is if something doesn't change, it’s constant.

EL: Okay. Yes, that is a good principle.

JP: Yeah. Well, it sounds like a tautology, because, you know, doesn't “not changing” and “being constant” mean the same thing? Or it sounds like a garbled version of “change is the only constant.” But no, this is actually a mathematical idea. So in the continuous realm, when I say “something,” what I mean is some differentiable function. And when I say “doesn't change,” I mean, has derivative zero.

KK: Sure.

JP: Derivatives are the way you measure change for differentiable functions. So if you’ve got a differentiable function whose derivative is zero—let’s assume it's a function on the real line, so its derivative is zero everywhere—then it's just a constant function.

KK: Yes. And this is a corollary of the mean value theorem, correct?

JP: Yes! I should mention that the converse is very different. The converse is almost a triviality. The converse says if you've got a constant function, then its derivative is zero.

KK: Sure.

JP: And that just follows immediately from the definition of the derivative. But the constant value theorem, as you say, is a consequence of the mean value theorem, which is not a triviality to prove.

KK: No.

JP: In fact, we'll come back later to the chain of implications that lead you to the constant value theorem, because it's surprisingly long in most developments.

KK: Yes.

JP: But anyway, I want to point out that it's kind of everywhere, this result, at least in log tables— I mean, not log tables, but anti-differentiation tables. If you look up anti-derivatives, you'll always see this “+C” in the anti-derivative in any responsible, mathematically rigorous table of integrals.

EL: Right.

JP: Because for anti-derivatives, there's always this ambiguity of a constant. And those are the only anti-derivatives of a function that's defined on the whole real line. You know, you just add a constant to it, no other way of modifying the function will leave its derivative alone. And more generally, when you've got a theorem that says what all the solutions to some differential equation are, the theorem that guarantees there aren't any other solutions you aren't expecting is usually proved by appealing to the constant value theorem at some level. You show that something has derivative zero, you say, “Oh, it must be constant.”

KK: Right.

JP: Okay. So before I talk about how the constant value theorem gets proved, I want to talk about how it gets used, especially in Newtonian physics, because that's sort of where calculus comes from. So Newtonian physics says that if you know the initial state of a system, you know, of a bunch of objects—you know their positions, you know their velocities—and you know the forces that act on those objects as the system evolves, then you can predict where the objects will be later on, by solving a differential equation. And if you know the initial state and the differential equation, then you can predict exactly what's going to happen, the future of the system is uniquely determined.

KK: Right.

JP: Okay. So for instance, take a simple case: you’ve got an object moving at a constant velocity. And let's say there are no forces acting on it all. Okay? Since there are no forces, the acceleration is zero. The acceleration is the rate of change of the velocity, so the velocity has derivative zero everywhere. So that means the velocity will be constant. And the object will just keep on moving at the same speed. If the constant value theorem were false, you wouldn't really be able to make that assertion that, you know, the object continues traveling at constant velocity just because there are no forces acting on it.

KK: Sure.

JP: So, kind of, pillars of Newtonian physics are that when you know the derivative, then you really know the function up to an ambiguity that can be resolved by appealing to initial conditions.

EL: Yeah.

KK: Sure.

JP: Okay. So this is actually telling us something deep about the real numbers, which Newton didn't realize, but which came out, like, in the 19th century, when people began to try to make rigorous sense of Newton's ideas. And there's actually a kind of deformed version of Newton's physics that's crazy, in the sense that you can't really predict things from their derivatives and from their initial conditions, which no responsible physicist has ever proposed, because it's so unrealistic. But there are some kind of crazy mathematicians who don't like irrational numbers. I won't name names. But they think we should purge mathematics of the real number system and all of these horrible numbers that are in it. And we should just do things with rational numbers. And if these people tried to do physics just using rational numbers, they would run into trouble.

EL: Right.

JP: Because you can have a function from the rational numbers to itself, whose derivative is zero everywhere—with derivative being defined, you know, in the natural way for functions from the rationals to itself—that isn't a constant function.

KK: Okay.

JP: So I don't know if you guys have heard this story before.

KK: This is making my head hurt a little, but okay. Yeah.

EL: Yeah, I feel like I have heard this, but I cannot recall any details. So please tell me.

JP: Okay, so we know that the square root of two is irrational, so every rational number, if you square it, is either going to be less than two, or greater than two.

KK: Yes.

JP: So we could define a function from the rational numbers to itself that takes the value zero if the input value x satisfies the inequality x squared is less than 2 and takes the value 1 if x squared is bigger than two.

EL: Yes.

JP: Okay. So this is not a constant function.

KK: No.

JP: Right. Okay. But it actually is not only continuous, but differentiable as a function of the

EL: Of the rationals…

JP: From the rationals to itself.

KK: Right. The derivative is zero, but it's not constant. Okay.

JP: Yeah. Because take any rational number, okay, it's going to have a little neighborhood around it avoiding the square—avoiding the hole in the rational number line where the square root of 2 would be. And it's going to be constant on that little interval. So the derivative of that function is going to be zero.

KK: Sure.

JP: At every rational number. So there you have a non-constant function whose derivative is zero everywhere. Okay. And that's not good.

KK: No.

JP: It’s not good for math. It's terrible for physics. So you really need the completeness property of the reals in a key way to know that the constant value theorem is true. Because it just fails for things like the set of rational numbers.
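[Editor's note: Jim's function on the rationals can be checked with exact rational arithmetic. A minimal sketch; the base point 7/5 and the shrinking steps are my choices. Near any rational x the function is locally constant, so every small-h difference quotient is exactly zero, yet the function is not constant.]

```python
from fractions import Fraction

def f(x):
    # Well-defined on the rationals: x*x == 2 never happens for rational x.
    return 0 if x * x < 2 else 1

x = Fraction(7, 5)                 # (7/5)^2 = 49/25 < 2, so f(x) = 0
for k in range(2, 40):
    h = Fraction(1, 10**k)         # shrinking rational steps
    # The difference quotient is exactly 0 in both directions:
    assert (f(x + h) - f(x)) / h == 0
    assert (f(x - h) - f(x)) / (-h) == 0

# Yet f is not constant:
assert f(Fraction(3, 2)) == 1
```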

EL: Right.

KK: Okay.

JP: This is part of the story that Newton didn't know, but people like Cauchy figured it out, you know, long after.

KK: Right.

JP: Okay. So let's go back to the question of how you prove the constant value theorem.

EL: Yeah.

JP: Actually, I wanted to jump back, though, because I feel like I wanted to sell a bit more strongly this idea that the constant value theorem is important. Because if you couldn't predict the motions of particles from forces acting on those particles, no one would be interested in Newton's ideas, because the whole story there is that it is predictive of what things will do. It gives us a sort of clockwork universe.

KK: Sure.

JP: So Newton's laws of motion are kind of like the rails that the Newtonian universe runs on, and the constant value theorem is what keeps the universe from jumping off those rails.

KK: Okay. I like that analogy. That’s good.

JP: That’s the note I want to end on for that part of the discussion. But now getting back to the math of it. So how do you prove the constant value theorem? Well, you told me you prove it from the mean value theorem. Do you remember how you prove the mean value theorem?

KK: You use Rolle’s theorem?

EL: Just the mean value theorem turned sideways!

KK: Sort of, yeah. And then I always joke that it’s the Forrest Gump proof. Right? You draw the mean value theorem, you draw the picture on the board, and then you tilt your head, then you see that it's Rolle’s theorem. Okay, but Rolle’s theorem requires, I guess what we sometimes call in calculus books Fermat’s theorem, that if you have a differentiable function, and you're at a local max or min, the derivative is equal to zero. Right?

JP: Yup. Okay, actually, the fact that such a point even exists at all is something.

KK: Right.

JP: So I think that's called the Extreme Value Theorem.

KK: Maybe? Well, the Extreme Value Theorem I always think of as—well, I'm a topologist—the statement that the image of a compact set is compact.

JP: Okay.

KK: Right. Okay. So you need to know what the compact sets of the real line are.

JP: You need to know about boundedness, stuff like that, closedness.

KK: Closed and bounded, right. Okay. You're right. This is an increasingly long chain of things that we never teach in Calculus I, really.

JP: Yeah. I've tried to do this in some honors classes with, you know, varying levels of success.

KK: Sure.

JP: There’s the boundedness theorem, which says that, you know, a continuous function is bounded on a closed interval. But then how do you prove that? Well, you know, Bolzano-Weierstrass would be a natural choice if you're teaching a graduate class, maybe you prove that from the monotone convergence theorem. But ultimately, everything goes back to the least upper bound property, or something like it.

KK: Which is an axiom.

JP: Which is an axiom, that’s right. But it sort of makes sense that you'd have to ultimately appeal to some heavy-duty axiom, because like I said, for the rational numbers, the constant value theorem fails. So at some point, you really need to appeal to the completeness of the reals.

EL: Yeah, the structure of the real numbers.

KK: This is fascinating. I've never really thought about it in this much detail. This is great.

JP: Okay. Well, I'm going to blow your mind…

KK: Good!

JP: …because this is the really cool part. Okay. The constant value theorem isn't just a consequence of the least upper bound property. It actually implies the least upper bound property.

KK: Wow. Okay.

JP: So all these facts that this, this chain of implications, actually closes up to become a loop.

KK: Okay.

JP: Each of them implies all the others.

KK: Wow. Okay.

JP: So the precise statement is that if you have an ordered field, so that’s a number system that satisfies the field axioms: you've got the four basic operations of pre-college math, as well as inequality, satisfying the usual axioms there. And it has the Archimedean property, which we don't teach at the pre-college level. But informally, it just says that nothing is infinitely bigger or infinitely smaller than anything else in our number system. Take any positive thing, add it to itself enough times, it becomes as big as you like.

KK: Okay.

JP: You know, enough mice added together can outweigh an elephant.

KK: Sure.

EL: Yeah.

JP: That kind of thing. So if you've got an ordered field that satisfies the Archimedean property, then each of those eight propositions is equivalent to all the others.

KK: Okay.

JP: So I really like that because, you know, we tend to think of math as being kind of linear in the sense that you have axioms, and from those you prove theorems, and from those you prove more theorems—it's a kind of a unidirectional flow of the sap of implication. But this is sort of more organic, there's sort of a two-way traffic between the axioms and the theorems. And sometimes the theorems contain the axioms hidden inside them. So I kind of like that.

KK: Excellent.

JP: Yeah.

KK: So math’s a circle, it's not a line.

JP: That’s right. Anyway, I did say I was going to talk about two theorems. So that was the continuous constant value theorem. So I want to tell you about something that I call the discrete constant value theorem, which someone else may have given another name to, but I've never seen it. Which also says that if something doesn't change, it's constant. But now we're talking about sequences, and the something is just going to be some sequence. And when I say it doesn't change, I mean each term is equal to the next, or the difference between them is zero.

EL: Okay.

JP: So how would you prove that?

EL: Yeah, it really feels like something you don't need to prove.

KK: Yeah.

JP: If you pretend for the moment that it's not obvious, then how would you convince yourself?

KK: So you're trying to show that the sequence is eventually constant?

JP: It’s constant from the get-go, every term is equal to the next.

EL: Yeah. So the definition of your sequence is—or part of the definition of your sequence is—a sub n equals a sub n+1.

JP: That’s right.

EL: Or minus one, right?

JP: Right.

EL: So I guess you'd have to use induction.

KK: Right.

JP: Yeah, you’d use mathematical induction.

KK: Right.

JP: Okay. So you can prove this principle, or theorem, using mathematical induction. But the reverse is also true.

KK: Sure.

JP: You can actually prove the principle of mathematical induction from the discrete constant value theorem.

EL: And maybe we should actually say what the principle of mathematical induction is.

KK: Sure.

JP: Sure.

EL: Yeah. So that would be, you know, if you want to prove that something is true for, you know, the entire set of whole numbers, you prove it for the first one—for 1—and then prove that if it's true for n, then it's true for n+1. So I always have this image in my mind of, like, someone hauling in a chain, or like a big rope on a boat or something. And, you know, each pull of their arm is the next number. And you just pull it in, and the whole thing gets into the boat. Apparently, that's where you want to be. Yeah, so that's induction.

JP: Yeah. So you can use mathematical induction to prove the discrete constant value theorem, but you can also do the reverse.

EL: Okay.

JP: So just as the continuous constant value theorem could be used as an axiom of completeness for the real number system, the discrete constant value theorem could be used as an axiom for, I don't want to say completeness, but the heavy-duty axiom for doing arithmetic over the counting numbers, to replace the axiom of induction.

EL: Yeah, it has me wondering, like, oh, how could I rephrase, you know, my standard induction proof—which at this point kind of just runs itself once I've decided to try to prove something by induction—how to make that into a statement about sequences?

JP: Yeah, for some applications, it's not so natural. But one of the applications we teach students mathematical induction for is proving formulas, right? Like, the sum of the first n positive integers is n times n+1 over 2.

KK: Right.

JP: And so we do a base case. And then we do an induction step. And that's the format we usually use.
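The base-case-plus-induction-step format Jim describes can be made concrete. Below is a minimal Python sketch (my own illustration, not from the episode) that verifies both halves of the induction for the sum formula exactly, using the fact that two quadratic polynomials that agree at three distinct points agree everywhere:

```python
from fractions import Fraction

def claim(n):
    # Claimed closed form for 1 + 2 + ... + n.
    return Fraction(n * (n + 1), 2)

# Base case: the formula holds for n = 1.
assert claim(1) == 1

# Induction step: claim(n) + (n + 1) == claim(n + 1).
# Both sides are degree-2 polynomials in n, so agreeing at
# three distinct points proves they agree identically.
for n in (0, 1, 2):
    assert claim(n) + (n + 1) == claim(n + 1)

print("base case and induction step verified")
```

Exact `Fraction` arithmetic avoids any floating-point hedging: every equality checked here is an identity of rational numbers.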

KK: Right.

JP: Okay. Well, proving formulas like that has been more or less automated these days. Not completely, but a lot of it has been. And the way computers actually prove things like that is using something more like the discrete constant value theorem.

EL: Okay.

JP: So for example, say you've got a sequence whose nth term is defined as the sum of the first n positive integers.

KK: Okay.

JP: So it’s 1, 1+2, 1+2+3,…. Then you have another sequence whose nth term is defined by the formula, n times n+1 over 2.

KK: Right.

JP: And you ask a computer to prove that those two sequences are equal to each other term by term. The way these automated systems will work, is they will show that the two sequences differ by a constant,

EL: and then show that the constant is zero.

JP: And then they’ll show that the constant is zero. So you show that the two sequences at each step increase by the same amount. So whatever the initial offset was, it’s going to be the same. And then you see what that offset is.
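That two-step method can be sketched in a few lines of Python. This is only an illustration of the idea (a real prover, e.g. one based on Gosper's or Zeilberger's algorithms, would verify the step symbolically rather than over a finite range):

```python
from fractions import Fraction

def s(n):
    # Left sequence: 1 + 2 + ... + n, by direct summation.
    return sum(range(1, n + 1))

def f(n):
    # Right sequence: the closed-form candidate n(n+1)/2.
    return Fraction(n * (n + 1), 2)

# Step 1: the two sequences increase by the same amount at every step,
# so their difference s(n) - f(n) is constant
# (the discrete constant value theorem).
for n in range(1, 50):
    assert s(n + 1) - s(n) == f(n + 1) - f(n)

# Step 2: the constant offset is zero, read off at the first term.
assert s(1) - f(1) == 0
print("s and f agree term by term")
```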

EL: Yeah.

KK: Okay, sure.

JP: So this is looking a lot more like what we do in differential equations classes, where, you know, if you try to solve a differential equation, you determine a solution up to some unknown real parameters, and then you solve for them from initial conditions. There's a really strong analogy between solving difference equations in discrete math and solving differential equations in continuous math. But somehow, the way we teach the subjects hides that.

EL: Yeah.

JP: The way we teach mathematical induction, by sort of having the base case come first, and then the induction step come later, is the reverse order from what we do with differential equations. But there's a way to, you know, change the way we present things so they're both mathematically rigorous, but they're much more similar to each other.

KK: Yeah, we've got this bad habit of compartmentalizing in math, right? I mean, the lower levels of the curriculum, you know, it's like, “Okay, well, in this course, you do derivatives and optimization. And in this course, you learn how to plow through integration techniques. And this course is multi-variable stuff. And in this course, we're going to talk about differential equations.” Only later do you do the more interesting things like induction and things like that. So are you arguing that we should just, you know, scrap it all in and start with induction on day one?

JP: Start with induction? No.

KK: Sure, why not?

JP: I’ve given talks about why we should not teach mathematical induction.

KK: Really?

JP: Yeah. Well, I mean, it’s not entirely serious. But I argue that we should basically teach the difference calculus, as a sort of counterpart to the differential calculus, and give students the chance to see that these ideas of characteristic polynomials and so forth, that work in differential equations, also work with difference equations. And then like, maybe near the end, we can blow their mind with that wonderful result that Robert Ghrist talked about.

KK: Yeah.

JP: Where you say that one of these operators, the difference operator, is e to the power of the derivative operator.

KK: Right.

EL: Yeah.

JP: They’re not just parallel theories. They're linked in a profound way.
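For polynomials, the link between the two operators is easy to check directly, since the Taylor series for f(x+1) terminates. The following Python sketch (my own illustration, not from the episode) confirms that the forward difference Δf agrees with (e^D − 1)f, i.e. f(x+1) − f(x) = f′(x)/1! + f″(x)/2! + …, for a sample cubic:

```python
from fractions import Fraction
from math import factorial

# f(x) = x^3 - 2x + 5, stored as coefficients [c0, c1, c2, c3].
coeffs = [Fraction(5), Fraction(-2), Fraction(0), Fraction(1)]

def evaluate(c, x):
    return sum(ck * x**k for k, ck in enumerate(c))

def derivative(c):
    # Formal derivative of a coefficient list.
    return [k * ck for k, ck in enumerate(c)][1:] or [Fraction(0)]

for x in range(-3, 4):
    # Forward difference: (Delta f)(x) = f(x + 1) - f(x).
    delta = evaluate(coeffs, x + 1) - evaluate(coeffs, x)
    # Taylor side: (e^D - 1) f = sum over k >= 1 of f^(k)(x) / k!,
    # which terminates because f is a polynomial.
    total, d = Fraction(0), coeffs
    for k in range(1, len(coeffs)):
        d = derivative(d)
        total += evaluate(d, x) / factorial(k)
    assert delta == total

print("Delta f == (e^D - 1) f for this polynomial")
```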

EL: Yeah, I was just thinking this episode reminded me a lot of our conversation with him, just linking those two things that, yeah, are in very different places in my mental map of how I think about math.

KK: All right. So what does one pair with these theorems?

JP: Okay, I'm going to pair the potato chip.

EL: Okay, great. I love potato chips.

KK: I do too.

JP: So I think potato chips sort of bridge the gap between continuous mathematics and discrete mathematics.

EL: Okay.

JP: So the potato chip as an icon of continuous mathematics comes by way of Stokes’ theorem.

KK: Sure.

JP: So if you’ve ever seen these books like Purcell’s Electromagnetism that sort of illustrate what Stokes’ theorem is telling you, you have a closed loop and a membrane spanning it,

EL: Right.

JP: …a little like a potato chip.

KK: Sure. Right.

JP: And the potato chip as an icon of discrete mathematics comes from the way it resembles mathematical induction.

KK: You can't eat just one.

JP: That’s right. You eat a potato chip, and then you eat another, and then another, and you keep saying, “This is the last one,” but there is no last potato chip.

EL: Yeah.

JP: And if there’s no last potato chip, you just keep eating them.

KK: That’s right. You need another bag. That's it.

EL: Yeah.

JP: But the other reason I really like the potato chip as sort of a unifying theme of mathematics, is that potato chips are crisp, in the way that mathematics as a whole is crisp. You know, people complain sometimes that math is dry. But that's not really what they're complaining about. Because people love potato chips, which are also dry. What they really mean is that it’s flavorless, that the way it's being taught to them lacks flavor.

KK: That’s valid, actually. Yeah.

JP: So I think what we need to do is, you know, when the math is too flavorless, sometimes we have to dip it into something.

EL: Yeah, get your onion dip.

JP: Yeah, the onion dip of applications, the salsa of biography, you know, but math itself should not be moist, you know?

EL: So, do you prefer like the plain—like, salted, obviously—potato chips, or do you like the flavors?

JP: Yeah, I don't like the flavors so much.

EL: Oh.

JP: I don’t like barbecue or anything like that. I just like salt.

KK: I like the salt and vinegar. That’s…

EL: Yeah, that's a good one. But Kettle Chips makes this salt and pepper flavor.

KK: Oh, yeah. I’ve had those. Those are good.

EL: It’s great. Their Honey Dijon is one of my favorites too. And I love barbecue. I love every—I love a lot of flavors of chips. I shouldn't say “every.”

KK: Well yeah, because Lay's always has this deal every year with the competition, like, with these crazy flavors. So, they had the chicken and waffles one year.

EL: Yeah, I think there was a cappuccino one time. I didn’t try that one.

KK: Yeah, no, that’s no good.

JP: I just realized, though, potato chips have even more mathematical content than I was thinking. Because there's the whole idea of negative curvature of surfaces.

EL: Yes, the Pringle is the ur-example of a negatively curved surface.

JP: Yeah. And also, there's this wacky idea of varifolds, limits of manifolds, where you have these corrugated surfaces and you make the corrugations get smaller and smaller, like, I think it’s Ruffles.

EL: Yeah, right.

JP: So a varifold is, like, the limit of a Ruffles potato chip as the Ruffles shrink, and the angles don't decrease to zero. There’s probably a whole curriculum.

EL: Yeah, we need a spinoff podcast. Make this—tell what this potato chip says about math.

KK: Right.

EL: Just give everyone a potato chip and go for it.

KK: Excellent.

EL: Very nice. I like this pairing a lot.

KK: Yeah.

EL: Even though it's now, like, 8:30-something, I'll probably go eat some potato chips before I have breakfast, or as breakfast.

JP: I want to thank you, Evelyn, because I know it wasn't your choice to do it this early in the morning. I had childcare duties, so thank you for your flexibility.

EL: I dug deep. Well, it was a sunny day today. So actually the light coming in helped wake me up. It's been really rainy this whole month, and that's not great for me getting out of bed before, you know, 10 or 11 in the morning.

KK: Sure. So we also like to give our guests a chance to plug things. You have some stuff coming up, right?

JP: I do. Well, there's always my Mathematical Enchantments essays. And I think my July essay, which will come out on the 17th, as always, will be about the constant value theorems. And I'll include links to stuff I have written on the subject. So anyone who wants to know more, should definitely go first to my blog. And then in early August, I'll be giving some talks in New York City. And they'll be about a theorem with some visual content called the Wall of Fire theorem, which I love and which was actually inspired by an exhibit at the museum. So it's going to be great actually to give a talk right next to the exhibit that inspired it.

EL: Oh, yeah, very nice.

KK: This is at the Museum of Math, the National Museum of Math, right? Okay.

JP: Yeah, I’ll actually give a bunch of talks. So the first talk is going to be, like, a short one, 20 minutes. That's part of a conference called MOVES, which stands for the Mathematics of Various Entertaining Subjects.

KK: Yeah.

JP: It’s held every two years at the museum, and I don't know if my talk will be on the fourth, fifth or sixth of August, but it'll be somewhere in that range. And then the second talk will be a bit longer, quite a bit longer. And it's for the general public. And I'll give it twice on August 7th, first at 4pm, and then at 7pm. And it'll feature a hands-on component for audience members. So it should be fun. And that's part of the museum's Math Encounters series, which is held every month. And for people who aren't able to come to the talk, there'll be a video on the Math Encounters website at some point.

EL: Oh, good. I've been meaning to check that because I'm on their email list, and so I get that, but obviously living in Salt Lake City, I don't end up in New York a whole lot. So yeah, I'm always like, “Oh, that would have been a nice one to go to.”

KK: Yeah.

EL: But I'll have to look for the videos.

KK: So, Jim, thanks for joining us.

JP: Thank you for having me.

KK: Thanks for making me confront that things go backwards in mathematics sometimes.

EL: Yes.

KK: Thanks again.

EL: Yeah, lots of fun.

JP: Thank you very much. Have a great day.

[outro]

In this episode of My Favorite Theorem, we were happy to talk with Jim Propp, a mathematician at the University of Massachusetts Lowell. He told us about the constant value theorem and the way it unites continuous and discrete mathematics.

Here are some links you might find interesting after listening to the episode:

Propp’s companion essay to this episode

Propp’s mathematical homepage

Propp’s blog Math Enchantments (home page, wordpress site)
His list of problems about enumeration of tilings that we mentioned
Our previous My Favorite Theorem episode with guest Robert Ghrist, who also talked about a link between continuous and discrete math
Propp’s article “Real Analysis in Reverse”
Mean Value Theorem
Rolle’s Theorem
Fermat’s Theorem
Varifold
MOVES (Mathematics of Various Entertaining Subjects), a conference he will be speaking at in August
Math Encounters, a series at the Museum of Mathematics (he will be speaking there in August)

Episode 43 - Matilde Lalin

Kevin Knudson: Welcome to My Favorite Theorem, a podcast about theorems and math and all kinds of things. I'm one of your hosts, Kevin Knudson, professor of mathematics at the University of Florida. Here is your other host.

Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City. How are you today?

KK: I have a sunburn.

EL: Yeah. Can’t sympathize.

KK: No, no, I was, you know, Ellen and I went out birdwatching on Saturday, and it didn't seem like it was sunny at all, and I didn't wear a hat. So I got my head a little sunburned. And then yesterday, she was doing a print festival down in St. Pete. And even though I thought we were in the shade—look at my arms. They're, like, totally red. I don’t know. This is what happens.

EL: You know, March in Florida, you really can’t get away without SPF.

KK: No, you really can't. You would think I would have learned this lesson after 10 years of living here, but it just doesn't work. So anyway. Yeah. How are you?

EL: Oh, I'm all right. Yeah. Not sunburned.

KK: Okay. Good for you. Yeah. I'm on spring break. So, you know, I'm feeling pretty good. I got some time to breathe at least. So anyway, enough about us. This is actually a podcast where we invite guests on instead of boring the world with our chit chat. Today, we're pleased to welcome Matilde Lalín, you want to introduce yourself?

Matilde Lalín: Hi. Okay. Thank you for having me here. So I'm originally from Argentina. I grew up in Buenos Aires, and I did my undergraduate there. And then I moved to the US to do my Ph.D., mostly at the University of Texas at Austin. And then I moved to Canada for postdocs, and I stayed in Canada. So right now, I'm a professor at the University of Montreal, and I work in number theory.

EL: And I'm guessing you do not have a sunburn, being in Montreal in March.

ML: So maybe I should say we are celebrating that we are very close to zero Celsius.

KK: Oh, okay.

EL: Yeah, so exciting times.

ML: Yeah. So some of the snow actually is melting.

KK: Oh, okay. I haven’t seen snow in quite a while. I kind of miss it sometimes. But anyway.

EL: Oh, it is very pretty.

KK: Yeah, it is. It’s lovely. Until you have to shovel it every week for six months. But yeah, so Matilde, what is your favorite theorem?

ML: Okay, so I wanted to talk about a problem more than theorem. Well, it will lead to some theorems eventually, and a conjecture. So my favorite problem, let's say, is the congruent number problem.

KK: Okay.

ML: So okay, so basically, a positive integer number is called congruent if it is the area of a right triangle with rational sides.

EL: All three sides, right?

ML: Exactly, exactly. So the question will be, you know, how can you tell that a particular number is congruent? But more generally, can you give a list of all congruent numbers? So for example, six is congruent, because it is the area of the right triangle with sides three, four, and five. So that's easy, but then seven is congruent because it’s the area of the triangle with sides 24/5, 35/12, and 337/60.

KK: Ah, okay.

EL: So that’s not quite as obvious.

ML: Not quite as obvious, exactly. And in fact, there is an example, due to Zagier: 157 is congruent, and the sides of the triangle are fractions. Okay, so the hypotenuse is a fraction whose numerator and denominator have 46 and 47 digits. And so it can be very big. Okay, let me clarify: for a congruent number, there are actually infinitely many triangles that satisfy this. But the example I'm giving you is the smallest, in a sense.

EL: Okay.

ML: So actually it can be very complicated, a priori, to decide whether a number is congruent or not.
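The examples above are easy to verify exactly with rational arithmetic. A quick Python check (the helper function name is mine):

```python
from fractions import Fraction as F

def is_rational_right_triangle_with_area(a, b, c, n):
    # Legs a and b, hypotenuse c, all rational; area must equal n.
    return a * a + b * b == c * c and a * b / 2 == n

# 6 is congruent: the (3, 4, 5) triangle has area 6.
assert is_rational_right_triangle_with_area(F(3), F(4), F(5), 6)

# 7 is congruent: legs 35/12 and 24/5, hypotenuse 337/60, area 7.
assert is_rational_right_triangle_with_area(F(35, 12), F(24, 5), F(337, 60), 7)

print("6 and 7 are congruent numbers")
```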

KK: Sure.

ML: So this problem appears for the first time in an Arab manuscript in the 10th century, and then it was—

EL: Oh, wow, that's shocking!

ML: Yes. Well, because triangles—I mean, it's a very natural question. But then it was picked up by Fibonacci, who was actually looking at this question from a different point of view. So he was studying arithmetic sequences. So he posed the question of whether you can have a three-term arithmetic sequence whose terms are all squares. So basically, let me give you an example. So 1-25-49. Okay? So those are three squares, and 25−1 is 24. And 49−25 is 24. So that makes it an arithmetic sequence. And each of the three members are squares.

EL: Yeah.

ML: And he said that the difference—so in this case it would be 24, okay? 25−1 is 24, 49−25 is 24—so the difference is called a congruum, if you can build a sequence with this difference, basically. So it turns out that this problem is essentially equivalent to the congruent number problem, so that's where the name, the word congruent, comes from. Fibonacci was calling this a congruum. So congruent has to do with things that sort of congregate.

EL: Okay.

ML: And so kind of this difference of the arithmetic sequence. And you can prove that from such a sequence you can build your triangle. So in the example I gave you, this is a sequence that shows that six is congruent. Well, technically it shows that 24 is congruent, but 24 is a square times six. And so if you have a triangle, you can always multiply the sides by a constant, and that would be equivalent to multiplying the area by some square.
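One standard construction makes this explicit: if a², b², c² is an arithmetic progression with common difference n (the congruum), then the triangle with legs c − a and c + a and hypotenuse 2b is a right triangle with area n, since (c−a)² + (c+a)² = 2(a² + c²) = 4b². A Python check on Fibonacci's example (a sketch, with the derivation in the comments):

```python
# Fibonacci's example: 1, 25, 49 are squares in arithmetic progression
# with common difference (congruum) 24.
a, b, c = 1, 5, 7            # square roots of 1, 25, 49
n = b * b - a * a            # the congruum, 24
assert c * c - b * b == n    # confirm the arithmetic progression

# Construction: legs c - a and c + a, hypotenuse 2b. Right triangle because
# (c-a)^2 + (c+a)^2 = 2(a^2 + c^2) = 4b^2 = (2b)^2, using a^2 + c^2 = 2b^2.
legs = (c - a, c + a)        # (6, 8)
hyp = 2 * b                  # 10
assert legs[0] ** 2 + legs[1] ** 2 == hyp ** 2
assert legs[0] * legs[1] // 2 == n   # area equals the congruum, 24

# 24 = 4 * 6, so scaling the (6, 8, 10) triangle by 1/2 gives (3, 4, 5),
# with area 6: the congruum 24 certifies that 6 is congruent.
print("congruum 24 yields the (6, 8, 10) triangle; 6 is congruent")
```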

KK: Sure, yeah,

EL: Right. Right, and so if it has a square in it, then there's a rational relationship that will still be preserved.

ML: Exactly. So Fibonacci actually managed to prove that seven is congruent. And then he posed as a question, as a conjecture, that one wasn't congruent. So when you say that one is not congruent, you are also saying that the squares are not congruent. The square of any rational number.

EL: Oh.

KK: Okay.

ML: It’s actually kind of a nicer statement, in a sense. It's like a very special case. And then, like 400 years after, Fermat came, and so he actually managed to solve Fibonacci’s question. So he actually proved, using his famous descent, he proved that one is not congruent. And also that two and three are not congruent. So basically, he settled the question for those. And five is known to be congruent, also six and seven. So well, that takes care of the first few numbers. Because four is one in this case.

KK: Four is one, that’s right.

ML: Yeah, exactly. And well, one thing that happens with this problem is that actually, if you go in the direction that Fibonacci was looking, okay, so this sequence of three squares, actually, you can think of them as—say you call the middle square x, and then one is x−n and the other is x+n. So when you multiply these three together, it gives you a square. And what this is telling you is that it's actually giving you a solution to an equation that you could write as, say, y² = x(x−n)(x+n). And that's what is called an elliptic curve.

EL: Oh, okay.

ML: Yes. So basically, an elliptic curve in this context is more general. You could think of it as y² equals a cubic polynomial in x. And so basically, the congruent number problem is asking whether, for such an equation, you have a solution such that y is different from 0. So then you can study the problem from that point of view. There is a lot, there is a big theory about elliptic curves.
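For n = 6, the curve is y² = x(x − 6)(x + 6) = x³ − 36x, and the classical dictionary sends a rational point with y ≠ 0 to a triangle of area n via a = (x² − n²)/y, b = 2nx/y, c = (x² + n²)/y. A Python check (a sketch; (12, 36) is one well-known point among infinitely many on this curve):

```python
from fractions import Fraction as F

n = 6
x, y = F(12), F(36)
# (12, 36) lies on y^2 = x(x - n)(x + n) = x^3 - 36x:
assert y * y == x * (x - n) * (x + n)

# Classical correspondence from a point with y != 0 to a triangle of area n:
a = (x * x - n * n) / y    # 108/36 = 3
b = 2 * n * x / y          # 144/36 = 4
c = (x * x + n * n) / y    # 180/36 = 5
assert a * a + b * b == c * c   # right triangle
assert a * b / 2 == n           # area is 6

print(a, b, c)  # recovers the (3, 4, 5) triangle
```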

EL: Right. And so I've been wondering, like, is this where people got the idea to bring elliptic curves into number theory? That's always seemed mysterious to me—like, when you first learn about Fermat’s Last Theorem, and you learn there's all this elliptic curve stuff involved in proving that, like, how do people think to bring elliptic curves in this way?

ML: As a matter of fact, okay, so elliptic curves in general, it’s actually a very natural object to study. So I don't know if it came exactly via the congruent number problem, because essentially—okay, so essentially, a natural problem more generally is Diophantine equations. So basically, I give you a polynomial with integer coefficients, and I am asking you about solutions that are either integers or rational. And we understand very well what happens when the degree is one, say an equation of a form ax+by=c. Okay? So those we understand completely. We actually understand very well what happens when the degree is two and actually, degree three is elliptic curves. So it's a very natural progression.

EL: Okay.

ML: So it doesn't necessarily have to come from congruent numbers. However, it is true that many people choose to introduce elliptic curves via congruent numbers, because it's such a natural question, such a natural problem. But of course, it leads you to a very specific family of elliptic curves. I mean, it's not the whole story. So what is known about elliptic curves that can help understand this question of the congruent number problem: So in 1922 Mordell actually proved that the solutions of an elliptic curve—actually, I should have said this before. So the solutions of an elliptic curve, say, over the rationals, so if you look at all the rational numbers that are solutions to an equation like that, y² equals some cubic polynomial in x, they form a group. And actually an abelian group.

And as I was saying, Mordell proved that this group actually is finitely generated. So you can actually give a finite list of elements in the group, and then every element in the group is a combination of those. Okay? So basically, it's very tempting to say, “Well, if you give me an elliptic curve, I want to find what the group is. So I just give the generators. This should be very easy, okay?” Yeah. [laughter] But actually, it's not easy. So there's no systematic way to find all the generators, to determine what the group is. And even—so, you may have points of finite order, elements that, if you take some multiple, get you back to 0. So those are easy to find. But the question of whether there are elements of infinite order, and if there are, how many there are, or how many generators you need, all these questions are difficult in general for an elliptic curve. And so, my favorite theorem, actually—so the way I ended up coming up with the idea of talking about the congruent number problem is actually Mordell’s theorem. So I really like Mordell’s theorem.

KK: And that theorem’s not at all obvious. I mean, so you sort of, I'm not sure if I’ve ever even seen a proof. I mean, I remember, this is one of the first things I learned in algebraic geometry, you draw the picture, you know, of the elliptic curve. And the group law, of course, is given by: take two points, draw the line, and where it intersects the curve in the third point is the sum of those things, right—actually, then you reflect, it’s minus that, right? Yeah. Those three points add to zero. That's right.

EL: We’ll put a picture of this up. Because Kevin's helpful air drawing is not obvious to our listeners.

ML: That’s right.

KK: Yeah. And from that, somehow, the idea that this is a finitely-generated group is really pretty remarkable. But the picture gives you no clue of where to find these generators, right?

ML: Well, the first issue, actually, is to prove that this is an associative law. So that statement is annoyingly complicated to prove in elementary ways.

KK: Yeah, commutativity is kind of obvious, right?

ML: But, yes, already to prove that it's a group in the sense that associativity, yeah. And then Mordell’s theorem, actually, it follows, it does some descent. So it follows in the spirit of Fermat’s descent, actually. But I mean, in a more complicated context. But it's very beautiful, yeah.

So as I was saying, the number of generators that have infinite order, that's called the rank, and already knowing whether the rank is zero, or what the value is, that's a very difficult question. And so in 1965, Birch and Swinnerton-Dyer came up with a conjecture that relates the rank to the order of vanishing of a certain function that you build from the elliptic curve. It’s called the L-function. So, in principle, with this conjecture, one can predict the value of the rank. That doesn't mean that we can find easily the generators, but at least we can answer, for example, whether there are infinitely many solutions or not and say that.

EL: Yeah.

ML: So basically, that's kind of the most exciting conjecture associated to this question. And I mean, it goes well beyond this question, and it's one of the Millennium Problems from the Clay.

EL: Right. Yeah. So it’s a high dollar-value question.

ML: Yes. And it's interesting, because for this question, it is known that if the L-function doesn’t vanish, then the rank is zero. So it's known for R[ank] zero, one direction, and the same for R[ank] 1. But not much more is known on average. So there's this very recent result, relatively recent result, by Bhargava and Shankar, where they prove that if you take all the elliptic curves and order them in a certain way, the rank on average is bounded by 7/6. And so that means that there is a positive proportion of elliptic curves that actually satisfy BSD. Okay. But I mean, the question would be what BSD tells us about the original question that I posed.

EL: Right, yeah, so when we were chatting earlier, you said that a lot of questions or theorems about congruent numbers were basically—the theorems were proved as partial solutions to BSD. Am I getting that right?

ML: Yeah, okay. Some progress that is being made nowadays has to do with proving BSD for some particular families, I mean for these elliptic curves attached to congruent numbers. But if I go back to the first connection, there is this famous theorem by Tunnell, published in 1983, where he basically ties the property of being a congruent number to two quadratic equations in three variables, where one has double the number of solutions of the other, somehow. So Tunnell’s result came, obviously, in ’83, so much earlier than most advances in BSD.

EL: Okay.

ML: And basically what Tunnell gives is like an algorithm to decide whether a number is congruent or not. And for the case where it’s non-congruent, actually it is conclusive, because this is a case, okay, so it depends on BSD, but this is a case where we know. And then the problem is the case where it will tell you that the number is congruent. So that is assuming BSD. So for now, like I said, many cases will just be the cases that—for example, there is some very recent result by Tian, where basically he proved that BSD applies to certain curves. And so for example, it is known that for primes congruent to five, six, or seven modulo eight, they are congruent. So this is a result that goes back to Heegner and Monsky in ’52, for Heegner. So that's for primes. So that's an infinite family of numbers that satisfy that they are congruent. But every question attached to this problem has to do with, okay, can you generalize this for all natural numbers that are congruent to six, five, or seven modulo eight. For example, that’s some direction of research going on now.
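Tunnell's criterion is concrete enough to test by brute force. One common statement, for odd squarefree n (the even case uses slightly different forms, and the converse direction assumes BSD), is that n congruent implies 2·#{(x,y,z) : 2x² + y² + 32z² = n} = #{(x,y,z) : 2x² + y² + 8z² = n}. A Python sketch:

```python
from itertools import product

def count(n, cx, cy, cz):
    # Number of integer triples (x, y, z) with cx*x^2 + cy*y^2 + cz*z^2 = n.
    bound = int(n ** 0.5) + 1  # |x|, |y|, |z| cannot exceed sqrt(n)
    return sum(1 for x, y, z in product(range(-bound, bound + 1), repeat=3)
               if cx * x * x + cy * y * y + cz * z * z == n)

def tunnell_consistent(n):
    # For odd squarefree n: n congruent implies
    # 2 * #{2x^2 + y^2 + 32z^2 = n} = #{2x^2 + y^2 + 8z^2 = n}.
    # The converse holds assuming BSD.
    return 2 * count(n, 2, 1, 32) == count(n, 2, 1, 8)

# 5 and 7 (congruent) pass the test; 1 and 3 (not congruent) fail it.
print([n for n in (1, 3, 5, 7) if tunnell_consistent(n)])  # [5, 7]
```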

EL: So you could disprove the BSD conjecture, if you could find some number that Turner’s [ed. note: Evelyn misremembered the name of the theorem; this should be “Tunnell’s”] theorem said was congruent, but was actually not congruent?

ML: Yeah, yeah. So you could disprove—say you find a number that is congruent to six mod eight that is not a congruent number, you disprove BSD, yes.
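[ed. note: Tunnell’s criterion is concrete enough to try on a computer. The sketch below is ours, not Matilde’s; it brute-forces the counts of lattice solutions to Tunnell’s two ternary quadratic forms. A failed count proves non-congruence unconditionally; a passing count implies congruence only if BSD holds.]

```python
def count_reps(m, a, b, c):
    """Count integer triples (x, y, z) with m == a*x^2 + b*y^2 + c*z^2."""
    bound = int(m ** 0.5) + 1  # |x|, |y|, |z| cannot exceed sqrt(m)
    return sum(
        1
        for x in range(-bound, bound + 1)
        for y in range(-bound, bound + 1)
        for z in range(-bound, bound + 1)
        if a * x * x + b * y * y + c * z * z == m
    )

def tunnell_criterion(n):
    """Tunnell's criterion for a squarefree positive integer n.

    False means n is provably NOT a congruent number (unconditional);
    True means n is a congruent number, assuming BSD.
    """
    if n % 2 == 1:
        return 2 * count_reps(n, 2, 1, 32) == count_reps(n, 2, 1, 8)
    m = n // 2
    return 2 * count_reps(m, 4, 1, 32) == count_reps(m, 4, 1, 8)
```

For instance, the criterion rules out 1, 2, and 3 and accepts 5, 6, and 7, matching the classical facts (6 is the area of the 3-4-5 triangle).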

EL: All right. Yeah, so our listeners, I'm sure they'll go—I’m sure no one has ever searched a lot of numbers to check on this. So yeah, that's our assignment for you. So something we like to do on this show then is to ask our guest to pair their theorem with something. So what do you think enhances enjoyment of congruent numbers, the congruent number problem, the Birch and Swinnerton-Dyer Conjecture, all of these things?

ML: Well, for me it is really about how I pair my mathematics with things, right? And so I would pair it with chocolate, because I am a machine for transforming chocolate into theorems, instead of coffee. I will also pair it with mate, which is an infusion from South America. That's my source of caffeine instead of coffee. So it's a very interesting drink that we drink a lot in Argentina, but especially in Uruguay.

KK: Do you have the special straw with the filter and everything?

ML: Yeah, yeah, I have the metal straw. So you put the metal straw and just the loose leaves in your special cup, and you drink from the straw, which filters the leaves. So yeah, that's right. And you share it with friends. So it's a very collaborative thing, like mathematics.

EL: So I’ve never tried this. What does it taste like? I mean, I know it's hard to describe tastes that you've never actually tasted before. But does it taste kind of like tea, kind of like coffee, kind of like something else entirely?

ML: I would say it tastes like tea. You could think of it as a special tea.

KK: Okay.

EL: There’s a coffee shop near us that has that. But I haven't tried it yet.

KK: Oh, come on. Give it a shot, Evelyn.

EL: Yeah, I will.

KK: You have to report back in a future episode. Actually, I'm going to hold you to it the next time we meet. Before then, have some mate. Do you have a chocolate preference? Are you a dark chocolate, milk chocolate?

ML: Milk chocolate, I would say. I'm not super gourmet with chocolate. But I do have my favorite place in Montreal to go drink a good cup of hot chocolate.

KK: All right, I've learned a lot.

EL: Yeah.

KK: This is very informative. In fact, while you were describing the congruent number problem, I was sort of sitting here sketching out equations that I might try to actually solve. Of course, it wasn't elliptic curves, it was sort of the naive things that you might try. But this is a fascinating problem. And I could see how you could get hooked.

EL: Yeah, well, it does seem like it just has all these different branches. And all these weird dependencies where you can follow these lines around.

KK: I mean, the best mathematics is like that, right? I mean, it's sort of kind of simple, it’s a simple question to ask, you could explain this to a kid. And then the mathematics is so deep, it goes in so many directions. Yeah, it's really, really interesting.

EL: Yeah. Thanks a lot. Are there any places people can find you online? Your website, other things you'd like to share?

ML: Well, yeah, my website. Shall I say the address?

EL: We can just put a link to that.

ML: Yeah, definitely my website. I actually will be giving a talk in the math club at my university on congruent numbers in a couple of weeks. So I’m going to try to post the slides online, but they are going to be in French.

EL: Okay, well, that'll be good. Our Francophone listeners can check that out.

ML: I really like some notes that Keith Conrad wrote. And actually, I have to say, he has a bunch of expository papers in different areas that I always find super useful for, you know, going a little bit beyond my classes. And so in general, I recommend his website for that, and in particular the notes on the congruent number problem, if you're more interested. And then of course, there are some books that discuss congruent numbers and elliptic curves. So for example, a classic reference is Koblitz’s book, I guess it’s called [Introduction to] Elliptic Curves and Modular Forms.

EL: Oh, yeah, I actually have that book. Because as a grad student, my second or third year, I, for some reason—I was not interested in number theory at all, but I think I liked this professor, so I took this class. So I have this book. And I remember, I just felt like I was swimming in that class.

KK: I have this book too, sitting on my shelf.

EL: The one number theory book two topologists have.

ML: So for me, I got this book before knowing I was going to be a number theorist.

EL: Yeah. No, but it is a nice book. But yeah, well, we'll link to those. We’ll make sure to get those all in the show notes so people can find them easily.

ML: Yeah.

EL: Well, thanks so much for joining me.

KK: Yeah.

EL: Us. Sorry, Kevin!

ML: Thank you for having us—for having me, now I’m confused! Thanks a lot. It's such a pleasure to be here.

EL: Yeah.

KK: Thanks.

On this episode, we were excited to welcome Matilde Lalín, a math professor at the University of Montreal. She talked about the congruent number problem. A congruent number is a positive integer that is the area of a right triangle with rational side lengths.

Our discussion took us from integers to elliptic curves, which are defined by equations of the form y² = x³ + ax + b. As we mention in the episode, solutions to equations of this form satisfy what is known as a group law. That is a fancy way of saying there is a way to “add” two points on the curve to get another point. The diagram Kevin mentioned is here:
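[ed. note: the group law can also be computed exactly with rational arithmetic. The sketch below is our illustration, not from the episode, using the curve y² = x³ − n²x attached to a congruent number n; the formulas are the standard chord-tangent ones, and `triangle` is the classical map from a curve point with y ≠ 0 to a rational right triangle of area n.]

```python
from fractions import Fraction as F

def ec_add(P, Q, n):
    """Chord-tangent addition on y^2 = x^3 - n^2*x.
    Points are (x, y) pairs of Fractions; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and y1 == -y2:
        return None  # P + (-P) = identity
    if P == Q:
        lam = (3 * x1 ** 2 - n ** 2) / (2 * y1)  # tangent slope
    else:
        lam = (y2 - y1) / (x2 - x1)  # chord slope
    x3 = lam ** 2 - x1 - x2
    y3 = lam * (x1 - x3) - y1
    return (x3, y3)

def triangle(P, n):
    """Map a point with y != 0 to the sides of a rational right triangle of area n."""
    x, y = P
    return (abs((x ** 2 - n ** 2) / y), abs(2 * n * x / y), abs((x ** 2 + n ** 2) / y))
```

Starting from the point (12, 36) on y² = x³ − 36x, `triangle` recovers the 3-4-5 triangle of area 6, and doubling the point with `ec_add` yields a second, less obvious rational triangle with the same area.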

Episode 42 - Moon Duchin

Kevin Knudson: Welcome to My Favorite Theorem, a podcast about math and…I don't even know what it's going to be today. We'll find out. I'm one of your hosts, Kevin Knudson, professor of mathematics at the University of Florida. Here is your other host.

Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. So yeah, how are you today?

KK: I’ve had a busy week. Both of my PhD students defended on Monday.

EL: Wow, congratulations.

KK: Yeah, and so, through some weird quirk of my career, these are my first two PhD students. And it was a nice time, slightly nerve-wracking here and there. But everybody went through, everything's good.

EL: Great.

KK: So we have two new professors out there. Well, one guy's going to go into industry. But yeah, how about you?

EL: I’m actually—once we're done with this, I need to go pack for a trip I'm leaving on today. I'm teaching a math writing workshop at Ohio State.

KK: Right. I saw that, yeah.

EL: I mean, if it goes well, then we'll leave this part in the thing. And if it doesn't, no one else will know. But yeah, I'm looking forward to it.

KK: Good. Well, Ellen and I are going to Seattle this weekend.

EL: Fun.

KK: She got invited to be on a panel at the Bainbridge Island Art Museum. And I thought, “I'm going along,” because I like Seattle.

EL: Yeah, it's beautiful there.

KK: Love it there. Anyway, enough about us. Today, we are pleased to welcome Moon Duchin to the show. Moon, why don’t you introduce yourself?

Moon Duchin: Hi, I'm Moon. I am a mathematician at Tufts University, where I'm also affiliated with the College of Civic Life. It’s a cool thing Tufts has that not everybody has. And in math, my specialties are geometric group theory, topology (especially low-dimensional topology), and dynamics.

KK: Very cool.

EL: Yeah, and so how does this civil life thing work?

MD: Civic life, yeah.

EL: Sorry.

MD: So that's sort of because in the last couple years, I've become really interested in politics and in applications—I think of it as applications of math to civil rights. So that's that's sort of mathematics engaging with civics, it’s kind of how we do government. So that's become a pretty strong locus of my energy in the last couple years.

EL: Yeah.

KK: And I'll vouch for Moon's work here. I mean, I've gone to a couple of the workshops that she's put together. Big one at Tufts in 2017, I guess it was, and then last December, this meeting at Radcliffe. Really cool stuff. Really important work. And I've gotten interested in it too. And let's hope we can begin to turn some tides here. But anyway, enough about that. So, Moon, what's your favorite theorem?

MD: Alright, so I want to tell you about what I think is a really beautiful theorem that is known to some as Gromov’s gap.

KK: Okay.

EL: Which also sounds like it could be the name of a mountain pass in the Urals or something.

MD: I was thinking it sounds like it could be, you know, in there with the Mines of Moria in Middle Earth.

EL: Oh, yeah, that too.

MD: Just make sure you toss the dwarf across the gap. Right, it does sound like that. But of course, it's Misha Gromov, who is the very prolific Russian-French mathematician who works in all kinds of geometry, differential geometry, groups, and so on.

So what the theorem is about is, what kinds of shapes can you see in groups? So let me set that up a little with—you know, let me set the stage, and then I'll tell you the result.

EL: Okay.

MD: So here's the setting. Suppose you want to understand—the central objects in geometric group theory are, well, groups. So what are groups? Of course, those are sets where you can do an operation. So you can think of that as addition, or multiplication, it's just some sort of composition that tells you how to put elements together to get another element. And geometric group theory is the idea that you can get a handle on the way groups work—they’re algebraic objects, but you can study them in terms of shape, geometrically. So there are two basic ways to do that. Either you can look at those spaces that they act on, in other words, spaces where that group tells you how to move around. Or you can look at the group itself as a network, and then try to understand the shape of that network. So let's stick with that second point of view for a moment. So that says, you know, the group has lots of elements and instructions for how to put things together to move around. So I like to think of the network—a really good way to wrap your mind around that is to think about chess pieces. So if I have a chessboard, and I pick a piece—maybe I pick the queen, maybe I pick the knight—there are instructions for how it can move. And then imagine the network where you connect two squares if your piece can get between them in one step. Right?

KK: Okay.

MD: So, of course that's going to make a different network for a knight than it would for a queen and so on, right?

EL: Yeah.

MD: Okay. So that's how to visualize a group, especially an infinite—that works particularly well for infinite groups. That's how to visualize a group as a bunch of points and a bunch of edges. So it's some big graph or network. And then GGT, geometric group theory, says, “What's the shape of that network?” Especially when you view it from a distance, does it look flat? Does it look negatively curved, like a saddle surface? Or does it kind of curve around on itself like a sphere? You know, what's the shape of the group?
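[ed. note: Moon’s chessboard picture is easy to play with. The sketch below, ours rather than hers, builds the knight’s-move network on an 8×8 board and uses breadth-first search to measure distances in it; swapping in the queen’s moves would produce a very different-looking network on the same squares, which is exactly the point about how the generating set shapes the geometry.]

```python
from collections import deque

# The eight knight moves (row offset, column offset).
MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

def knight_graph(size=8):
    """Network whose nodes are board squares, with an edge whenever
    a knight can move between two squares in one step."""
    graph = {}
    for r in range(size):
        for c in range(size):
            graph[(r, c)] = [
                (r + dr, c + dc)
                for dr, dc in MOVES
                if 0 <= r + dr < size and 0 <= c + dc < size
            ]
    return graph

def distances(graph, start):
    """Breadth-first search: number of knight moves from start to each square."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist
```

On the standard board the knight’s network is connected, and a corner-to-corner trip takes six moves.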

And actually, just a cool observation from, you know, a hundred plus years ago, an observation of Felix Klein is that actually the two points of view—the spaces that the group acts on or the group itself—those really are telling the same story. So the shape of the space is about the same as the shape of the group. That's become codified as kind of a fundamental observation in GGT. Okay, great. So that's the space I want to think about. What is the shape—what are the possible shapes of groups? Okay, and that's where Gromov kicks in. So the theorem is about the relationship of area to perimeter. And here's what I mean by that.

Form a loop in your space, in your network. And here, a loop just means you start at a point, you wander around with a path, and you end up back where you started. Okay? And then look at the efficient ways to fill that in with area. So visualize that, like, first you have an outline, and then you try to fill it in with maybe some sort of potato chip-y surface that kind of interpolates around that boundary. Okay, so the question is, if you look at shapes that have the extremal relationship of area to perimeter, then what is that relationship of area to perimeter?

So let's do that in Euclidean space first, because it's really familiar. So we know that the extremal shapes there are circles, and you fill those in with discs. And the relationship is that area looks about like perimeter squared, right?

KK: Right.

MD: Okay, great. So now here's the theorem, then. Get ready for it. I love this theorem. In groups, you can find groups where area looks like perimeter to the Kth power. It can look like perimeter to the 1, it can look like perimeter to the 2, or 3, or 4, and so on. You can build designer groups with any of those exponents. But furthermore, you can also get rational exponents. You can get pretty much any rational exponent you want. You can get 113/5, you can get, you know, 33/10. Pick your favorite exponent, and you can do it.

EL: Can you get less than one?

MD: Well, let's come back to that.

EL: Okay. Sorry.

MD: So let me state Gromov’s theorem in this level of generality. So here's the theorem. You can get pretty much any exponent that you want, as long as it's not between 1 and 2.

KK: Wow.

EL: Oh.

MD: Isn’t that cool? That's Gromov’s gap.

KK: Okay.

EL: Okay.

MD: So there's this wasteland between 1 and 2 that's unachievable.

KK: Wow.

MD: Yeah. And then you can, see—past 2, you can see anything. Um, it actually turns out, it's not just rationals. You can see lots of other kinds of algebraic numbers too.

KK: Sure.

MD: And the closure up there is everything from two to infinity! But nothing between one and two. It's a gap.

EL: Oh, wow. That's so cool!

MD: That’s neat, right? Evelyn, to answer your question, under one turns out not to really be well-defined, for reasons we could talk about. But yeah.

KK: This is remarkable. This sounds like something Gromov would prove, right? I mean, just these weird theorems out of nowhere. I mean—how could that be true? And then there it is. Yeah.

MD: Or that Gromov would state and leave other people to prove.

KK: That—yeah, that's really more accurate. Yeah. So. Okay, so you can't get area to the—I mean, perimeter to the 3/2. I mean, that's, that's really…Okay. Is there any intuition for why you can't get things between one and two?

MD: Yeah, there kind of is, and it's beautiful. It is that the stuff that sits at the exponent 1, in other words, where area is proportional to the perimeter, is just really qualitatively different from everything else. Hence the gulf. And what is that stuff? That is hyperbolic groups. So this comes back to Evelyn's wheelhouse, I believe.

EL: It’s been a while since I thought in a research way about this, but yes, vaguely at the distance of my memory.

MD: Let me refresh your memory. Yes, so negatively curved things, things that are saddle-shaped, those are the ones where area is proportional to perimeter. And everything else is just in a different regime. And that's really what this theorem is telling you.

So that's one beautiful point of view, and kind of intuition, that there's this qualitative difference happening there. But there's something—there’s so many things I love about this theorem. It's just the gateway to lots of beautiful math. But one of the things I love about this theorem is that it fails in higher dimensions, which is really neat. So if you, instead of filling a loop with area, if you were to fill a shell with volume, there would be no gap.

EL: Oh.

MD: Cool, right?

EL: So this is, like, the right way to measure it if you want to find this difference in how these groups behave.

MD: Absolutely. And, you know, another way to say it is, this is an alternative definition of hyperbolic group from the usual one. It's like, the right way to pick out these special groups from everything else is specifically to look at filling loops.

KK: Right. And I might be wrong here, but aren't most groups hyperbolic? Is that?

MD: Yeah, so that's definitely the kind of religious philosophy that’s espoused. But you know, to talk about most groups usually the way people do that is they talk about random constructions of groups. And a lot of that is pretty sensitive to the way you set up what random means. But yeah, that's definitely the, kind of, slogan that you hear a lot in geometric group theory, is that hyperbolic groups are special, but they're also generic.

EL: Yeah.

KK: So are there explicit constructions of groups with say, exponent 33/10, to pick an example?

MD: Yes, there are. Yeah. And actually, if you're going to end up writing this up, I can send you some links to beautiful papers.

EL: Yeah, yeah. But there’s, like, a recipe, kind of, where you're like, “Oh, I like this exponent. I can cook up this group.”

MD: Yeah. And that's why I kind of call them designer groups.

EL: Right, right. Yeah. Your bespoke groups here.

MD: Yeah, there are constructions that do these things.

KK: That’s remarkable. So I was going to guess that your favorite theorem was the isoperimetric inequality. But I guess this kind of is, right?

MD: I mean, exactly. Right? So the isoperimetric inequality is all about asking, what is the extremal relationship of area to perimeter? And so this is exactly that, but it's in the setting of groups.

KK: Yeah, yeah.

EL: So how did you first come across this theorem?

MD: Well, I guess, in—when you're in the areas of geometric topology, geometric group theory, there's this one book that we sometimes call the Bible—here I'm leaning on this religious metaphor again—which is this great book by Bridson and Haefliger called Metric Spaces of Non-Positive Curvature. And it really does feel like a Bible. It's this fat volume, you always want it around, you flip to the stuff you need, you don't really read it cover to cover.

KK: Just like the Bible. Yeah.

MD: Exactly. Great. And that's certainly where I first saw it proved. But, yeah, I mean, the ideas that circulate around this theorem are really the fundamental ideas in GGT.

KK: Okay, great. Does this come up in your own work a lot? Do you use this for things you do? Or is this just like, something that you love, you know, for its own sake?

MD: Yeah, no, it does come up in my own work in a couple of ways. But one is I got interested in the relationship between curvature—curvature in the various senses that come from classical geometry—I got interested in the relationship between that and other notions of shape in networks. So this theorem takes you right there. And so for instance, I have a paper with Lelièvre and Mooney, where we look at something really similar, which we call sprawl. It's how spread out you get when you start at a point and you look at all the different positions you can get to within a certain distance. So you look at a kind of ball around the point. And then you ask how far apart are the points in that ball from each other? So that's actually a pretty fun question. And it turns out, here's another one of these theorems where hyperbolic stuff, there's just a gap between that and everything else.

KK: Right.

MD: So let’s follow that through for a minute. So suppose you start at a base point, and you take the ball of radius R around that base point. And then you ask, “How far apart are the points in that ball from each other?” Well, of course, by the triangle inequality, the farthest apart they could possibly be is 2R, because you can connect them through the center to each other, right, 2R. Okay, so then you could ask, “Hm, I wonder if there's a space that’s so sprawling, so spread out, so much like, you know, Houston, right, so sprawling that the average distance is actually the maximum?”

EL: Yeah.

MD: Right. What if the average distance between two points is actually equal to 2R?

And that’s, so that's something that we proved. We proved that when you're negatively curved, and you have, you know, a few other mild conditions, basically—but certainly true for negatively curved groups, just like the setting of Gromov’s theorem—so for negatively curved groups, the average is the maximum. You’re as sprawling as you can be. Yeah, isn't that neat? So that's very much in the vein of this kind of result.

KK: Oh, that's very cool. All right.

EL: Yeah. Kind of like the SNCF metric, also, where you have to go to Paris to go anywhere else. Slightly different, but still, basically you have to go in to the center to get to the other side.

MD: It’s exactly the same collection of ideas. And I'm just back from Europe, where I can attest that it's really true. You want to get from point to point on the periphery of France, you’d better be going through Paris if you want to do it fast. But yeah, it’s precisely the same idea, right? So the average distance between points on the periphery of France will be: get to Paris and get to the other point. So there's a max there that's also realized.

KK: All right, so France is hyperbolic.

MD: France is hyperbolic. Yup, in terms of travel time.

EL: Very appropriate. It’s such a great country. Why wouldn't it be hyperbolic?

KK: All right, so the other fun thing on this podcast is we ask our guests to pair their theorem with something. So what pairs well with Gromov’s gap theorem?

MD: So I'm actually going to claim that it pairs beautifully with politics. Right? True to form, true to form.

EL: Okay.

KK: Right, yeah, sure.

MD: All right, so let me try and make that connection. So, well, I got really interested in the last few years in gerrymandering in voting districts. And classically, one of the ways that we know that a district is problematic is exactly this same way, that it's built very inefficiently. It has too much perimeter, it has too much boundary, a long, wiggly, winding boundary without enclosing very much area. That's been a longstanding measurement of kind of the fairness or the reasonableness of a district. So I got interested in that through this kind of network curvature stuff, with the idea that maybe the problem is in the relationship between area and perimeter.

And so what does that make you want to do, if you're me? It makes you want to take a state and look at it as a network. And you can do that with census data. You sort of take the little chunks of census geography and connect the ones that are next to each other and presto! You have a network. And it's a pretty big network, but it's finite.

KK: Yeah.

MD: So Pennsylvania's got about 9000 precincts. So you can make a graph out of that. But it's got a whole lot more census blocks. Virginia—we were just looking at Virginia recently—300,000 census blocks. So that's a pretty big network, but you know, still super duper finite, right?

EL: Yeah.

MD: And so you can sort of ask the same question, what's the shape of that network? And does that—you know, maybe the idea is, if the network itself, which is neutral, no one's doing any gerrymandering, that's just where the people are.

EL: Yeah.

MD: If the network itself is negatively curved, in some sense, then maybe that explains large perimeters for a reason that isn't due to political malfeasance, you know?

EL: Right.

MD: So I think this is a way of thinking about shape and possibility that lends itself to lots of problems. But I like to pair everything with politics these days.

EL: Yeah, well, I really think—so I went to your talk at the Joint Math Meetings a couple of years ago, and I know you've probably given similar talks about gerrymandering. I think it's really important for people not to take too simplistic a view and just say, “Oh, here's a weird shape.” And you did a really great job of showing that there are sometimes good reasons for weird shapes. Obvious things like there's a river here, and people end up grouping like this around the river for this reason. But there are a lot of different reasons for this. And if we want to talk about this in a way that can actually be productive, we have to be very nuanced about it and understand all of those subtleties. We can't just divorce the math from the world; we have to mix it with, you know, the underlying civil rights and politics, historic inequalities in different groups, and things like that.

MD: Yeah, absolutely. That's definitely the point of view that I've been preaching, to stick with the religious metaphor. It’s the one that says, if you want to understand what should be out of bounds, because it's unreasonable when it comes to redistricting, first you have to understand the lay of the land. You have to spec out the landscape of what's possible. And like you're saying, you know, that landscape can have lots of built-in structure that districting has to respect. So, yeah, you should really—that could be physical geography, like you mentioned rivers—but it could also be human geography. People distribute themselves in very particular ways. And districting isn't done with respect to, like, imaginary people, it’s done with respect to the real, actual people and where they live.

EL: Yeah.

MD: And that's why I really, you know, I think more and more that some of those same tools that we use to study the networks of infinite groups, we can bring those to bear to study the large finite networks of people and how they live and how we want to divide them up.

EL: Yeah, that's, that's a nice pairing, maybe one of the weightier pairings we’ve had.

KK: Yeah, right.

MD: It was either that or a poem. I was thinking, Gromov’s gap, maybe I could pair that with The Waste Land.

EL: Oh.

MD: Because you can’t get in the wasteland between exponents one and two. Nah, let's go politics.

EL: Well, I've tried to read that poem a few times, and I always feel like I need someone to hold my hand and, like, point everything out to me. It's like, I know there's something there but I haven't quite grabbed on to it yet.

MD: Yeah. Poetry is like math, better with a tour guide.

EL: Yes.

KK: Well, we also like to give our guests a chance to plug things. You want to tell everyone where they can find you online, and maybe about the MGGG?

MD: Sure, yeah, absolutely. So I co-lead a working group called MGGG, the Metric Geometry and Gerrymandering Group, together with a computer scientist named Justin Solomon, who's over at MIT. You can visit us online at mggg.org, where we have lots of cool things to look at, such as the brief we filed with the Supreme Court a couple weeks ago, which just yesterday was actually quoted in oral argument, which was pretty exciting, if quoted in a surprising way.

You can also find cool software tools that we've been developing. Our tool Districtr lets you draw your own districts and kind of see—try your own hand at either gerrymandering or fair districting, and it gives you a sense of how hard that is. We think it's one of the more user-friendly districting tools out there. There are lots of different research links and software tools and resources on our site. So that'd be fun, if people want to check that out and give us feedback.

Other things I want to mention: Oh, I guess I'm going to do the 538 Politics podcast tomorrow, talking about this new Supreme Court case.

EL: Nice.

MD: Yeah. So I think that'll be fun. Those are some smart folks over there who’ve thought a lot about some of the different ways of measuring gerrymandering, so I think that'll be a pretty high-level conversation.

KK: Yeah, I'm sure. They turn around real fast. Unlike us, this will be months from now.

MD: Right. I see. Okay, cool. Yeah, by the time this comes out, maybe we'll maybe we'll have yet another Supreme Court decision on gerrymandering that will…

KK: Yeah, fingers crossed.

MD: We’ll all be handling the fallout from.

EL: Yeah.

KK: All right. Well, this has been great fun, Moon. Thanks for joining us.

MD: Oh, it's a pleasure.

[outro]

Episode 41 - Suresh Venkatasubramanian

Evelyn Lamb: Welcome to My Favorite Theorem, a podcast about theorems where we ask mathematicians to tell us about theorems. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. This is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson. I’m a professor of mathematics at the University of Florida. It seems like I just saw you yesterday.

EL: Yeah, but I looked a little different yesterday.

KK: You did!

EL: In between when I talked with you and this morning, I dyed my hair a new color, so I'm trying out bright Crayola crayon yellow right now.

KK: It looks good. Looks good.

EL: Yeah, it's been fun in this, I don't know, like 18 hours I've had it so far.

KK: Well, you know what, it's sort of dreary winter, right? You feel the need to do something to snap you out of it. Although it's sunny in Salt Lake, right? It’s just cold? No big deal.

EL: Yeah, we’ve had some sun. We’ve had a lot of snow recently, as our guest knows, because our guest today also lives in Salt Lake City. So I'm very happy to introduce Suresh Venkatasubramanian. So Hi. Can you tell us a little bit about yourself?

Suresh Venkatasubramanian: Hey, thanks for having me. First of all, I should say if it weren't for your podcast, my dishes would never get done. I put the podcast on, I start doing the dishes and life is good.

EL: Glad to be of service.

SV: So I'm in the computer science department here. I have many names. It depends on who's asking and when. Sometimes I'm a computational geometer, sometimes a data miner. Occasionally a machine learner, though people raise their eyebrows at that. But the two names I like the most right now are bias detective and computational philosopher.

EL: Yeah, yeah. Because you've been doing a lot of work on algorithmic bias. Do you want to talk about that a little bit?

SV: Sure. So one of the things that we're dealing with now, as machine learning and associated tools go out into the world, is that they're being used for not just, you know, predicting what podcast you should listen to, or what music you should listen to, but they're also being used to decide whether you get a job somewhere, whether you get admission to college, whether you get surveilled by the police, what kind of sentences you might get, whether you get a loan. All these are things where machine learning is now being used, just because we have lots of data to collect and seemingly make better decisions. But along with that comes a lot of challenges, because what we're finding is that a lot of the human bias in the way we make decisions is being transferred to machine bias. And that causes all kinds of problems, both because of the speed at which these decisions get made and the relative obscurity of automated decision making. So trying to piece together, piece out, what's going on, how this changes the way we think about the world, the way we think about knowledge, about society, has been taking up most of my time of late.

EL: Yeah, and you've got some interesting papers that you've worked on, right, on how people who design algorithms can help combat some of these biases that can creep in.

SV: Yeah, so there are many, many levels of questions, right? One basic question is how do you even detect whether there’s—so first of all, I mean, I think as a mathematical question, how do you even define what it means for something to be biased? What does that word even mean? These are all loaded terms. And, you know, once you come up with a bunch of different definitions for maybe what is a relatively similar concept, how do they interact? How do you build systems that can sort of avoid these kinds of bias the way you've defined it? And what are the consequences of building such systems? What kind of feedback loops do you have in the systems that you use? There’s a whole host of questions, from the mathematical to the social to the philosophical. So it's very exciting. But it also means every day, I feel even dumber than when I started the day, so…

EL: Yeah.

KK: So I think the real challenge here is that, you know, people who aren't particularly mathematically inclined just assume that because a computer spit out an answer, it must be valid, it must be correct. And that, in some sense, you know, it's cold, that the machine made this decision, and therefore, it must be right. How do you think we can overcome that idea that, you know, actually, bias can be built into algorithms?

SV: So this is the “math is not racist” argument, basically.

KK: Right.

SV: That comes up time and time again. And yeah, one thing that I think is encouraging is that we've moved relatively quickly, in a span of, say, three to four years, from “math is not racist” to “Well, duh, of course, algorithms are biased.”

EL: Yeah.

SV: So I guess that's a good thing. But I think the problem is that there are a lot of commercial and other incentives bound up with the idea that automated systems are more objective. And like most things, there's a kernel of truth to it, in the sense that you can avoid certain kinds of obvious biases by automating decision making. But then the problem is you can introduce others, and you can also amplify them. So it's tricky, I think. You're right, it's hard getting away from that notion when, commercially, there's more incentive to argue the other way. But also saying, “Look, it's not all bad, you just need more nuance.” You know, arguments for more nuance tend not to go as well as, you know, “Here's exactly how things work,” so it's hard.

KK: Everything’s black or white, we know this, right?

SV: With a 50% probability everything is either true or not, right.

EL: So we invited you on to hear about your favorite theorem. And what is that?

SV: So it's technically an inequality. But I know that in the past, you've allowed this sort of deviation from the rules, so I'm going to propose it anyway.

EL: Yes, we are very flexible.

SV: Okay, good good good. So the inequality is called Fano’s inequality after Robert Fano, and it comes from information theory. And it’s one of those things where, you know, the more I talk about it, the more excited I get about it. I'm not even close to being an expert on the ins and outs of this inequality. But I just love it so much. So I need to tell you about that. So like all good stories, right, this starts with pigeons.

EL: Of course.

SV: Everything starts with pigeons. Yes. So, you may have heard of the pigeonhole principle.

KK: Sure.

SV: Ok. So the pigeonhole principle, for those in the audience who may not have heard of it, says, basically: if you have ten pigeons and you have nine pigeonholes, someone's going to get a roommate, right? It’s one of the most obvious statements one can make, but also one of the more powerful ones, because if you unpack it a little bit, it's not telling you where to find the pigeonhole with two pigeons, it's merely saying that you are guaranteed that this thing must exist, right? It's a sort of an existence proof that can be stated very simply.

But the pigeonhole principle, which is used in many, many parts of computer science to prove that, you know, some things are impossible or not, can be used to prove another thing. So we all know that if, okay, I need to store a set of numbers, and if the numbers range from 1 to n, then I need something like log(n) bits to store them. Well, why do I need log(n) bits? One way to think about this is that this is the pigeonhole principle in action, because you're saying, I have n things. If I have log(n) bits to describe a hole, like an address, then there are 2^log(n) different holes, which is n. And if I don't have log(n) bits, then I don't have n holes, and therefore by the pigeonhole principle, two of my things will get the same name. And that's not a good thing. I want to be able to name everything differently.

So immediately, you get the simple statement that if you want to store n things you need log(n) bits, and of course, you know, n could be whatever you want it to be, which means that now—in theoretical computer science, you want to prove lower bounds, you want to say that something is not possible, or some algorithm must take a certain amount of time, or you must need to store so many bits of information to do something. These are typically very hard things to prove because you have to reason about any possible imaginary way of storing something or doing something, and that's very hard. But with things like the pigeonhole principle and the log(n) bit idea, you can do surprisingly many things by saying, “Look, I have to store this many things, I'm going to need at least log of that many bits no matter what you do.” And that's great.
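
A minimal sketch of this counting argument in Python (the function name is my own, purely illustrative):

```python
import math

def bits_needed(n):
    """Minimum number of bits needed to give each of n items a distinct
    address: with b bits there are 2**b addresses, so by the pigeonhole
    principle we need the smallest b with 2**b >= n, i.e. ceil(log2(n))."""
    return math.ceil(math.log2(n))

# 1000 items need 10 bits: 2**10 = 1024 >= 1000, while 2**9 = 512 < 1000,
# so with only 9 bits two items would share an address.
print(bits_needed(1000))  # prints 10
```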

KK: So that's the inequality.

EL: No, not yet.

KK: Okay, all right.

SV: I was stopping because I thought Evelyn was going to say something. I'm building up. It's like, you know, a suspense story here.

KK: Okay, good.

EL: Yes.

KK: Chapter two.

SV: So if you now unpack this log(n) bit thing, what it's really saying is that I can encode elements and numbers in such a way, using a certain number of bits, so that I can decode them perfectly because there aren't two pigeons living in the same pigeonhole. There's no ambiguity. Once I have a pigeonhole, who lives there is very clear, right? It's a perfect decoding. So now you've gone from just talking about storage to talking about an encoding process and a decoding process. And this is where Fano’s inequality comes into play.

So information theory—so going back to Shannon, right—is this whole idea of how you transmit information, how you transmit information efficiently. So the typical object of study in information theory is a channel. So you have some input, some sort of bits going into a channel, something happens to it in the channel, maybe some noise, maybe something else, and then something comes out. And one of the things you'd like to do is to look at: so x comes in, y comes out, and given y you'd like to decode and get back to x, right? And it turns out that, you know, when you talk about these ideas, mutual information and entropy start coming up very naturally, where you start saying, if x is stochastic, it's a random variable, and y is random, then the channel capacity, in some sense the amount of information the channel can send through, relates to what is called the mutual information between x and y, which is this quantity that captures, roughly speaking, if I know something about x, what can I say about y, and vice versa. This is not quite accurate, but this is more or less what it says.
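
As a concrete instance of mutual information through a noisy channel (a standard textbook case, not an example from the episode): for a binary symmetric channel with a uniform input bit and flip probability f, the mutual information works out to 1 − h(f), where h is the binary entropy function. A short Python sketch:

```python
import math

def h2(p):
    """Binary entropy -p*log2(p) - (1-p)*log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(flip_prob):
    """I(X;Y) for a binary symmetric channel with a uniform input bit:
    I(X;Y) = H(Y) - H(Y|X) = 1 - h2(flip_prob)."""
    return 1.0 - h2(flip_prob)

print(bsc_mutual_information(0.0))  # noiseless channel: 1 full bit gets through
print(bsc_mutual_information(0.5))  # pure noise: 0 bits get through
```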

So information theory at a broader level—and this is where Fano’s inequality really connects to so many things—is really about how to quantify the information content that one thing has about another thing, right, through a channel that might do something mysterious to your variable as it goes through. So now what does Fano’s inequality say? Fano’s inequality, which you can think of now in the context of bits and decoding and encoding, says something like this. If you have a channel, you take x, you push it into the channel and out comes y, and now you try to reconstruct what x was, right? And let's say there's some error in the process. The error in the process of reconstructing x from y relates to a term that is a function of the mutual information between x and y.

More precisely, if you look at how much, essentially, entropy you have left in x once I tell you y— So for example, let me see if this is a good example for this. So I give you someone's name, let's say it's an American Caucasian name. With a certain degree of probability, you'll be able to predict what their gender is. You won't always get it right, but there will be some probability of this. So you can think of this as saying, okay, there's a certain error in predicting the person's gender from the name as it went through the channel. The reason why you have an error is because there's a certain amount of noise. Some names are sort of gender-ambiguous, and it's not obvious how to tell. And so there's a certain amount of entropy left in the system, even after I've told you the name of the person. There's still an amount of uncertainty. And so your error in predicting that person's gender from the name is related to the amount of entropy left in the system. And this is sort of intuitively reasonable. But what it's doing is connecting two things that you wouldn’t normally expect to be connected. It's connecting a computational metaphor, this process of decoding, right, and the error in decoding, with a basic information-theoretic statement about the relationship between two variables.

And because of that—Fano’s inequality, or even the basic log(n) bits needed to store n things idea—for me, it's pretty much all of computer science. Because if we want to prove a lower bound on how we compute something, at the very least we can say, look, I need at least this much information to do this computation. I might need other stuff, but I need at least this much information. That clearly will be a lower bound. Can I prove a lower bound? And that has been a surprisingly successful endeavor in reasoning about lower bounds for computations, where it would otherwise be very hard to think about, okay, what does this computation do? or what does that computation do? Have I imagined every possible algorithm I can design? I don't care about any of that, because Fano’s inequality says it doesn't matter. Just analyze the amount of information content you have in your system. That's going to give you a lower bound on the amount of error you’re going to see.

EL: Okay, so I was reading—you wrote a lovely post about this, and I was reading that this morning before we started talking. And I think this is what you just said, but it's one of these things that is very new to me; I'm not used to thinking like a computer scientist or information theorist or anything. So something that I was having trouble understanding is how much this inequality depends on the particular relationship you have between the two variables, x and y, that you're looking at.

SV: So one way to answer this question is to say that all you need to know is the conditional entropy of x given y. That's it. You don't need to know anything else about how y was produced from x. All that you need to know, to put a bound on the decoding error, is the amount of entropy that's left in the system.

KK: Is that effectively computable? I mean, is that easy to compute?

SV: For the cases where you apply Fano’s inequality, yes. Typically it is. In fact, you will often construct examples where you can show what the conditional entropy is, and therefore be able to reason, to directly use Fano’s inequality, to argue about the probability of error. So let me give an example in computer science of how this works.

KK: Okay, great.

SV: Suppose I want to build a—so one of the things we have to do sometimes is build a data structure for a problem, which means you're trying to solve some problem, and you want to store information in a convenient way so you can access it quickly. So you want to build some lookup table or a dictionary so that when someone comes up with the question—“Hey, is this person's name in the dictionary?”—you can quickly give an answer. Okay. So now I want to say, look, I have to process these queries. I want to know how much information I need to store in my data structure so that I can answer queries very, very quickly. Okay, and so you have to figure out—so one thing you'd like to do is, okay, I built this awesome data structure, but is it the best possible? I don't know, let me see if I can prove a lower bound on how much information I need to store.

So the way you would use Fano’s inequality to prove that you need a certain amount of information would be to say something like this. You would say, I'm going to design a procedure. I'm going to prove to you that this procedure correctly reconstructs the answer for any query a user might give me. So there's a query, let’s say “Is there an element in the database?” And I have my algorithm, which I will prove correctly returns this answer with some unknown number of bits stored. And given that it correctly returns this answer up to some error probability, I will use Fano’s inequality to say, because of that fact, it must be the case that there is a large amount of mutual information between the original data and the data structure you stored, which is, essentially, the thing that you've converted the data into through your channel. And so if that mutual information is large, the number of bits you need to store this must also be large. And therefore, with small error, you must pay at least this many bits to build this data structure. And so this idea goes through a lot of, in fact, more recent work on lower bounds in data structure design and also communication. So, you know, two parties want to communicate with each other, and they want to decide whether they have the same bit string. And you want to argue that you need this many bits of information for them to communicate, to determine whether they have the same bit string. You use, essentially, either Fano’s inequality or even simpler versions of it to make this argument that you need at least a certain number of bits. This is used in statistics in a very different way. But that's a different story. But in computer science, this is one of the ways in which the inequality is used.
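
In the zero-error special case, this data structure argument reduces to pure counting. A toy Python sketch (the function name is mine, purely illustrative): a structure that answers membership queries exactly for an arbitrary subset of a size-u universe must distinguish all 2^u subsets, so it needs at least u bits.

```python
import math

def min_bits_membership(universe_size):
    """Counting lower bound for any data structure that answers membership
    queries exactly for an arbitrary subset of a universe of the given size:
    the 2**u possible subsets must all decode differently (no two pigeons in
    one hole), so we need at least log2(2**u) = u bits."""
    num_subsets = 2 ** universe_size
    return math.ceil(math.log2(num_subsets))

print(min_bits_membership(64))  # prints 64: a plain bit vector is optimal here
```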

EL: Okay, getting back to the example that you used, of names and genders. How—can you kind of, I don't know if this is going to work exactly. But can you kind of walk us through how this might help us understand that? Obviously, we don't have complete information about how gender corresponds to names, but—

SV: Right, so let's say you have a channel, okay? What’s going into the channel, which is x, is the full information about the person, let's say, including their gender. And the channel, you know, absorbs a lot of this information and just spits out a name, the name of the person. That's y, and now the task is to predict, or reconstruct, the person's gender from the name. Okay.

So now you have x, you have y. You want to reconstruct x. And so you have some procedure, some algorithm, some unknown algorithm that you might want to design, that will predict the gender from the name. Okay? And now this procedure will have a certain amount of error, let's call this error p, right, the probability of error, the probability of getting it wrong, basically, is p. And so what Fano’s inequality says, roughly speaking—so I mean, I could read out the actual inequality, but on the air, it might not be easy to say—but roughly speaking, it says that this probability p, times a few other terms that are not directly relevant to this discussion, is greater than or equal to the entropy of the random variable x, which you can think of as drawn from the population of people. So I drew uniformly from the population of people in the US, right, or of Caucasians in the US, because we are limiting ourselves to that thing. So that's our random variable x. And going through this, I get the value y. So I compute the entropy of the distribution on x conditioned on the name being, say, a particular name. So I look at how much—and basically that probability of error is greater than or equal to, you know, with some extra constants attached, this conditional entropy.

So in other words, what it's saying is very intuitive, right? If I tell you this person's name is so-and-so—and we're sort of limiting ourselves to this binary choice of gender—what is the range of gender? What does the gender distribution look like conditioned on having that name? So let's think of a name that would be—let’s say, Dylan. So Dylan is the name. So there's going to be some, you know, probability of the person being male and some probability of being female. And in the case of a name like Dylan, those probabilities might not be too far apart, right? So you can actually compute this entropy now. You can say, okay, what is the entropy of x given y? You just write the formula for entropy for this distribution. It’s just −(p1 log(p1) + p2 log(p2)) in this case, where p1 + p2 = 1. And so the probability of error, no matter how clever your procedure is, is going to be lower bounded by this quantity with some other constants attached to make it all make sense.
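
Here is that calculation carried out numerically in Python, with made-up numbers: assume, hypothetically, that conditioned on the name the gender split is 0.8/0.2. The entropy left is about 0.72 bits, and the binary form of Fano's inequality, h(p_err) ≥ H(X|Y), forces the error probability of any predictor to be at least 0.2.

```python
import math

def h2(p):
    """Binary entropy -p*log2(p) - (1-p)*log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical numbers: given the name, the gender split is 0.8 / 0.2.
residual_entropy = h2(0.8)  # about 0.722 bits left after seeing the name

# Binary Fano: h2(p_err) >= H(X|Y); the p_err*log2(|X|-1) term vanishes
# when |X| = 2. Invert h2 numerically on [0, 0.5] to find the lower bound.
p_err = 0.0
while p_err < 0.5 and h2(p_err) < residual_entropy:
    p_err += 0.0005
print(round(residual_entropy, 3), round(p_err, 2))
```

The bound matches the obvious strategy: always guessing the majority gender for this name errs with probability exactly 0.2, and Fano says no cleverer procedure can do better.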

EL: Okay.

SV: Does that help?

EL: Yeah.

KK: Right.

SV: I made this completely up on the fly. So I'm not even sure it’s correct. I think it's correct.

KK: It sounds correct. So it's sort of, the noisier your channel, the higher the entropy is going to be, and the probability of error is going to go up, right?

SV: Right, yeah.

KK: Well, you're right, that's intuitively obvious, right? Yeah. Right.

SV: Right. And the surprising thing about this is that you don't have to worry about the actual reconstruction procedure; the amount of information is a limiting factor. No matter what you do, you have to deal with that basic information constraint. And you can see why this connects to my work on algorithmic fairness and bias now, right? Because, for example, one of the things that is often suggested is to say, oh, you know—like in California they just did a week ago, saying, “You are not allowed to use someone's gender to price their car insurance.”

KK: Okay.

SV: Now, there are many reasons why there are problems with the policy as implemented versus the intent. I understand the intent, but the policy has issues. But one issue is this: well, just removing gender may not be sufficient, because there may be signal in other variables that might allow me, allow my system, to predict gender. I don't know what it's doing, but it could internally be predicting gender. And then it would be doing the thing you're trying to prohibit by saying, just remove the gender variable. So while the intention of the rule is good, it's not clear that it will, as implemented, succeed in achieving the goals it set out to achieve. But you can't reason about this unless you can reason about information versus computation. And that's why Fano’s inequality turns out to be so important. I sort of used it in spirit in some of my work, and some people in follow-up work have used it explicitly to show limits on, you know, to what extent you can actually reverse-engineer protected variables of this kind, like gender, from other things.

EL: Oh, yeah, that would be really important to understand where you might have bias coming in.

SV: Right. Especially if you don't know what the system is doing. And that's what's so beautiful about Fano’s inequality. It does not care.

EL: Right. Oh, that's so cool. Thanks for telling us about that. So, is this something that you'd kind of learn in one of your first—maybe not first—computer science courses, but very early on in your education, or did you pick this up along the way?

SV: Oh, no, you don’t—I mean, it depends. It is very unlikely that you will hear about Fano’s inequality in any kind of computer science class, even a theoretical computer science class, even in grad school.

EL: Okay.

SV: Usually you pick it up from papers, or if you take a course in information theory, it's a core concept there. So if you take any course in information theory, it'll come up very quickly.

EL: Okay.

SV: But in computer science, it comes up usually when you're reading a paper, and they use a magic lower bound trick. Like, where did they get that from? That's what happened to me. It's like, where do they get this from? And then you go down a rabbit hole, and you come back up three years later with this enlightened understanding of… I mean, Fano’s inequality generalizes in many ways. I mean, there's a beautiful, there's a more geometric interpretation of what Fano’s inequality really is when you go to more advanced versions of it. So there's lots of very beautiful—that’s the thing about a lot of these inequalities, they are stated very simply, but they have these connections to things that are very broad and very deep and can be expressed in different languages, not just in information theory, also in geometry, and that makes them really cool. So there's a nice geometric analog to Fano’s inequality as well that people use in differential privacy and other places.

KK: So what does one pair with Fano’s inequality?

SV: Ah! See, when I first started listening to My Favorite Theorem, I said, “Okay, you know, if one day they ever invite me on, I’m going to talk about Fano’s inequality, and I'm going to think about what to pair with it.” So I spent a lot of time thinking about this.

KK: Good.

SV: So see you have all these fans now, that's a cool thing.

So my choice for the pairing is goat cheese with jalapeño jam spread on top of it on a cracker.

EL: Okay.

KK: Okay.

SV: And the reason I chose this pairing: because they are two things that you wouldn't normally think go together well, but they go together amazingly well on a cracker.

KK: Well of course they do.

SV: And that sort of embodies for me what Fano’s inequality is saying, that two things that you don't expect to go together go together really well.

KK: No, no, no, the tanginess of the cheese and the saltiness of the olives. Of course, that's good.

SV: Not olives, jalapeño spread, like a spicy spread.

KK: Oh, okay, even better. So this sounds very southern. So in the south what we do, I mean, I grew up in North Carolina, you take cream cheese and pepper jelly, hot pepper jelly, and you put that on.

SV: Perfect. That's exactly it.

KK: Okay. All right. Good. Okay, delicious.

EL: So do you make your own jalapeños for this, or you have a favorite brand?

SV: Oh boy, no, no. I'm a glutton, not a gourmet, so I'll eat whatever someone gives me, but I don't know how to make these things.

EL: Okay. We recently—we had a CSA last fall, and had a surplus of hot peppers. And I am unfortunately not very spice-tolerant. Or heat-tolerant—I love spices, but, you know, can't handle the heat very well. But my spouse loves it. So I've been making these slightly sweet pickled jalapeños. I made those, and since then, he's been asking me for more, so I've just been going out and getting more peppers and making those. So I think I will be serving this to him around the time we air this episode.

KK: Good.

SV: So since you can get everything horrifying on YouTube, one horrifying thing you can watch on YouTube is the world chili-eating competitions.

EL: Oh, no.

SV: Let’s just say it involves lots of milk and barf bags.

KK: I can imagine.

SV: But yeah, I do like my chilies. I like the habañeros and things like that.

EL: Yeah, I just watched the first part of a video where everyone in an orchestra eats this really hot pepper and then they're supposed to play something, but I just couldn't make myself watch the rest. Including the brass and winds and stuff. I was just thinking, “This is so terrible.” I felt too bad for them.

SV: It’s like the Drunk History show.

KK: We’ve often joked about having, you know, drunk favorite theorem.

SV: That would be awesome. That'd be so cool.

KK: We should do that.

EL: Yeah, well, sometimes when I transcribe the episodes, I play the audio slower because then I can kind of keep up with transcribing it. And it really sounds like people are stoned. So we joked about having “Higher Mathematics.”

KK: That’s right.

SV: That’s very good.

EL: Because they’re talking, like, “So…the mean…value…theorem.”

SV: I would subscribe to that show.

EL: Note to all federal authorities: We are not doing this.

KK: No, we’re not doing it.

EL: Yeah. Well, thanks a lot for joining us. So if people want to find you online, they can find you on Twitter, which was I think how we first got introduced to each other. What is your handle on that?

SV: It's @geomblog, so Geometry Blog, that's my first blog. So g e o m, b l o g. I maintain a blog as well. And so that's among the places—I’m a social media butterfly. So you can find me anywhere.

EL: Yeah.

SV: So yeah, but my web page is also a good place, my University of Utah page.

EL: We’ll include that, along with the link to your post about Fano’s inequality, so people can see. You know, it really helped me to read that before talking with you about it, to get the actual inequality and the terms that appear in it straight in my head. So, yeah, thanks a lot for joining us.

KK: Thanks, Suresh.

SV: Thanks for having me.

[outro]

Episode 40 - Ursula Whitcher

Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast where we ask mathematicians to tell us about their favorite theorems. I'm Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And I am joined, as usual, by my co-host Kevin. Can you introduce yourself?

Kevin Knudson: Sure. I'm still Kevin Knudson, professor of mathematics at the University of Florida. How are things going?

EL: All right. Yeah.

KK: Well, we just talked yesterday, so I doubt much has changed, right? Except I seem to have injured myself between yesterday and today. I think it's a function of being—not 50, but not able to say that for much longer.

EL: Yeah, it happens.

KK: It does.

EL: Yeah. Well, hopefully, your podcasting muscles have not been injured.

KK: I just need a few fingers for that.

EL: Alright, so we are very happy today to welcome Ursula Whitcher to the show. Hi, can you tell us a little bit about yourself?

Ursula Whitcher: Hi, my name is Ursula. I am an associate editor at Mathematical Reviews, which, if you've ever used MathSciNet to look up a paper or check your Erdős number or any of those exciting things, there are actually 16 associate editors like me checking all the math that gets posted on MathSciNet and trying to make sure that it makes sense. I got my PhD at the University of Washington in algebraic geometry. I did a postdoc in California and spent a while as a professor at the University of Wisconsin Eau Claire, and then moved here to Ann Arbor where it's a little bit warmer, to start a job here.

EL: Ann Arbor being warmer is just kind of a scary proposition.

UW: It’s barely even snowing. It's kind of weird.

EL: Yeah. Well, and yeah, you mentioned Mathematical Reviews. I—before you got this job, I was not aware that, you know, there were, like, full time employees just of Mathematical Reviews, so that's kind of an interesting thing.

UW: Yeah, it's a really cool operation. We actually go back to sometime in probably the ‘40s.

KK: I think that’s right, yeah.

EL: Oh wow.

UW: So it used to be a paper operation where you could sign up and subscribe to the journal. And at some point, we moved entirely online.

KK: I’m old enough to remember in grad school, when you could get the year’s Math Reviews on CD ROM before MathSciNet was a thing. And you know, I remember pulling the old Math Reviews, physical copies, off the shelf to look up reviews.

UW: We actually have in the basement this set of file cards that our founder, who came from Germany around the Second World War, he had a collection of handwritten cards of all the potential reviewers and their possible interests. And we've still got that floating around. So there's a cool archival project.

KK: I’m ashamed to admit that I'm a lapsed reviewer. I used to review, and then I kind of got busy doing other things and the editors finally wised up and stopped sending me papers.

UW: I try to tell people to just be really picky and only accept the papers that you're really excited to read.

KK: I feel really terrible about this. So maybe I should come back. I owe an apology to you and the other editors.

UW: Yeah, come back. And then just be really super, super picky and only take things that you are truly overjoyed to read. We don't mind. I—you know, as part of my job, I read apologies for not reviewing every day. So I’ve become sort of a connoisseur of the apology letter.

KK: Sure. So is part of your position also that you have some sort of visiting scholar deal at the University of Michigan? Does that come with this?

UW: Yeah, that is. So I get to hang out at the University of Michigan and go to math seminars and learn about all kinds of cool math and use the library card. I'm a really heavy user of my University of Michigan library card. So yeah.

EL: Those are excellent.

KK: It’s a great campus. That’s a great department, a lot of excellent people there.

EL: Yeah. So, what is your favorite theorem, or the favorite theorem you would like to talk about today?

UW: So I decided that I would talk about mirror theorems as a genre.

EL: Okay.

UW: I don't know that I have a single favorite mirror theorem, although I might have a favorite mirror theorem of the past year or two. But as this kind of class of theorems, these are a weird thing, because they run kind of backwards.

Like, typically there's this thing that happens where mathematicians are just hanging out and doing math because math is cool. And then at some point, somebody comes along and is like, “Oh, I see a practical use for this. And maybe I can spin it off into biology or physics or engineering or what have you.” Mirror theorems came the other way. They started with a physical observation that there were two ways of phrasing a theoretical physics idea about possible extra dimensions and string theory and gravity and all kinds of cool things. And then that physical duality, people chewed on and figured out how to turn into precise mathematical statements. So there are lots of different precise mathematical statements, each encapsulating maybe something different about the way these physical theories were phrased, or maybe building on, sort of chaining off of, the mathematics and saying something that no longer directly relates to something you could state about a possible physical world. But there is still a neat mathematical relationship you wouldn't have figured out without having the underlying physical intuition.

EL: Yeah. And so this general area is called mirror symmetry. And when I first heard that phrase, I assumed it was something about, like, group theory, that it was looking at, you know, more tangible things that I would consider symmetric, like what it looks like when you look in a mirror. But that's not what it is, I learned.

UW: So I can tell you why it's called mirror symmetry, although it's kind of a silly reason. In the first formulations of mirror symmetry, people were looking at these spaces called Calabi-Yau three-folds, which are—so there are three complex dimensions, six real dimensions; they could maybe be the extra dimensions of the universe, if you're doing string theory. And associated with a Calabi-Yau three-fold, you have a bunch of numbers that tell you its topological information, sort of general stuff about what this six-dimensional shape looks like. And you can arrange those numbers in a diamond that's called the Hodge diamond. And then you can draw a little diagonal line through the Hodge diamond. And some of the mirror theorems predict that if I hand you one Calabi-Yau three-fold with a certain Hodge diamond, there should be somewhere out there in the universe another Calabi-Yau three-fold with another Hodge diamond. And if you flip across this diagonal axis, one Hodge diamond should turn into the other Hodge diamond.
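
The classic concrete example of this (a standard one from the literature, not one mentioned in the episode): the quintic threefold has Hodge numbers h^{1,1} = 1 and h^{2,1} = 101, and its mirror exchanges those two numbers, so the two Hodge diamonds are reflections of each other across the diagonal:

```latex
% Hodge diamond of the quintic threefold (left) and of its mirror (right);
% flipping across the diagonal exchanges h^{1,1} = 1 and h^{2,1} = 101.
\begin{array}{ccccccc}
      &   &     & 1   &     &   &   \\
      &   & 0   &     & 0   &   &   \\
      & 0 &     & 1   &     & 0 &   \\
    1 &   & 101 &     & 101 &   & 1 \\
      & 0 &     & 1   &     & 0 &   \\
      &   & 0   &     & 0   &   &   \\
      &   &     & 1   &     &   &
\end{array}
\qquad\longleftrightarrow\qquad
\begin{array}{ccccccc}
      &   &     & 1   &     &   &   \\
      &   & 0   &     & 0   &   &   \\
      & 0 &     & 101 &     & 0 &   \\
    1 &   & 1   &     & 1   &   & 1 \\
      & 0 &     & 101 &     & 0 &   \\
      &   & 0   &     & 0   &   &   \\
      &   &     & 1   &     &   &
\end{array}
```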

EL: Okay.

UW: So there is a mirror relationship there. And there is a really simple reflection there. But it's like you have to do a whole bunch of topology, and you have to do a whole bunch of geometry and you, like, convince yourself that Hodge diamonds are a thing. And then you have to somehow—like, once you've convinced yourself Hodge diamonds are a thing, you also have to convince yourself that you can go out there and find another space that has the right numbers in the diamond.

EL: So the mirror is, like, the very simplest thing about this. It’s this whole elaborate journey to get to the mirror.

UW: Yeah.

EL: Okay, interesting. I didn't actually know that that was where the mirror came from. So yeah. So can you tell us what these mirror theorems are here?

UW: Sure. So one version of it might be what I said, that given a Calabi-Yau manifold, with this information, that it has a mirror.

So then this diamond of information is telling you something about the ways that the space can change. And there are different types of information that you could look at. You could look at how it changes algebraically: like, if you wrote down an equation with some polynomials, and you changed those coefficients on the polynomials just a little bit, how many possible deformations of that sort could you have? That's one thing that you can measure using, like, one number in this diamond.

EL: Okay.

UW: And then you can also try to measure symplectic structure, which is related, more sort of physics-y information that happens over in a different part of the diamond. And so another type of mirror theorem, maybe a more precise type of mirror theorem, would say, okay, these deformations measured by this Hodge number on this manifold are isomorphic in some sense to these other sorts of deformations measured by this other Hodge number on the mirror manifold.

KK: Is there some trick for constructing these mirror manifolds if they exist?

UW: Yeah, there are. There are sort of recipes. And one of the games that people play with mirror symmetry is trying to figure out where the different recipes overlap, when you’ve, like, really found a new mirror construction, and when you’ve found just another way of looking at an old mirror construction. If I hand you one manifold, does it only have a unique mirror or does it have multiple mirrors?

KK: So my advisor tried to teach me Hodge theory once. And I can't even remember exactly what goes on, except there's some sort of bi-grading in the cohomology right?

UW: Right.

KK: And is that where this diamond shows up?

UW: Yeah, exactly. So you think back to when you first learned complex analysis, and there was, like, the d/dz direction and there was the d/dz̅ direction.

KK: Right.

UW: And we're working in a setting where we can break up the cohomology really nicely and say, okay, these are the parts of my cohomology that come from a certain number of holomorphic d/dz kinds of things. And these are the other pieces of cohomology that can be decomposed and look like dz̅. And since it’s a Kähler manifold, everything fits together in a nice way.

KK: Right. Okay, there. That's all I needed to know, I think. That's it, you summarized it, you're done.

EL: So, I have a question. When you talk about like mirror theorems, I feel like some amount of mirror symmetry stuff is still conjectural—or “I feel like”—my brief perusal of Wikipedia on this indicates that there are some conjectures involved. And so how much of these theorems are that in different settings, these mirror relationships hold, and how much of them are small steps in this one big conjectural picture. Does that question make sense?

UW: Yeah. So I feel like we know a ton of stuff about Calabi-Yau three-folds that are realized in sort of the nice, natural ways that physicists first started writing down things about Calabi-Yau three-folds.

When you start getting more general on the mathematical side—for instance, there's a whole flavor of mirror symmetry that's called homological mirror symmetry that talks about derived categories and the Fukaya category—a lot of that stuff has been very conjectural. And it's at the point where people are starting to write down specific theorems about specific classes of examples. And so that's maybe one of the most exciting parts of mirror symmetry right now.

And then there are also generalizations to broader classes of spaces, where it's not just Calabi-Yau three-folds where maybe you're allowing a more general kind of variety or relaxing things, or you're starting to look at, what if we went back to the physics language about potentials, instead of talking about actual geometric spaces? Those start having more conjectural flavor.

EL: Okay, so a lot of this is in the original thing, but then there are different settings where mirror symmetry might be taking place?

UW: Yeah.

EL: Okay. And I assume if you're such a connoisseur of mirror theorems, that this is related to your research also. What kinds of questions are you looking at in mirror symmetry?

UW: Yeah, so I spend some time just playing around with different mirror constructions and seeing if I can match them up, which is always a fun game, trying to see what you know. Lately, what I've been really excited about is taking the sort of classical old-fashioned hands-on mirror constructions where I can hand you a space, and I can take another space, and I can say these two things are mirror manifolds. And then seeing what knowing that tells me, maybe about number theory, about maybe doing something over a finite field in a setting that is less obviously geometry, but where maybe you can still exploit this idea that you have all of this extra structure that you know about because of the mirror and start trying to prove theorems that way.

EL: Oh, wow. I did not know there is this connection in number theory. This is like a whole new tunnel coming out here.

UW: Yeah, no, it's super awesome. We were able to make predictions about zeta functions of K3 surfaces. And in fact we have a theorem about a factor of the zeta function for Calabi-Yau manifolds of any dimension. And it's a very specific kind of Calabi-Yau manifold, but it's so hard to prove anything about zeta functions! In part because if you're a connoisseur of zeta functions, you know they are controlled by the size of the cohomology, so once your cohomology starts getting really big, it’s really difficult to compute anything directly.

EL: So, like, how tangible are these? Like, here is a manifold and here is its mirror? Are there some manifolds you can really write down and, like, have a visual picture in your mind of what these things actually look like?

UW: Yeah, definitely. So I'm going to tell you about two mirror constructions. I think one of these is maybe more friendly to someone who likes geometry. And one of these is more friendly to someone who likes linear algebra.

EL: Okay.

UW: So the oldest, oldest mirror symmetry construction is due to Greene and Plesser, who were physicists. And they knew that they were looking for things with certain symmetries. So they took the diagonal quintic in projective four-space. I have to get my dimensions right, because I actually often think about four dimensions instead of six.

So you're taking x^5+y^5+z^5 plus, then, v^5 and w^5, because we ran out of letters, we had to loop around.

EL: Go back.

UW: Yeah. And you say, okay, well, these are complex numbers, I could multiply any of them by a fifth root of unity, and I would have preserved my total space, right?

Except we're working in projective space, so I have to throw away one of my overall fifth roots of unity, because if I multiply by the same fifth root of unity on every coordinate, that doesn't do anything. And then they wanted to maybe fit this into a family where they deformed by the product of all the variables. And if you want to have symmetries of that entire family, you should also make sure that the product of all of your roots of unity, I think, multiplies to 1? So anyway, you throw out a couple of fifth roots of unity, because you have these other symmetries from your ambient space and things that you're interested in, and you end up with basically three copies of the fifth roots of unity that you can multiply by.

So I've got x^5+y^5+z^5+v^5+w^5, and I'm modding out by (Z/5Z)^3. Right? So I’m identifying all of these points in this space, right? I've just, like, got 125 different things, and I’m shoving all these 125 different things together. So when I do that, this space—which was all nice and smooth and friendly, and it's named after Fermat, because Fermat was interested in equations like that—all of a sudden, I've made it, like, really stuck together and messy, and singular.
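The count of 125 symmetries can be checked directly. This is a minimal sketch, assuming the standard setup: phase symmetries (a1, …, a5) with fifth roots of unity whose product is 1, modulo the diagonal rescaling that acts trivially in projective space.

```python
from itertools import product

# Phase symmetries act by x_i -> zeta^(a_i) * x_i, with zeta a fixed
# primitive fifth root of unity.  The Fermat quintic is preserved by any
# such phase; preserving the deformation term x*y*z*v*w as well forces
# a_1 + ... + a_5 = 0 (mod 5), i.e. the product of the roots is 1.
symmetries = {a for a in product(range(5), repeat=5) if sum(a) % 5 == 0}
print(len(symmetries))  # 625 = 5^4

# In projective space the diagonal phase (a, a, a, a, a) acts trivially,
# so quotient by it: normalize each tuple so its first entry is 0.
quotient = {tuple((ai - a[0]) % 5 for ai in a) for a in symmetries}
print(len(quotient))  # 125 = 5^3, i.e. the group (Z/5Z)^3
```

The 125 leftover symmetries are exactly the "three copies of the fifth roots of unity" in the construction.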

KK: Right.

UW: So I go in as a geometer, and I start blowing up, which is what algebraic geometers call this process of going in with your straw and your balloon, and blowing and smoothing out and making everything all nice and shiny again, right?

KK: Right.

UW: And when you do that, you've got a new space, and that's your mirror.

EL: Okay.

KK: So you blow up all the singularities?

UW: Yeah, you resolve the singularities.

KK: That’s a lot.

UW: Yeah. So what you had was, you had something which is floating around in P4. And because we picked a special example, it happens to have a lot of algebraic classes. But for a thing in P4, the only algebraic piece you really know about in it is, like, intersecting with a hyperplane.

So it has lots and lots of different ways you can vary all of its different complex parameters, but only this one algebraic piece that you know about. And then when you go through this procedure, you end up with something which has very few algebraic ways to modify it. It actually naturally has only a one-parameter algebraic deformation space. But then there are all of these cool new classes that you know about, because you just blew up all of these things. So you're trading around the different types of information you have. You go from lots of deformations on one side to very few deformations on the other.

KK: Okay, so that was the geometry. What's the linear algebra one?

UW: Okay, so the linear algebra one is so much fun. Let's go back to that same space.

EL: I wish our listeners could see how big your smile is right now.

KK: That’s right. It’s really remarkable.

EL: It is truly amazing.

UW: Right. So we've got this polynomial, right, x^5+y^5+z^5+w^5+v^5. And that thing I was telling you about finding the different fifth roots of unity that we could multiply things by, that’s, like, a super tedious algebraic process, right, where you just sit down and you're like, gosh, I can multiply the different variables by fifth roots of unity. And then I throw away some of my fifth roots of unity. So you start with that, the equation, and the little algebraic crank that you run to get a group associated with it.

And then you convert your polynomial equation to a matrix. In this case, my matrix is just going to be like all fives down the diagonal.

KK: Okay.

UW: But you can do this more generally with other types of polynomials. The ones that work well for this procedure have all kinds of fancy names, like loops and chains and Fermats. So a Fermat is just the different pure powers of variables. A loop would be if I did something like x^5+y^5+z^5+…, and then I looped back around and used an x again.

EL: Okay.

UW: Or, sorry, it should have been, like, x^4y+y^4z, and so on. So you can really see the looping about to happen.

And then chains are a similar thing. Anyway, so given one of these things, you can just read off the powers on your polynomial, and you can stick each one of those into a matrix. And then to get your mirror, you transpose the matrix.
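That read-off-and-transpose recipe is easy to play with in code. A minimal sketch, with a hypothetical `polynomial` pretty-printer for illustration; the group bookkeeping mentioned in the conversation is omitted.

```python
# Sketch of the transpose-the-matrix mirror recipe: encode a polynomial
# sum_i prod_j x_j^(M[i][j]) by its exponent matrix M, and read the
# mirror polynomial off the transposed matrix.
def polynomial(matrix, variables):
    terms = []
    for row in matrix:
        factors = [f"{v}^{e}" if e > 1 else v
                   for v, e in zip(variables, row) if e > 0]
        terms.append("*".join(factors))
    return " + ".join(terms)

def transpose(matrix):
    return [list(col) for col in zip(*matrix)]

vars3 = ["x", "y", "z"]

# A "loop" polynomial in three variables:
loop = [[4, 1, 0],
        [0, 4, 1],
        [1, 0, 4]]
print(polynomial(loop, vars3))             # x^4*y + y^4*z + x*z^4
print(polynomial(transpose(loop), vars3))  # x^4*z + x*y^4 + y*z^4

# A Fermat-type (diagonal) polynomial is its own transpose:
fermat = [[5 if i == j else 0 for j in range(3)] for i in range(3)]
print(polynomial(fermat, vars3))           # x^5 + y^5 + z^5
assert transpose(fermat) == fermat
```

Transposing twice gets you back to the original matrix, which is the "really great duality" point made just below.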

EL: Oh, of course!

UW: And then you run this little crank, to tell you about an associated group.

EL: Okay.

UW: So getting which group goes with your transposed matrix, it's kind of a little bit more work. But I love the fact that you have this, like, huge, complicated physics thing with all this stuff, like the Hodge diamond, and then you're like, oh, and now we transpose a matrix! And, you know it’s a really great duality, right, because if you transpose the matrix again, you get back where you started.

KK: Sure.

EL: Right. Yeah. Well, and it seems like so many questions in math are, “How can we make this question into linear algebra?” It's just, like, one of the biggest hammers mathematicians have.

UW: Yeah.

EL: So another part of this podcast is that we ask our guest to pair their theorem, or in your case, you know, set of theorems, or flavor of theorems, with something. So what have you chosen as you're pairing?

UW: I decided that we should pair the mirror theorems with really fancy ramen.

EL: Okay. So yeah, tell me why.

UW: Okay. So really fancy ramen, like, the good Japanese-style, where you've simmered the broth down for hours and hours, and it's incredibly complex, not the kind that you just go buy in a packet, although that also has its use.

EL: Yeah, no, Top Ramen.

UW: Right. So it's complex. It has, like, a million different variants, right? You can get it with miso, you can get it spicy, you can put different things in it, you can decide whether you want an egg in it that gets a little bit melty or not, all of these different little choices that you get. And yeah, it seems like it's this really simple thing, it’s just noodle soup. And we all know what Top Ramen is. But there's so much more to it. The other reason is that I just personally, historically associate fancy ramen with mirror theorems. Because there was a special semester at the Fields Institute in Toronto, and Toronto has a bunch of amazing ramen. So a lot of the people who were there for that special semester grew to associate the whole thing with fancy ramen, to the point where one of my friends, who's an Italian mathematician, we were some other place in Canada, I think it was Ottawa, and she was like, “Well, why don't we just get ramen for lunch?” And I was like, “Sorry, it turns out that Canada is not a uniform source of amazing ramen.” That was special to Toronto.

KK: Yeah, Ottawa is more about the poutine, I think.

UW: Yeah, I mean, absolutely. There's great stuff in Ottawa. It just, like, didn't have this beautiful ramen–mirror symmetry pairing that we had all formed.

EL: Right, I really liked this pairing. It works on multiple levels.

KK: Sure. It's personal, but it also works conceptually, it's really good. Yeah. Well, so how long have you been at Math Reviews?

UW: I think I'm in my third year.

KK: Okay.

KK: Do you enjoy it?

UW: I do. It’s a lot of fun.

KK: Is it a permanent gig? Or are these things time limited?

UW: Yeah, it's permanent. And in fact, we are hiring a number theorist. So if you know any number theorists out there who are really interested in, you know, precise editing of mathematics and reading about mathematics and cool stuff like that, tell them to look at our ad on Math Jobs. We're also hiring in analysis and math physics. And we've been hiring in combinatorics as well, although that was a faster hiring process.

EL: Yeah. And we also like to, you know, plug things that you're doing. I know, in addition to math, you have many other creative outlets, including some poetry, right, related to math?

UW: That’s right.

EL: Where can people find that?

UW: Ah, well, you can look at my website. Let's see, if you want the poetry, you should look at my personal website, which is yarntheory.net.

There's one poem that was just up recently on JoAnne Growney’s blog.

EL: Yeah, that's right.

UW: And I have a poem that's coming out soon, soon, I’m not sure how soon, in the Journal of Humanistic Mathematics. Yeah, it's a really goofy thing where I made up some form involving the group of units for the multiplicative group associated to the field of seven elements and then played around with that.

EL: I'm really, really looking forward to getting into that. Do you have a little bit of explanation of the mathematical structure in there?

UW: Just the very smallest. I mean, I think what I did was I listed, I found the generators of this group, and then I listed out where they would go as you generated them, and then I looked for the ones that seemed like they were repeating in an order that would make a cool poem structure.

EL: Okay, cool. Yeah. Well, thanks a lot for joining us. We'll be sure to share all that and hopefully people can find some of your work and enjoy it.

UW: Cool.

KK: Thanks, Ursula.

UW: Thanks so much for having me.

[outro]

Episode 39 - Fawn Nguyen

Evelyn Lamb: Hello and welcome to My Favorite Theorem, a math podcast where we get mathematicians to tell us about theorems. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. It's Friday night.

EL: Yeah, yeah, I kind of did things in a weird order today. So there's this concert at the Utah Symphony that I wanted to go to, but I can't go tonight or tomorrow night, which are the only two nights they're doing it. But they had an open rehearsal today. So I went to a symphony concert this morning. And now I'm doing work tonight, so it's kind of a backwards day.

KK: Yeah. Well, I got up super early to meet with my financial advisor.

EL: Oh, aren’t you an adult?

KK: I do want to retire someday. I’ve got 20 years yet, but you know, now it's nighttime and my wife is watching Drag Race and I'm talking about math. So.

EL: Cool. Yeah.

KK: Life is good.

EL: Yes. Well, we're very happy today to have Fawn Nguyen on the show. Hi, Fawn. Could you tell us a little bit about yourself?

Fawn Nguyen: Hi, Evelyn. Hi, Kevin. I was thinking, “How nerdy can we be? It’s Friday.”

So my name is Fawn Nguyen. I teach at Mesa Union School in Somis, California. And it's about 60 miles north of LA.

EL: Okay.

FN: 30 miles south of Santa Barbara. I teach—the school I'm at is a K-8, one-school district, of about 600 students. Of those 600, about 190 of them are in the junior high, 6-8, but it's a unique one-school district. So we're a family. It's nice. It's my 16th year there but 27th overall.

EL: Okay, and where did you teach before then?

FN: I was in Oregon. And I was actually a science teacher.

KK: Is your current place on the coast or a little more inland?

FN: Coast, yeah, about 10 miles from the coast. I think we have perfect weather, the best weather in the world. So.

KK: It’s beautiful there, it’s really—it’s hard to complain. Yeah.

FN: Yeah. It’s reflected in the mortgage, or rent.

EL: Yeah

KK: Right.

FN: Big time.

EL: So yeah, what theorem have you decided to share with us today?

FN: You mean, not everyone else chose the Pythagorean Theorem?

EL: It is a very good theorem.

FN: Yeah. I chose the Pythagorean Theorem. I have some reasons, actually five reasons.

KK: Five, good.

FN: I was thinking, yeah, that's a lot of reasons considering I don't have, you know, five reasons to do anything else! So I don't know, should I just talk about my reasons?

EL: Yeah, what’s your first reason?

FN: Jump right in?

EL: Well, actually, we should probably actually at least say what the Pythagorean theorem is. It's a very familiar theorem, and everyone should have heard about it in a middle school math class from a teacher as great as Fawn. But yeah, so could you summarize the theorem for us?

FN: Well, that's just it. Yeah, I chose it, for one, for the first and most obvious reason: because I am at the middle school. And so this is a big one for us, if not the only one. And it's within my pay grade. I can wrap my head around this one. Yeah, it's one of our eighth grade Common Core standards. And the theorem states that when you have a right triangle, the sum of the squares of the two legs is equal to the square of the hypotenuse.
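As a small worked example of the relation a^2 + b^2 = c^2: Euclid's classical formula turns any pair of integers m > n > 0 into a right triangle with whole-number sides. A quick sketch:

```python
# Euclid's formula: for integers m > n > 0, the numbers
# (m^2 - n^2, 2mn, m^2 + n^2) form a Pythagorean triple.
def euclid_triple(m, n):
    return (m * m - n * n, 2 * m * n, m * m + n * n)

for m in range(2, 5):
    for n in range(1, m):
        a, b, c = euclid_triple(m, n)
        assert a * a + b * b == c * c  # the Pythagorean relation holds
        print((a, b, c))  # (3, 4, 5) comes first, then (8, 6, 10), ...
```

This also connects to the Pythagorean triples that come up later in the conversation.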

KK: Right.

FN: How did I do, Kevin?

KK: Well, yeah, that's perfect. In fact, it just so happens that today I was looking through the 1847 Oliver Byrne edition of Euclid’s Elements, the sort of very famous one with the pictures, with the colors and shapes and all of that, and I just happened to look at that theorem, and the proof of it, which is really very nice.

FN: Yeah. So it being you know, the middle school one for us, and also, when I talk about my students doing this theorem—I just want to make sure that you understand that I no longer teach eighth grade, though. This is the first year actually at Mesa, and I've been there 16 years, that I do not get eighth graders. I'm teaching sixth graders. So when I refer to the lessons, I just want to make sure that you understand these are my former students.

EL: Okay.

FN: Yeah. And once upon a time, we tracked our eighth graders at Mesa. So we had a geometry class for the eighth graders. And so of course, we studied the Pythagorean theorem then.

KK: So you have reasons.

FN: I have reasons. So that was the first reason, it’s a big one because, yeah. The second reason is there are so many proofs for this theorem, right? It's mainly algebraic or geometric proofs, but it's more than any other theorem. So it's very well known. And, you know, if you ever Google it, you get plenty of different proofs. And I had to look this up, but there was a book published in 1940 that already had 370 proofs in it.

KK: Yeah.

FN: Yeah. Even one of our presidents, I don't know if you know this, but yeah, this is some little nice trivia for the students.

EL: Yeah.

FN: One of our presidents, Garfield, submitted a proof back in 18-something.

EL: Yeah.

FN: He used trapezoids to do that.

KK: He was still in Congress at the time, I think the story is that, you know, that he was in the House of Representatives. And like, it was sort of slow on the floor that day, and he figured out this proof, right.

FN: Yeah. And then people continue to submit, and the latest one that I know of was submitted just over a year ago, back in November 2017. That's the latest one I know. Maybe there was one just submitted two hours ago, who knows? And his was rearranging the a-squared and b-squared, the smaller squares, into a parallelogram. So I thought that was interesting. Yeah. And what's interesting is Pythagoras, even though it's the Pythagorean theorem and he was given credit for it, it was known long before him. And I guess there's evidence to suggest that it was developed by a Hindu mathematician around 800 BC.

EL: Okay.

FN: And Pythagoras was what? 500-something.

KK: Something like that.

EL: Yeah.

FN: Yeah, something like that. But he was the first, I guess he got credit because he was the first to submit a proof. He wasn't just talking about it, I guess it was official, it was a formal proof. And his was a rearrangement. And I think that's a diagram that a lot of us see. And the kids see it. It's the one where you've got the big c squared in the middle with the four right triangles around it, four congruent right triangles. Yeah. And then just by rearranging, that big c squared became two smaller squares, your a squared and b squared. Yeah.

EL: Yeah. And I think it was known by, or—you know, I'm not a math historian. And I don't want to make up too much history today. But I think it has been known by a lot of different people, even as far back as Egyptians and Babylonians and things, but maybe not presented as a mathematical theorem, in the same kind of way that we might think about theorems now. But yeah, I think this is one of these things that like pretty much every human culture kind of comes up with, figuring out that this is true, this relationship.

KK: Yeah, I think it was recent, wasn’t it? Or last year? There's this Babylonian tablet. And I remember seeing on Twitter or something, there was some controversy about someone who claimed that this proved that the Babylonians knew all kinds of stuff, but really—

EL: Well, they definitely knew Pythagorean triples.

KK: Yeah, they knew lots of triples. Maybe you wrote about this, Evelyn.

EL: I did write about it, but we won’t derail it this way. We can put a link to that. I’ll get too bothered.

FN: Now that you brought up Pythagorean triples, how many do you know? How many of those can you get the kids to figure out? Of course, Einstein also submitted a proof. And I thought it was funny that people consider Einstein’s proof to be the most elegant. And I'm thinking, “Well, duh, it’s Einstein.” Yeah. And I guess I would have to agree, because there were a lot of rearrangements in the proofs, but Einstein, you know, is like, “Yeah, I don't need no stinking rearrangement.” So he stayed with the right triangle. And what he did was draw in the altitude from the 90-degree angle to the hypotenuse, and used similar triangles. And so there was no rearrangement. He simply made the one triangle into—by drawing in that altitude, he got himself three similar triangles. And yeah, and then he drew squares off of the hypotenuse of each one of those triangles, and then wrote, you know, just wrote up an equation. Okay, now we're just going to divide everything by the common factor, you just divide it out, and you end up with a squared plus b squared equals c squared. It's hard to do without the image of it, but yeah.
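The shape of that argument can be sketched in symbols, with k the shared area-to-hypotenuse-squared ratio of the three similar triangles:

```latex
% The altitude from the right angle splits the triangle with hypotenuse c
% into two smaller triangles, each similar to it, with hypotenuses a and b.
% Similar triangles have area proportional to the square of the
% hypotenuse, with one constant k shared by all three:
\[
  A_a = k a^2, \qquad A_b = k b^2, \qquad A_c = k c^2.
\]
% The two smaller triangles exactly tile the original, so
\[
  k a^2 + k b^2 = k c^2
  \quad\Longrightarrow\quad
  a^2 + b^2 = c^2.
\]
```

Dividing out k is the "divide everything by the common factor" step.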

EL: But yeah, it is really a lovely one.

FN: Yeah. And this is something I didn't know. And it was interesting. I didn't know this until I was teaching it to my eighth graders. And I learned that, I mean, normally we just see those squares coming off of the right triangle. And then I guess one of the high school students—we were using Geometer’s Sketchpad at the time—made an animated sketch of the Pythagorean theorem. And, you know, he was literally drawing Harry Potter coming off of the three sides. And I just, oh, I said, yeah, yeah. You don't have to have squares, as long as they’re similar figures, right, coming off the edges. That would be fine. So that was fun to do. Yeah. So I have my kids just draw circles, so that they—just anything but a square coming off of the sides, you know, do other stuff.

EL: Well, now I’m trying to—my brain just went to Harry Potter-nuse.

KK: Boo.

EL: I’m sorry, I’m sorry.

FN: That’s a good one.

KK: So I have forgotten when I actually learned this in life. You know, it's one of those things that you internalize so much that you can't remember what stage of your education you actually learned it in. So this is this is an actual Common Core eighth grade standard?

FN: Yes, yes. In the eighth grade, yeah.

KK: I grew up before the Common Core, so I don't really remember when we learned this.

FN: I don't know. Yeah, prior to Common Core, I was teaching it in geometry. And I don't think it was—it wasn't in algebra, you know, prior to these things we had algebra and then geometry. So yeah.

My third reason—I’m actually keeping track, so that was the second, lots of proofs. So the third reason I love the Pythagorean theorem is one fine day it led me to ask one of the best questions I'd asked of my geometry students. I said to them, “I wonder if you know how to graph an irrational number on the number line.” I mean, the current eighth grade math standard is for the kids to approximate where an irrational number is on the number line. That's the extent of the standard. So I went further and just asked my kids to locate it exactly. You know, what the heck?

EL: Nice. Yeah.

FN: And I actually wrote a blog post about it, because it was one of those magical lessons where you didn't want the class to end. And so I titled the post “The question was mine, but the answer was all his.” And so I just threw it out to the class, I began with just, “Hey, where can we find—how do you construct the square root of seven on the number line?” And so, you know, they did the usual struggle and just playing around with it, but one of the kids towards the end of class, he got it, he came up with a solution. And I think when I saw it and heard him explain it, it made me tear up, because it's, like, so beautiful. And I'm so glad I did, because it was not, you know, a standard at all. And it was just something at the spur of the moment. I wanted to know, because we'd been working a lot with the Pythagorean theorem. And, yeah, so what he did was he drew two concentric circles, one with radius three and one with radius four, on the coordinate plane, and the center is at (0,0). If you can imagine two concentric circles. And then he drew in a line y=-3. And then he drew a line perpendicular to that horizontal line, so that it intersects the larger circle, the one with radius four.

KK: Yeah, okay.

FN: So eventually what he did was he created, yeah, so you would have a right triangle created with one of the corners at (0,0). And the triangle would have, the hypotenuse would be, what, four, the hypotenuse is four. One of the legs is three, and the other leg must be √7.
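The construction is easy to verify numerically. A sketch, using the circle x^2 + y^2 = 16 and the line y = -3:

```python
from math import isclose, sqrt

# The student's construction: intersect the circle x^2 + y^2 = 16
# (radius 4, centered at the origin) with the horizontal line y = -3.
x = sqrt(4**2 - 3**2)  # x-coordinate of the intersection point
assert isclose(x, sqrt(7))

# The right triangle with corners (0,0), (0,-3), and (x,-3) has a
# vertical leg of 3 and a hypotenuse of 4 (a radius of the big circle),
# so by the Pythagorean theorem the horizontal leg must be sqrt(7).
assert isclose(3**2 + x**2, 4**2)
print(x)  # ≈ 2.6457513110645907
```

The horizontal leg lies along the line y = -3, which is how the construction places √7 exactly.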

EL: Oh.

KK: Oh, yeah, okay.

FN: Yeah, yeah. So it's just so beautiful.

EL: That is very clever!

FN: Yeah, it really was. So every time I think about the Pythagorean theorem, I think back on that lesson. The kids really tried. And then from √7, we tried other roots. And we had a great time and continued to the next day.

EL: Oh, nice. I really liked that. That brought a big smile to my face.

KK: Yeah.

FN: The fourth reason I love the Pythagorean theorem is it always makes me think of Fermat’s Last Theorem. You know, it looks familiar, similar enough, where it states that no three positive integers a, b, and c can satisfy the equation a^n+b^n=c^n for any integer value of n greater than 2. So the equation works for the Pythagorean theorem's exponent of two, but not for any exponent greater than two. So I love—whenever I can, I love the history of mathematics, and I try to bring that in with the kids. So I read the book on Fermat’s Last Theorem, and I kind of bring it up to the students for them to realize, oh my gosh, this man, Andrew Wiles, who solved it—and it's, you know, an over-300-year-old problem. And yeah, for him to first learn about the theorem when he was 10, and then to spend his life devoted to it. I mean, I can't think of a more beautiful love story than that. And yeah, so I bring that to the kids. And I actually showed them the first 10 minutes of the documentary by the BBC on Andrew Wiles. And just right when he tears up, you know, I cannot stop tearing up at the same time, because—I don't know, it's just, it's that kind of dedication and perseverance. It's magical, and it's what mathematicians do. And so, you know, hopefully that supports all this productive struggle, and just the love for mathematics. So, kind of get all geeky on the kids.

EL: Yeah, that is a lovely documentary.

FN: Yeah. Yeah. It's beautiful. All right. My fifth and final reason for loving this theorem is Pythagoras himself. What a nut!

EL: Yeah, I was about to say, he was one weird dude.

FN: Yeah, yeah. So, I mean, he was a mathematician and philosopher, astronomer, and who knows what else. And the whole mystery wrapped up in the Pythagorean school, right? He had all these students, devotees. I don't know, it's like a cult! It really is like a cult, because they had a strict diet, their clothing, their behaviors were a certain way. They couldn't eat meat or beans, I heard.

EL: Yeah.

FN: Yeah. And something about farting. They believed that each time you pass gas, part of your soul is gone.

EL: That’s pretty dire for a lot of us, I think.

FN: Yeah. And what's remarkable also was that, with the very theorem that he's named for, you know, that's where I guess one of his students—I don't remember his name—apparently discovered, you know, the hypotenuse of √2 on the simple 1-by-1 isosceles right triangle, and look what that did to him. The story goes he was thrown overboard for speaking up. He said, hey, there might be this possibility. So that's always fun, right? Death and mathematics, right?

EL: Dire consequences. Give your students a good gory story to go with it.

FN: I always like that. Yeah. But it's the start of irrational numbers. And of course, the Greek geometry—that mathematics is continuous and not as discrete as they had thought.

EL: Well, and it is an interesting irony, then, that the Pythagorean theorem is one really easy way to generate examples of irrational numbers, where you take rational sides and a whole lot of them give you irrational hypotenuses.
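
This observation is easy to check by machine (an illustrative sketch, not from the episode; the function name is made up): for whole-number legs a and b, the hypotenuse √(a² + b²) is rational exactly when a² + b² is a perfect square, and most choices fail that test.

```python
import math

def hypotenuse_is_rational(a: int, b: int) -> bool:
    """For integer legs a and b, sqrt(a^2 + b^2) is rational
    exactly when a^2 + b^2 is a perfect square."""
    s = a * a + b * b
    r = math.isqrt(s)  # integer square root
    return r * r == s

print(hypotenuse_is_rational(1, 1))  # False: the hypotenuse is sqrt(2)
print(hypotenuse_is_rational(3, 4))  # True: the hypotenuse is 5
```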

FN: Yes.

EL: And then, you know, this theorem is the downfall of this idea that all numbers must be rational.

FN: Right. And I mean, the whole cult, I mean, that revelation just completely, you know, turned their belief upside down, turned the mathematical world at that time upside down. It jeopardized and just humiliated their thinking and their entire belief system. So I can just imagine at that time what that did. I don't know if any modern story has that kind of equivalent.

EL: Yeah, no one really based their religion on Fermat’s Last Theorem being untrue. Or something like this.

FN: Right, right. Exactly.

EL: Yeah. I like all of your reasons. And you've touched on some really great—like, I will definitely share some links to some of those proofs of the Pythagorean theorem you mentioned.

So another part of this podcast is that we ask our guests to pair their theorem with something. So what have you chosen for your pairing?

FN: I chose football.

KK: Okay, all right.

FN: I chose football. It's my love. I love all things football. And the reason I chose football is simply because of this one video. And I don't know if you've seen it. I don't know if anyone's mentioned it. But I think a lot of geometry teachers may have shown it. It's by Benjamin Watson doing a touchdown-saving tackle. So again, his name is Benjamin Watson. I don't know how many years ago this was, but he’s a tight end for the New England Patriots. So what happened was, he came out of nowhere. Well, there was an interception. So he came out of nowhere to stop a potential pick-six at the one-yard line.

EL: Oh, wow.

FN: I mean, it's the most beautiful thing! So yeah, if you look at that clip, even the coach said it's something anybody who sees it will remember for the rest of their life, just because he never gave up, obviously. But you know, the whole point is he ran the diagonal of the field, is what happened.

KK: Okay.

EL: Yeah, so you’ve got the hypotenuse.

FN: You’ve got the hypotenuse going. The shortest distance is still that straight line, and he never gave up. Oh, I mean, this guy ran the whole way, 90 yards, whatever he needed, from the very one end to the other. No one saw Ben Watson coming, just because, as we say, he was literally out of nowhere. Didn’t expect it. And the camera, what's cool is, you know, the camera is just watching the runner, right, just following the runner. And so the camera didn’t see it until later. Later, when they did film, yeah, they zoomed out and said, Oh, my God, that's where he was coming from, the other hypotenuse, I mean, the other end of the hypotenuse. Yeah. But I pair everything, every mathematics activity I do, I try to pair it with a nice Cabernet. How's that?

EL: Okay.

KK: Not during school, I hope.

FN: Absolutely not.

EL: Don’t share it with your students.

FN: I’m a one glass drinker anyway, I'm a very, very lightweight. I talk about drinking, but I'm a wuss, Asian flush. Yeah.

EL: Yeah. Well, so, I’m not really a football person. But my husband is a Patriots fan. And I must admit, I'm a little disappointed that you picked an example with the Patriots because he already has a big enough head about how good the Patriots are, and I take a lot of joy in them not doing well, which unfortunately doesn’t happen very much these days.

KK: Never happens.

EL: There are certain recent Super Bowls that I am not allowed to talk about.

FN: Oh, okay.

KK: I can think of one in particular.

EL: There are few. But I'll say no more. And now I'm just going to say it on this podcast that will be publicly available, and I'll instruct him not to listen to this episode.

FN: Yeah, now my new favorite team actually, pro—well, college is Ducks, of course, but pro would be Dallas Cowboys. Just because that’s the favorite team from my fiance. So we actually, yeah, for Christmas, this past Christmas, I gave him that gift. We flew to Dallas to watch a Cowboys game.

EL: Oh, wow. We might have been in Dallas around the same time. So I grew up in Dallas.

FN: Oh!

EL: And so if I were a football fan, that would be my team because I definitely have a strong association with Dallas Cowboys and my dad being in a good mood.

FN: There we go.

EL: And I grew up in the Troy Aikman era, so luckily the Cowboys did well a lot.

FN: Well, they’re doing well this year, too. So this Saturday, big game, right? Is it? Yeah.

KK: I feel old. So when I was growing up, I used to, I loved pro football growing up, and I've sort of lost interest now. But growing up in the ‘70s, it was either you're a Cowboys fan or a Steelers fan. That was the big rivalry.

FN: Yeah.

KK: I was not a Cowboys fan, I’m sorry to say.

FN: I never was either until recently.

KK: I was born in Wisconsin, and my mother grew up there, so I’m contractually obligated to be a Green Bay fan. I mean, I’m not allowed to do anything else.

EL: Well, it's very good hearted, big hearted of you, Fawn, to support your fiance's team. I admire that. I, unfortunately, I'm not that good a person.

FN: I definitely benefit because yeah, the stadium. What an experience at AT&T Stadium. Amazing.

EL: Yeah, it is quite something. We went to a game for my late grandfather's birthday a few years before he passed away. My cousins, my husband and I, my dad and uncle, a ton of people went to a game there. And that was our first game at that stadium. And yeah, that is quite an experience. I just, I don't even understand—like, the screen that they've got so you can watch the game bigger than the game is, like, the biggest screen I've ever seen in my life. I don't even understand how it works.

FN: Same here, it’s huge. And yet somehow the camera, when you watch the game on television, that screen’s not there, and then you realize that it's really high up. Yeah.

KK: Cool. Well, we learned some stuff, right?

EL: Yeah.

KK: And this has been great fun.

EL: Yeah, we want to make sure to plug your stuff. So Fawn is active on Twitter. You can find her at—what is your handle?

FN: fawnpnguyen. So my middle initial, Fawn P Nguyen.

EL: And Nguyen is spelled N-G-U-Y-E-N?

FN: Very good. Yes.

EL: Okay. And you also have a blog? What's the title of that?

FN: fawnnguyen.com. It’s very original.

EL: But it's just lovely. Your writing on there is so lovely. And yeah, it's just such a human picture. Like, when you read it, you really see the feeling you have for your students and everything, and it's really beautiful.

FN: Thank you. They are my love. And I just want to say, Evelyn, when you asked me to do this, I was freaking out, like oh my god, Evelyn the math queen. I mean, I was thinking God, can you ask me do something else like washing the windows? Make you some pho?

KK: Wait, we could have had pho?

FN: We could have had pho. Because this was terrifying. But you know, it's a joy. Pythagorean theorem, I can take on this one. Because it's just so much fun. I mean, I've been in the classroom for a long time, but I don't see myself leaving it anytime soon because yeah, I don't know what else I would be doing because this is my love. My love is to be with the kids.

KK: Well, bless you. It's hard work. My sister-in-law teaches eighth grade math in suburban Atlanta, and I know how hard she works, too. It's really—

FN: We’re really saints, I mean—

KK: You are. It’s a real challenge. And middle school especially, because, you know, the material is difficult enough, and then you're dealing with all these raging hormones. And it's really, it's a challenge.

EL: Well, thanks so much for joining us. I really enjoyed it.

KK: Thanks, Fawn.

FN: Thank you so much for asking me. It was a pleasure. Thank you so much.

[outro]

Episode 38 - Robert Ghrist

Kevin Knudson: Welcome to My Favorite Theorem, a podcast that starts with math and goes all kinds of weird places. I'm Kevin Knudson, professor of mathematics at the University of Florida and here is your other host.

Evelyn Lamb: Hi. I'm Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City. Happy New Year, Kevin.

KK: And to you too, Evelyn. So this will happen, this will get out there in the public later. But today is January 1, 2019.

EL: Yes, it is.

KK: And Evelyn tells me that it's cold in Utah, and I have my air conditioning on.

EL: Yes.

KK: That seems about right.

EL: Yeah. Our high is supposed to be 20 today. And the low is 6 or 7. So we really, really don't have the air conditioner on.

KK: Yeah, it's going to be 82 here today in Gainesville. [For our listeners outside of the USA: temperatures are in Fahrenheit. Kevin does not live in a thermal vent.] I have flip flops on.

EL: Yeah. Yeah, I’m a little jealous.

KK: This is when it's not so bad to live in Florida, I’ve got to say.

EL: Yeah.

KK: Anyway, Well, today, we are pleased to welcome Robert Ghrist. Rob, you want to introduce yourself?

RG: Hello, this is Robert Ghrist.

KK: And say something about yourself a little, like who you are.

RG: Okay. All right, so I am a professor of mathematics and electrical and systems engineering at the University of Pennsylvania. This is in Philadelphia, Pennsylvania. I've been in this position at this wonderful school for a decade now. Previous to that I had tenured positions at the University of Illinois at Urbana-Champaign and the Georgia Institute of Technology.

KK: So you've been around.

RG: A little bit.

EL: Oh, and so I was just wondering, is that a joint appointment between two different departments, or is it all in the math department?

RG: This is a split appointment, not only between two different departments, but between two different schools.

EL: Oh, wow.

RG: The math appointment is in the School of Arts and Sciences, and the engineering appointment is in the School of Engineering. This is kind of a tricky sort of position to work out. One of the things that I love about the University of Pennsylvania is that there are very low walls between the disciplines, and a sort of creative position like this is very workable. And I love that.

KK: Yeah, and your undergraduate degree was actually in engineering, right?

RG: That’s correct. I got turned on to math by my calculus professor, a swell guy by the name of Henry Wente, a geometer.

KK: Excellent. Well, I see you’re continuing the tradition. Well, we’ll talk about that later. And you actually have an endowed chair named for someone rather famous, right?

RG: That’s true. The full title is The Andrea Mitchell PIK professor of mathematics, electrical and systems engineering. This is Andrea Mitchell from NBC News. She and her husband Alan Greenspan funded this position. Did not intend to hire me specifically, or a mathematician. I think she was rather surprised when the chair that she endowed wound up going to a mathematician, but there it is, we get along swell. She's great.

KK: Nice.

EL: Nice. That's really interesting.

KK: Yup. So, Rob, what is your favorite theorem?

RG: My favorite theorem is, I don't know. I don't know the name. I don't know that this theorem has a name, but I love this theorem. I use this theorem all the time in all the classes I teach, it seems. It's a funny thing about basically Taylor expansion, or Taylor series, but in operator-theoretic language. And the theorem, roughly speaking, goes like this: take the differentiation operator on functions, let's say just single-input, single-output functions, the kind of things you do in basic calculus class. Call the differentiation operator D. Consider the relationship between that operator and the shift operator. I’m going to call the shift operator E. This is the operator that takes a function f and just shifts the input by 1. So E applied to f, evaluated at x, is really f evaluated at x+1. We use shift—

EL: I need a pencil and paper.

RG: Yeah, I know, right? We use the shift operator all the time in signal processing, in all kinds of things in both mathematics and engineering. And here's the theorem, here's the theorem. There's this wonderful relationship between these two operators. And it's the following. If you exponentiate the differentiation operator, if you take e to the D, you get the shift operator.

KK: This is remarkable.

RG: What does this mean? What does this mean?

KK: Yeah, what does it mean? I actually did work this out once. So what our listeners don't know is that you and I actually had this conversation once in a bar in Philadelphia, and the audio quality was so bad, we're having to redo this.

EL: Yeah.

KK: So I went home, and I worked this out. And it's true, it does work out. But what does this mean, Rob? Sort of, you know, in a manifestation physically?

RG: Yeah, so let me back up. The first question that I ask students when they show up in calculus class at my university is: what is e to the x? What does that even mean? What does that mean when x is an irrational number, or an imaginary number, or something like a square matrix, or an operator? And, of course, that takes us back to the interpretation of exponentiation in terms of the Taylor series at zero: I take that infinite series, and I use that to define what exponentiation means. And because things like operators, things like differentiations or shifts, you can take powers of those by composition, by iteration, and you can rescale them. Then you can exponentiate them.

So I can talk about what it means to exponentiate the differentiation operator by taking first D to the 0, which of course is the identity, the do-nothing operator, and then adding to it D, and then adding to that D squared divided by 2 factorial [n factorial is the product of the integers 1 × 2 × ⋯ × n], that's the second derivative, then D cubed divided by 3 factorial, that's the third derivative. If I keep going, I’ve exponentiated the differentiation operator. And the theorem is that this is the shift operator in disguise. And the proof is one line. It's Taylor expansion. And there you go.
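
A quick numerical sketch of the theorem (an illustrative example, not from the episode): for a polynomial the Taylor series terminates, so summing the derivatives divided by factorials really does shift the input by 1.

```python
from math import factorial

def deriv_x_cubed(x, k):
    """k-th derivative of f(x) = x^3, evaluated at x (zero for k > 3)."""
    table = [x**3, 3 * x**2, 6 * x, 6]
    return table[k] if k < len(table) else 0

def exp_D(x, terms=10):
    """Apply e^D = sum over k of D^k / k! to f(x) = x^3 at the point x."""
    return sum(deriv_x_cubed(x, k) / factorial(k) for k in range(terms))

x = 2.0
print(exp_D(x))      # 27.0, matching (x + 1)^3: e^D is the shift operator E
print((x + 1) ** 3)  # 27.0
```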

Now, this isn't your typical sort of my favorite theorem, in that I haven't listed all the hypotheses. I haven't been careful with anything at all. But one of the reasons that this is my favorite theorem is because it's so useful when I'm teaching calculus to students, when I'm teaching basic dynamical systems to students where, you know, in a more advanced class, yeah, we'd have a lot of hypotheses. And, oh, let's be careful. But when you're first starting out, first trying to figure out what is differentiation, what is exponentiation, this is a great little theorem.

EL: Yeah, this conceptual trip going between the Taylor series, or going between the idea of e to the x, or 2 to the x or something where we really have a, you know, a fairly good grasp of what exponentiation means in that case, if we, if we're talking about squares or something like that, and going then to the Taylor series, this very formal thing, I think that's a really hard conceptual shift. I know that was really hard for me.

RG: Agreed.

KK: Yeah. So I, I wonder, though, I mean, so what's a good application of this theorem, like, in a dynamics class, for example? Where does this pop up sort of naturally? And I can see that it works. And I also agree that this idea of—I start calculus there, too, by the way, when I say, you know, what does e to the .1 mean, what does that even mean?

RG: What does that even mean?

KK; Yeah, and that’s a good question that students have never really thought about. They’re just used to punching .1 into a calculator and hitting the e to the x key and calling it a day. So, but where would this actually show up in practice? Do you have a good example?

RG: Right. So when I teach dynamical systems, it's almost exclusively to engineering students. And they're really interested in getting to the practical applications, which is a great way to sneak in a bunch of interesting mathematics and really give them some good mathematics education. When doing dynamical systems from an applied point of view, stability is one of the most important things that you care about. And one of the big ideas that one has to ingest is that of stability criteria for, let's say, equilibria in a dynamical system. Now, there are two types of dynamical systems that people care about, depending on what notion of time you're using: continuous time or discrete time. Most books on the subject are written for one or the other type of system. I like to teach them both at once, but one of the challenges of doing that is that the stability criteria are different, very different-looking. In continuous time, what characterizes a stable equilibrium is when you look at all of the eigenvalues of the linearization, the real parts are less than zero. When you move to a discrete-time dynamical system, that is, a mapping, then again you're looking at eigenvalues of the linearization, but now you want the modulus to be less than 1. And I find that students always struggle with “Why is it different?” “Why is it this way here, and that way there?” And of course, of course, the reason is my favorite little theorem, because if I look at the evolution operator in continuous-time dynamics—that’s the derivative—versus the evolution operator in discrete-time dynamics—that is the shift, move forward one step in time—then if I want to know the relationship between the stable and unstable regions, it is exponentiation. If I exponentiate the left half of the complex plane, what do I get? I get the region in the plane with modulus less than 1.
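
That correspondence between the two stability regions can be checked directly (a small sketch with arbitrary sample eigenvalues, not from the episode): since |e^z| = e^Re(z), any z in the left half-plane exponentiates to a point inside the unit disk.

```python
import cmath

# Continuous-time stable eigenvalues: Re(z) < 0.
samples = [complex(-0.5, 3.0), complex(-2.0, -1.0), complex(-0.01, 10.0)]

for z in samples:
    w = cmath.exp(z)  # corresponding discrete-time eigenvalue
    print(f"Re(z) = {z.real:+.2f}  ->  |e^z| = {abs(w):.4f}")
    assert abs(w) < 1  # lands in the open unit disk: discrete-time stable
```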

KK: Right.

RG: I find that students have a real “aha” moment when they see that relationship, and when they can connect it to the relationship between the evolution operators.

EL: I’m having an “aha” moment about this right now, too. This isn't something I had really thought about before. So yeah, this is a really neat observation or theorem.

RG: Yeah, I never really see this written down in books.

KK: That’s—clearly now you should write a book.

RG: Another one?

KK: Well, we'll talk about how you spend your time in a little while here. But, no, Rob, I mean, so Rob has this—I don't know if it's famous, but well known—massive open online course through Coursera where he does calculus, and it's spectacular. If our listeners haven't seen it, is it on YouTube, Rob? Can you actually get it at YouTube now?

RG: Yes, yes. The University of Pennsylvania has all the lectures posted on a YouTube channel.

KK: Well, I actually downloaded it to my machine. I took the MOOC a few years ago, just for fun. And I passed! Remarkably.

RG: With flying colors, with flying colors.

KK: Yeah, I'm sure you graded my exam personally, Rob.

RG: Personally.

KK: And anyway, this is evidence for how lucky our students are, I think. Because, you know, you put so much time into this, and these little “aha” moments. And the MOOC is full of these things. Just really remarkable stuff, especially that last chapter, which is so next-level, the digital calculus stuff, which sort of reminds me of what we're talking about. Is there some connection there?

RG: Oh yes, it was creating that portion of the MOOC that really, really got me to do a deep dive into discrete analogs of continuous calculus, looking at the falling powers notation that is popular in computer science in Knuth’s work and others, thinking in terms of operators. Yeah, that portion of the MOOC really got me thinking a lot about these sorts of things.

KK: Yeah, I really can't recommend this highly enough. It’s really great.

EL: Yeah, so I have not had the benefit of this MOOC yet. So digital calculus, is that meaning, like, calculus for computers? Or what exactly is that? What does that mean?

RG: One of the things that I found students really got confused about in a basic single variable calculus class is, as soon as you hit sequences and series, their heads just explode because they get sequences and series confused with one another, and it all seems unmotivated. And why are we bothering with all these convergence tests? And where’d they come from? All this sort of thing.

EL: And why is it in a calculus class?

RG: Why is it even in a calculus class after all these derivatives and integrals? So the way that I teach it is when we get to sequences and series, you know, in the last quarter of the semester, I say, Okay, we've done calculus for functions with an analog input and an analog output. Now we want to redo calculus for functions with a digital input and an analog output. And such functions we're going to call sequences. But I'm really just going to think of it as a function. How would you differentiate such a thing? How would you integrate such a thing? That leads one to think about finite differences, which leads to some nice approaches to numerical methods. That leads one to look at sums and numerical integration. And when you get to improper integrals over an unbounded domain? Well, that's series, and convergence tests matter.
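
One concrete instance of this digital calculus (an illustration of our own, not taken from the MOOC): with the forward difference Δf(x) = f(x+1) − f(x) standing in for the derivative, Knuth's falling powers x(x−1)⋯(x−k+1) behave exactly the way ordinary powers x^k do under differentiation.

```python
def falling(x, k):
    """Falling power: x * (x - 1) * ... * (x - k + 1)."""
    result = 1
    for i in range(k):
        result *= x - i
    return result

def forward_diff(f, x):
    """Discrete analog of the derivative: (delta f)(x) = f(x+1) - f(x)."""
    return f(x + 1) - f(x)

# The difference of the k-th falling power is k times the (k-1)-st,
# mirroring d/dx x^k = k x^(k-1):
k, x = 4, 7
print(forward_diff(lambda t: falling(t, k), x))  # equals k * falling(x, k - 1)
print(k * falling(x, k - 1))
```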

KK: Yeah, it's super interesting. We will provide links to this. We’ll find the YouTube links and provide them.

EL: Yeah.

KK: So another fun part of this podcast, Rob, is that we ask our guests to pair their theorem with something, and I assume you're going to go with the same pairing from our conversation back in Philadelphia.

RG: Oh yes, that’s right.

KK: What is it?

RG: My work is fueled by a certain liquid beverage.

KK: Yeah.

RG: It’s not wine. It's not beer. It's not whiskey. It's not even coffee, although I drink a whole lot of coffee. What really gets me through to that next-level math is Monster. That's right. Monster Energy Drink, low carb if you please, because sugar is not so good for you. Monster, on the other hand, is pretty great for me, at any rate. I do not recommend it for people who are pregnant or have health problems, problems with hearts, anything like this, people under the age of 18, etc, etc. But for me, yeah, Monster.

KK: Yeah. There's lots of empties in your office, too like, up on the shelf there, which I'm sure have some significance.

RG: The wall of shame, that’s right. All those empty monster cans.

KK: See, I can't get into the energy drinks. I don't know. I mean, I know you're also fond of scotch. But does that does that help bring you down from the Monster, or is it…

RG: That’s a rare treat. That's a rare treat.

KK: Yeah, it should be. So when did your obsession with Monster start? Does this go back to grad school, or did it even exist when we were in grad school? Rob and I are roughly the same age. Were energy drinks a thing when we were in grad school? I don't remember.

RG: No, no. I didn't have them until, gosh, what is it, sometime within the past decade? I think it was when I was first working on that old calculus MOOC, like, what was that, six years ago? Six, seven years ago, is when I was doing that.

KK: Yup.

RG: That was difficult. That was difficult work. I had to make a lot of videos in a short amount of time. And, yup, the Monster was great. I would love to get some corporate sponsorship from them. You know, maybe, maybe try to pitch extreme math? I don't know. I don't think that's going to work.

KK: I don't know. I think it's a good angle, right? I mean, you know, they have this monster truck business, right? So there is this sort of whole extreme sports kind of thing. So why not? You know?

EL: Yeah, I'm sure they're just looking for a math podcast to sponsor. That's definitely next on their branding strategy.

KK: That’s right. Yeah. But not us. They should sponsor you, Rob. Because you're the true consumer.

RG: You know, fortune favors the bold. I'd be willing to hold up a can and say, “If you're not drinking Monster, you're only proving lemmas,” or something like that.

EL: You’ve thought this through. You've got their pitch already, or their slogan already made.

RG: That’s right. Yup.

KK: All right. Excellent. So we always like to give our guests a chance to to pitch their projects. Would you like to tell us about Calculus Blue?

RG: Oh, absolutely! This is—the thing that I am currently working on is a set of videos for multivariable calculus. I'm viewing this as something like a video text, a v-text instead of an e-text, where I have a bunch of videos explaining topics in multivariable calculus that are arranged in chapters. They’re broken up into small chunks, you know, roughly five minutes per video. These are up on my YouTube channel. There's another, I don't know, five or six hours’ worth of videos that are going to drop some time in the next week covering multivariate integration. This is a lot of fun. I'm having a ton of fun doing some 3D drawing, 3D animation. Multivariable calculus is just great for that kind of visualization. This semester, I'm going to use the videos to teach multivariable calculus at Penn in a flipped manner and experiment with how well that works. And then it'll be available for anyone to use.

KK: Yeah, I'm looking forward to these. I see the previews on Twitter, and they really are spectacular. How long does any one of those videos take you? It seems like, I mean, I know you've gotten really good at the graphics packages that you need to create those things. But, you know, like a 10-minute video. How long does one of those things take to produce?

RG: I don't even want to say.

KK: Okay.

RG: I do not even want to say, no. I've been up since four o'clock this morning rendering video and compositing. Yeah, this is my day, pretty much. It's not easy. But it is worthwhile. Yeah.

KK: Well, I agree. I mean, I think, you know, so many of our colleagues, I think, kind of view calculus as this drudgery. But I still love teaching it. And I know you do, too.

RG: Absolutely.

KK: And I think it's important, because this is really a lot of what our job is, as academics, as professional mathematicians. Yes, we're proving theorems, all that stuff, that's great. But, you know, day in, day out, we're teaching undergraduates introductory mathematics. That's a lot of what we do. And I think it's really important to do it well.

EL: Well, and it can help, you know, bring people into math like it did for Rob.

KK: That’s right.

RG: Exactly. That's exactly right. Controversial opinion, but, you know, you get these people out there who say, oh, calculus, this is outdated, we don't need that anymore, just teach people data analysis or statistics. I think that's a colossal error. And that it's possible to take all of these classical ideas in calculus and just make them current, make them relevant, connect them to modern applications, and really reinvigorate the subject that you need to have a strong foundation in in order to proceed.

KK: Absolutely. And I, you know, I try to mix the two, I try to bring data into calculus and say, you know, look, engineering students, you’re mostly going to have data, but this stuff still applies. You know, calculus for me is a lot about approximation, right? That's what the whole Taylor Series business, that's what it's for.

RG: Definitely.

KK: And really trying to get students to understand that is one of my main goals. Well, this has been great fun. Thanks for taking time out from rendering video.

EL: Yeah, video rendering.

RG: Yes. I'm going to turn around and go right back to rendering as soon as we're done.

KK: That’s right, you basically have a professional-quality studio in your basement, right? Is this how this works?

RG: This is how it works. Been renovating, oh, I don't know. It's about a year ago I started renovations and got a nice little studio up and running.

KK: Excellent. Do you have, like, foam on the walls and stuff like that?

RG: Yes, I'm touching the foam right now.

KK: All right. Yeah. So Evelyn and I aren't that high-tech. We've just now gotten to the sort of, like, multi-channel recording kind of thing.

RG: Ooh.

KK: Well, yeah, well, we're doing this now, right, where we’re each recording our own audio. I'm pleased with the results so far. Well, Rob, thanks again, and we appreciate your joining.

EL: Thanks for joining us.

RG: Thank you. It's been a pleasure chatting.

Episode 37 - Cynthia Flores

Evelyn Lamb: Hello and welcome to My Favorite Theorem, a podcast where we ask mathematicians to tell us about their favorite theorems. I'm Evelyn Lamb. I'm one of your hosts. I am a freelance math and science writer in Salt Lake City, Utah. Here's your other host.

Kevin Knudson: Hi, I'm Kevin Knudson, professor of mathematics at the University of Florida. How's it going, Evelyn?

EL: All right. I am making some bread right now, and it smells really great in my house. So it's slightly torturous because I won't be able to eat it for a while.

KK: Sure. So I make my own pizza dough. But I always stop at bread. I never take that extra step. I don't know why. Are you making baguettes? Are you doing the whole...

EL: No. I do it in the bread machine.

KK: Oh.

EL: Because I'm not going to make, yeah, I'm not going to knead and shape a loaf. So that's the compromise.

KK: Oh. That's the fun part. So I've had a sourdough starter, the same one, for at least three or four years now. I've kept this thing going. And I make my own pizza crust, but I'm just lazy with bread. I don't eat a lot of bread.

EL: So yeah. And joining us to talk about--on BreadCast today--I'm very delighted to welcome Cynthia Flores. So hi. Tell us a little bit about yourself.

Cynthia Flores: Oh, thanks for having me on the show. I'm so grateful to join you today, Kevin and Evelyn. Well, I'm an assistant professor of mathematics and applied physics at California State University Channel Islands in the Department of Math and Physics. The main campus is not located on the Channel Islands.

EL: Oh, that's a bummer.

CF: It's actually located in Camarillo, California. It's one hour south of Santa Barbara, one hour north of downtown Los Angeles, roughly.

But the math department does get to have an annual research retreat at the research station located on Santa Rosa Island. So that's kind of neat.

KK: Oh, how terrible for you.

EL: Yeah. That must be so beautiful.

KK: I was in Laguna Beach about a week and a half ago, which is, of course, further south from there, but still just spectacularly beautiful. Really nice.

CF: Yeah, I feel really fortunate to have the opportunity to stay in the Southern California area. I did my PhD at UC Santa Barbara, where I studied the intersections of mathematical physics, partial differential equations, and harmonic analysis, which has motivated what I'm going to talk about today.

KK: Good, good, good.

EL: Yeah. Well, and Cynthia was on another podcast I host, the Lathisms podcast. And I really enjoyed talking with her then about some of the research that she does. And she had some fun stories. So yeah, what is your favorite theorem? What do you want to share with us today?

CF: I'm glad you asked. I have several favorite theorems, and it was really hard to pick, and my students have heard me say repeatedly that my favorite theorem is the fundamental theorem of calculus.

EL: Great theorem.

KK: Sure.

CF: It's also a very, I find, intimidating theorem to talk about on this series, especially with so many creative individuals pairing their favorite theorems with awesome foods and activities. And so I just thought that one was maybe too much to live up to. And I wanted to start with something that's a little closer to my research area. So I found myself thinking of other favorites, and there was one in particular that does happen to lie at the intersection of my research areas, which are mathematical physics, PDEs, and harmonic analysis. And it's known as Heisenberg's Uncertainty Principle. That's how it's really known by the physics community. And in mathematics, it's most often referred to as Heisenberg's Uncertainty Inequality.

EL: Okay.

CF: So, is it familiar? I don't know.

EL: I feel like I've heard of it, but I don't--I feel like I've only heard of it from kind of the pop science point of view, not from the more technical point of view. So I'm very excited to learn more about it.

KK: So I actually have a story here. I taught a course in mathematics and literature a couple years back with a friend of mine in the foreign languages department. And we watched A Serious Man, this Coen Brothers movie, which, if you haven't seen it, is really interesting. But anyway, one of the things I made sure to talk about was Heisenberg's Uncertainty Principle, because that's sort of one of the themes, and of course now I've forgotten what the inequality is. But I mean, I remember it involves Planck's constant, and there's some probability distribution, so let's hear it.

CF: Mm hmm. Yeah, yeah. So I was, I was like, this is what I'm going to pair it with. Like, I'm going to pair the conversation, like the mathematics description, physical description, with, basically I was thinking of pairing it with something Netflix and chill-like. I'm really glad that you brought that up, and I'll tell you more in a little bit about what I'm pairing it with. But first, I'll start mathematically. Mathematically, the theorem could be stated as follows. Given a function with sufficient regularity and decay assumptions, the square of the L2 norm of the function is less than or equal to 2 over the dimension the function's defined on, multiplied by the product of the L2 norm of its first moment and the L2 norm of its gradient. And so mathematically, that's the inequality.
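In symbols, the inequality Cynthia describes can be written as follows, for a function f on R^n with sufficient regularity and decay (the notation here follows standard PDE references; the constant 2/n matches her statement):

```latex
\| f \|_{L^2(\mathbb{R}^n)}^2 \;\le\; \frac{2}{n}\, \big\| \, |x|\, f \,\big\|_{L^2(\mathbb{R}^n)} \, \big\| \nabla f \big\|_{L^2(\mathbb{R}^n)}
```

Here the factor involving |x| f is the "first moment" term and the gradient term carries the momentum (or energy) information.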

This wasn't stated this way by Heisenberg in the 1920s; that work, I believe, is what he was later recognized with a Nobel Prize for. Physically, Heisenberg described different ways it could be understood. Uncertainty might be understood as the lack of knowledge of a quantity on the part of an observer, for example, or as uncertainty due to experimental inaccuracy, or ambiguity in some definition, or statistical spread, as Kevin mentioned.

And actually, I'm going to recommend to the listeners to go to YouTube. There's a YouTuber named Veritasium, I'm not sure if I'm pronouncing that correctly.

EL: Oh, yeah, yeah.

CF: Yeah, he has a four minute demonstration of the original thought by Heisenberg and an experiment having to do with lasers that basically tells us it's impossible to simultaneously measure the position and momentum of a particle with infinite precision. The infinite precision part would be referring to something that we might call certainty. So in the experiment that the YouTuber is recreating, a laser is shone through two plates that form a slit, and the slit is becoming narrower and narrower. The laser is shone through the slit and then projected onto a screen. And as the slit is made narrower, the spot on the screen, as expected, is also becoming narrower and narrower. And at some point--you know, Veritasium does a really good job of creating this sort of little "what's going to happen" excitement--just when the slit seems to completely disappear and become infinitesimally small, the expectation might be that the laser projecting onto the screen would disappear too, but actually at a certain point, when the slit is so narrow it's about to close, the spot on the screen becomes wider. We see spread. And this is because the photons of light have become so localized at the slit that their horizontal momentum has to become less well defined. And this is a demonstration of Heisenberg's Uncertainty Principle. And so according to Heisenberg--and this is from one of his manuscripts, and I wish I would have written down which one, I'm just going to read it--"at the instant of time when the position is determined, that is, at the instant when the photon is scattered by the electron, the electron undergoes a discontinuous change in momentum, and this change is the greater the smaller the wavelength of the light employed, in other words, the more exact determination of the position. 
At the instant at which the position of the electron is known, its momentum, therefore, can be known only up to magnitudes which correspond to that discontinuous change; thus, the more precisely the position is determined, the less precisely the momentum is known, and conversely." So momentum and position were sort of the original context in which Heisenberg's Uncertainty Principle was stated, mainly for quantum mechanics. The inequality theorem that I presented was really from an introductory book on nonlinear PDEs, which is really what I study, nonlinear dispersive PDEs specifically. So you use a lot of Fourier transforms and stuff like that.

But it has multiple variations, one of which is Heisenberg's Uncertainty Principle of energy and time, which more or less is going to tell you the same thing, which is going to bring me to my pairing. Can I share my pairing?

KK: Sure.

CF: My pairing--I want to pair this with a Netflix and chill evening with friends who enjoy the Adult Swim animated show Rick and Morty.

KK: Okay.

CF: And an evening where you also have the opportunity to discuss sort of deep philosophical questions about uncertainty and chaos. And so really the show's on Hulu, so you could watch the show. I don't know if either one of you are familiar with the show.

KK: Oh, yeah. Yeah. I have a 19 year old son, how could I not be?

CF: And actually my students were the ones that brought this show and this specific episode to my attention, and I watched it, and so I'll say a little bit about the show. It's somewhat inspired by the Back to the Future movies. It's a comedy about totally reckless and epic space adventures with lots of dark humor and a lot of almost-real science. I mean, Rick is a mad scientist, and Morty is in many ways opposite to Rick. Morty is Rick's grandson and sidekick. Rick uses a portal gun that he created. It allows him and Morty to travel to different realities where they go on some fun adventures. And there are several references to formulas and theorems describing our everyday life. In particular, the season two premiere of Rick and Morty pays homage to Heisenberg's Uncertainty Principle as well as Schrodinger's cat paradox and includes a mathematical proof of Rick's impression of his grandkids.

I won't spoil what he proves about them. I'll let the listeners go check that out. But basically season one ended with Rick freezing time for everyone except for him and his two grandkids, Morty and Summer. In this premiere, Rick unfreezes time and causes a disturbance in their reference timeline, and any uncertainty introduced by the three individuals gets them removed entirely from time and causes a split in reality into multiple simultaneous realities. The entire episode is following Heisenberg's Uncertainty Principle for energy and time and alluding to the concept from the quantum world that chaos is found in the distribution of energy levels of certain atomic systems. So I'm going to back it up a little bit. We talked about Heisenberg's Uncertainty Principle in terms of momentum and position. Heisenberg's Uncertainty Principle for energy and time is for simultaneous measurements of energy and time, and specifically the distribution of energy levels and the uncertainty in its measurement is a metaphor for chaos within the system. So within a time interval, it's not possible to measure energy precisely. There has to be uncertainty in the measurement, so that the product of the uncertainty in energy and the uncertainty in time remains larger than h over 4π, where h is Planck's constant. In other words, you cannot simultaneously have small uncertainty in both measurements. In other words, lots of certainty, right? You have small uncertainty, you have lots of certainty.
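For reference, the energy and time version she describes is usually written as follows, with h denoting Planck's constant:

```latex
\Delta E \, \Delta t \;\ge\; \frac{h}{4\pi}
```

So shrinking the uncertainty in energy forces the uncertainty in time to grow, and vice versa, exactly mirroring the momentum and position version.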

So you can't have that both happening. So less chaos leads to more uncertainty and vice versa. Less uncertainty (or more certainty) leads to more chaos. And so this episode, if you watch it, seems to present the common misconception that more uncertainty leads to more chaos. And this is where I've thought about this really hard and even tried to find someone who put it nicely, maybe in a video, but I couldn't. But I think--this is just my opinion--I think the writers really got it right on this episode, because the moment that the timeline merges in the episode is the moment when the main character Rick has given up on his chances of fixing a broken tool he was counting on for fixing the timeline. So in fact, in this episode, he's shown doing something which is unlike him. He's shown praying and asking God, or his maker, for forgiveness, you know, as all of these realities are collapsing. And in my opinion, this is the largest amount of uncertainty Rick has ever displayed throughout the series. And this happens right at the moment that he restores the timeline and therefore reduces the chaos. So I really think the writers got it right on that.

EL: That's really neat.

CF: Yeah, yeah, I loved it. And so for me, I also find this a perfect time to, you know, hang out with friends, Netflix and chill it up, and then afterwards talk, you know. I would really like to challenge the listeners to observe this phenomenon in their real lives. And for some people, it might be a stretch. But to some extent, I think we observe Heisenberg's Uncertainty Principle in our daily lives, in the sense that the more sure we are about something, or the more plans we've made for something, the more likely we are to observe chaos, right? The more things we've gotten planned out, the more things that are actually likely to go wrong. I get to see this at the university, right, with so many young minds planning out their futures. And I really see that the more certain a student feels about their plan, the more likely they're going to feel chaos in their life if things don't go according to plan. So I really enjoy Heisenberg's Uncertainty Principle on so many levels, mathematically, physically, maybe even philosophically, and observing it in our real lives.

EL: Yeah, I really like the metaphorical aspect you brought here. And if I can reveal how naive I am about Heisenberg's Uncertainty Principle, I didn't know that it was applicable in these different things other than just position and momentum. Maybe I'm the only one. But that's really interesting. So are there a lot of other places where this is also the case?

CF: Well, mathematically, it's just a function defined, for example, on Rn that has regularity and decay properties. So the statement of the theorem is that under those decay and regularity assumptions, the square of that function's L2 norm has to remain less than or equal to 2 over n multiplied by the product of the L2 norm of the first moment of the function times the L2 norm of the gradient of the function. And so in some sense, we can view that as talking about momentum and position, and so that has applications to various physical systems. But in some sense, whenever you have a gradient of a function, it can also relate to some system's energy. And so mathematically, I think we are in a position to view this in a more abstract way, whereas physically, you tend to only read about the momentum and position version and less about the energy and time version. So that's why it took me a long time to think about whether the writers of Rick and Morty got it right, because it seems they're giving the impression that uncertainty is leading to chaos. Because every time someone feels uncertainty, the timeline gets split and multiple simultaneous versions of reality are going on at the same time, introducing more chaos into the system. And I kept thinking about it: "But mathematically, that's not what I learned. What's happening?" And so I really think it's at the end, where Rick merges all the timelines together and basically reduces the chaos in the system, that we're seeing Heisenberg's Uncertainty Principle at play. We're seeing that the moment where Rick was the least certain about himself and his abilities to fix this is the moment where the timeline was fixed. I really think someone must have known about the energy and time version of Heisenberg's Uncertainty Principle.

KK: I need to go back and watch this. I've seen all of the first two seasons, but I don't remember this one in particular. My son should be here. He could tell you all about it. It's a bit raw of a show, though, so listener warning, if you don't like obscenities and--

EL: Delicate ears beware.

KK: Really not politically correct humor very often. It's, you know, it's

CF: I agree.

KK: it's a little raw.

CF: Yeah.

KK: It's entertaining. But it's

CF: Yeah, it's definitely dark humor and lots of references to sci-fi horror. And, you know, some references are done well, some are just a little, I don't know.

KK: Yeah.

CF: But I definitely learned about this show from undergraduate students during a conference where we were, you know, stuck in a long commute. And students found themselves talking to me about all sorts of things. And they mentioned this episode where something was proven mathematically. I'm a huge fan of Back to the Future, and I only recently, you know, watched the show, even though it's been around for some time, apparently. And they're telling me that there are mathematical proofs. And so of course I'm like, "Well, I'm gonna have to check out the mathematical proofs." Any mathematician who watches the show could see that the mathematical proof, well, I'm not sure that it's much of a mathematical proof.

So it got me to watch the episode. And once I was watching the episode, what really drew my attention was that I realized they're talking about chaos and uncertainty.

EL: So going back to the theorem itself, where did you encounter that the first time?

CF: First year grad school at UC Santa Barbara. And actually, I never told the professor who was teaching the course, who turned out to eventually go on and become my PhD advisor. That first year that I was a graduate student at UC Santa Barbara, I was much more interested in differential geometry and topology than I was in analysis. And this was in one of our homework assignments, sort of buried in there. And I don't remember exactly who it was, if it was the professor himself, or maybe one of his current graduate students, or a TA for the course, that explained that inequality and its physical relationship to chaos and uncertainty. And I'm pretty sure that the conversation, with whoever it was, was about chaos and uncertainty. It wasn't about momentum and position, which I think would have turned me off at the time. But we were talking about this, relating it to uncertainty in measurements and chaos present in the system. And for me, since that moment, I think I've lived by this sort of mantra that the more planning I do, the more things are going to go wrong. But, you know, I kind of have to keep in mind that I can only plan so much without introducing some chaos into the system. And so it made a huge impression on me. And I asked this professor who assigned this homework assignment, I'm sure it was in the first quarter of graduate real analysis, if he had more reading for me to do. And he became my advisor, and I went into this area, mathematical physics, PDEs, and harmonic analysis. So it had a huge influence on me. And that's why I wanted to include it as my favorite theorem.

EL: Yeah, that's such a great story. It's like your superhero origin story, is this theorem.

KK: Yeah. So surely this L2 norm business, though, came after the fact. Like Heisenberg just sort of figured it out in the physics sense, and then some mathematician must have come up with the L2 norm business.

CF: Right. I actually think Heisenberg came up with it in the physical sense. There was someone who wrote down something mathematically, and I actually haven't gone--I should--I haven't gone through the literature to find out which mathematician wrote down the L2 norm statement of the inequality.

But in the book Introduction to Nonlinear Dispersive Equations by Felipe Linares and Gustavo Ponce--Gustavo's my advisor--on page 60, it's Exercise 3.14. It has you prove Heisenberg's inequality, stated the way I've stated it here in this podcast, and it's a really neat analysis exercise. You know, you have to use the density of the Schwartz class functions, you use integration by parts. It's a really neat exercise and really helps you use those tools that PDE people use. And yeah, my advisor doesn't know how much that exercise influenced my decision to study mathematical physics, PDEs, and harmonic analysis.
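For listeners who want a head start on the exercise, here is a sketch of the standard integration-by-parts argument, assuming f is a Schwartz function on R^n (this is one common route to the inequality, not necessarily the book's intended one). Since the divergence of the vector field x is n, and boundary terms vanish for Schwartz functions:

```latex
n \int_{\mathbb{R}^n} |f|^2 \, dx
  = \int_{\mathbb{R}^n} |f|^2 \, (\nabla \cdot x) \, dx
  = - \int_{\mathbb{R}^n} x \cdot \nabla\big(|f|^2\big) \, dx
  = - 2 \,\mathrm{Re} \int_{\mathbb{R}^n} \bar{f} \, \big( x \cdot \nabla f \big) \, dx
  \;\le\; 2 \, \big\| \, |x| f \, \big\|_{L^2} \, \big\| \nabla f \big\|_{L^2},
```

where the last step is the Cauchy-Schwarz inequality. Dividing by n gives the stated inequality for Schwartz functions, and density of the Schwartz class extends it to the full function class.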

KK: Good, then now our listeners have an exercise, too. So,

EL: Yeah.

CF: That's right. Yeah. So my recommendations are watch Rick and Morty, try exercise 3.14 from Introduction to Nonlinear Dispersive Equations, and have a deep philosophical conversation about uncertainty and chaos with your good friends as you Netflix and chill it out.

EL: Nice. Yeah, wise words, definitely. Thanks a lot for joining us. I really need to brush up on some of my physics, I think, and think about this stuff.

CF: I'm happy to talk about it anytime you like. Thank you so much for the invitation. I've really enjoyed talking to you all.

KK: Thanks, Cynthia.

Episode 36 - Nikita Nikolaev & Beatriz Navarro Lameda

Kevin Knudson: Welcome to My Favorite Theorem, a special Valentine’s Day edition this year.

Evelyn Lamb: Yes.

KK: I’m one of your hosts, Kevin Knudson, professor of mathematics at the University of Florida. This is your other host.

EL: Hi. I’m Evelyn Lamb. I’m a math and science writer in Salt Lake City, Utah.

KK: How’s it going, Evelyn.

EL: It’s going okay. We got, we probably had 15 inches of snow in the past day and a half or so.

KK: Oh. It’s sunny and 80 in Florida, so I’m not going to rub it in. This is a Valentine’s Day edition. Are you and your spouse doing anything special?

EL: We’re not big Valentine’s Day people.

KK: So here’s the nerdy thing I did. So Ellen, my wife, is an artist, and she loves pens and pencils. There’s this great website called CW Pencil Enterprise, and they have a little kit where you can make a bouquet of pencils for your significant other, so this is what I did. Because we’ve been married for almost 27 years. I mean, we don’t have to have the big show anymore. So that’s our Valentine’s Day.

EL: Yeah, we’re not big Valentine’s Day people, but I got very excited about doing a Valentine’s Day episode of My Favorite Theorem because of the guests that we have. Will you introduce them?

KK: I know! We’re pleased to have Nikita Nikolaev and Beatriz Navarro, and they had some popular press. So why don’t you guys introduce yourselves and tell everybody about you?

Nikita Nikolaev: Hi. My name is Nikita. I’m a postdoctoral fellow at the University of Geneva in Switzerland. I study algebraic geometry of singular differential equations.

Beatriz Navarro Lameda: And I’m Beatriz. I’m currently a Ph.D. student at the University of Toronto, but I’m doing an exchange program at the University of Geneva, so that’s why we’re here together. And I’m studying probability, in particular directed polymers in random environments.

EL: Okay, cool! So that is actually applicable in some way.

BNL: Yes, it is.

EL: Oh, great!

KK: So why don’t we talk about this whole thing with the wedding? So we had this conversation before we started recording, but I’m sure our listeners would love to hear this. So what exactly happened at your wedding?

NN: Both of us being mathematicians, of course, almost everybody was either a mathematician or somehow mathematically related, most of our guests. So we decided to have a little bit of fun at the wedding, sprinkle a little bit of maths here and there. And one of the ideas was, when the guests arrive at the dinner, in order for them to find which table they're sitting at, they would have to solve a small mathematical problem. They would arrive at the venue, and they would open their name card, and the name card would contain a first coordinate.

BNL: And a question.

NN: And a question. And the questions were very bespoke. It really depended on what we know their mathematical background to be. We had many people in my former research group, so I pulled questions from their papers or some of the talks they’ve given.

EL: This is so great! Yeah.

NN: And there were some people who are, maybe they’re not mathematicians, they’re engineers or chemists or something, and we would have questions which are more mathematically flavored rather than actual mathematical questions just to make everyone feel like they’re at a wedding of mathematicians.

EL: Right.

NN: So right. They had to find out two coordinates. All the tables were named after regular polyhedra, and they had to find out what their polyhedron of the night was.

EL: Okay.

NN: In order to do that, there was a matrix of polyhedra. Each one had two coordinates, and once you find out what the two coordinates are, you look at that matrix, and it gives you what polyhedron you’ve got. So as a guest, you would open the name card, and it would contain your first coordinate and a question and a multiple choice answer. And the answers were,

BNL: Usually it was one right answer and two crazy answers that had nothing to do with the question. Most of them were 2019 because that’s the year we got married, and then some other options. And then once you choose your answer, you would be directed to some other card that had a name of some mathematical term or some theorem, and that one would give you the second coordinate.

NN: So we made this cool thing we called the maths tree. We had several of these manzanita trees, and we put little cards on them with names, with these mathematical terms, with the answers, with people's questions, and we just had this tree with more than 100 cards hanging down. What I liked was that in mathematicians it induced this kind of hunting instinct. You somehow look at this tree, and there are all these terms that you recognize and have seen before in your mathematical career, and you're searching for the one that you know is the correct one.

BNL: And of course we wanted to make sure everyone found their table, so if they for any reason chose the wrong answer, they would also be directed to some card with a mathematical term. And when they opened it, it would say, “Oops, try again.” So that way you knew, okay, I just have to go and try again and find what the correct coordinate would be.

KK: This is amazing.

NN: And then to foolproof the whole thing, during the cocktail hour, they would do this kind of hunting for mathematical terminology, but then the doors would open into the dinner room, and just like most textbooks, when you open the back of the textbook, there’s the answer key, in the dinner room we had the answer key, which was a poster with everyone’s names

BNL: And their polyhedra

NN: And their polyhedron of the night.

EL: Yeah.

NN: So it was foolproof. I think some commentators on the internet were very concerned that some guests wouldn’t find their seat and starve to death.

EL: Leave hungry.

NN: No, it was all thought through completely.

KK: Some of the internet comments, people were just incredulous. Like, “I can’t believe these people forced their guests to do this!” They don’t understand, we would think this was incredible. This is amazing!

EL: Yeah! So delightfully nerdy and thoughtful. So, yeah, we’ve mentioned, this did end up on the internet in some way, which is how I heard about it, because I sadly was not invited to your wedding. (Since this is the first time I have looked at you at all.) So yeah, how did it end up making the rounds on some weird corner of the internet?

NN: So basically a couple of weeks before the wedding, I made a post on Facebook. It was a private Facebook post, just to my Facebook friends. You know, a Facebook friend is a very general notion.

EL: Right.

NN: I kind of briefly explained that all our guests are mathematicians, so we’re going to do this cool thing, we’re going to come up with mathematical questions, and one of my Facebook “friends,” a Facebook acquaintance, I later found out who it was, they didn’t like it so much, and they did a screengrab, and then they posted, with our names redacted and everything redacted, they made a post on Reddit which was like, “Maths shaming, look, these people are forcing their guests to solve a mathematical question to find their seat, maths shaming.”

BNL: It was in the “bridezilla” thread. “This crazy bride is forcing their guests to solve mathematical problems,” and how evil she is. Which, funny because Nikita was the one who wrote that Facebook post.

NN: I actually was the one.

BNL: So it was not a bridezilla, it’s what I like to call a Groom Kong.

NN: That’s right. So then this Reddit thread kind of got very popular, and later some newspaper in Australia picked it up, and then it just snowballed from there. Fox News, Daily Mail, yeah.

KK: Well, this is great. This is good, now you’ve had your 15 minutes of fame, and now life can get back to normal.

EL: Yeah.

KK: This is a great story. Okay, but this is a podcast about theorems, so what is your favorite theorem?

NN: Right, yeah. So we kind of actually thought long and hard about what theorem to choose. Like, what is our favorite theorem is such a difficult question, actually.

KK: Sure.

NN: It's kind of like, you know, what is your favorite music piece? And it's, I mean, it's so many variables. Depends on the day, right?

EL: Yeah.

NN: But we ended up deciding that we were going to choose the intermediate value theorem.

KK: Oh, nice.

NN: As our favorite theorem.

KK: Good.

BNL: So, yes, the intermediate value theorem is probably one of the first theorems that you learn when you go to university, right? In calculus, you start learning basic calculus, and it's one of the first theorems that you see. What it says is: suppose you start with a continuous function f, and you look at some interval [a, b], so the function f sends a to f(a) and b to f(b), and then you pick any value y that is between f(a) and f(b). Then you know that you will find a point c between a and b such that f(c) equals y.

So it looks like an incredibly simple statement. Obvious.
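The theorem also powers a classic algorithm. As a small illustration (a sketch for readers, not something discussed in the episode), the bisection method uses exactly this sign-change guarantee: if a continuous f has opposite signs at the endpoints, the intermediate value theorem promises a root in between, and repeatedly halving the interval hunts it down.

```python
def bisect(f, a, b, tol=1e-10):
    """Find c in [a, b] with f(c) close to 0, assuming f is continuous
    and f(a), f(b) have opposite signs. The intermediate value theorem
    (with y = 0) guarantees such a c exists in the interval."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        c = (a + b) / 2
        fc = f(c)
        # Keep the half-interval that still has a sign change.
        if fa * fc <= 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return (a + b) / 2

# Example: sqrt(2) is the root of x^2 - 2 on [1, 2].
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Each iteration halves the interval, so the method converges even though it never uses anything about f beyond continuity and the two signs.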

KK: Sure.

BNL: Right, but it is quite a powerful statement. Most students believe it without proof. They don't need it. It's, yes, absolutely obvious. But, well, we have lots of things that we like about the theorem.

NN: Yeah, I mean, it feels incredibly simple and completely obvious. You look at it, and, you know, it's the only thing that could possibly be true. And the cool thing about it, of course, is that it represents kind of the essence of what we mean by continuous function.

KK: Sure.

EL: Yeah.

NN: In fact, if you look at the history, before our modern formal definition of continuity, people were very confused about what a continuous function actually should mean, and this intermediate value property was one that people used to use as part of the definition of continuity. Many thought, erroneously, that the intermediate value property was equivalent to continuity. In some sense, it's what you would want to believe, because it really is the property that more or less formalizes what we normally tell our students: heuristically, a continuous function is one that you can draw without taking your pencil off the paper, and that's what the intermediate value property represents, really.

But for all its simplicity and triviality, if you actually look at it properly, if you look at our modern definition of continuity using epsilon-delta, then it becomes not obvious at all.

BNL: Yes. So you look at the definition of continuity, and you have epsilons and deltas; how is it possible that from this thing you get such an obvious statement as the intermediate value theorem? So what the intermediate value theorem is telling us is that, well, continuous functions do exactly what we want them to do. They are what we intuitively think of as continuous functions. So in a sense, what the intermediate value theorem is doing for us is serving as a bridge to this formal definition that we encounter in university. So we start first year calculus, and our professor gives us this epsilon-delta definition of continuity. And it's like, oh, but in high school, I learned that a continuous function is one that I can draw without lifting my pencil. Well, the intermediate value theorem is precisely that. It's connecting the two ideas, in a very powerful way.

NN: Yeah. And also, you know, it cannot be overstated how useful it is. I mean, we use it all the time. As a geometer, of course, you know, you use some generalization of it, that continuous functions send connected sets to connected sets, and we use it all the time, absolutely without thinking. We take it absolutely for granted.

BNL: So even if you do analysis, you are using it all the time, because you can see that the intermediate value theorem is also equivalent to the least upper bound property, that is, the completeness axiom of the real numbers. Which is quite incredible, to see that just having the intermediate value theorem could be equivalent to such a fundamental axiom for the real numbers, right? So it appears everywhere. It's surprising. When you see the proof of the intermediate value theorem, you see that it is a consequence of this least upper bound property, but the converse is also true. So in a sense, we have that very powerful notion there.

KK: I don't think I knew the converse. So I'm a topologist, right? So to me, this is just a statement that the continuous image of a connected set is connected. But then, of course, the hard part is showing that the connected subsets of the real line are precisely the intervals, which I guess is where the least upper bound property comes in.

BNL: Yes, indeed, yes. Exactly yes.

KK: Okay. I haven't thought about analysis in a while. As it turns out, we're hiring several people this year. And, for some of them, we've asked them to do a teaching demonstration. And we want them all to do the same thing. And as it so happens, it's a calculus one demonstration about continuity and the intermediate value theorem.

BNL: Oh.

EL: Nice.

KK: So in the last month, I've seen, you know, 10 presentations about the intermediate value theorem. And I've come to really appreciate it as a result. My favorite application is, though, that you can use it to prove that you can always make any table level, or not level, but all four legs touch the ground at the same time.

BNL: Yes.

KK: Yeah, that's, that's great fun. The table won't be level necessarily, but all four feet will be on the ground, so it won't wobble.

BNL: Yes.

EL: Right.

NN: If only it were actually applied in classrooms, right?

KK: Right.

EL: Yeah.

NN: The first thing you always do when you come to, you sit at a desk somewhere, is to pull out a piece of paper to actually level it.

EL: Yeah. So was this a theorem that you immediately really appreciated? Or do you feel like your appreciation has grown as you have matured mathematically?

BNL: In my case, I definitely learned to appreciate it more and more as I matured mathematically. The first time I saw the theorem, it was like, "Okay, yes, interesting, very cool theorem." But I didn't realize at the moment how powerful that theorem was. Then as my mathematical learning continued, I realized, "Oh, this is happening because of the intermediate value theorem. And this is also a consequence of it." So there are so many important things that are a consequence of the intermediate value theorem. That really makes me appreciate it.

NN: Well, there's also somehow, I think this also comes with maturity, when you realize that some very, what appear to be very hard theorems, if you strip away all the complexity, you realize that they may be really just some clever application of the intermediate value theorem.

BNL: Like Sharkovskii's theorem, for example, is a theorem about periodic points of continuous functions. And it just introduces a new ordering on the natural numbers. And it tells you that if you have a periodic point of some period m, then you will have periodic points of any period that comes after m in that ordering. You can also look at the famous "period three implies chaos."

KK: Right.

BNL: A big component of it is period three implies all other periods. And the proof of it is really just a clever use of the intermediate value theorem. It's so interesting that such an important and famous theorem is just a very kind of immediate--though, you know, it takes some work to get it--but you can definitely do it with just the intermediate value theorem. And I actually like to present that theorem to students in high school because they can believe the intermediate value theorem.
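As a small illustration of the "period three" hypothesis, the logistic map with parameter r = 3.83 is a standard example sitting in a period-3 window: iterating it settles onto an attracting 3-cycle. (Sharkovskii's theorem then guarantees periodic points of every other period exist too, though they are repelling, so plain iteration will not find them.) A quick Python check:

```python
def logistic(x, r=3.83):
    """Logistic map x -> r x (1 - x); r = 3.83 lies in the period-3 window."""
    return r * x * (1 - x)

x = 0.2
for _ in range(10000):   # discard the transient so x lands on the cycle
    x = logistic(x)

orbit = [x]
for _ in range(2):
    orbit.append(logistic(orbit[-1]))
print(sorted(orbit))     # three distinct values; the map cycles through them
```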

EL: Yeah.

BNL: That's something that if you tell someone, "This is true," no one is going to question that it's true.

KK: Sure.

BNL: And then you tell them, "Oh, using this thing that is obvious, we can also prove these other things." And I've actually worked with high school students to, you know, prove Sharkovskii's theorem just starting from the fact that they believe the intermediate value theorem. So they can get to higher-level theorems just from something very simple. I think that's beautiful.

NN: Yeah, that's a very astonishing thing, that from something so simple, and what looks obvious, you can get statements which really are not obvious at all, like what she just explained, Sharkovskii's theorem. That's kind of a mind-blowing thing.

EL: Yeah, you're making a pretty good case, I must say.

KK: That's right.

EL: So when we started this podcast, our very first episode was Kevin and I just talking about our own favorite theorems. And I have already since re-, you know, one of our other guests has taken my loyalty elsewhere. And I think you're kind of dragging me. So I think, I think my theorem love is quite fickle, it turns out. I can be persuaded.

KK: You know, in the beginning of our conversation, you pointed out, you know, how does one choose a favorite theorem, right? And, and it's sort of like, your favorite theorem du jour. It has to be.

BNL: Exactly, yes.

EL: Yeah.

KK: All right, so what does one pair with the intermediate value theorem?

BNL: So we thought about it. And to continue with the Valentine's Day theme, we want to pair the intermediate value theorem with love in a relationship.

KK: Ah, okay, good.

BNL: The reason why we want to pair it with love is because when you love someone, it's completely obvious to you. You just know it's true, you know you love someone.

KK: That's true.

BNL: You just feel like there's no proof required. It's just, you know it, you love this person.

NN: It's the only thing that can possibly be true, there's no reason to prove it.

BNL: But also, just like any good theorem, you can also prove, you can provide a proof of love, right? You can show someone that you love them.

NN: Any good mathematical theorem can always be supplied with a very rigorous proof, detailed to whatever level is required. And if you truly, really truly love someone, you can prove it. And if someone questions any part of that proof, you can always supply more details and a more detailed explanation for why you love that person. And that's why there's a similarity between the intermediate value theorem and love in a relationship.

EL: Yeah, well, I'm thinking of the poem now, "How do I love thee? Let me count the ways." This is a slightly mathematically-flavored poem.

KK: But I think there must be at least, you know, the, the continuity of the continuum ways, right? Or the cardinality of the continuum ways.

NN: Absolutely.

KK: That's an excellent pairing.

EL: Yeah.

BNL: We also thought that love is something that we feel, we take it as an obvious statement, and then from love, we can build so many other things, right? Like in the intermediate value theorem case, we start from a theorem that looks obvious, and using it, we can prove so many other theorems. So it's the same, right, in a relationship. You start from love, and then you can build so many other great things.

EL: Yeah, a marriage for example.

BNL and NN: For example, yes.

EL: Yeah. And a ridiculously amazing wedding game as part of that.

NN: There were some other mathematical tidbits in the wedding. So one of them I'll mention is our rings. Our wedding bands are actually Möbius bands.

KK: Oh, I see.

EL: Okay, very nice.

NN: We had to work with a jeweler. And there's a bit of a trick, because if you just take a wedding band and you do the twist to make it a Möbius band, then the place where it twists would stick out too much.

EL: Yeah.

NN: So the idea is to try to squish it. And that, of course, is a bit challenging if you want to make a good-looking ring, so that was part of the problem to be solved.

EL: Yeah. Well, my wedding ring is also--it's not a Möbius band. But it's one that I helped design with a particular somewhat math-ish design.

KK: My wife and I are on our second set of wedding bands. The first ones, because we were, I was a graduate student and poor, we got silver ones. And silver doesn't last as long, so we're on our second ones. But the first ones were handmade, and they were, they had sort of like a similar to Evelyn's sort of little crossing thing. So they were a little bit mathy, too. I guess that's a thing that we do, right?

EL: Yeah.

NN: It's inevitable.

KK: Yeah, excellent.

KK: So we like to give our guests a chance to plug anything. Do you have any websites, books, wedding registries that you want to plug?

NN: Actually, in terms of the wedding registry, lots of our guests, of course, were asking. We didn't have a wedding registry because given the career of a postdoc, where you travel from place to place every few years, a wedding registry isn't the most practical thing. Yeah, difficult.

BNL: Yes. So we said, well, you can just give us anything you like, we'll have a box where you can leave envelopes. And some of our guests were very creative. They gave us, some of them decided to give us money. But the amounts they chose were very interesting, because they were, like, some integer times e or times π, or some combination. They wrote the number and then they explained how they came up with that number. And that was very interesting and sweet.

NN: Some of them didn't explain it. But we kind of understood. We cracked the code, essentially, except one. So one of our friends wrote us a check with a very strange number. And to this day,

BNL: We still don't know what the number is.

NN: We kept trying to guess what it could be. But no, I don't know. Maybe eventually I'll just have to ask. I'd like to know.

KK: Maybe it was just random.

NN: Maybe it was just random.

BNL: Yeah, I think one of the best gifts people gave us was their reaction right after seeing their card. In particular, there is a very nice story of a guest who really, really loved the way we set up everything and maybe you can tell us about that.

NN: Yeah, so we, at the dinner we would approach tables to say hi to some guests, and so this particular guest, he's actually Bea's teaching mentor.

BNL: So I'm very much into teaching. And he's the one who taught me most of the things I know.

NN: So we approached him, and he looked at us, and he pulled the name card out of his breast pocket like, "This. This is the most beautiful thing I've ever seen. This is incredible. It's from my last paper, isn't it?" Yes. Yeah, that's right. He's like, "I have to send it to my collaborator. He's going to love it." And just seeing that reaction, him telling us how much he loved the card, just made all those hours that Bea and I spent reading through papers and trying to come up with some kind of, you know, short-sounding question to be put into multiple choice, made all of that worthwhile.

EL: Yeah, I'm just imagining it. Like, usually you don't have to, like, cram for your wedding. But yeah, you've got all these papers you've got to read.

BNL: Yeah, we spent days going through everyone's papers and trying to find questions that were short enough to put in a small card and also easy to answer as a multiple choice question.

NN: Yeah, some were easy. So for example, my former PhD advisor came to our wedding, and I basically gave him a question from my thesis, you know, just to make sure he'd read it.

EL: Yeah.

NN: So when we approached him at the dinner and I said, "Oh, did you like the question?" and he just looked at me like, "Yeah, well I gave you that question two years ago!"

EL: Yeah

NN: So, yes, some questions were easy to come up with. Some questions were a bit more difficult. So we had a number of people from set theory, and neither of us are in set theory. I'd never, ever before opened a paper in set theory. It was all very, very new to me.

EL: Nice.

KK: Well, this has been great fun. Thanks for being such good sports on short notice.

EL: Yeah.

KK: Thank you for joining us.

BNL: Yeah.

EL: Yeah, really fun to talk to you about this. It's so much better than even the Reddit post and weird news stories led me to believe.

KK: Well, congratulations. it's fun meeting you guys. And let me tell you, it's fun being married for 27 years, too.

NN: We're looking forward to that.

KK: All right, take care.

NN: Thank you. Bye bye.

BNL: Bye.

Episode 35 - Nira Chamberlain

Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics, favorite theorems, and other random stuff that we never know what it’ll be. I’m one of your hosts, Kevin Knudson. I’m a professor of mathematics at the University of Florida. This is your other host.

Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah, where I forgot to turn on the heat when I first woke up this morning. I've got separate systems. So it is very cold in the basement here where I am recording.

KK: Well, yeah, it's cold in Florida this morning. It was, you know, in the mid-60s. It's very pleasant. I'm still in short sleeves. Our listeners can't see this, but I'm in short sleeves. Evelyn’s in a sweater. And our guest is in a jacket in his attic.

EL: Yes.

KK: So today we are happy to welcome Nira Chamberlain all the way from the UK. Can you tell everyone about yourself a little bit?

Nira Chamberlain: Yes, hello. My name is Dr. Nira Chamberlain. I'm a professional mathematical modeler. I'm also the president-designate of the Institute of Mathematics and its Applications.

KK: Fantastic. So tell us about the IMA a little bit. So we have one of those here, but it's a different thing. So what is it?

NC: Right. I mean, the Institute of Mathematics and its Applications is a professional body of mathematicians, of professional mathematicians, and it's a learned society. It's been around since 1964. And its role is actually to make sure that the UK has a strong mathematical culture and to look after the interests of mathematicians in industry, government, and academia.

KK: Oh, that's great. Maybe we should have one of them here. So the IMA here is something else. It’s a mathematics institute. But maybe the US should have one of these. We have the AMS, right, the American Mathematical Society.

EL: Or SIAM might be more similar because it does applications, applied math.

KK: Yeah, maybe.

EL: Yeah, we’ve kind of got some.

KK: So we asked you on for lots of reasons. One is, you know, you're just sort of an interesting guy. Two, because you're an applied mathematician, and we like to have applied mathematicians on as much as we can. Three, you actually won something called the Big Internet Math-Off this summer, of which Evelyn was a participant.

EL: Yes. So he has been ruled—he’s not just an interesting guy, he has been officially ruled—the most interesting mathematician in the world…among people who were in the competition. The person who ran it always put this very long disclaimer asterisk, but I think Nira definitely has some claim on the title here. So, yeah. Do you want to talk a little bit about the Big Internet Math-Off?

NC: Yes, there's an organization, a group of mathematicians who do a blog, The Aperiodical, and they decided to start this competition called the Big Internet Math-Off. And it's a knockout tournament: 16 mathematicians, and they each put up something interesting about mathematics. It was put up on the internet, it was there for 48 hours, the general public would vote for what they found was their most favorite or most interesting, and the winner would progress to the next round, and it was four rounds all together. And if you reach the final and you win it, you get the title “World's Most Interesting Mathematician.” And when I was invited, I thought, “Oh, isn't this really for those mathematicians that are pure mathematicians and those public communicators and those into puzzles? I mean, I'm a mathematical modeler, I’m in applied mathematics, so what am I really going to talk about?” And then I saw that I was actually introduced as the applied mathematician, and everybody else was, let's say, the public communicator, and here's the applied mathematician. It was almost like: then here's the villain—boo!

I thought, “Okay, there you go.” I’m thinking, “All right, what we're going to do is I'm actually going to stick to being an applied mathematician.” So three out of the four topics I actually introduced were about applied mathematics, and yes, the fourth topic was actually about the history of mathematics. And I was fortunate enough to get through each of the rounds and win the overall competition. It was very interesting and very good.

EL: Yeah, and I do wish — I think you look very interesting right now—I wish our listeners could see that you've got headphones on that make you look a little bit like a pilot, and behind you are these V-shaped beams, I guess in the attic, where I can totally imagine you, like, piloting some ship here, so you're really looking the part this morning, or this afternoon for you.

NC: Thank you very much indeed. I mean that’s what I call my mathematics attack room, which is the attic, and I have 200 math books behind me. And I’ve got three whiteboards in front of me, quite a number of computer screens. And I’ve got all my mathematical resources all in one place.

KK: Okay, so I just took a screenshot. So maybe with your permission, we’ll put this up somewhere.

So this is a podcast about theorems. So, Nira what is your favorite theorem?

NC: Okay, my favorite theorem is actually to do with the Lorenz equation, the Lorenz attractor. Now it was done in the 1960s by a meteorologist called Edward Lorenz. And what he wanted to do was to take a partial differential equation, see if he could make some simplifications to it, and he came up with three nonlinear ordinary differential equations to actually look at, let's say, the convection and the movement, to see whether we could actually use that to do some meteorological predictions. And then he got this set of equations, went to work solving it numerically, and then he decided, “Actually, I’d better restart my computer because I've done something wrong.” So he went back, he restarted the computer, but he actually changed the initial conditions by a little bit. And then when he came back, he actually saw that the trajectory of the solution was different from what he had started with. When he went back and started checking, he actually saw that the initial conditions had only changed by a little bit, and what was this? It was probably one of the first examples of the “butterfly effect.” The butterfly effect is saying that if, let's say, a butterfly flaps its wings, then that will prevent a hurricane going into Florida — topical.

KK: Yeah, it’s been a rough month.

NC: Yeah, or, if, let's say, another butterfly flaps its wings, then maybe another hurricane may go into Salt Lake City, for example. And this is, like I said, an example of chaotic behavior once you choose certain parameters. The reason why I like this theorem so much is that I was actually introduced to this topic when I was in my final year of my mathematics degree. And it probably was one of my introductions to the field of mathematical modeling, recognizing that when you actually model reality, mathematics is powerful, but also has its limitations. And you’re just trying to find that boundary between what can be done and what can't be done. Mathematical modeling has a part to play in that.
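The sensitive dependence Lorenz stumbled on is easy to reproduce. The sketch below integrates his three equations (with the classic parameters σ = 10, ρ = 28, β = 8/3) from two initial conditions differing by one part in 10^8; the step size, starting point, and time horizon are arbitrary illustrative choices:

```python
import math

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step."""
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

# Two trajectories whose initial conditions differ by one part in 10^8.
traj_a = (1.0, 1.0, 1.0)
traj_b = (1.0 + 1e-8, 1.0, 1.0)
dt, steps = 0.01, 3000            # integrate out to t = 30
max_sep = 0.0
for _ in range(steps):
    traj_a, traj_b = rk4_step(traj_a, dt), rk4_step(traj_b, dt)
    max_sep = max(max_sep, math.dist(traj_a, traj_b))
print(max_sep)  # grows to the order of the attractor's size, despite the 1e-8 start
```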

KK: Right. What's so interesting about meteorological modeling is that I've noticed that forecasts are really good for about two days.

NC: Yeah.

KK: So with modern computing power, I mean, of course, as you pointed out, everything is so sensitive to initial conditions, that if you have good initial data, you can get a good forecast for a couple of days, but I never believe them beyond that. It's not because the models are bad. It's because the computation is so precise now that the errors can propagate, and you sort of get these problems. Do you have any sense of how we might extend those models out better, or is it just a lost cause, is it hopeless?

NC: It's probably a lost cause. I agree with you to a certain extent. But it's a case of, when we're dealing with, let’s say, meteorological equations, if they have chaotic behavior, if you put down initial conditions and they're changed, you know, it's going out and it's changing, it just shows that, yeah, we may have good predictions to begin with, but as we go on into the future, those rounding errors will come, those differences will come. And it's almost like, let's use an analogy: let's say you go to whatever computer algebra software you have, and you get π, and let’s say you square root it 10 times, and then you raise it to a power 10 times, and then you square root it 100 times and then you raise it to a power 100 times, and if you keep on repeating that, then actually, when you come back to the figure, you're thinking, “Is this actually π?” No, it's not. And also with different calculators and different computer algebra software, you’ll see that they will actually differ. It's the same point when we're predicting a weather system: because of the chaotic behavior of the actual nonlinear differential equations, coupled with those rounding errors, it is very difficult to do that long-term weather forecast. So nobody can really say to me, “By the way, in five years’ time, on the 17th of June, the weather will be this.” That’s very much nonsense.
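The square-root analogy is easy to try in any language with floating-point arithmetic. Taking 100 square roots drives π so close to 1 that a double-precision float can no longer tell the difference, so squaring 100 times, which on paper would exactly recover π, returns 1.0 instead:

```python
import math

x = math.pi
for _ in range(100):
    x = math.sqrt(x)   # pi ** (1 / 2**100): indistinguishable from 1.0 as a float
for _ in range(100):
    x = x * x          # squaring would exactly undo the roots in real arithmetic
print(x)  # 1.0, not pi: rounding error has swallowed the answer entirely
```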

KK: Sure, sure. Well, I guess orbital mechanics are that way too, right? I mean, the planetary orbits. I mean, we understand them, but we also can't predict anything in some sense.

NC: Yeah.

KK: Right, right. Living in Florida, I pay a lot of attention to hurricane models. And it's actually really fascinating to go to these various sites. So windy.com is a good one of these. They show the wind field over the whole planet if you want. And they'll also, when there are hurricanes, they have the separate models. So the European model actually turns out to be better than the American one a lot, which is sort of interesting because hurricanes affect us a lot more than— I mean, the remnants get to the UK and all of that. But you're right, it's sort of interesting: the different implementations—the same equations, essentially, right, that underlie everything get built into different models. And then different computing systems have different rounding errors. And the models, they're usually pretty close, but they do diverge. It's really very fascinating.

NC: Yeah, I mean, over in the United Kingdom, we had an interesting case in 1987 where the French meteorology office says, “By the way, people in the north of France, they should be aware that there's going to be a hurricane approaching.” While the British meteorologic office was saying, “Oh, there's no way that there's going to be a hurricane. There's no hurricane. Our model says there’s going to be no hurricane.” So the French are saying there’s going to be a hurricane. The British say there’s not going to be a hurricane. And guess what? The French were right and a hurricane hit the United Kingdom.

And because of that, what they did is that now the Met Office, which is the main weather place in Britain, what they've done is they put quite a number of boats out in the Atlantic to come up with a much more accurate measure of the weather system so that they can actually feed their models, and they also use more powerful models, because the equation itself remains the same; it's the information that actually goes into it which is the difference, yeah? So in terms of what you said about the American models, it's all dependent on who you get the measurements from, because you may not get exactly the measurement from the same boat. You may get it from a different boat, from different boats in a different location, different people. This is where you come to that human factor. Some people will say, “Oh, round it to this significant figure,” while someone else will say, “Round it to that significant figure,” and guess what? All of that actually affects your final results.

EL: Yeah, that matters.

NC: Yes.

KK: So do you do this kind of modeling yourself, or are you in other applications?

NC: Oh, I'm very much in other applications. I mean, I'm still very much a mathematical modeler. I mean, my research now is to do with minimizing the probability of artificial intelligence takeover. That's the current research I'm doing at Loughborough University.

EL: Well, you know, the robots will have you first in line or something in the robot uprising.

NC: Well, we talk about robots, but this is quite interesting. When we're talking about, let's say, artificial intelligence takeover, everybody thinks about the Hollywood Terminator, The Matrix, I, Robot, you know, robots marching down the street. But there are different types of AI takeovers, and some of them are much more subtle than that. For instance, one scenario is, let's say for instance, you have a company, and they decide to really upgrade their artificial intelligence, their machine learning, to the extent that it's more advanced than their competition's. And by doing so, they actually put all their competitors out of business. And so what you have is this one company almost running the world economy. Now the question is, would that company make decisions (based on its AI) that are conducive to social cohesion? And you can't put your hand on your heart and say, “Absolutely, yes,” because a machine, it’s largely, like, 1-0, it doesn't really care about the consequences for social cohesion. So henceforth, we can actually do a model of that, asking: could we ever get to a situation where one company actually dominates all different industrial sectors and ends up, let’s say, running the world economy? And if that's the case, what strategies can we actually implement to try and minimize that risk?

EL: It sounds not entirely hypothetical.

KK: No, no. Well, you know, of course the conspiracy theorists types in the US would have you believe that this already exists, right? The Deep State and the Illuminati run everything, right?

EL: But getting back to the Lorenz system and everything, you were saying that this is one of the earliest examples of mathematical modeling you saw. Was it one of the things that inspired you to go that direction when you got your PhD?

NC: Yes, so I was doing that as part of my final year mathematics degree, and I thought, well, this whole idea that, you know, here’s applied mathematics, using mathematics in the real world, saying that there are problems where some people say it's impossible, you can't use mathematics. And you're just trying to push the boundaries of mathematics and say, “This is how we actually model reality.” It was one of the things that actually did inspire me. So Edward Lorenz actually inspired me, just saying, wait a minute, applied mathematics is not necessarily about: here’s a problem, here’s an equation, put the numbers in the right places, and here's a solution. It's about gaining that insight into the real world, learning more about the world around you, learning more about the universe around you, through mathematics. And that's what inspired me.

KK: And it's very imprecise, but that's sort of what makes it intriguing, right? I mean, you have to come up with simplifying assumptions to even build a model, and then how much information can we extract from that?

NC: That’s one of the key things about mathematical modeling. I mean, you're looking at the world. The world is complex, full of uncertainty, and it’s messy. And you are making some simplifying assumptions, but the key thing is: do you make simplifying assumptions to an extent that actually corrupts and compromises your solution, or do you make simplifying assumptions that say, “Actually, this gives me insight into how the world actually works”? Recognizing which factors you include and which factors you exclude brings a model that is what I call useful.

KK: Right. That’s the art, right?

NC: Yeah, that's the art. That's the art of mathematical modeling.

KK: So another thing we do on this podcast is we ask our guests to pair their theorem with something. So what pairs well with the Lorenz equation?

NC: I chose to pair it with the Jamaican dish called ackee yam and saltfish. Now the reason why is, with ackee yam and saltfish, if you cook it right, it is delicious, but if you cook it wrong, the ackee turns out to be poisonous, and that’s a bit like the Lorenz equation.

KK: What is ackee? I don't think I know what this is.

NC: Okay. Ackee is actually a vegetable, but if you actually were to look at it, it looks like scrambled egg, but it's actually a vegetable. It's like a yellow vegetable.

EL: Huh.

KK: Interesting.

NC: And yam, it’s like an overgrown, very hard potato. It looks like a very overgrown, hard potato.

KK: Sure, yeah.

NC: And saltfish is just a Jamaican saying for cod. Even though you could really say ackee yam and cod, they don't call it cod, they call it saltfish.

KK: Okay. All right. So I've never heard ackee.

EL: Yeah, me neither.

KK: I mean, I knew that in the United States, most people will call sweet potatoes yams; they’ll use those two words interchangeably. But of course, yams are distinct, and I think they can be poisonous if you don't cook them right, right? Or some varieties can. But so ackee is something separate from the yam.

NC: Yeah.

EL: Also poisonous if you don't cook it right.

NC: Absolutely.

KK: So can you actually access this in England, or do you have to go to Jamaica to get this?

NC: Yes, we can access this in England because in England we have a large West Indian diaspora community.

KK: Sure, right.

NC: And also we do get lots of varieties of foods from different countries around the world. So it's relatively easy to access ackee yam. And also we’ve got quite a number of Caribbean restaurants, so definitely there they are going to cook it right.

KK: So it's interesting, we have a Caribbean restaurant here in town in Gainesville, which of course we're not as far away as you are, but they don't try to poison us. The food is delicious.

EL: That you know of.

KK: Well that's right. I love eating there. The food is really spectacular. But this is interesting.

EL: And is this a family recipe? Do you have roots in in the West Indies, or…

NC: Yes, my parents were from Jamaica. I still have relatives in Jamaica, and my wife is of Jamaican descent. Now and again we do have that Caribbean meal. I thought, “Well, what shall I say as a food?” I thought, “Well, should I go for the British fish and chips?” I thought, “No, let's go for ackee yam and saltfish.”

KK: Sure, well and actually I think your jacket looks like a Jamaican-influenced thing, right? With the black, green, and yellow, right?

NC: Yes, absolutely. And that's because it's quite cold in the attic. This is the same style of jacket as the Jamaican bobsled team's, so I decided to wear it, as it’s quite cold up here.

EL: Yeah, Cool Runnings, the movie about that, was an integral part of my childhood. My brother and sister and I watched that movie a lot. So I’m curious about this ackee vegetable, like how sensitive are we talking for the dependence on initial conditions, the dependence on cooking this correctly to be safe? Is it pretty good, or do you have to be pretty careful?

NC: You have to be pretty good, you have to be pretty careful. As long as you follow the instructions you’re okay, but in this case, if you don't cook it long enough, you don't cook it at a high enough temperature, whatever you do, please do not eat it cold, do not eat it raw.

EL: Okay.

KK: Like actually it might kill you, or it just makes you really sick?

NC: It will make you really sick. I haven't heard— well let’s put it this way, I do not wish to carry out the experiments to see what would happen.

KK: Understood.

EL: Yes.

KK: Well this has been great fun. I've learned a lot.

EL: Yeah

KK: Thanks for joining us, Nira.

NC: Thank you very much indeed for inviting me.

Episode 34 - Skip Garibaldi

Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics, theorems, and, I don't know, just about anything else under the sun, apparently. I'm Kevin Knudson. I'm one of your hosts. I'm a professor of mathematics at the University of Florida. This is your other host.

Evelyn Lamb: Hi, I'm Evelyn Lamb. I'm a freelance math and science writer based in Salt Lake City. So how are things going?

KK: It's homecoming weekend. We're recording this on a Friday, and for people who might not be familiar with Southeastern Conference football, it is an enormous thing here. And so today is a university holiday. Campus is closed. In fact, the local schools are closed. There's a big parade that starts in about 20 minutes. My son marched in it for four years. So I've seen it. I don't need to go again.

EL: Yeah.

KK: I had brunch at the president's house this morning, you know. It's a big festive time. I hope it doesn't get rained out, though. It's looking kind of gross outside. How are things for you?

EL: All right. Yeah, thankfully, no parades going on near me. Far too much of a misanthrope to enjoy that. Things are fine here. My alarm clock-- we're also recording in the week between the last Sunday of October and the first Sunday of November.

KK: Right.

EL: In 2007, the US moved the switch from Daylight Saving back to Standard Time to the first Sunday of November. But my alarm clock, which automatically adjusts, was manufactured before 2007.

KK: I have one of those too.

EL: Yeah, so it's constantly teasing me this week. Like, "Oh, wouldn't it be nice if it were only 7am now?" So yeah.

KK: All right. Well, yeah, first world problems, right?

EL: Yes. Very, very much.

KK: All right. So today, we are thrilled to have Skip Garibaldi join us. Skip, why don't you introduce yourself?

Skip Garibaldi: My name is Skip Garibaldi. I'm the director at the Center for Communications Research in La Jolla.

KK: You're from San Diego, aren't you?

SG: Well, I got my PhD there.

KK: Ish?

SG: Yeah, ish.

KK: Okay.

SG: So I actually grew up in Northern California. But once I went to San Diego to get my degree, I decided that that was really the place to be.

KK: Well, who can blame you, really?

EL: Yeah, a lot to love there.

KK: It's hard to argue with San Diego. Yeah. So you've been all over. For a while you were at the Institute for Pure and Applied Math at UCLA.

SG: Yeah, that was my job before I came to the Center for Communications Research. I was associate director there. That was an amazing experience. So their job is to host conferences and workshops which bring together mathematicians in areas where there's application, or maybe mathematicians with different kinds of mathematicians where the two groups don't really talk to each other. And so the fact that they have this vision of how to do that in an effective way is pretty amazing. So that was a great experience for me.

KK: Yeah, and you even got in the news for a while. Didn't you and a reporter, like, uncover some crime syndicate? What am I remembering?

SG: That's right. Somehow, I became known for writing things about the lottery. And so a reporter who was doing an investigative piece on lottery crime in Florida contacted me, and I worked closely with him and some other mathematicians, and some people got arrested. The FBI got involved and it was a big adventure.

KK: So Florida man got arrested. Never heard of that. That's so weird.

SG: There's a story about someone in Gainesville in the newspaper article. You could take a look.

KK: It wasn't me. It wasn't me, I promise.

EL: Whoever said math wasn't an exciting field?

KK: That's right.

Alright, so, you must have a favorite theorem, Skip, what is it?

SG: I do. So you know, I listened to some of your other podcasts. And I have to confess, my favorite theorem is a little bit different from what your other guests picked.

EL: Good. We like the great range of things that we get on here.

SG: So my favorite theorem for this podcast answers a question that I had when I was young. It's not something that is part of my research today. It's never helped me prove another theorem. But it answers some question I had from being in junior high. And so the way it goes, I'm going to call it the unknowability of irrational numbers.

So let me explain. When you're a kid, and you're in school, you probably had a number line on the wall in your classroom. And so it's just a line going left to right on the wall. And it's got some markings on it for your integers, your 0,1,2,3, your -1,-2,-3, maybe it has some rational numbers, like 1/2 and 3/4 marked, but there's all these other points on that number line. And we know some of them, like the square root of two or e. Those are irrational, they're decimals that when you write them down as a number-- like π is 3.14, we know that you can't really write it down that way because the decimal keeps on going, it never repeats. So wherever you stop writing, you still haven't quite captured π.

So what I wondered about was like, "Can we name all those points on the number line?"

EL: Yeah.

SG: Are π and e and the square root of two special? Or can we get all of them? And it comes up because your teacher assigns you these math problems. And it's like "x^2+3x+5=0. Tell me what x is." And then you name the answer. And it's something involving a square root and division and addition, and you use the quadratic formula, and you get the answer.

So that's the question. How many of those irrationals can you actually name? And the answer is, well, it's hard.
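As a sketch in Python (my illustration, not from the episode; note that Skip's example polynomial happens to have complex roots, so it uses `cmath`), the quadratic formula gives an exact, finite-symbol name for a root, while any decimal printout of it is only an approximation:

```python
import cmath

# Skip's example equation: x^2 + 3x + 5 = 0. The quadratic formula "names"
# a root exactly as (-b + sqrt(b^2 - 4ac)) / 2a, a finite string of symbols.
a, b, c = 1, 3, 5
disc = b * b - 4 * a * c              # -11, so the roots are complex
root = (-b + cmath.sqrt(disc)) / (2 * a)

# Any printed decimal for `root` is truncated, but the symbolic name is
# exact: substituting it back into the polynomial gives (numerically) zero.
residual = a * root**2 + b * root + c
print(abs(residual))  # ~0 up to floating-point rounding
```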

EL: Yeah.

SG: Right?

KK: Like weirdly, like a lot of them, but not many.

SG: Exactly!

EL: Yeah.

SG: So if we just think about it, what would it mean to name one of those numbers? It would mean that, well, you'd have to write down some symbols into a coherent math problem, or a sentence or something, like π is the circumference of a circle over a diameter. And when you think about that, well, there's only finitely many choices for that first letter and finitely many choices for that second letter. So it doesn't matter how many teachers there are, and students, or planets with people on them, or alternate universes with extra students. There's only so many of those numbers you can name. And in fact, there's countably many.
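A small Python sketch (again my illustration, not from the episode) of why the names are countable: list every finite string over a finite alphabet, shortest first, and every possible name shows up at some finite position in the list.

```python
from itertools import count, product

ALPHABET = "abc"  # stands in for any finite symbol set

def all_strings():
    """Yield every finite string over ALPHABET: all of length 1, then 2, ..."""
    for length in count(1):
        for letters in product(ALPHABET, repeat=length):
            yield "".join(letters)

# Every finite string appears at some finite index in this single list,
# which is exactly what "the set of possible names is countable" means.
gen = all_strings()
first_few = [next(gen) for _ in range(6)]
print(first_few)  # ['a', 'b', 'c', 'aa', 'ab', 'ac']
```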

EL: Right.

KK: Right. Yeah. So are we talking about just the class of algebraic numbers? Or are we even thinking a little more expansively?

SG: Absolutely more expansive than that. So for your audience members with more sophisticated tastes, you know, maybe you want to talk about periods where you can talk about the value of any integral over some kind of geometric object.

KK: Oh, right. Okay.

SG: You still have to describe the object, and you have to describe the function that you're integrating. And you have to take the integral. So it's still a finite list of symbols. And once you end up in that realm, numbers that we can describe explicitly with our language, or with an alien language, you're stuck with only a countable number of things you can name precisely.

EL: Yeah.

KK: Well, yeah, that makes sense, I suppose.

SG: Yeah. And so, Kevin, you asked about algebraic numbers. There are other classes of numbers you can think about, which, the ones I'm talking about include all of those. You can talk about something called closed form numbers, which means, like, you can take roots of polynomials and take exp and log.

KK: Right.

SG: That doesn't change the setup. That doesn't give you anything more than what I'm talking about.

EL: Yeah. And just to back up a sec, algebraic numbers, basically, it's like roots of polynomials, and then doing, like, multiplication and division with them. That kind of thing. So, like, closed form, then you're expanding that a little bit, but still in a sort of countable way.

SG: Yes. Like, what kinds of numbers could you express precisely if you had a calculator with sort of infinite precision, right? You're going to start with an integer. You can take its square root, maybe you can take its sine, you know. You can think about those kinds of numbers. That's another notion, and you still end up with a countable list of numbers.

KK: Right. So this sounds like a logic problem.

SG: Yes, it does feel that way.

KK: Yeah.

SG: So, Kevin and Evelyn, I can already imagine what you're thinking. But let me say it for the benefit of the people for whom the word "countable" is maybe a new thing. It means that you can imagine there's a way to order these in a list so that it makes sense to talk about the next one. And if you march down that list, you'll eventually reach all of them. That's what it means. But the interesting thing is, if you think about the numbers on the number line, we know going back to Cantor in the 1800s that those are not countable. You use the so-called diagonalization argument, if you happen to have seen that.
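For listeners meeting diagonalization for the first time, here is a finite Python sketch of Cantor's argument (an illustration under the assumption that each real in [0,1) is given as its list of decimal digits): whatever list you are handed, the constructed number differs from the n-th entry in its n-th digit, so it cannot be anywhere on the list.

```python
def diagonal_escape(digit_lists):
    """Given digit_lists[n] = decimal digits of the n-th listed real,
    build the digits of a number that differs from entry n in position n."""
    new_digits = []
    for n, digits in enumerate(digit_lists):
        # Choose any digit other than digits[n]; avoiding 0 and 9 also
        # sidesteps the 0.4999... = 0.5000... ambiguity.
        new_digits.append(5 if digits[n] != 5 else 4)
    return new_digits

listed = [
    [1, 4, 1, 5],   # 0.1415...
    [0, 5, 0, 0],   # 0.0500...
    [3, 3, 3, 3],   # 0.3333...
    [2, 7, 1, 8],   # 0.2718...
]
escape = diagonal_escape(listed)
print(escape)  # [5, 4, 5, 5]: differs from row n in position n
```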

KK: Right.

EL: Yeah. Which is just a beautiful, beautiful thing. Just, I have to put a plug in for diagonalization.

KK: Oh, it's wonderful.

SG: I've been thinking about it a lot in preparation for this podcast. I agree.

KK: Sure.

SG: So that's the statement: these irrational numbers, you can't name all of them, because there are uncountably many of them, but only countably many numbers you can name.

It sort of has a hideous consequence that I want to mention. And it's why this is my favorite theorem. Because it says, it's not just that you can't name all of them. It's just much worse than that. So the reason I love this theorem is not just that it answers a question from my childhood. But it tells you something kind of shocking about the universe. So when you--if you could somehow magically pick a specific point on the number line, which you can't, because you know, there's--

KK: Right.

SG: You have finite resolution when you pick points in the real world. But pretend you could. Then the statement is that the chance the number you picked was a number you could name precisely is very low. Exactly zero, in fact.

KK: Yeah.

SG: So the technical way to say this is that the countable subset of real numbers has Lebesgue measure zero.
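For listeners who want it, the one-line proof behind "Lebesgue measure zero" goes like this (a standard textbook argument, sketched here in LaTeX, not quoted from the episode): enumerate the nameable numbers as $x_1, x_2, x_3, \dots$, fix any $\epsilon > 0$, and cover $x_n$ with an interval of length $\epsilon/2^n$.

```latex
% Total length of the cover:
\sum_{n=1}^{\infty} \frac{\epsilon}{2^n}
  = \epsilon \left( \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots \right)
  = \epsilon .
% Since \epsilon > 0 was arbitrary, the countable set has Lebesgue measure zero.
```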

KK: Right.

SG: So I was feeling a little awkward about using this as my theorem for your podcast, because, you know, the proof is not much. If you know about countable and uncountable, I just told you the whole proof. And you might ask, "What else can I prove using this fact?" And the answer is, I don't know. But we've just learned something about irrational numbers that I think some of your listeners haven't known before. And I think it's a little shocking.

EL: Yeah, yeah. Well, it sounds like I was maybe more of a late bloomer on thinking about this than you, because I remember being in grad school, and just feeling really frustrated one day. I was like, you know, transcendental numbers, the non-algebraic numbers are, you know, 100% of the number line, Lebesgue measure one, and I know like, three of them, essentially. I know, like, e, π, and natural log two. And, you know, really, two of them are already kind of, in a relationship with each other. They're both related to e or the natural log idea. It's just like, okay, 2π. Oh, that's kind of a cheap transcendental number.

Like there's, there's really not that much difference. I mean, I guess then, in a sense, I only know, like, one irrational number, which is square root of 2, like, any other roots of things are non-transcendental, and then I know the rationals, but yeah, it's just like, there are all these numbers, and I know so few of them.

SG: Yeah.

KK: Right. And these other things, of course, when you start dealing with infinite series, and you know, you realize that, say, the Sierpinski carpet has area zero, right? But it's uncountable, and you're like, wait a minute, this can't be right. I mean, this is, I think, why Cantor was so ridiculed in his time, because it does just seem ridiculous. So you were sitting around in middle school just thinking about this, and your teacher led you down this path? Or was it much later that you figured this out?

SG: Well, I figured out the answer much later. But I worried about it a lot as a child. I used to worry about a lot of things like, your classic question is--if you really want to talk about things I worried about as a child--back in seventh grade, I was really troubled about .99999 with all the nines and whether or not that was one.

EL: Oh yeah.

SG: And I have a terrible story about my eighth grade education regarding that. But in the end, I discovered that they are actually equal.

KK: Well, if you make some assumptions, right? I mean, there are number systems, where they're not equal.

SG: Ah, yeah, I'd be happy--I'm not prepared to get into a detailed discussion of the hyperreals.

KK: Neither am I. But what's nice about that idea is that, of course, a lot depends on our assumptions. We set up rules, and then with the rules that we're used to, .999 repeating is equal to one. But you know, mathematicians like sandboxes, right? Okay, let's go into this sandbox and throw out this rule and see what happens. And then you get non-Euclidean geometry, right, or whatever.

SG: Right.

KK: Really beautiful stuff.

SG: I have an analogy for this statement about real numbers that I don't know if your listeners will find compelling or not, but I do, so I'm going to say it unless you stop me.

KK: Okay.

EL: Go for it.

SG: Exactly. So one of the things I find totally amazing about geology is that, you know, we can see rocks that are on the surface of the earth and inspect them, and we can drill down in mines, and we can look at some rocks down there. But fundamentally, most of the geology of the earth, we can't see directly. We've never seen the mantle, we're never going to see the core. And that's most of the Earth. Nonetheless, there's a lot of great science you can do indirectly, by analyzing it as an aggregate, by studying the way earthquake waves propagate and so on. But we're not able to look at things directly. And I think that has an analogy here with the number line, where the rocks you can see on the surface are the integers and rationals. You drill down, and you can find some gems or something, and there's your irrational numbers you can name. And then all the ones you'll never be able to name, no matter how hard you try, how much time there is, how many alternate universes filled with people there are, somehow that's like the core, because you can't ever actually get directly at them.

EL: Yeah. I like this analogy a lot, because I was just reading about Inge Lehmann who is the Danish seismologist (who I think of as an applied mathematician) who was one of the people who found these different seismic waves that showed that the inner core had the liquid part--or I guess the core had the liquid part and then the solid inner core. She determined that it couldn't all be uniform, basically by doing inverse problems where, like, "Oh, these waves would not have come from this." So that's very relevant to something I just read. Christiane Rousseau actually wrote a really cool article about Inge Lehmann.

SG: Yes, that's a great article.

EL: So yeah, people should look that up.

KK: I'll have to find this.

EL: Great analogy. Yeah.

KK: So we know now that this question has been with you a long time, so that's another question we've already answered. Okay, what does one pair with this unknowability?

SG: Ah, so I think I'm going to have to pair it with one of my favorite TV shows, which is Twin Peaks.

EL: Okay.

SG: So I watch the show, I really enjoy it. But there's a lot of stuff in there that just is impossible to understand.

And you can go read the stuff the people wrote about it on the side, and you can understand a little bit of it. But you know, most of it's clearly never meant to be understood. You're supposed to enjoy it as an aggregate.

KK: That's true. So you and I are the same age, roughly. We were in college when Twin Peaks was a thing. Did you watch it then?

SG: No, I just remember the personal ads in the school paper saying, "Anyone who has a video recording of Twin Peaks last week, please tell me. I'll bring doughnuts."

EL: You grew up in a dark time.

SG: Before DVRs, yeah.

KK: That's right. Well, yeah. Before Facebook or anything like that. You had to put an ad in the paper for stuff like this, yeah.

EL: Yeah, I'm really, really understanding the angst of your generation now.

KK: You know what, I kind of preferred it. I kind of like not being reached. Cell phones are kind of a nuisance that way. Although I don't miss paying for phone calls. Remember that, staying up till 11 to not have to pay long distance?

SG: Yeah.

KK: Alright, so Twin Peaks. So you like pie.

SG: Yeah, clearly. And coffee.

KK: And coffee.

SG: And Snoqualmie.

KK: Very good.

SG: I don't know if you--

KK: Sure. I only sort of vaguely remember-- what I remember most about that show is just being frustrated by it, right? Sometimes you'd watch it and a lot would happen. It's like, "Wow, this is bizarre and weird, and David Lynch is a genius." And then there'd be other shows where nothing would happen.

SG: Yes.

KK: I mean, nothing! And, you know, also see Book II of Game of Thrones, for example, where nothing happens, right? Yeah. And David Lynch, of course, was sort of at his peak at that time.

SG: Right.

KK: All right. So Twin Peaks. That's a good pairing because you're right, you'll never figure that out. I think a lot of it was meant to be unknowable.

SG: Yes. Yeah. Have you seen season three of Twin Peaks? The one that was out recently?

KK: No, I don't have cable anymore.

SG: About halfway through that season, there's an episode that is intensely hard to watch because so little happens in it. And if you look at the viewership ratings for each episode, there's a steep drop-off in the series at that episode. So this is like the most unknowable part of the number line, if you follow the analogy.

KK: Okay. All right. That's interesting. So I assume that these knowable numbers are probably fairly evenly distributed. I guess the rationals are pretty evenly distributed. So yeah.

So our listeners might wonder if there's some sort of weird distribution to these things, like the ones that you can't name, do they live in certain parts? And the answer is no, they live everywhere.

SG: Yes. That's absolutely right.

EL: I wonder, though, if you can kind of--I'm thinking of continued fraction representations, where there is an explicit definition of numbers that are well-approximable versus badly approximable. I guess those are approximable by rationals, not by finite operations or closed forms. So maybe that's a bad analogy.

KK: Mm hmm.

SG: Well, if you or your listeners are interested in thinking about this question some more, then you can Google closed-form number. There's a Wikipedia entry to get people started. And there are a couple of references in there to some really well-written articles on the subject, one by my friend Tim Chow that was in the American Mathematical Monthly, and another one by Borwein and Crandall that was in the Notices of the AMS and is free on the internet.

EL: Oh, great.

KK: Okay, great. We'll link to those.

EL: And actually, here's a question, and I'm not sure, so I'll just ask: is this the same as computable, or is closed form a different thing from computable numbers?

SG: Yeah, that's a good question. So there's not a widely agreed-upon definition of the term closed-form number. So that's already a question. And then I'm not sure what your definition of computable is.

EL: Me neither.

SG: Okay.

EL: No, I've just heard of the term computable. But yeah, I guess the nice thing is no matter how you define it, your theorem will still be true.

SG: That's right. Exactly.

EL: They're still only countably many.

KK: And now we've found something else unknowable: are these the same thing?

SG: Those are really hard questions in general. Yeah. That's the main question plumbed in those articles: if you define them in these different ways, how different are they?

EL: Oh, cool.

SG: If you take a particular number, which set does it sit in? Those kinds of questions. Yeah, those are usually really hard. Much like you said, whether certain numbers are transcendental or not can be a hard question to answer.

EL: Yeah, yeah, even if you think, "Oh yeah, this certainly has to be transcendental," it takes a while to actually prove it.

SG: Yes.

KK: Or maybe you can't. I wonder if some of those statements are even actually undecidable, but again, we don't know. All right, we're going down weird rabbit holes here. Maybe David Lynch could just do a show.

SG: That would be great.

KK: Yeah, there would just be a lot of mathematicians, and nothing would happen.

SG: And maybe owls.

KK: And maybe owls. Well, this has been great fun. Thanks for joining us before you head off to work, Skip. Our listeners don't know that it's, you know, now nine in the morning where you are. So thanks for joining us, and I hope your traffic isn't so bad in La Jolla today.

SG: Every day's a great day here. Thank you so much for having me.

KK: Yeah. Thanks, Skip.

Episode 33 - Michele Audin

Evelyn Lamb: Hello and welcome to My Favorite Theorem, a math podcast where we ask mathematicians what their favorite theorem is. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing?

EL: I’m all right. It’s fall here, or hopefully getting to be fall soon.

KK: Never heard of it.

EL: Yeah. Florida doesn’t have that so much. But yeah, things are going well here. We had a major plumbing emergency earlier this month that is now solved.

KK: My big news is that I’m now the chair of the math department here at the university.

EL: Oh yes, that’s right.

KK: So my volume of email has increased substantially, but it’s an exciting time. We’re hiring more people, and I’m really looking forward to this new phase of my career. So good times.

EL: Great.

KK: But let’s talk about math.

EL: Yes, let’s talk about math. We’re very happy today to have Michèle Audin. Yeah, welcome, Michèle. Can you tell us a little bit about yourself?

Michèle Audin: Hello. I’m Michèle Audin. I used to be a mathematician. I’m retired now. But I was working on symplectic geometry, mainly, and I was interested also in the history of mathematics. More precisely, in the history of mathematicians.

EL: Yeah, and I came across you through, I was reading about Kovalevskaya, and I just loved your book about Kovalevskaya. It took me a little while to figure out what it was. It’s not a traditional biography. But I just loved it, and I was like, “I really want to talk to this person.” Yeah, I loved it.

MA: I wanted to write a book where there would be history and mathematics and literature also. Because she was a mathematician, but she was also a novelist. She wrote novels and things like that. I thought her mathematics was very beautiful. I love her mathematics very much. But she was a very complete kind of person, so I wanted to have a book like that.

KK: So now I need to read this.

EL: Yeah. The English title is Remembering Sofya Kovalevskaya. Is that right?

MA: Yeah.

KK: I’ll look this up.

EL: Yeah. So, what is your favorite theorem?

MA: My favorite theorem today is Stokes’ formula.

EL: Oh, great!

KK: Oh, Stokes’ theorem. Great.

EL: Can you tell our listeners a little bit about it?

MA: Okay, so, it’s a theorem, okay. Why I love this theorem: Usually when you are a mathematician, you are forced to face the question, what is it useful for? Usually I’ll try to explain that I’m doing very pure mathematics and maybe it will be useful someday, but I don’t know when and for what. And this theorem is quite the opposite in some sense. It just appeared at the beginning of the 19th century as a theorem on hydrodynamics and electrostatics, some things like that. It was very applied mathematics at the very beginning. The theorem became, after one century, became a very abstract thing, the basis of abstract mathematics, like algebraic topology and things like that. So this just inverts the movement of what we are thinking usually about applied and pure mathematics. So that’s the reason why I like this theorem. Also the fact that it has many different aspects. I mean, it’s a formula, but you have a lot of different ways to write it with integrals, so that’s nice. It’s like a character in a novel.

KK: Yeah, so the general version, of course, is that the integral of, what, d-omega over the manifold is the same as the integral of omega over the boundary. But that’s not how we teach it to students.

MA: Yeah, sure. That’s how it became at the very end of the story. But at the very beginning of the story, it was not like that. It was three integrals with a very complicated thing. It is equal to something with a different number of integrals. There are a lot of derivatives and integrals. It’s quite complicated. At the very end, it became something very abstract and very beautiful.

KK: So I don’t know that I know my history. When we teach this to calculus students these days, we show them Green’s theorem, and there are two versions of Green’s theorem that we show them, even though they’re the same. Then we show them something we call Stokes’ theorem, which is about surface integrals and then the integral around the boundary. And then there’s Gauss’s divergence theorem, which relates a triple integral to a surface integral. The fact that Gauss’s name is attached to that is probably false, right? Did Gauss do it first?
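For reference, the three classroom versions Kevin lists are the standard calculus statements (reproduced here in the usual notation, not quoted from the episode), and each is an instance of the general form $\int_M d\omega = \int_{\partial M} \omega$ mentioned above:

```latex
% Green's theorem (plane region D with boundary curve \partial D):
\oint_{\partial D} P\,dx + Q\,dy
  = \iint_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dA
% Classical Stokes' theorem (surface S with boundary curve \partial S):
\oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}
  = \iint_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S}
% Divergence theorem (solid region V with boundary surface \partial V):
\iint_{\partial V} \mathbf{F} \cdot d\mathbf{S}
  = \iiint_V (\nabla \cdot \mathbf{F}) \, dV
```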

MA: Gauss had this theorem about the flux—do you say flux?

KK: Yeah.

MA: The flux of the electric—there are charges inside the surface, and you have the flux of the electric field. This was a theorem of Gauss at the very beginning. That was the first occurrence of the Stokes’ formula. Then there was this Ostrogradsky formula, which is related to water flowing from somewhere. So he just replaced the electric charges by water.

KK: Sort of the same difference, right? Electricity, water, whatever.

MA: Yes, it’s how you come to abstraction.

KK: That’s right.

MA: Then there was the Green theorem, then there is Stokes’ formula that Stokes never proved. There was this very beautiful history. And then in the 20th century, it became the basis for de Rham theory. That’s very interesting, and moreover there were very interesting people working on that in the various countries in Europe. At the time, mathematics was made in Europe, I’m sorry about that.

KK: Well, that’s how it was.

MA: And so there are many interesting mathematicians, many characters, different characters. So it's like a novel. The main character is the formula, and the others are the mathematicians.

EL: Yeah. And so who are some of your favorite mathematicians from that story? Anyone that stands out to you?

MA: Okay, there are two of them: Ostrogradsky and Green. Do you know who Green was?

EL: I don't know about him as a person really.

MA: Yeah, really? Do you know, Kevin? No.

KK: No, I don't.

MA: Okay. So nobody knows, by the way. He was just the son of a baker in Nottingham. And this baker became very rich and decided to buy a mill and then to put his son to be the miller. The son was Green. Nobody knows where he learned anything. He spent one year in primary school in Nottingham, and that’s it. And he was a member of some kind of, you know, there are books…it’s not a library, but morally it’s a library. Okay. And that’s it. And then appears a book, which is called, let me remember how it is called. It’s called An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. And this appears in 1828.

EL: And this is just out of nowhere?

MA: Out of nowhere. And then the professors in Cambridge say, “Okay, it’s impossible. We have to bring that guy here.” So they take the miller from his mill and they put him in the University of Cambridge. So he was about, I don’t know, 30 or 40. And of course, it was not very convenient for the son of a baker to be a student with the sons of the gentlemen of England.

KK: Sure.

MA: Okay. So he didn’t stay there. He left, and then he died, and nobody knew about that. There was this book, and that’s it.

KK: So he was 13 or 14 years old when he wrote this? [Ed. note: Kevin and Evelyn had misheard Dr. Audin. Green was about 35 when he wrote it. The joys of international video call reception!]

MA: Yeah. And then he died, and nobody knew except that—

KK: Wow.

MA: Wow. And then appears a guy called Thomson, Lord Kelvin later. This was a very young guy, and he decided to go to Paris to speak with the French mathematicians like Cauchy and Liouville. And it was a very long trip, and he took with him a few books to read during the journey. And among these books was this Green book, and he was completely excited about that. And he arrived in Paris and decided to speak of this Green theorem and this work to everybody in Paris. There are letters and lots of documentation about that. And then this is how the Green formula appeared in mathematics.

EL: Interesting! Yeah, I didn't know about that story at all. Thanks.

KK: It’s fascinating.

MA: Nobody knows. Yeah, that's very interesting.

KK: What we know as Stokes’ theorem, wasn’t it set as an exam problem at Cambridge?

MA: Yeah, exactly. So it began with a letter of Lord Kelvin to Stokes. They were very friendly together, about the same age, and doing, say, mathematics and physics. But they were not at the same place in the world, so they were writing letters. And once Thomson, Kelvin, sent a letter to Stokes speaking of mathematics, and at the very end there was a postscript where he said: you know, this formula should be very interesting. And he writes something which is what we now know as the Stokes theorem.

And then the guy Stokes, he had to make a problem for an exam, and he gave this as an examination. You know, it was in Cambridge, they have to be very strong.

KK: Sure.

MA: And this is why it’s called the Stokes’ formula.

EL: Wow.

KK: Wow. Yeah, I sort of knew that story. I didn't know exactly how it came to be. I knew somewhere in the back of my mind that it had been set as an exam problem.

MA: It’s written in a book of Maxwell.

KK: Okay.

EL: And so the second person you mentioned, I forget the name,

MA: Ostrogradsky. Well, I don’t know how to pronounce it in Russian, and even in English, but Ostrogradsky, something like that. So he was a student in mathematics in Ukraine, which was Russia at that time, by the way. And he was passing his exams, and among the examination topics there was religion. He didn't go for that, so he was expelled from the university, and he decided to go to Paris. So it was in 1820, something like that. He went to Paris. He arrived there, and had no exams, and he knew nobody, and he made connections with a lot of people, especially with Cauchy, who was not a very nice guy, but he was very nice to Ostrogradsky.

And then he came back to Russia, and he was the director of all the professors teaching mathematics in military schools in Russia. So it was quite important. And he wrote books about differential calculus—what we call differential calculus in France but you call calculus in the U.S. He wrote a book like that, and for instance, because we were speaking of Kovalevskaya: when she was a child, on the walls of her bedroom there were the sheets of the course of Ostrogradsky, and she read that when she was a little girl. She was very good in calculus.

This is another story, I’m sorry.

KK: No, this is the best part.

MA: And so, next question.

KK: Okay, so now I’ve got to know: What does one pair with Stokes’ theorem?

MA: Ah, a novel, of course.

EL: Of course!

KK: A novel. Which one?

MA: Okay, I wrote one, so I’m doing my own advertisement.

EL: Yeah, I was hoping we could talk about this. So yeah, tell us more about this novel.

MA: Okay, this is called Stokes’ Formula, a novel—La formule de Stokes, roman. I mean, the word “novel” is in the title. In this book I tell lots of stories about the mathematicians, but also about the formula itself, the theorem itself. How to say that? It’s not written the way historians of mathematics like, or want you to write. There are people speaking and dialogues and things like that. For instance, at the end of the book there is a first meeting of the Bourbaki mathematicians, the Bourbaki group. They are in a restaurant, and they are having small talk, like you have in a restaurant. There are six of them, and they order the food and they discuss mathematics. It looks like just small talk, but actually everything they say comes from the Bourbaki archives.

EL: Oh wow.

MA: Well, this is a way to write. And also, this is a book, how to say that? I decided it would be very boring if the history of Stokes’ formula was told from a chronological point of view, so it doesn’t start at the beginning, and it does not end at the end of the story. Every chapter’s title is a date: first of January, second of January, and the chapters are ordered according to the dates. So it starts with the first of January, and then you have the first of February, and so on, until the end, which is in December, of course. But it’s not during the same year.

EL: Right.

MA: Well, the first of January is in 1862, and the fifth of January is in 1857, and so on. I was very, very fortunate, I was very happy, that the very end of the story is in December, because the first Bourbaki meeting was in December, and I wanted to have the end there. Okay, so there are different stories, and they are told on different dates, but not using the chronology. And also in the book I explain what the formula means. You are comparing things inside the volume with what happens on the surface of the volume. I tried to explain the mathematics.

Also, in every chapter there is a formula, a different formula. I think it’s very important to show that formulas can be beautiful. And some are more beautiful than others. And the reader can just skip the formula, but they can look at it and just notice that it’s beautiful, even if they don’t understand it completely.

There were different constraints I used to write the book, and one of them was to have a formula, exactly one formula in every chapter.
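[For curious readers: the calendar constraint Audin describes amounts to sorting chapter dates by (month, day) while ignoring the year. A minimal sketch in Python; only the first two dates (1 January 1862 and 5 January 1857) come from the conversation, the others are invented for illustration.]

```python
from datetime import date

# Chapter dates spanning different years; only the first two come from
# the interview, the others are invented placeholders.
chapters = [date(1862, 1, 1), date(1857, 1, 5),
            date(1850, 2, 1), date(1900, 12, 15)]

# Order by position in the calendar year, not chronologically.
calendar_order = sorted(chapters, key=lambda d: (d.month, d.day))
for d in calendar_order:
    print(f"{d.day} {d.strftime('%B')} ({d.year})")
# → 1 January (1862), then 5 January (1857), then 1 February (1850),
#   then 15 December (1900)
```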

EL: Yeah, and one of the reasons we wanted to talk to you—not just that I read your book about Kovalevskaya and kind of fell in love with it—but also because since leaving math academia, you’ve been doing a lot more literature, including being part of the Oulipo group, right, in France?

MA: Yes. You want me to explain what it is?

EL: Yeah, I don't really know what it is, so it'd be great if you could tell us a little more about that.

MA: Okay. It’s a group—for mathematicians, I should say it’s a set—of writers and a few mathematicians. It was founded in 1960 by Raymond Queneau and François Le Lionnais. The idea is to find constraints to write literary texts. For instance, the most famous may be the novel by Georges Perec, La Disparition. It was translated into English with the title A Void. It is a rather long novel which doesn’t use the letter e. In French, it is really very difficult.

EL: Yeah.

MA: In English also, but in French even more.

EL: Oh, wow.

MA: Because you cannot use the feminine, for instance.

EL: Oh, right. That is kind of a problem.

MA: Okay, so some of the constraints have a mathematical background. For instance, this is not the case for La Disparition, but this is the case for some other constraints, like, I don't know, using permutations or graph theory to construct a text.

KK: I actually know a little about this. I taught a class in mathematics and literature a few years ago, and I did talk about Oulipo. We did some of these—there are these generators on the internet. One rule is where you pick a number, say five, and you look at every noun and replace it by the one that is five entries later than that in the dictionary, for example. And there are websites that will do this: you feed it text, and it’s a bit imperfect because it doesn’t classify things as nouns properly sometimes, but it’s an interesting exercise. Or there was another one with sonnets. So you would create sonnets—sonnets have 14 lines—but you would do it sort of as an Exquisite Corpse, where you would write all these different lines for sonnets, and then you could swap them out one at a time to get a really large number, I forget now how many. So, yeah, things like that, right?

MA: Yeah, this is cent mille milliards, which is 10 to the 14.

KK: That’s right, yeah. So 10 different sonnets. But yeah, it’s really, really interesting.
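[The count behind cent mille milliards: in Queneau’s book, each of a sonnet’s 14 lines can be chosen independently from 10 versions, so the possibilities multiply. A quick check in Python:]

```python
# 10 versions for each of a sonnet's 14 lines; independent choices multiply.
versions_per_line = 10
lines_per_sonnet = 14
total_poems = versions_per_line ** lines_per_sonnet
print(total_poems)  # → 100000000000000, i.e. cent mille milliards, 10^14
```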

MA: The first example you gave, then, which is called in French “X plus sept,” X plus seven: you start from a substantive, a noun, and you take the seventh in a dictionary following it.

KK: That’s right.

MA: It depends on the dictionary you use, of course.

KK: Sure.

EL: Right.

MA: So that's what they did at the beginning, but now they're all different.

KK: Sure.
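[The “X plus sept” procedure described above is easy to sketch. This toy Python version uses a tiny hand-made word list and treats every listed word as a noun; a real implementation would need a full dictionary and a part-of-speech tagger, which is exactly the imperfection Kevin mentions.]

```python
def n_plus_7(text, dictionary, n=7):
    """Replace each word found in `dictionary` (an alphabetized noun list)
    with the entry n places later, wrapping around at the end."""
    index = {word: i for i, word in enumerate(dictionary)}
    out = []
    for word in text.split():
        if word in index:
            out.append(dictionary[(index[word] + n) % len(dictionary)])
        else:
            out.append(word)  # not a known noun: leave it unchanged
    return " ".join(out)

# A toy fourteen-word "dictionary" of nouns, alphabetically ordered:
nouns = ["apple", "bank", "cat", "dog", "egg", "fish", "goat",
         "hat", "ink", "jar", "kite", "lamp", "moon", "nest"]

print(n_plus_7("the cat sat on the hat", nouns))
# → "the jar sat on the apple"  ("hat" wraps around to "apple")
```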

EL: Yeah, it's a really neat creative exercise to try to do that kind of constraint writing.

MA: That form of constraint, the calendar constraint I used in this book, is based on books by Michelle Grangaud, who is a poet from the Oulipo also. She wrote Calendars, which were books of poetry. That’s where the idea comes from.

EL: Yeah, and I assume this, your novel has been translated into English?

MA: Not yet.

EL: Oh, okay.

MA: Somebody told me she would do it, and she started, and I have no news now. I don’t know if she was thinking of a publisher or not. If she can do something, I will be very grateful.

EL: Yeah, so it’s a good reason to brush up your French, then, to read this novel.

And where can people find your writing—is there a website or something that has it all together?

MA: Okay, there is a website of the Oulipo, first of all, oulipo.net or something like that. Very easy to find.

KK: We’ll find it.

MA: Also, I have a webpage myself, but what I write is usually on the Oulipo site. I have also a site, a history site. It’s about history but not about mathematics. It’s about the Paris Commune in 1871. It has nothing to do with mathematics, but this is one of the things I am working on.

EL: Okay. Yeah, we'll share that with people so they can find out more of this stuff.

MA: Thank you.

KK: Alright, this has been great fun. I learned a lot today. This is the best part of doing this podcast, actually, that Evelyn and I really learn all kinds of cool stuff and talk to interesting people. So we really appreciate you taking the time to talk to us today, and thanks for persevering through the technical difficulties.

MA: Yes. So we are finished? Okay. Goodbye.

EL: Bye.